SECURE GRADIENT DESCENT COMPUTATION METHOD, SECURE DEEP LEARNING METHOD, SECURE GRADIENT DESCENT COMPUTATION SYSTEM, SECURE DEEP LEARNING SYSTEM, SECURE COMPUTATION APPARATUS, AND PROGRAM

Information

  • Patent Application
  • Publication Number
    20220329408
  • Date Filed
    August 14, 2019
  • Date Published
    October 13, 2022
Abstract
A calculation of a gradient descent method in secure computing is performed at high speed while maintaining accuracy. A secure gradient descent computation method calculates a gradient descent method while keeping a gradient and a parameter concealed. An initialization unit initializes concealed values [M], [V] of matrices M, V (S11). A gradient calculation unit determines concealed value [G] of a matrix G of a gradient g (S12). A parameter update unit calculates [M]←β1 [M]+(1−β1) [G] (S13-1), calculates [V]←β2 [V]+(1−β2) [G]◯[G] (S13-2), calculates [M{circumflex over ( )}]←β{circumflex over ( )}1, t [M] (S13-3), calculates [V{circumflex over ( )}]←β{circumflex over ( )}2, t [V] (S13-4), calculates [G{circumflex over ( )}]←Adam ([V{circumflex over ( )}]) (S13-5), calculates [G{circumflex over ( )}]←[G{circumflex over ( )}]◯[M{circumflex over ( )}] (S13-6), and calculates [W]←[W]−[G{circumflex over ( )}] (S13-7).
Description
TECHNICAL FIELD

The present invention relates to a technique for computing a gradient descent method in secure computing.


BACKGROUND ART

The gradient descent method is a learning algorithm that is often used in machine learning, such as deep learning and logistic regression. Conventional techniques for performing machine learning using a gradient descent method in secure computing include SecureML (NPL 1) and SecureNN (NPL 2).


While the most basic gradient descent method is relatively easy to implement, it is known to have problems such as a tendency to fall into local solutions and slow convergence. To solve these problems, various optimization techniques for gradient descent methods have been proposed; in particular, a technique called Adam is known to converge quickly.


CITATION LIST
Non Patent Literature



  • NPL 1: Payman Mohassel and Yupeng Zhang, "SecureML: A System for Scalable Privacy-Preserving Machine Learning," In IEEE Symposium on Security and Privacy, SP 2017, pp. 19-38, 2017.

  • NPL 2: Sameer Wagh, Divya Gupta, and Nishanth Chandran, "SecureNN: 3-Party Secure Computation for Neural Network Training," Proceedings on Privacy Enhancing Technologies, Vol. 1, p. 24, 2019.



SUMMARY OF THE INVENTION
Technical Problem

However, the processing of Adam includes calculation of a square root and division, so its processing cost in secure computing is very large. Meanwhile, conventional techniques implemented with simple gradient descent methods have a problem in that the overall processing time is long because the number of learning iterations required until convergence is large.


In consideration of technical problems like those described above, an object of the present invention is to provide a technique that can perform calculation of a gradient descent method in secure computing at high speed while maintaining accuracy.


Means for Solving the Problem

In order to solve the above problem, a secure gradient descent computation method of a first aspect of the present invention is a secure gradient descent computation method performed by a secure gradient descent computation system including a plurality of secure computation apparatuses, the secure gradient descent computation method calculating a gradient descent method while at least a gradient G and a parameter W are kept concealed, wherein β1, β2, η, and ε are predetermined hyperparameters, ◯ is a product for each element, t is the number of learning times, [G] is a concealed value of the gradient G, [W] is a concealed value of the parameter W, [M], [M{circumflex over ( )}], [V], [V{circumflex over ( )}], and [G{circumflex over ( )}] are concealed values for matrices M, M{circumflex over ( )}, V, V{circumflex over ( )}, and G{circumflex over ( )} having the same number of elements as the gradient G, and β{circumflex over ( )}1, t, β{circumflex over ( )}2, t, and g{circumflex over ( )} are given by following equations,











$\dfrac{1}{1 - \beta_1^t} = \hat{\beta}_{1,t}, \qquad \dfrac{1}{1 - \beta_2^t} = \hat{\beta}_{2,t}, \qquad \dfrac{\eta}{\sqrt{\hat{v}} + \varepsilon} = \hat{g}$   [Math. 7]







Adam is a function that calculates a secure batch mapping to output the concealed value [G{circumflex over ( )}] of the matrix G{circumflex over ( )} of the value g{circumflex over ( )} with the concealed value [V{circumflex over ( )}] of the matrix V{circumflex over ( )} of a value v{circumflex over ( )} as an input, a parameter update unit of each of the secure computation apparatuses calculates [M]←β1 [M]+(1−β1) [G], the parameter update unit calculates [V]←β2 [V]+(1−β2) [G]◯[G], the parameter update unit calculates [M{circumflex over ( )}]←β{circumflex over ( )}1, t [M], the parameter update unit calculates [V{circumflex over ( )}]←β{circumflex over ( )}2, t [V], the parameter update unit calculates [G{circumflex over ( )}]←Adam ([V{circumflex over ( )}]), the parameter update unit calculates [G{circumflex over ( )}]←[G{circumflex over ( )}]◯[M{circumflex over ( )}], and the parameter update unit calculates [W]←[W]−[G{circumflex over ( )}].


In order to solve the above problem, a secure deep learning method of a second aspect of the present invention is a secure deep learning method performed by a secure deep learning system including a plurality of secure computation apparatuses, the secure deep learning method learning a deep neural network while at least a feature X of learning data, and true data T and a parameter W of the learning data are kept concealed, wherein β1, β2, η, and ε are predetermined hyperparameters, is a product of matrices, ◯ is a product for each element, t is the number of learning times, [G] is a concealed value of a gradient G, [W] is a concealed value of the parameter W, [X] is a concealed value of the feature X of the learning data, [T] is a concealed value of a true label T of the learning data, [M], [M{circumflex over ( )}], [V], [V{circumflex over ( )}], [G{circumflex over ( )}], [U], [Y], and [Z] are concealed values of matrices M, M{circumflex over ( )}, V, V{circumflex over ( )}, G{circumflex over ( )}, U, Y, and Z having the same number of elements as the gradient G, and β{circumflex over ( )}1, t, β{circumflex over ( )}2, t, and g{circumflex over ( )} are given by following equations,











$\dfrac{1}{1 - \beta_1^t} = \hat{\beta}_{1,t}, \qquad \dfrac{1}{1 - \beta_2^t} = \hat{\beta}_{2,t}, \qquad \dfrac{\eta}{\sqrt{\hat{v}} + \varepsilon} = \hat{g}$   [Math. 8]







Adam is a function that calculates a secure batch mapping to output the concealed value [G{circumflex over ( )}] of the matrix G{circumflex over ( )} of the value g{circumflex over ( )} with the concealed value [V{circumflex over ( )}] of the matrix V{circumflex over ( )} of a value v{circumflex over ( )} as an input, rshift is an arithmetic right shift, m is the number of pieces of learning data used for one learning, and H′ is defined by a following equation,






H′=└log2 m┘  [Math. 9]


n is the number of hidden layers of the deep neural network, Activation is an activation function of the hidden layers, Activation2 is an activation function of an output layer of the deep neural network, Activation2′ is a loss function corresponding to the activation function Activation2, Activation′ is a derivative of the activation function Activation, a forward propagation unit of each of the secure computation apparatuses calculates [U1]←[W0]custom-character[X], the forward propagation unit calculates [Y1]←Activation ([U1]), the forward propagation unit calculates [Ui+1]←[Wi]custom-character[Yi] for each i greater than or equal to 1 and less than or equal to n−1, the forward propagation unit calculates [Yi+1]←Activation ([Ui+1]) for each i greater than or equal to 1 and less than or equal to n−1, the forward propagation unit calculates [Un+1]←[Wn]custom-character[Yn], the forward propagation unit calculates [Yn+1]←Activation2 ([Un+1]), a back propagation unit of each of the secure computation apparatuses calculates [Zn+1]←Activation2′ ([Yn+1], [T]), the back propagation unit calculates [Zn]←Activation′ ([Un])◯([Zn+1]custom-character[Wn]), the back propagation unit calculates [Zn−i]←Activation′ ([Un−i])◯([Zn−i+1]custom-character[Wn−i]) for each i greater than or equal to 1 and less than or equal to n−1, a gradient calculation unit of each of the secure computation apparatuses calculates [G0]←[Z1]custom-character[X], the gradient calculation unit calculates [Gi]←[Zi+1]custom-character[Yi] for each i greater than or equal to 1 and less than or equal to n−1, the gradient calculation unit calculates [Gn]←[Zn+1]custom-character[Yn], a parameter update unit of each of the secure computation apparatuses calculates [G0]←rshift ([G0], H′), the parameter update unit calculates [Gi]←rshift ([Gi], H′) for each i greater than or equal to 1 and less than or equal to n−1, the parameter update unit calculates [Gn]←rshift ([Gn], H′), and the parameter update unit learns a parameter [Wi] between an i layer and an i+1 layer using a gradient [Gi] between the i layer and the i+1 layer, in accordance with the secure gradient descent computation method according to the first aspect, for each i greater than or equal to 0 and less than or equal to n.


Effects of the Invention

According to the present invention, the calculation of the gradient descent method in the secure computing can be performed at high speed while maintaining accuracy.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a diagram illustrating a functional configuration of a secure gradient descent computation system.



FIG. 2 is a diagram illustrating a functional configuration of a secure computation apparatus.



FIG. 3 is a diagram illustrating a processing procedure of a secure gradient descent computation method.



FIG. 4 is a diagram illustrating a processing procedure of a secure gradient descent computation method.



FIG. 5 is a diagram illustrating a functional configuration of a secure deep learning system.



FIG. 6 is a diagram illustrating a functional configuration of a secure computation apparatus.



FIG. 7 is a diagram illustrating a processing procedure of a secure deep learning method.



FIG. 8 is a diagram illustrating a functional configuration of a computer.





DESCRIPTION OF EMBODIMENTS

First, a notation method and definitions of terms herein will be described.


Notation Method


In the following description, the symbols "→" and "^" used in the text should originally be written directly above the characters immediately before them, but are written immediately after the characters due to a limitation of text notation. In mathematical formulas, these symbols are written in their original positions, that is, directly above the characters. For example, "a{right arrow over ( )}" and "a{circumflex over ( )}" are expressed by the following expression in a mathematical formula:





[Math. 10]

$\vec{a},\ \hat{a}$


The symbol "_" (underscore) in an index represents a subscript. For example, x^(y_z) represents that y_z is the superscript to x, and x_(y_z) represents that y_z is the subscript to x.


A vector is written as a{right arrow over ( )}:=(a0, . . . , an−1). That a is defined by b is written as a:=b. An inner product of two vectors a{right arrow over ( )} and b{right arrow over ( )} having the same number of elements is written as a{right arrow over ( )}custom-characterb{right arrow over ( )}. A product of two matrices is written as (custom-character), and a product for each element of two matrices or vectors is written as (◯). An expression with no operator written represents scalar multiplication.


[a] represents a value a concealed by using secret sharing or the like, and is referred to as a "share".


Secure Batch Mapping


A secure batch mapping is a function that computes a lookup table, and is a technique in which the domain of definition and the range of values can be defined arbitrarily. Because the secure batch mapping processes data in units of vectors, it is effective when the same processing is performed on a plurality of inputs. Specific processing of the secure batch mapping is as follows.


The secure batch mapping, with the share column [a{right arrow over ( )}]:=([a0], . . . , [am−1]), the domain of definition (x0, . . . , xl−1), and the range of values (y0, . . . , yl−1) as inputs, outputs a share with each input value mapped, specifically, a share column ([b0], . . . , [bm−1]) such that xj≤ai≤xj+1 and bi=yj for 0≤i<m. See Reference Literature 1 for the details of the secure batch mapping.


Reference Literature 1: Koki Hamada, Dai Ikarashi, Koji Chida, “A Batch Mapping Algorithm for Secure Function Evaluation,” IEICE Transactions A, Vol. 96, No. 4, pp. 157-165, 2013.
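As a concrete illustration, the following is a minimal plaintext sketch of the mapping semantics described above; it models only the input-output behavior, not the secure protocol of Reference Literature 1, and the function and variable names are chosen for this example.

```python
from bisect import bisect_right

def batch_map(a, domain, values):
    """Plaintext model of the batch-mapping semantics: each element a_i is
    replaced by values[j], where domain[j] is the largest domain point that
    does not exceed a_i (inputs below domain[0] are clamped to the first entry)."""
    out = []
    for ai in a:
        j = bisect_right(domain, ai) - 1     # right-most j with domain[j] <= ai
        j = max(0, min(j, len(values) - 1))  # clamp to the ends of the table
        out.append(values[j])
    return out

# usage: a step-wise table of f(x) = 1/x on [1, 4)
domain = [1.0, 2.0, 3.0]
values = [1.0, 0.5, 1.0 / 3.0]
print(batch_map([1.2, 2.7, 3.9], domain, values))  # [1.0, 0.5, 0.333...]
```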


Arithmetic Right Shift


With a share column [a{right arrow over ( )}]:=([a0], . . . , [am−1]) and a public value t as inputs, [b{right arrow over ( )}]:=([b0], . . . , [bm−1]), in which each element of [a{right arrow over ( )}] is arithmetic right shifted by t bits, is output. Hereinafter, the arithmetic right shift is denoted rshift. The arithmetic right shift is a shift that pads the left side with the sign bit rather than with 0, and uses a logical right shift rlshift to realize rshift ([A×2^n], n−m)=[A×2^m], as in Equations (1) to (3). Note that the details of the logical right shift rlshift are described in Reference Literature 2.





[Math. 11]

$[A' \times 2^n] = [A \times 2^n] + a \times 2^n \quad (a \ge |A|)$   (1)

$[A' \times 2^m] = \mathrm{rlshift}([A' \times 2^n],\ n - m)$   (2)

$[A \times 2^m] = [A' \times 2^m] - a \times 2^m$   (3)


Reference Literature 2: Ibuki Mishina, Dai Ikarashi, Koki Hamada, Ryo Kikuchi, “Designs and Implementations of Efficient and Accurate Secret Logistic Regression,” In CSS, 2018.
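The following plaintext sketch illustrates the offset idea behind Equations (1) to (3), assuming a public bound a ≥ |A| and a logical right shift that fills with zeros; `logical_rshift` here is only a stand-in for the secure rlshift protocol of Reference Literature 2.

```python
def logical_rshift(x, s, width=64):
    """Plaintext stand-in for the secure logical right shift rlshift:
    interpret x as an unsigned `width`-bit value and shift zeros in."""
    return (x & ((1 << width) - 1)) >> s

def arithmetic_rshift(A_fixed, n, m, bound):
    """Shift a fixed-point value A*2^n (possibly negative) down to A*2^m
    following Equations (1)-(3): add the public offset bound*2^n so the value
    becomes non-negative, shift logically, then subtract bound*2^m."""
    shifted = logical_rshift(A_fixed + bound * (1 << n), n - m)
    return shifted - bound * (1 << m)

# usage: A = -1.25 with n = 8 fractional bits, shifted down to m = 4 bits
A_fixed = int(-1.25 * (1 << 8))                    # -320
print(arithmetic_rshift(A_fixed, 8, 4, bound=4))   # -20 == -1.25 * 2^4
```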


Optimization Technique Adam


In a simple gradient descent method, the calculated gradient g is processed with w=w−ηg (η is a learning rate) to update the parameter w. Meanwhile, in Adam, the processing of Equations (4) to (8) is applied to the gradient to update the parameter. The processing to calculate the gradient g is the same whether the simple gradient descent method is used or Adam is applied. Note that t is a variable representing the current learning iteration, and gt represents the gradient at the t-th iteration. m, v, m{circumflex over ( )}, and v{circumflex over ( )} are matrices of the same size as g, and are all initialized to 0. A superscript t, as in β1t, represents the t-th power.









[Math. 12]

$m_{t+1} = \beta_1 m_t + (1 - \beta_1)\, g_t$   (4)

$v_{t+1} = \beta_2 v_t + (1 - \beta_2)\, g_t \circ g_t$   (5)

$\hat{m}_{t+1} = \dfrac{1}{1 - \beta_1^t}\, m_{t+1}$   (6)

$\hat{v}_{t+1} = \dfrac{1}{1 - \beta_2^t}\, v_{t+1}$   (7)

$w_{t+1} = w_t - \dfrac{\eta}{\sqrt{\hat{v}_{t+1}} + \varepsilon} \circ \hat{m}_{t+1}$   (8)







Here, β1 and β2 are constants close to 1, η is the learning rate, and ε is a value for preventing Equation (8) from becoming incomputable in the case of √v{circumflex over ( )}t+1=0. In the paper proposing Adam (Reference Literature 3), β1=0.9, β2=0.999, η=0.001, and ε=10^−8.


Reference Literature 3: Diederik P Kingma and Jimmy Ba, “Adam: A Method for Stochastic Optimization,” arXiv preprint arXiv: 1412.6980, 2014.
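For reference, the following is a plaintext NumPy sketch of Equations (4) to (8) with the hyperparameter values above; the toy minimization at the end is only there to show the update converging.

```python
import numpy as np

def adam_step(w, g, m, v, t, beta1=0.9, beta2=0.999, eta=0.001, eps=1e-8):
    """One plaintext Adam update following Equations (4) to (8);
    t is the 1-indexed learning iteration, m and v carry state between calls."""
    m = beta1 * m + (1 - beta1) * g                  # Eq. (4)
    v = beta2 * v + (1 - beta2) * g * g              # Eq. (5)
    m_hat = m / (1 - beta1 ** t)                     # Eq. (6)
    v_hat = v / (1 - beta2 ** t)                     # Eq. (7)
    w = w - eta / (np.sqrt(v_hat) + eps) * m_hat     # Eq. (8)
    return w, m, v

# usage: minimize f(w) = w^2 starting from w = 3
w, m, v = np.array([3.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    w, m, v = adam_step(w, 2 * w, m, v, t)           # gradient of w^2 is 2w
print(w)  # w is now close to 0
```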


In Adam, the processing increases as compared to a simple gradient descent method, so the processing time required for a single learning iteration increases. Meanwhile, the number of learning iterations required to converge decreases significantly, so the overall processing time required for learning is shortened.


Hereinafter, an embodiment of the present invention will be described in detail. Further, the same reference numerals are given to constituent elements having the same functions in the drawings, and repeated description will be omitted.


First Embodiment

In a first embodiment, the optimization technique Adam of a gradient descent method is implemented using a secure batch mapping while keeping a gradient, a parameter, and values during calculation concealed.


In the following description, β{circumflex over ( )}1, t, β{circumflex over ( )}2, t, and g{circumflex over ( )} are defined by the following equations.











[Math. 13]

$\dfrac{1}{1 - \beta_1^t} = \hat{\beta}_{1,t}, \qquad \dfrac{1}{1 - \beta_2^t} = \hat{\beta}_{2,t}, \qquad \dfrac{\eta}{\sqrt{\hat{v}} + \varepsilon} = \hat{g}$





β{circumflex over ( )}1, t and β{circumflex over ( )}2, t are calculated in advance for each t. The calculation of g{circumflex over ( )} is achieved by using a secure batch mapping that takes v{circumflex over ( )} as an input and outputs η/(√v{circumflex over ( )}+ε). This secure batch mapping is denoted Adam (v{circumflex over ( )}). The constants β1, β2, η, and ε are plaintext. Because the calculation of g{circumflex over ( )} includes a square root and division, its processing cost in secure computing is large; however, using the secure batch mapping is efficient because the whole calculation is performed by a single mapping.
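As an illustration of how such a mapping might be tabulated, the following plaintext sketch enumerates a domain of positive fixed-point values of v{circumflex over ( )} and the corresponding outputs η/(√v{circumflex over ( )}+ε); the precision and upper bound used here are assumptions chosen only to keep the example small.

```python
import math

eta, eps = 0.001, 1e-8            # hyperparameter values of Reference Literature 3

def build_adam_table(b_in=10, max_v_hat=4.0):
    """Tabulate g_hat = eta / (sqrt(v_hat) + eps) over the positive fixed-point
    inputs with b_in fractional bits, up to max_v_hat.  b_in = 10 keeps this
    illustration small; the text later discusses 20-bit inputs."""
    step = 2.0 ** -b_in
    n = int(max_v_hat / step)
    domain = [(k + 1) * step for k in range(n)]       # v_hat is always positive
    values = [eta / (math.sqrt(v) + eps) for v in domain]
    return domain, values

domain, values = build_adam_table()
print(len(domain), values[0], values[-1])  # table size, largest and smallest outputs
```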


With reference to FIG. 1, an example of a configuration of a secure gradient descent computation system of the first embodiment will be described. The secure gradient descent computation system 100 includes N (≥2) secure computation apparatuses 11, . . . , 1N, as illustrated in FIG. 1, for example. In the present embodiment, each of the secure computation apparatuses 11, . . . , 1N is connected to a communication network 9. The communication network 9 is a circuit-switched or packet-switched communication network configured to enable connected apparatuses to communicate with each other, and, for example, the Internet, a local area network (LAN), a wide area network (WAN), or the like can be used. Note that the apparatuses need not necessarily be able to communicate online via the communication network 9. For example, information to be input to the secure computation apparatuses 11, . . . , 1N may be stored in a portable recording medium such as a magnetic tape or a USB memory, and the information may be configured to be input from the portable recording medium to the secure computation apparatuses 11, . . . , 1N offline.


With reference to FIG. 2, an example of a configuration of a secure computation apparatus 1i (i=1, . . . , N) included in the secure gradient descent computation system 100 of the first embodiment will be described. As illustrated in FIG. 2, for example, the secure computation apparatus 1i includes a parameter storage unit 10, an initialization unit 11, a gradient calculation unit 12, and a parameter update unit 13. The secure computation apparatus 1i (i=1, . . . , N) performs processing of each step described below while coordinating with other secure computation apparatuses 1i′ (i′=1, . . . , N, where i≠i′), thereby implementing the secure gradient descent computation method of the present embodiment.


The secure computation apparatus 1i is a special apparatus configured by causing, for example, a known or dedicated computer including a central processing unit (CPU), a main storage apparatus (a random access memory or RAM), and the like to read a special program. The secure computation apparatus 1i, for example, executes each processing under control of the central processing unit. Data input to the secure computation apparatus 1i and data obtained in each processing are stored in the main storage apparatus, for example, and the data stored in the main storage apparatus is read by the central processing unit as needed to be used for other processing. At least a portion of processing units of the secure computation apparatus 1i may be constituted with hardware such as an integrated circuit. Each storage unit included in the secure computation apparatus 1i may be configured by, for example, the main storage apparatus such as the random access memory (RAM), an auxiliary storage apparatus configured with a hard disk, an optical disc, or a semiconductor memory element such as a flash memory, or middleware such as a relational database or a key-value store.


With reference to FIG. 3, a processing procedure of the secure gradient descent computation method performed by the secure gradient descent computation system 100 of the first embodiment will be described.


The parameter storage unit 10 stores predetermined hyperparameters β1, β2, η, and ε. These hyperparameters may be set to the values described in Reference Literature 3, for example. The parameter storage unit 10 stores pre-calculated hyperparameters β{circumflex over ( )}1, t, and β{circumflex over ( )}2, t. Furthermore, the parameter storage unit 10 stores a secure batch mapping Adam set with a domain of definition and a range of values in advance.


At step S11, the initialization unit 11 of each secure computation apparatus 1i initializes the concealed values [M], [V] of the matrices M, V to 0. The matrices M, V are matrices of the same size as the gradient G. The initialization unit 11 outputs the concealed values [M], [V] of the matrices M, V to the parameter update unit 13.


At step S12, the gradient calculation unit 12 of each secure computation apparatus 1i calculates the concealed value [G] of the gradient G. The gradient calculation unit 12 may calculate the gradient G in the manner normally used for the task to which the gradient descent method is applied (e.g., logistic regression, learning of neural networks, and the like). The gradient calculation unit 12 outputs the concealed value [G] of the gradient G to the parameter update unit 13.


At step S13-1, the parameter update unit 13 of each secure computation apparatus 1i calculates [M]←β1 [M]+(1−β1) [G] by using the hyperparameter β1 stored in the parameter storage unit 10, and updates the concealed value [M] of the matrix M.


At step S13-2, the parameter update unit 13 of each secure computation apparatus 1i calculates [V]←β2 [V]+(1−β2) [G]◯[G] by using the hyperparameter β2 stored in the parameter storage unit 10, and updates the concealed value [V] of the matrix V.


At step S13-3, the parameter update unit 13 of each secure computation apparatus 1i calculates [M{circumflex over ( )}]←β{circumflex over ( )}1, t [M] by using the hyperparameter β{circumflex over ( )}1, t stored in the parameter storage unit 10, and generates the concealed value [M{circumflex over ( )}] of the matrix M{circumflex over ( )}. The matrix M{circumflex over ( )} is a matrix having the same number of elements as the matrix M (i.e., having the same number of elements as the gradient G).


At step S13-4, the parameter update unit 13 of each secure computation apparatus 1i calculates [V{circumflex over ( )}]←β{circumflex over ( )}2, t [V] by using the hyperparameter β{circumflex over ( )}2, t stored in the parameter storage unit 10, and generates the concealed value [V{circumflex over ( )}] of the matrix V{circumflex over ( )}. The matrix V{circumflex over ( )} is a matrix having the same number of elements as the matrix V (i.e., having the same number of elements as the gradient G).


At step S13-5, the parameter update unit 13 of each secure computation apparatus 1i calculates [G{circumflex over ( )}]←Adam ([V{circumflex over ( )}]) by using the secure batch mapping Adam, and generates the concealed value [G{circumflex over ( )}] of the matrix G{circumflex over ( )}. The matrix G{circumflex over ( )} is a matrix having the same number of elements as the matrix V{circumflex over ( )} (i.e., having the same number of elements as the gradient G).


At step S13-6, the parameter update unit 13 of each secure computation apparatus 1i calculates [G{circumflex over ( )}]←[G{circumflex over ( )}]◯[M{circumflex over ( )}] and updates the concealed value [G{circumflex over ( )}] of the gradient G{circumflex over ( )}.


At step S13-7, the parameter update unit 13 of each secure computation apparatus 1i calculates [W]←[W]−[G{circumflex over ( )}] and updates the concealed value [W] of the parameter W.
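For reference, the following is a plaintext sketch of steps S13-1 to S13-7, with the secure batch mapping Adam replaced by direct evaluation of η/(√V{circumflex over ( )}+ε); in the secure protocol each of these lines operates on shares instead of plaintext arrays.

```python
import numpy as np

def parameter_update(W, G, M, V, t, beta1=0.9, beta2=0.999, eta=0.001, eps=1e-8):
    """Plaintext trace of Algorithm 1 below.  The secure batch mapping Adam(.)
    is replaced here by direct evaluation of eta / (sqrt(V_hat) + eps)."""
    beta1_hat_t = 1.0 / (1.0 - beta1 ** t)    # precomputed per t in the protocol
    beta2_hat_t = 1.0 / (1.0 - beta2 ** t)
    M = beta1 * M + (1 - beta1) * G           # step S13-1
    V = beta2 * V + (1 - beta2) * G * G       # step S13-2
    M_hat = beta1_hat_t * M                   # step S13-3
    V_hat = beta2_hat_t * V                   # step S13-4
    G_hat = eta / (np.sqrt(V_hat) + eps)      # step S13-5 (the Adam mapping)
    G_hat = G_hat * M_hat                     # step S13-6
    W = W - G_hat                             # step S13-7
    return W, M, V
```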


The algorithm for the parameter update performed from step S13-1 to step S13-7 by the parameter update unit 13 of the present embodiment will be indicated in Algorithm 1.












Algorithm 1: Secure Computation Adam Algorithm Using Secure Batch Mapping

Input 1: Gradient [G]
Input 2: Parameter [W]
Input 3: [M], [V] initialized at 0
Input 4: Hyperparameters β1, β2, β{circumflex over ( )}1, t, β{circumflex over ( )}2, t
Input 5: Number of learning times t
Output 1: Updated parameter [W]
Output 2: Updated [M], [V]

1: [M] ← β1 [M] + (1 − β1) [G]
2: [V] ← β2 [V] + (1 − β2) [G] ◯ [G]
3: [M{circumflex over ( )}] ← β{circumflex over ( )}1, t [M]
4: [V{circumflex over ( )}] ← β{circumflex over ( )}2, t [V]
5: [G{circumflex over ( )}] ← Adam ([V{circumflex over ( )}])
6: [G{circumflex over ( )}] ← [G{circumflex over ( )}] ◯ [M{circumflex over ( )}]
7: [W] ← [W] − [G{circumflex over ( )}]










Modification Example 1 of First Embodiment

In Modification Example 1, the method of creating the table of the domain of definition and the range of values used to configure the secure batch mapping Adam of the first embodiment is devised.


V{circumflex over ( )} input to the secure batch mapping Adam is always positive. Adam (V{circumflex over ( )}) is a monotonically decreasing function with a very large gradient where V{circumflex over ( )} is close to 0, and it gently approaches zero as V{circumflex over ( )} increases. Because secure computing uses fixed-point numbers for reasons of processing cost, very small decimal fractions of the kind handled by floating-point numbers do not occur. In other words, because a very small V{circumflex over ( )} is never input, the range of values of the output of Adam (V{circumflex over ( )}) need not be set to a large value. For example, the maximum value of Adam (V{circumflex over ( )}) is around one in a case where each of the hyperparameters is set as in Reference Literature 3 and the accuracy below the decimal point of V{circumflex over ( )} is set to 20 bits. Because the minimum value of Adam (V{circumflex over ( )}) is determined by the required accuracy of Adam (V{circumflex over ( )}), the size of the mapping table can be determined by fixing the accuracy of the input V{circumflex over ( )} and of the output Adam (V{circumflex over ( )}).
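A quick plaintext check of this observation, assuming the hyperparameters of Reference Literature 3 and 20 bits of accuracy below the decimal point for V{circumflex over ( )}:

```python
import math

eta, eps = 0.001, 1e-8                  # hyperparameter values of Reference Literature 3
b_in = 20                               # bits below the decimal point of v_hat
smallest_positive_v_hat = 2.0 ** -b_in  # the smallest input that can actually occur
print(eta / (math.sqrt(smallest_positive_v_hat) + eps))  # about 1.02, i.e. around one
```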


Modification Example 2 of First Embodiment

In Modification Example 2, the accuracy of each variable is further set as illustrated in Table 1 in the first embodiment.












TABLE 1

VARIABLE                          ACCURACY (BIT)
w                                 bw
β1, β2                            bβ
β{circumflex over ( )}1, t        bβ{circumflex over ( )}1
β{circumflex over ( )}2, t        bβ{circumflex over ( )}2
g{circumflex over ( )}             bg{circumflex over ( )}










As illustrated in FIG. 4, the parameter update unit 13 of the present modification example performs step S13-11 after step S13-1, performs step S13-12 after step S13-2, and performs step S13-13 after step S13-6.


At step S13-11, the parameter update unit 13 of each secure computation apparatus 1i arithmetic right shifts the concealed value [M] of the matrix M by bβ bits. In other words, [M]←rshift ([M], bβ) is calculated and the concealed value [M] of the matrix M is updated.


At step S13-12, the parameter update unit 13 of each secure computation apparatus 1i arithmetic right shifts the concealed value [V] of the matrix V by bβ bits. In other words, [V]←rshift ([V], bβ) is calculated and the concealed value [V] of the matrix V is updated.


At step S13-13, the parameter update unit 13 of each secure computation apparatus 1i arithmetic right shifts the concealed value [G{circumflex over ( )}] of the matrix G{circumflex over ( )} by bg{circumflex over ( )}+bβ{circumflex over ( )}_1 bits. In other words, [G{circumflex over ( )}]←rshift ([G{circumflex over ( )}], bg{circumflex over ( )}+bβ{circumflex over ( )}_1) is calculated and the concealed value [G{circumflex over ( )}] of the matrix G{circumflex over ( )} is updated.


The algorithm for the parameter update performed from step S13-1 to S13-7 and from S13-11 to S13-13 by the parameter update unit 13 of the present modification example will be indicated in Algorithm 2.












Algorithm 2: Secure Computation Adam Algorithm Using Secure Batch Mapping

Input 1: Gradient [G]
Input 2: Parameter [W]
Input 3: [M], [V] initialized at 0
Input 4: Hyperparameters β1, β2, β{circumflex over ( )}1, t, β{circumflex over ( )}2, t
Input 5: Number of learning times t
Output 1: Updated parameter [W]
Output 2: Updated [M], [V]

1: [M] ← β1 [M] + (1 − β1) [G]   (accuracy: bw + bβ)
2: [M] ← rshift ([M], bβ)   (accuracy: bw)
3: [V] ← β2 [V] + (1 − β2) [G] ◯ [G]   (accuracy: 2bw + bβ)
4: [V] ← rshift ([V], bβ)   (accuracy: 2bw)
5: [M{circumflex over ( )}] ← β{circumflex over ( )}1, t [M]   (accuracy: bw + bβ{circumflex over ( )}1)
6: [V{circumflex over ( )}] ← β{circumflex over ( )}2, t [V]   (accuracy: 2bw + bβ{circumflex over ( )}2)
7: [G{circumflex over ( )}] ← Adam ([V{circumflex over ( )}])   (accuracy: bg{circumflex over ( )})
8: [G{circumflex over ( )}] ← [G{circumflex over ( )}] ◯ [M{circumflex over ( )}]   (accuracy: bg{circumflex over ( )} + bw + bβ{circumflex over ( )}1)
9: [G{circumflex over ( )}] ← rshift ([G{circumflex over ( )}], bg{circumflex over ( )} + bβ{circumflex over ( )}1)   (accuracy: bw)
10: [W] ← [W] − [G{circumflex over ( )}]   (accuracy: bw)









In the present modification example, the accuracy setting is devised as follows. Here, "accuracy" indicates the number of bits in the decimal point portion; for example, in a case where the variable w is set to the accuracy bw bits, the value actually held is w×2^b_w. The range of values is different for each variable, and thus the accuracy may be determined depending on the range of values. For example, w is likely to be a small value, and parameters are very important values in machine learning, so the accuracy of the decimal point portion may be increased. Meanwhile, because the hyperparameters β1, β2, and the like are set to approximately 0.9 or 0.999 in Reference Literature 3, there is little need to increase the accuracy of their decimal point portion. By devising the accuracies in this manner, the overall number of bits can be suppressed as much as possible, and secure computing, whose processing cost is large, can be performed efficiently.


In the present modification example, the right shifts are devised as follows. For secure computing, processing with fixed-point numbers rather than floating-point numbers results in high speed in terms of processing costs. However, fixed-point numbers change the decimal point position at every multiplication, so the decimal point position needs to be adjusted by a right shift. Because the right shift is costly processing in secure computing, the number of right shifts should be reduced as much as possible. Because the secure batch mapping has the property of being able to arbitrarily set the range of values and the domain of definition, it can also adjust the number of digits in the same way as a right shift. From these characteristics of secure computing and the secure batch mapping, processing as in the present modification example can be more efficient.
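The following plaintext sketch illustrates the fixed-point bookkeeping that motivates these shifts: multiplying two values adds their fractional bit counts, and a right shift restores the target accuracy. The bit widths are illustrative assumptions.

```python
def to_fixed(x, frac_bits):
    """Encode a real number as an integer with frac_bits fractional bits."""
    return round(x * (1 << frac_bits))

def fx_mul(a, a_bits, b, b_bits, keep_bits):
    """Multiply two fixed-point values; the raw product carries a_bits + b_bits
    fractional bits, so a right shift brings it back to keep_bits, as in the
    shifts of steps S13-11 to S13-13."""
    raw = a * b                                    # accuracy: a_bits + b_bits
    return raw >> (a_bits + b_bits - keep_bits)    # accuracy: keep_bits

b_w, b_beta = 16, 8                                # illustrative accuracies
w = to_fixed(0.75, b_w)
beta1 = to_fixed(0.9, b_beta)
prod = fx_mul(beta1, b_beta, w, b_w, b_w)          # beta1 * w, kept at accuracy b_w
print(prod / (1 << b_w))                           # roughly 0.9 * 0.75 = 0.675
```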


Second Embodiment

In a second embodiment, the deep learning is performed by the optimization technique Adam implemented using the secure batch mapping. In this example, learning data, a learning label, and parameters are kept concealed to perform the deep learning. Anything may be used for an activation function used in a hidden layer and an output layer, and a shape of a model of a neural network is arbitrary. Here, it is assumed to learn a deep neural network with the number of hidden layers being n layers. In other words, in a case where L is the layer number, the input layer is L=0, and the output layer is L=n+1. According to the second embodiment, a favorable learning result can be obtained even with a small number of learning times as compared to conventional techniques using a simple gradient descent method.


With reference to FIG. 5, an example of a configuration of a secure deep learning system of the second embodiment will be described. The secure deep learning system 200 includes N (≥2) secure computation apparatuses 21, . . . , 2N, as illustrated in FIG. 5, for example. In the present embodiment, each of the secure computation apparatuses 21, . . . , 2N is connected to a communication network 9. The communication network 9 is a circuit-switched or packet-switched communication network configured to enable connected apparatuses to communicate with each other, and, for example, the Internet, a local area network (LAN), a wide area network (WAN), or the like can be used. Note that the apparatuses need not necessarily be able to communicate online via the communication network 9. For example, information to be input to the secure computation apparatuses 21, . . . , 2N may be stored in a portable recording medium such as a magnetic tape or a USB memory, and the information may be configured to be input from the portable recording medium to the secure computation apparatuses 21, . . . , 2N offline.


With reference to FIG. 6, an example of a configuration of a secure computation apparatus 2i (i=1, . . . , N) included in the secure deep learning system 200 of the second embodiment will be described. As illustrated in FIG. 6, for example, the secure computation apparatus 2i includes a parameter storage unit 10, an initialization unit 11, a gradient calculation unit 12, and a parameter update unit 13, as in the first embodiment, and further includes a learning data storage unit 20, a forward propagation calculation unit 21, and a back propagation calculation unit 22. The secure computation apparatus 2i (i=1, . . . , N) performs processing of each step described below while coordinating with other secure computation apparatuses 2i′ (i′=1, . . . , N, where i≠i′), thereby implementing the secure deep learning method of the present embodiment.


With reference to FIG. 7, a processing procedure of the secure deep learning method performed by the secure deep learning system 200 of the second embodiment will be described.


The learning data storage unit 20 stores a concealed value [X] of a feature X of the learning data and a concealed value [T] of a true label T of the learning data.


At step S11, the initialization unit 11 of each secure computation apparatus 2i initializes a concealed value [W]:=([W0], . . . , [Wn]) of a parameter W. The method of initializing the parameter is selected depending on an activation function, and the like. For example, it is known that in a case where an ReLU function is used for the activation function of an intermediate layer, a favorable learning result can be obtained by using the initialization method described in Reference Literature 4.


Reference Literature 4: Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015.


At step S21, the forward propagation calculation unit 21 of each secure computation apparatus 2i calculates forward propagation by using the concealed value [X] of the feature of the learning data, and determines a concealed value [Y] of an output of each layer ([Y]:=[Y1], . . . , [Yn+1]). Specifically, the forward propagation calculation unit 21 calculates [U1]←[W0]custom-character[X], [Y1]←Activation ([U1]), calculates [Ui+1]←[Wi]custom-character[Yi], [Yi+1]←Activation ([Ui+1]) for each integer i greater than or equal to 1 and less than or equal to n−1, and calculates [Un+1]←[Wn]custom-character[Yn], [Yn+1]←Activation2 ([Un+1]). Here, Activation represents an activation function of any hidden layer, and Activation2 represents an activation function of any output layer.


At step S22, the back propagation calculation unit 22 of each secure computation apparatus 2i calculates the back propagation using the concealed value [T] of the true label of the learning data, and determines the concealed value [Z]:=([Z1], . . . , [Zn+1]) of the error of each layer. Specifically, the back propagation calculation unit 22 calculates [Zn+1]←Activation2′ ([Yn+1], [T]) and [Zn]←Activation′ ([Un])◯([Zn+1]custom-character[Wn]), and calculates [Zn−i]←Activation′ ([Un−i])◯([Zn−i+1]custom-character[Wn−i]) for each integer i greater than or equal to 1 and less than or equal to n−1. Here, Activation′ represents the derivative of the activation function Activation, and Activation2′ represents a loss function corresponding to the activation function Activation2.


At step S12, the gradient calculation unit 12 of each secure computation apparatus 2i calculates the concealed value [G]:=([G0], . . . , [Gn]) of the gradient of each layer by using the concealed value [X] of the feature of the learning data, the concealed value [Z] for the error of each layer, and the concealed value [Y] of the output of each layer. Specifically, the gradient calculation unit 12 calculates [G0]←[Z1]custom-character[X], calculates [Gi]←[Zi+1]custom-character[Yi] for each integer i greater than or equal to 1 and less than or equal to n−1, and calculates [Gn]←[Zn+1]custom-character[Yn].


At step S13, the parameter update unit 13 of each secure computation apparatus 2i arithmetic right shifts the concealed value [G] of the gradient of each layer by the shift amount H′, and then updates the concealed value [W]:=([W0], . . . , [Wn]) of the parameter of each layer according to the secure gradient descent computation method of the first embodiment. Specifically, the parameter update unit 13 first calculates [G0]←rshift ([G0], H′), calculates [Gi]←rshift ([Gi], H′) for each integer i greater than or equal to 1 and less than or equal to n−1, and calculates [Gn]←rshift ([Gn], H′). Next, for each integer i greater than or equal to 0 and less than or equal to n, the parameter update unit 13 calculates [Mi]←β1 [Mi]+(1−β1) [Gi], [Vi]←β2 [Vi]+(1−β2) [Gi]◯[Gi], [M{circumflex over ( )}i]←β{circumflex over ( )}1, t [Mi], [V{circumflex over ( )}i]←β{circumflex over ( )}2, t [Vi], [G{circumflex over ( )}i]←Adam ([V{circumflex over ( )}i]), [G{circumflex over ( )}i]←[G{circumflex over ( )}i]◯[M{circumflex over ( )}i], and [Wi]←[Wi]−[G{circumflex over ( )}i].
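For reference, the following is a plaintext NumPy sketch of one iteration of steps S21, S22, S12, and S13 for n = 2 hidden layers, using ReLU for the hidden layers and softmax with cross-entropy for the output layer (the concrete choices discussed later in this embodiment). The data layout, the transposes in the matrix products, and the random initialization are assumptions, since the document leaves them abstract; the secure version operates on shares and uses the right shifts described above.

```python
import numpy as np

def relu(u):       return np.maximum(0.0, u)
def relu_grad(u):  return (u > 0).astype(u.dtype)

def softmax(u):    # column-wise over class scores
    e = np.exp(u - u.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def train_step(Ws, Ms, Vs, X, T, t, beta1=0.9, beta2=0.999, eta=0.001, eps=1e-8):
    """One plaintext training iteration (steps S21, S22, S12, S13) for n = 2
    hidden layers.  X is (features x batch), T is a one-hot (classes x batch) label."""
    W0, W1, W2 = Ws
    # (1) forward propagation
    U1 = W0 @ X;  Y1 = relu(U1)
    U2 = W1 @ Y1; Y2 = relu(U2)
    U3 = W2 @ Y2; Y3 = softmax(U3)
    # (2) back propagation (softmax output + cross-entropy loss)
    Z3 = Y3 - T
    Z2 = relu_grad(U2) * (W2.T @ Z3)
    Z1 = relu_grad(U1) * (W1.T @ Z2)
    # (3) gradients, divided by the batch size (the rshift by H' bits in the protocol)
    m = X.shape[1]
    Gs = [Z1 @ X.T / m, Z2 @ Y1.T / m, Z3 @ Y2.T / m]
    # (4) per-layer Adam update (plaintext form of Algorithm 1)
    for i, G in enumerate(Gs):
        Ms[i] = beta1 * Ms[i] + (1 - beta1) * G
        Vs[i] = beta2 * Vs[i] + (1 - beta2) * G * G
        M_hat = Ms[i] / (1 - beta1 ** t)
        V_hat = Vs[i] / (1 - beta2 ** t)
        Ws[i] = Ws[i] - eta / (np.sqrt(V_hat) + eps) * M_hat
    return Ws, Ms, Vs

# usage: 4 features, hidden sizes 8 and 8, 3 classes, a batch of 16 random samples
rng = np.random.default_rng(0)
Ws = [rng.normal(0, 0.5, (8, 4)), rng.normal(0, 0.5, (8, 8)), rng.normal(0, 0.5, (3, 8))]
Ms = [np.zeros_like(W) for W in Ws]
Vs = [np.zeros_like(W) for W in Ws]
X = rng.normal(size=(4, 16))
T = np.eye(3)[:, rng.integers(0, 3, 16)]
for t in range(1, 101):
    Ws, Ms, Vs = train_step(Ws, Ms, Vs, X, T, t)
```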


The algorithm of the deep learning by Adam using the secure batch mapping performed by the secure deep learning system 200 of the present embodiment will be illustrated in Algorithm 3.












Algorithm 3: Deep Learning Algorithm by Adam Using Secure Batch Mapping

Input 1: Feature [X] of learning data
Input 2: True label [T] of learning data
Input 3: Parameter [Wi] between the i layer and the i+1 layer
Output: Updated parameter [Wi]

1: Initialize all [W]
2: (1) Calculation of forward propagation
3: [U1] ← [W0] ▪ [X]
4: [Y1] ← Activation ([U1])
5: for i = 1 to n − 1 do
6:   [Ui+1] ← [Wi] ▪ [Yi]
7:   [Yi+1] ← Activation ([Ui+1])
8: end for
9: [Un+1] ← [Wn] ▪ [Yn]
10: [Yn+1] ← Activation2 ([Un+1])
11: (2) Calculation of back propagation
12: [Zn+1] ← Activation2′ ([Yn+1], [T])
13: [Zn] ← Activation′ ([Un]) ◯ ([Zn+1] ▪ [Wn])
14: for i = 1 to n − 1 do
15:   [Zn−i] ← Activation′ ([Un−i]) ◯ ([Zn−i+1] ▪ [Wn−i])
16: end for
17: (3) Calculation of gradient
18: [G0] ← [Z1] ▪ [X]
19: for i = 1 to n − 1 do
20:   [Gi] ← [Zi+1] ▪ [Yi]
21: end for
22: [Gn] ← [Zn+1] ▪ [Yn]
23: (4) Update of parameter
24: [G0] ← rshift ([G0], H′)
25: for i = 1 to n − 1 do
26:   [Gi] ← rshift ([Gi], H′)
27: end for
28: [Gn] ← rshift ([Gn], H′)
29: for i = 0 to n do
30:   [Mi] ← β1 [Mi] + (1 − β1) [Gi]
31:   [Vi] ← β2 [Vi] + (1 − β2) [Gi] ◯ [Gi]
32:   [M{circumflex over ( )}i] ← β{circumflex over ( )}1, t [Mi]
33:   [V{circumflex over ( )}i] ← β{circumflex over ( )}2, t [Vi]
34:   [G{circumflex over ( )}i] ← Adam ([V{circumflex over ( )}i])
35:   [G{circumflex over ( )}i] ← [G{circumflex over ( )}i] ◯ [M{circumflex over ( )}i]
36:   [Wi] ← [Wi] − [G{circumflex over ( )}i]
37: end for










In the actual deep learning, processing other than the initialization of the parameters of the procedure 1 of Algorithm 3 is performed for the preset number of learning times or until convergence, such as until the amount of change of the parameter becomes sufficiently small.


(1) In the forward propagation calculation, the input layer, the hidden layer, and the output layer are calculated in this order, and (2) in the back propagation calculation, the output layer, the hidden layer, and the input layer are calculated in this order. However, because (3) the gradient calculation and (4) the parameter update can be processed in parallel for each of the layers, the efficiency of the processing can be increased by processing together.


In the present embodiment, the activation functions of the output layer and the hidden layers may be set as follows. The activation function used in the output layer is selected in accordance with the analysis to be performed: an identity function f(x)=x in the case of numerical prediction (regression analysis), a sigmoid function 1/(1+exp(−x)) in the case of binary classification such as diagnosis of disease or spam determination, and a softmax function softmax(u_i)=exp(u_i)/Σ_{j=1}^{k} exp(u_j) in the case of classification problems with three or more classes such as image classification. A non-linear function is chosen for the activation function used in the hidden layers, and the ReLU function ReLU(u)=max(0, u) is frequently used in recent years because it is known to provide favorable learning results even in deep networks.
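For reference, plaintext definitions of the activation functions mentioned above (standard formulations, not part of the document's protocol itself):

```python
import numpy as np

def identity(x):        # output layer for numerical prediction (regression)
    return x

def sigmoid(x):         # output layer for binary classification
    return 1.0 / (1.0 + np.exp(-x))

def softmax(u):         # output layer for classification with three or more classes
    e = np.exp(u - np.max(u))          # subtract the max for numerical stability
    return e / e.sum()

def relu(u):            # frequently used activation for the hidden layers
    return np.maximum(0.0, u)
```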


In the present embodiment, the batch size may be set as follows. In a case of calculating the gradient, processing the division by the batch size m with rshift is efficient. As such, the batch size m may be set to a value of a power of 2, and the amount of shift H′ at this time is determined by Equation (9). The batch size is the number of pieces of learning data used in a single learning.





[Math. 14]






H′=└log2m┘  (9)
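A small plaintext check of Equation (9), assuming the summed gradient is held as an integer fixed-point value:

```python
import math

m = 64                                   # batch size chosen as a power of two
H_prime = math.floor(math.log2(m))       # Equation (9): H' = 6

total = 1280                             # e.g. an integer-encoded summed gradient
print(total >> H_prime, total // m)      # 20 20: dividing by m is a right shift by H' bits
```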


Modification Example 1 of Second Embodiment

In the deep learning of the second embodiment, the accuracy of each value used for learning is set as illustrated in Table 2. w is a parameter between each pair of layers, x is the learning data, and t is the true data (teacher data) corresponding to each piece of learning data. The output of the activation function of the output layer is processed so as to have the same accuracy as the true data. g{circumflex over ( )} is the value obtained by the calculation of the secure batch mapping Adam.












TABLE 2

VARIABLE                          ACCURACY (BIT)
w                                 bw
x                                 bx
t                                 by
β1, β2                            bβ
β{circumflex over ( )}1, t        bβ{circumflex over ( )}1
β{circumflex over ( )}2, t        bβ{circumflex over ( )}2
g{circumflex over ( )}             bg{circumflex over ( )}










The forward propagation calculation unit 21 of the present modification example calculates the concealed value [Yi+1] of the output of the i+1 layer for each integer i greater than or equal to 1 and less than or equal to n−1, and then arithmetic right shifts [Yi+1] by bw bits. In other words, [Yi+1]←rshift ([Yi+1], bw) is calculated.


The back propagation calculation unit 22 of the present modification example calculates the concealed value [Zn] of the error in the n layer, and then arithmetic right shifts [Zn] by by bits. In other words, [Zn]←rshift ([Zn], by) is calculated. The back propagation calculation unit 22 calculates the concealed value [Zn−i] of the error of the n−i layer for each integer i greater than or equal to 1 and less than or equal to n−1, and then arithmetic right shifts [Zn−i] by bw bits. In other words, [Zn−i]←rshift ([Zn−i], bw) is calculated.


In the parameter update unit 13 of the present modification example, the concealed value [G0] of the gradient between the input layer and first layer of the hidden layers is arithmetic right shifted by the shift amount bx+H′, the concealed value [G1], . . . , [Gn−1] of the gradient between the hidden layers from the first layer to the n layer is arithmetic right shifted by the shift amount bw+bx+H′, and the concealed value [Gn] of the gradient between the n layer of the hidden layers and the output layer is arithmetic right shifted by the shift amount bx+by+H′. The concealed value [W]:=([W0], . . . , [Wn]) of the parameter of each layer is updated in accordance with the secure gradient descent computation method of Modification Example 2 of the first embodiment.


The algorithm of the deep learning by Adam using the secure batch mapping performed by the secure deep learning system 200 of the present modification example will be illustrated in Algorithm 4.












Algorithm 4: Deep Learning Algorithm by Adam Using Secure Batch Mapping

Input 1: Feature [X] of learning data
Input 2: True label [T] of learning data
Input 3: Parameter [Wi] between the i layer and the i+1 layer
Output: Updated parameter [Wi]

1: Initialize all [W]   (accuracy: bw)
2: (1) Calculation of forward propagation
3: [U1] ← [W0] ▪ [X]   (accuracy: bw + bx)
4: [Y1] ← ReLU ([U1])   (accuracy: bw + bx)
5: for i = 1 to n − 1 do
6:   [Ui+1] ← [Wi] ▪ [Yi]   (accuracy: 2bw + bx)
7:   [Yi+1] ← ReLU ([Ui+1])   (accuracy: 2bw + bx)
8:   [Yi+1] ← rshift ([Yi+1], bw)   (accuracy: bw + bx)
9: end for
10: [Un+1] ← [Wn] ▪ [Yn]   (accuracy: 2bw + bx)
11: [Yn+1] ← softmax ([Un+1])   (accuracy: by)
12: (2) Calculation of back propagation
13: [Zn+1] ← [Yn+1] − [T]   (accuracy: by)
14: [Zn] ← ReLU′ ([Un]) ◯ ([Zn+1] ▪ [Wn])   (accuracy: bw + by)
15: [Zn] ← rshift ([Zn], by)   (accuracy: bw)
16: for i = 1 to n − 1 do
17:   [Zn−i] ← ReLU′ ([Un−i]) ◯ ([Zn−i+1] ▪ [Wn−i])   (accuracy: 2bw)
18:   [Zn−i] ← rshift ([Zn−i], bw)   (accuracy: bw)
19: end for
20: (3) Calculation of gradient
21: [G0] ← [Z1] ▪ [X]   (accuracy: bw + bx)
22: for i = 1 to n − 1 do
23:   [Gi] ← [Zi+1] ▪ [Yi]   (accuracy: 2bw + bx)
24: end for
25: [Gn] ← [Zn+1] ▪ [Yn]   (accuracy: bw + bx + by)
26: (4) Update of parameter
27: [G0] ← rshift ([G0], bx + H′)   (accuracy: bw)
28: for i = 1 to n − 1 do
29:   [Gi] ← rshift ([Gi], bw + bx + H′)   (accuracy: bw)
30: end for
31: [Gn] ← rshift ([Gn], bx + by + H′)   (accuracy: bw)
32: for i = 0 to n do
33:   [Mi] ← β1 [Mi] + (1 − β1) [Gi]   (accuracy: bw + bβ)
34:   [Mi] ← rshift ([Mi], bβ)   (accuracy: bw)
35:   [Vi] ← β2 [Vi] + (1 − β2) [Gi] ◯ [Gi]   (accuracy: 2bw + bβ)
36:   [Vi] ← rshift ([Vi], bβ)   (accuracy: 2bw)
37:   [M{circumflex over ( )}i] ← β{circumflex over ( )}1, t [Mi]   (accuracy: bw + bβ{circumflex over ( )}1)
38:   [V{circumflex over ( )}i] ← β{circumflex over ( )}2, t [Vi]   (accuracy: 2bw + bβ{circumflex over ( )}2)
39:   [G{circumflex over ( )}i] ← Adam ([V{circumflex over ( )}i])   (accuracy: bg{circumflex over ( )})
40:   [G{circumflex over ( )}i] ← [G{circumflex over ( )}i] ◯ [M{circumflex over ( )}i]   (accuracy: bg{circumflex over ( )} + bw + bβ{circumflex over ( )}1)
41:   [G{circumflex over ( )}i] ← rshift ([G{circumflex over ( )}i], bg{circumflex over ( )} + bβ{circumflex over ( )}1)   (accuracy: bw)
42:   [Wi] ← [Wi] − [G{circumflex over ( )}i]   (accuracy: bw)
43: end for









Similar to the second embodiment, the deep learning can be performed by repeating the process other than the parameter initialization of the procedure 1 in Algorithm 4 until convergence or for a set number of learning times. The accuracy configuration and the positions to perform the right shift are devised in a similar manner as Modification Example 2 of the first embodiment.


(1) In the calculation of forward propagation, in a case where the accuracy bx of the feature X is not too large (for example, eight bits are sufficient in the case of pixel values of image data), the right shift is omitted because bw+bx has room in the number of bits. (4) In the calculation of the parameter update, the division by the learning rate and the batch size is approximated by the arithmetic right shift by the H′ bits, and the calculation is performed simultaneously with the arithmetic right shift for accuracy adjustment, thereby improving the efficiency.


Points of the Invention

In the present invention, the computations that secure computing is not good at, such as the square root and division included in the optimization technique Adam of the gradient descent method, are treated as a single function, so that the processing of Adam is performed efficiently with a single secure batch mapping. This allows learning with fewer iterations than conventional techniques that perform machine learning in secure computing, and can reduce the overall processing time. Regardless of the type of machine learning model, this optimization technique can be applied to any model that learns using a gradient descent method. For example, it can be used in various kinds of machine learning, such as neural networks (deep learning), logistic regression, and linear regression.


As such, according to the present invention, the optimization technique Adam of the gradient descent method is implemented in the secure computing, allowing the learning of a machine learning model with high prediction performance with a smaller number of learning times in the secure computing.


Although the embodiments of the present invention have been described above, a specific configuration is not limited to the embodiments, and appropriate changes in the design are, of course, included in the present invention within the scope of the present invention without departing from the gist of the present invention. The various kinds of processing described in the embodiments are not only executed in the described order in a time-series manner but may also be executed in parallel or separately as necessary or in accordance with a processing capability of the apparatus that performs the processing.


Program and Recording Medium


In a case in which various processing functions in each apparatus described in the foregoing embodiment are implemented by a computer, processing details of the functions that each apparatus should have are described by a program. By causing this program to be read into a storage unit 1020 of the computer illustrated in FIG. 8 and causing a control unit 1010, an input unit 1030, an output unit 1040, and the like to operate, various processing functions of each of the apparatuses described above are implemented on the computer.


The program in which the processing details are described can be recorded on a computer-readable recording medium. The computer-readable recording medium, for example, may be any type of medium such as a magnetic recording apparatus, an optical disc, a magneto-optical recording medium, or a semiconductor memory.


In addition, the program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or a CD-ROM with the program recorded on it. Further, the program may be stored in a storage apparatus of a server computer and transmitted from the server computer to another computer via a network so that the program is distributed.


For example, a computer executing the program first temporarily stores the program recorded on the portable recording medium or the program transmitted from the server computer in its own storage apparatus. When executing the processing, the computer reads the program stored in its own storage apparatus and executes the processing in accordance with the read program. Further, as another execution mode of this program, the computer may directly read the program from the portable recording medium and execute processing in accordance with the program, or, further, may sequentially execute the processing in accordance with the received program each time the program is transferred from the server computer to the computer. In addition, another configuration to execute the processing through a so-called application service provider (ASP) service in which processing functions are implemented just by issuing an instruction to execute the program and obtaining results without transmitting the program from the server computer to the computer is possible. Further, the program in this mode is assumed to include information which is provided for processing of a computer and is equivalent to a program (data or the like that has characteristics of regulating processing of the computer rather than being a direct instruction to the computer).


In addition, although the apparatus is configured to execute a predetermined program on a computer in this mode, at least a part of the processing details may be implemented by hardware.

Claims
  • 1. A secure gradient descent computation method performed by a secure gradient descent computation system including a plurality of secure computation apparatuses, the secure gradient descent computation method calculating a gradient descent method while at least a gradient G and a parameter W are kept concealed, the secure gradient descent method comprising: calculating, by a parameter update circuitry of each of the secure computation apparatuses, [M]←β1 [M]+(1−β1) [G];calculating, by the parameter update circuitry, [V]←β2 [V]+(1−β2) [G]◯[G];calculating, by the parameter update circuitry, [M{circumflex over ( )}]←β{circumflex over ( )}1, t [M];calculating, by the parameter update circuitry, [V{circumflex over ( )}]←β{circumflex over ( )}2, t [V];calculating, by the parameter update circuitry, [G{circumflex over ( )}]←Adam ([V{circumflex over ( )}]);calculating, by the parameter update circuitry, [G{circumflex over ( )}]←[G{circumflex over ( )}]◯[M{circumflex over ( )}]; andcalculating, by the parameter update circuitry, [W]←[W]−[G{circumflex over ( )}],where β1, β2, η, and ε are predetermined hyperparameters, ◯ is a product for each element, t is the number of learning times, [G] is a concealed value of the gradient G, [W] is a concealed value of the parameter W, [M], [M{circumflex over ( )}], [V], [V{circumflex over ( )}], and [G{circumflex over ( )}] are concealed values for matrices M, M{circumflex over ( )}, V, V{circumflex over ( )}, and G{circumflex over ( )} having the same number of elements as the gradient G, and β{circumflex over ( )}1, t, β2, t, and g{circumflex over ( )} are given by following equations, and
  • 2. The secure gradient descent computation method according to claim 1, wherein in a case where rshift denotes an arithmetic right shift, by denotes an accuracy of β1 and β2, bβ{circumflex over ( )}_t denotes an accuracy of β{circumflex over ( )}1, t, and bg{circumflex over ( )} denotes an accuracy of g{circumflex over ( )},the parameter update circuitry calculates [M]←β1 [M]+(1−β1) [G] and then calculates [M]←rshift ([M], bβ),the parameter update circuitry calculates [V]←β2 [V]+(1−β2) [G]◯[G] and then calculates [V]←rshift ([V], bβ), andthe parameter update circuitry calculates [G{circumflex over ( )}]←[G{circumflex over ( )}]◯[M{circumflex over ( )}] and then calculates [G{circumflex over ( )}]←rshift ([G{circumflex over ( )}], bg{circumflex over ( )}+bβ{circumflex over ( )}_1).
  • 3. A secure deep learning method performed by a secure deep learning system including a plurality of secure computation apparatuses, the secure deep learning method learning a deep neural network while at least a feature X of learning data, and true data T and a parameter W of the learning data are kept concealed, the secure deep learning method comprising: calculating, by a forward propagation calculation circuitry of each of the secure computation apparatuses, [Ul]←[W0][X];calculating, by the forward propagation calculation circuitry, [Y1]←Activation ([U1]);calculating, by the forward propagation calculation circuitry, [Ui+1]←[W][Yi] for each i greater than or equal to 1 and less than or equal to n−1;calculating, by the forward propagation calculation circuitry, [V+1]←Activation ([Ui+1]) for each i greater than or equal to 1 and less than or equal to n−1;calculating, by the forward propagation calculation circuitry, [Un+1]←[Wn][Yn],calculating, by the forward propagation calculation circuitry, [Yn+1]←Activation2 ([Un+1]);calculating, by a back propagation calculation circuitry of each of the secure computation apparatuses, [Zn+1]←Activation2′ ([Yn+1], [T]);calculating, by the back propagation calculation circuitry, [Zn]←Activation′ ([Un])◯([Zn+1][Wn]);calculating, by the back propagation calculation circuitry, [Zn−i]←Activation′ ([Un−i])◯([Zn−i+1][Wn−i]) for each i greater than or equal to 1 and less than or equal to n−1;calculating, by a gradient calculation circuitry of each of the secure computation apparatuses, [G0]←[Z1][X];calculating, by the gradient calculation circuitry, [Gi]←[Zi+1][Yi] for each i greater than or equal to 1 and less than or equal to n−1;calculating, by the gradient calculation circuitry, [Gn]←[Zn+1][Yn];calculating, by a parameter update circuitry of each of the secure computation apparatuses, [G0]←rshift ([G0], H′);calculating, by the parameter update circuitry, [Gi]←rshift ([Gi], H′) for each i greater than or equal to 1 and less than or equal to n−1;calculating, by the parameter update circuitry, [Gn]←rshift ([Gn], H′); andlearning, by the parameter update circuitry, a parameter [Wi] between an i layer and an i+1 layer using a gradient [Gi] between the i layer and the i+1 layer, in accordance with the secure gradient descent computation method according to claim 1, for each i greater than or equal to 0 and less than or equal to n,where β1, β2, η, and ε are predetermined hyperparameters, is a product of matrices, ◯ is a product for each element, t is the number of learning times, [G] is a concealed value of a gradient G, [W] is a concealed value of the parameter W, [X] is a concealed value of the feature X of the learning data, [T] is a concealed value of a true label T of the learning data, [M], [M{circumflex over ( )}], [V], [V{circumflex over ( )}], [G{circumflex over ( )}], [U], [Y], and [Z] are concealed values of matrices M, M{circumflex over ( )}, V, V{circumflex over ( )}, G{circumflex over ( )}, U, Y, and Z having the same number of elements as the gradient G, and β{circumflex over ( )}1, t, β{circumflex over ( )}2, t, and g{circumflex over ( )} are given by following equations,
  • 4. The secure deep learning method according to claim 3, wherein bw is an accuracy of an element of W, by is an accuracy of an element of Y, bβ is an accuracy of β1 and β2, bβ{circumflex over ( )}_t is an accuracy of β{circumflex over ( )}1, t, and bg{circumflex over ( )} is an accuracy of g{circumflex over ( )},
    the forward propagation calculation circuitry calculates [Yi+1]←Activation ([Ui+1]) and then calculates [Yi+1]←rshift ([Yi+1], bw),
    the back propagation calculation circuitry calculates [Zn]←Activation′ ([Un])◯([Zn+1]•[Wn]) and then calculates [Zn]←rshift ([Zn], by),
    the back propagation calculation circuitry calculates [Zn−i]←Activation′ ([Un−i])◯([Zn−i+1]•[Wn−i]) and then calculates [Zn−i]←rshift ([Zn−i], bw), and
    the parameter update circuitry learns the parameter [Wi] between the i layer and the i+1 layer using the gradient [Gi] between the i layer and the i+1 layer, in accordance with the secure gradient descent computation method, for each i greater than or equal to 0 and less than or equal to n,
    wherein the secure gradient descent computation method comprises:
    calculating, by the parameter update circuitry, [M]←β1 [M]+(1−β1) [G] and then calculating [M]←rshift ([M], bβ);
    calculating, by the parameter update circuitry, [V]←β2 [V]+(1−β2) [G]◯[G] and then calculating [V]←rshift ([V], bβ); and
    calculating, by the parameter update circuitry, [G{circumflex over ( )}]←[G{circumflex over ( )}]◯[M{circumflex over ( )}] and then calculating [G{circumflex over ( )}]←rshift ([G{circumflex over ( )}], bg{circumflex over ( )}+bβ{circumflex over ( )}_t).
  • 5. A secure gradient descent computation system comprising a plurality of secure computation apparatuses, the secure gradient descent computation system calculating a gradient descent method while at least a gradient G and a parameter W are kept concealed, each of the secure computation apparatuses including:
    a parameter update circuitry configured to calculate [M]←β1 [M]+(1−β1) [G], [V]←β2 [V]+(1−β2) [G]◯[G], [M{circumflex over ( )}]←β{circumflex over ( )}1, t [M], [V{circumflex over ( )}]←β{circumflex over ( )}2, t [V], [G{circumflex over ( )}]←Adam ([V{circumflex over ( )}]), [G{circumflex over ( )}]←[G{circumflex over ( )}]◯[M{circumflex over ( )}], and [W]←[W]−[G{circumflex over ( )}],
    where β1, β2, η, and ε are predetermined hyperparameters, ◯ is a product for each element, t is the number of learning times, [G] is a concealed value of the gradient G, [W] is a concealed value of the parameter W, [M], [M{circumflex over ( )}], [V], [V{circumflex over ( )}], and [G{circumflex over ( )}] are concealed values for matrices M, M{circumflex over ( )}, V, V{circumflex over ( )}, and G{circumflex over ( )} having the same number of elements as the gradient G, and β{circumflex over ( )}1, t, β{circumflex over ( )}2, t, and g{circumflex over ( )} are given by the following equations,
  • 6. A secure deep learning system comprising a plurality of secure computation apparatuses, the secure deep learning system learning a deep neural network while at least a feature X of learning data, a true label T of the learning data, and a parameter W are kept concealed, each of the secure computation apparatuses including:
    a forward propagation calculation circuitry configured to calculate [U1]←[W0]•[X], [Y1]←Activation ([U1]), [Ui+1]←[Wi]•[Yi] and [Yi+1]←Activation ([Ui+1]) for each i greater than or equal to 1 and less than or equal to n−1, [Un+1]←[Wn]•[Yn], and [Yn+1]←Activation2 ([Un+1]);
    a back propagation calculation circuitry configured to calculate [Zn+1]←Activation2′ ([Yn+1], [T]), [Zn]←Activation′ ([Un])◯([Zn+1]•[Wn]), and [Zn−i]←Activation′ ([Un−i])◯([Zn−i+1]•[Wn−i]) for each i greater than or equal to 1 and less than or equal to n−1;
    a gradient calculation circuitry configured to calculate [G0]←[Z1]•[X], [Gi]←[Zi+1]•[Yi] for each i greater than or equal to 1 and less than or equal to n−1, and [Gn]←[Zn+1]•[Yn]; and
    a parameter update circuitry configured to calculate [G0]←rshift ([G0], H′), [Gi]←rshift ([Gi], H′) for each i greater than or equal to 1 and less than or equal to n−1, and [Gn]←rshift ([Gn], H′), and learn a parameter [Wi] between an i layer and an i+1 layer using a gradient [Gi] between the i layer and the i+1 layer, in accordance with the secure gradient descent computation system according to claim 5, for each i greater than or equal to 0 and less than or equal to n,
    where β1, β2, η, and ε are predetermined hyperparameters, • is a product of matrices, ◯ is a product for each element, t is the number of learning times, [G] is a concealed value of a gradient G, [W] is a concealed value of the parameter W, [X] is a concealed value of the feature X of the learning data, [T] is a concealed value of the true label T of the learning data, [M], [M{circumflex over ( )}], [V], [V{circumflex over ( )}], [G{circumflex over ( )}], [U], [Y], and [Z] are concealed values of matrices M, M{circumflex over ( )}, V, V{circumflex over ( )}, G{circumflex over ( )}, U, Y, and Z having the same number of elements as the gradient G, and β{circumflex over ( )}1, t, β{circumflex over ( )}2, t, and g{circumflex over ( )} are given by the following equations,
  • 7. A secure computation apparatus used in the secure gradient descent computation system according to claim 5.
  • 8. A non-transitory computer-readable recording medium on which a program is recorded for causing a computer to function as the secure computation apparatus according to claim 7.
  • 9. A secure computation apparatus used in the secure deep learning system according to claim 6.
  • 10. A non-transitory computer-readable recording medium on which a program is recorded for causing a computer to function as the secure computation apparatus according to claim 9.
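The following is a minimal plaintext sketch, in Python with NumPy, of the fixed-point parameter update recited in claims 2 and 5. It is an illustration only: the integer arrays stand in for the concealed values [W], [M], [V], and [G], which in the claimed systems are secret-shared and operated on by secure computation. The bit accuracies B_W, B_BETA, B_BETA_HAT, and B_G_HAT, the hyperparameter defaults, and the concrete form Adam ([V{circumflex over ( )}]) = η/(√V{circumflex over ( )}+ε) are assumptions made for this sketch and are not fixed by the claims.

```python
# Minimal plaintext sketch (NumPy) of the fixed-point parameter update of claims 2 and 5.
# Integer arrays stand in for the concealed values [W], [M], [V], [G]; in the claimed
# systems every operation below is carried out on secret shares by secure computation.
# The accuracies B_W, B_BETA, B_BETA_HAT, B_G_HAT and the concrete map
# Adam([V^]) = eta / (sqrt(V^) + eps) are illustrative assumptions.
import numpy as np

B_W = 10          # assumed accuracy (fractional bits) of W, M and G
B_BETA = 10       # accuracy b_beta of beta1 and beta2
B_BETA_HAT = 10   # accuracy b_beta^_t of beta^_{1,t} and beta^_{2,t}
B_G_HAT = 10      # accuracy b_g^ of g^


def fix(x, b):
    """Encode a real value or array as fixed-point integers with b fractional bits."""
    return np.round(np.asarray(x, dtype=np.float64) * (1 << b)).astype(np.int64)


def rshift(x, b):
    """Arithmetic right shift by b bits (floor division preserves the sign)."""
    return x // (1 << b)


def adam_map(v_hat, eta, eps):
    """Stand-in for Adam([V^]): elementwise eta / (sqrt(v^) + eps) at B_G_HAT bits.
    Here v_hat carries 2*B_W + B_BETA_HAT fractional bits (see secure_adam_step)."""
    v = v_hat / float(1 << (2 * B_W + B_BETA_HAT))
    return fix(eta / (np.sqrt(v) + eps), B_G_HAT)


def secure_adam_step(W, M, V, G, t, beta1=0.9, beta2=0.999, eta=0.001, eps=1e-8):
    """One parameter update with the right shifts added by claim 2.
    W, M, G carry B_W fractional bits; V carries 2*B_W (the scale of G o G)."""
    b1, b2 = fix(beta1, B_BETA), fix(beta2, B_BETA)
    one = 1 << B_BETA
    M = rshift(b1 * M + (one - b1) * G, B_BETA)         # [M] <- beta1[M]+(1-beta1)[G], then rshift by b_beta
    V = rshift(b2 * V + (one - b2) * (G * G), B_BETA)   # [V] <- beta2[V]+(1-beta2)[G]o[G], then rshift by b_beta
    bh1 = fix(1.0 / (1.0 - beta1 ** t), B_BETA_HAT)
    bh2 = fix(1.0 / (1.0 - beta2 ** t), B_BETA_HAT)
    M_hat = bh1 * M                                      # [M^] <- beta^_{1,t}[M] (no shift: B_W + B_BETA_HAT bits)
    V_hat = bh2 * V                                      # [V^] <- beta^_{2,t}[V] (no shift)
    G_hat = adam_map(V_hat, eta, eps)                    # [G^] <- Adam([V^])
    G_hat = G_hat * M_hat                                # [G^] <- [G^] o [M^]
    G_hat = rshift(G_hat, B_G_HAT + B_BETA_HAT)          # claim 2: rshift by b_g^ + b_beta^_t, back to B_W bits
    return W - G_hat, M, V                               # [W] <- [W] - [G^]
```

Starting from M = V = 0 and W = fix(w0, B_W), repeated calls update the parameter using only additions, multiplications, and arithmetic right shifts, apart from the single Adam(·) evaluation per iteration.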
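For the deep learning procedure of claims 3 and 6, the sketch below traces one training iteration in plaintext floating point. ReLU is assumed for Activation and softmax with a cross-entropy loss for Activation2 (so that Activation2′ ([Yn+1], [T]) = [Yn+1]−[T]); the helper names, the column-per-sample data layout, and the explicit transposes are choices made for this sketch, and the right shifts by bw, by, and H′ of claims 3 and 4 are indicated only in comments, since they have no counterpart in floating point.

```python
# Plaintext sketch (NumPy, floating point) of one iteration of the deep learning
# procedure of claims 3 and 6, under the assumptions stated in the lead-in text.
import numpy as np


def relu(u):
    return np.maximum(u, 0.0)                     # Activation (assumed)


def relu_deriv(u):
    return (u > 0.0).astype(np.float64)           # Activation'


def softmax(u):
    e = np.exp(u - u.max(axis=0, keepdims=True))  # Activation2 (assumed)
    return e / e.sum(axis=0, keepdims=True)


def train_iteration(W, X, T):
    """W = [W0, ..., Wn] (layer weights), X: features (d0 x m), T: one-hot labels.
    Returns the per-layer gradients G0..Gn; each Wi is then learned from Gi by the
    parameter update sketched above."""
    n = len(W) - 1
    U, Y = [None] * (n + 2), [None] * (n + 2)
    # forward propagation
    U[1] = W[0] @ X                               # [U1] <- [W0].[X]
    Y[1] = relu(U[1])                             # [Y1] <- Activation([U1])
    for i in range(1, n):
        U[i + 1] = W[i] @ Y[i]                    # [U_{i+1}] <- [Wi].[Yi]
        Y[i + 1] = relu(U[i + 1])                 # claim 4: then rshift(., b_w) in fixed point
    U[n + 1] = W[n] @ Y[n]
    Y[n + 1] = softmax(U[n + 1])                  # [Y_{n+1}] <- Activation2([U_{n+1}])
    # back propagation
    Z = [None] * (n + 2)
    Z[n + 1] = Y[n + 1] - T                       # Activation2'([Y_{n+1}], [T])
    for i in range(n, 0, -1):
        # claim 4: [Zn] is then right-shifted by b_y and each later [Z_{n-i}] by b_w
        Z[i] = relu_deriv(U[i]) * (W[i].T @ Z[i + 1])
    # gradient calculation
    G = [None] * (n + 1)
    G[0] = Z[1] @ X.T                             # [G0] <- [Z1].[X]
    for i in range(1, n + 1):
        G[i] = Z[i + 1] @ Y[i].T                  # [Gi] <- [Z_{i+1}].[Yi]
    # claims 3 and 6: each [Gi] <- rshift([Gi], H') before the parameter update
    return G
```

In the claimed systems, X, T, W, and all intermediate values are secret-shared, each matrix product is a secure matrix multiplication, and the normalizations implicit in floating point are replaced by the recited arithmetic right shifts.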
PCT Information
Filing Document: PCT/JP2019/031941
Filing Date: 8/14/2019
Country/Kind: WO