The present invention relates to secure computation techniques. In particular, the present invention relates to a technique for performing secure computation of a sigmoid function or a technique for performing secure computation of a model parameter of a logistic regression model.
As an existing method for performing secure computation of a sigmoid function, there are a method for performing approximation using a cubic polynomial by fully homomorphic encryption-based secure computation (Non-patent Literature 1) and a method for performing approximation using a polynomial by additively homomorphic encryption-based secure computation (Non-patent Literature 2 and Non-patent Literature 3).
Secure computation is a method for obtaining the computation result of a designated computation without reconstructing the encrypted numerical values (see, for example, Reference Non-patent Literature 1). With the method of Reference Non-patent Literature 1, it is possible to perform encryption by distributing a plurality of fragments, from which the numerical value can be reconstructed, over three secure computation apparatuses, and to make the three secure computation apparatuses hold the results of addition and subtraction, constant addition, multiplication, constant multiplication, logical operations (NOT, AND, OR, and XOR), and data format conversion (integer or binary) in a distributed, that is, encrypted, state, without reconstructing the numerical values. In general, the number of secure computation apparatuses over which the information is distributed is not limited to 3 and can be set at W (W is a predetermined constant greater than or equal to 3), and a protocol that implements secure computation by cooperative computations by W secure computation apparatuses is called a multi-party protocol.
Non-patent Literature 3: Yoshinori Aono, Takuya Hayashi, Le Trieu Phong, and Lihua Wang, “Proposal for Large-Scale and Privacy-Preserving Logistic Analysis Technique”, In SCIS2016, 2016.
However, a sigmoid function is a nonlinear function which is expressed by the following formula (see
σ(x)=1/(1+exp(−x)) (1)
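In plaintext, formula (1) is straightforward to evaluate; the following Python sketch (an illustration only, not part of the secure protocol) computes it directly:

```python
import math

def sigmoid(x: float) -> float:
    # Formula (1): sigma(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))
```

For example, sigmoid(0.0) returns 0.5, and the outputs increase monotonically from 0 toward 1 as x grows; it is this nonlinearity that makes secure computation of the function difficult.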
The accuracy of approximation of all of the methods described in Non-patent Literatures 1 to 3 is low, and these methods require the approximate formula to be rewritten each time in accordance with the domain of definition of x, which makes them impractical. Moreover, raising the order of the approximate polynomial improves the accuracy of approximation, but it also increases the number of multiplications accordingly and thereby slows down processing.
Therefore, an object of the present invention is to provide a technique for performing secure computation of a sigmoid function with high speed and precision. Moreover, an object of the present invention is to provide a technique for performing secure computation of a model parameter of a logistic regression model with high speed and precision using a technique for performing secure computation of a sigmoid function.
A secure sigmoid function calculation system according to an aspect of the present invention is a secure sigmoid function calculation system in which mapσ is assumed to be secure batch mapping defined by parameters (a0, . . . , ak-1) representing the domain of definition of a sigmoid function σ(x) and parameters (σ(a0), . . . , σ(ak-1)) representing the range of the sigmoid function σ(x) (where k is an integer greater than or equal to 1 and a0, . . . , ak-1 are real numbers that satisfy a0< . . . <ak-1), and which is configured with three or more secure sigmoid function calculation apparatuses and calculates, from a share [[x→]]=([[x0]], . . . , [[xm-1]]) of an input vector x→=(x0, . . . , xm-1), a share [[y→]]=([[y0]], . . . , [[ym-1]]) of a value y→=(y0, . . . , ym-1) of a sigmoid function for the input vector x→. The secure sigmoid function calculation system includes a secure batch mapping calculating means that calculates mapσ([[x→]])=([[σ(af(0))]], . . . , [[σ(af(m-1))]]) (where f(i) (0≤i≤m−1) is j that makes aj≤xi<aj+1 hold) from the share [[x→]] and calculates the share [[y→]] by ([[y0]], . . . , [[ym-1]])=([[σ(af(0))]], . . . , [[σ(af(m-1))]]).
A secure logistic regression calculation system according to an aspect of the present invention is a secure logistic regression calculation system in which m is assumed to be an integer greater than or equal to 1, η is assumed to be a real number that satisfies 0<η<1, and Sigmoid([[x]]) is assumed to be a function that calculates, from a share [[x→]] of an input vector x→, a share [[y→]] of a value y→ of a sigmoid function for the input vector x→ using the secure sigmoid function calculation system according to claim 1, and which is configured with three or more secure logistic regression calculation apparatuses and calculates a share [[w→]] of a model parameter w→ of a logistic regression model from a share [[xi→]] (0≤i≤m−1) of data xi→ on an explanatory variable and a share [[yi]] (0≤i≤m−1) of data yi on a response variable. The secure logistic regression calculation system includes: an initializing means that sets a share [[w0→]] of an initial value w0→ of the model parameter w→; an error calculating means that calculates, for i=0, . . . , m−1, [[bi]] by [[bi]]=hpsum([[wt→]], [[(1, xi→)]]) from a share [[wt→]] of a value wt→ of the model parameter w→ obtained as a result of t updates and the share [[xi→]], calculates ([[c0]], . . . , [[cm-1]]) by ([[c0]], . . . , [[cm-1]])=Sigmoid(([[b0]], . . . , [[bm-1]])) from the [[bi]] (0≤i≤m−1), and calculates, for i=0, . . . , m−1, an error [[di]] by [[di]]=[[ci]]−[[yi]] from the share [[yi]] and an i-th element [[ci]] of the ([[c0]], . . . , [[cm-1]]); and a model parameter updating means that calculates, for j=0, . . . , n, [[e]] by [[e]]=Σi=0m-1[[di]][[xi,j]] from the error [[di]] (0≤i≤m−1) and a j-th element [[xi, j]] (0≤i≤m−1) of the share [[xi→]] and calculates, from a j-th element [[wj, t]] of the share [[wt→]] and the [[e]] by [[wj, t+1]]=[[wj, t]]−η(1/m)[[e]], a j-th element [[wj, t+1]] of a share [[wt+1→]] of a value wt+1→ of the model parameter w→ obtained as a result of t+1 updates.
A secure logistic regression calculation system according to an aspect of the present invention is a secure logistic regression calculation system in which m is assumed to be an integer greater than or equal to 1, η is assumed to be a real number that satisfies 0<η<1, and Sigmoid([[x]]) is assumed to be a function that calculates, from a share [[x→]] of an input vector x→, a share [[y→]] of a value y→ of a sigmoid function for the input vector x→ using the secure sigmoid function calculation system according to claim 1; when an arbitrary value, which is an object on which secure computation is to be performed, is assumed to be p and the precision of p is written as b_p [bit], it means that a share [[p]] of p is actually a share [[p×2b_p]] of a fixed-point number; when an arbitrary vector, which is an object on which secure computation is to be performed, is assumed to be q→, an element of q→ is assumed to be qi, and the precision of q→ is written as b_q [bit], it means that a share [[q→]] of q→ is actually made up of a share [[qi×2b_q]] of a fixed-point number; the precision of w→, w0→, wt→, wt+1→, and eta_grad_ave_shift is written as b_w [bit], the precision of xi→ (0≤i≤m−1) is written as b_x [bit], the precision of yi (0≤i≤m−1), ci (0≤i≤m−1), and di (0≤i≤m−1) is written as b_y [bit], the precision of η is written as b_η [bit], the precision of a reciprocal 1/m of the number of pieces of learning data is written as b_m+H [bit], the precision of bi (0≤i≤m−1) is written as b_w+b_x [bit], the precision of e is written as b_y+b_x [bit], the precision of eta_grad is written as b_y+b_x+b_η [bit], the precision of eta_grad_shift is written as b_tmp [bit], and the precision of eta_grad_ave is written as b_tmp+b_m+H [bit]; b_w, b_x, b_y, b_η, b_m, H, and b_tmp are assumed to be predetermined positive integers; and rshift(a, b) is assumed to mean shifting a value a to the right by b [bit] by performing an arithmetic right shift.
The secure logistic regression calculation system is configured with three or more secure logistic regression calculation apparatuses and calculates a share [[w→]] of a model parameter w→ of a logistic regression model from a share [[xi→]] (0≤i≤m−1) of data xi→ on an explanatory variable and a share [[yi]] (0≤i≤m−1) of data yi on a response variable. The secure logistic regression calculation system includes: an initializing means that sets a share [[w0→]] of an initial value w0→ of the model parameter w→; an error calculating means that calculates, for i=0, . . . , m−1, [[bi]] by [[bi]]=hpsum([[wt→]], [[(1, xi→)]]) from a share [[wt→]] of a value wt→ of the model parameter w→ obtained as a result of t updates and the share [[xi→]], calculates ([[c0]], . . . , [[cm-1]]) by ([[c0]], . . . , [[cm-1]])=Sigmoid(([[b0]], . . . , [[bm-1]])) from the [[bi]] (0≤i≤m−1), and calculates, for i=0, . . . , m−1, an error [[di]] by [[di]]=[[ci]]−[[yi]] from the share [[yi]] and an i-th element [[ci]] of the ([[c0]], . . . , [[cm-1]]); and a model parameter updating means that calculates, for j=0, . . . , n, [[e]] by [[e]]=Σi=0m-1[[di]][[xi,j]] from the error [[di]] (0≤i≤m−1) and a j-th element [[xi, j]] (0≤i≤m−1) of the share [[xi→]], calculates [[eta_grad]] by [[eta_grad]]=η[[e]] from the η and the [[e]], calculates [[eta_grad_shift]] by [[eta_grad_shift]]=rshift([[eta_grad]], b_y+b_x+b_η−b_tmp) from the [[eta_grad]], calculates [[eta_grad_ave]] by [[eta_grad_ave]]=(1/m)[[eta_grad_shift]] from the [[eta_grad_shift]], calculates [[eta_grad_ave_shift]] by [[eta_grad_ave_shift]]=rshift([[eta_grad_ave]], b_tmp+b_m+H−b_w) from the [[eta_grad_ave]], and calculates, from a j-th element [[wj, t]] of the share [[wt→]] and the [[eta_grad_ave_shift]] by [[wj, t+1]]=[[wj, t]]−[[eta_grad_ave_shift]], a j-th element [[wj, t+1]] of a share [[wt+1→]] of a value wt+1→ of the model parameter w→ obtained as a result of t+1 updates.
A secure logistic regression calculation system according to an aspect of the present invention is a secure logistic regression calculation system in which m is assumed to be an integer greater than or equal to 1, η is assumed to be a real number that satisfies 0<η<1, and Sigmoid([[x]]) is assumed to be a function that calculates, from a share [[x→]] of an input vector x→, a share [[y→]] of a value y→ of a sigmoid function for the input vector x→ using the secure sigmoid function calculation system according to claim 1; when an arbitrary value, which is an object on which secure computation is to be performed, is assumed to be p and the precision of p is written as b_p [bit], it means that a share [[p]] of p is actually a share [[p×2b_p]] of a fixed-point number; when an arbitrary vector, which is an object on which secure computation is to be performed, is assumed to be q→, an element of q→ is assumed to be qi, and the precision of q→ is written as b_q [bit], it means that a share [[q→]] of q→ is actually made up of a share [[qi×2b_q]] of a fixed-point number; the precision of w→, w0→, wt→, wt+1→, and eta_grad_ave is written as b_w [bit], the precision of xi→ (0≤i≤m−1) is written as b_x [bit], the precision of yi (0≤i≤m−1), ci (0≤i≤m−1), and di (0≤i≤m−1) is written as b_y [bit], the precision of η is written as b_η [bit], the precision of bi (0≤i≤m−1) is written as b_w+b_x [bit], and the precision of e is written as b_y+b_x [bit]; b_w, b_x, b_y, and b_η are assumed to be predetermined positive integers; rshift(a, b) is assumed to mean shifting a value a to the right by b [bit] by performing an arithmetic right shift; and floor is assumed to be a function representing rounding down and X=−(floor(log2(η/m))).
The secure logistic regression calculation system is configured with three or more secure logistic regression calculation apparatuses and calculates a share [[w→]] of a model parameter w→ of a logistic regression model from a share [[xi→]] (0≤i≤m−1) of data xi→ on an explanatory variable and a share [[yi]] (0≤i≤m−1) of data yi on a response variable. The secure logistic regression calculation system includes: an initializing means that sets a share [[w0→]] of an initial value w0→ of the model parameter w→; an error calculating means that calculates, for i=0, . . . , m−1, [[bi]] by [[bi]]=hpsum([[wt→]], [[(1, xi→)]]) from a share [[wt→]] of a value wt→ of the model parameter w→ obtained as a result of t updates and the share [[xi→]], calculates ([[c0]], . . . , [[cm-1]]) by ([[c0]], . . . , [[cm-1]])=Sigmoid(([[b0]], . . . , [[bm-1]])) from the [[bi]] (0≤i≤m−1), and calculates, for i=0, . . . , m−1, an error [[di]] by [[di]]=[[ci]]−[[yi]] from the share [[yi]] and an i-th element [[ci]] of the ([[c0]], . . . , [[cm-1]]); and a model parameter updating means that calculates, for j=0, . . . , n, [[e]] by [[e]]=Σi=0m-1[[di]][[xi,j]] from the error [[di]] (0≤i≤m−1) and a j-th element [[xi, j]] (0≤i≤m−1) of the share [[xi→]], calculates [[eta_grad_ave]] by [[eta_grad_ave]]=rshift([[e]], X+b_y+b_x−b_w) from the [[e]], and calculates, from a j-th element [[wj, t]] of the share [[wt→]] and the [[eta_grad_ave]] by [[wj, t+1]]=[[wj, t]]−[[eta_grad_ave]], a j-th element [[wj, t+1]] of a share [[wt+1→]] of a value wt+1→ of the model parameter w→ obtained as a result of t+1 updates.
According to the present invention, it is possible to perform secure computation of a sigmoid function with high speed and precision. Moreover, according to the present invention, it is possible to perform secure computation of a model parameter of a logistic regression model with high speed and precision.
Hereinafter, embodiments of the present invention will be described in detail. It is to be noted that constituent units having the same function are denoted by the same reference character and overlapping explanations are omitted.
A secure sigmoid function calculation algorithm and a secure logistic regression calculation algorithm, which will be described later, are constructed by combining computations on the existing secure computation. The computations required by these algorithms are concealment, addition, multiplication, and hpsum (a sum of products). Each computation is described below.
<Computations>
[Concealment]
Assume that [[x]] is a value (hereinafter referred to as a share of x) obtained by concealing x by secret sharing. Any secret sharing method can be used. For example, Shamir's secret sharing over GF(2^61−1) or replicated secret sharing over Z2 can be used.
A plurality of secret sharing methods may be combined and used in a certain algorithm. In this case, it is assumed that the secret sharing methods are mutually converted as appropriate.
Moreover, assume that [[x→]]=([[x0]], . . . , [[xn-1]]) for an n-dimensional vector x→=(x0, . . . , xn-1). n is the number of model parameters and is a predetermined positive integer.
It is to be noted that x is referred to as the plaintext of [[x]].
Specific methods for obtaining [[x]] from x (concealment) and for obtaining x from [[x]] (reconstruction) are described in Reference Non-patent Literature 1 and Reference Non-patent Literature 2.
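As a plaintext-level illustration of concealment and reconstruction, the following sketch uses a toy additive sharing over a prime modulus with W=3 parties (this is an illustrative scheme, not the specific methods of the reference literatures):

```python
import secrets

MOD = 2**61 - 1  # a Mersenne prime, echoing sharing over GF(2^61 - 1)

def conceal(x: int, w: int = 3) -> list[int]:
    # Split x into w random-looking shares that sum to x modulo MOD.
    shares = [secrets.randbelow(MOD) for _ in range(w - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list[int]) -> int:
    # Reconstruction: sum all shares modulo MOD.
    return sum(shares) % MOD
```

No single share reveals x; all w shares are needed to reconstruct it, and adding two sharings share-wise yields a sharing of the sum, which is the homomorphic property addition by secure computation relies on.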
[Addition and Multiplication]
Addition [[x]]+[[y]] by secure computation uses [[x]] and [[y]] as input and outputs [[x+y]]. Multiplication [[x]]×[[y]] (mul([[x]], [[y]])) by secure computation uses [[x]] and [[y]] as input and outputs [[x×y]].
Here, either one of [[x]] and [[y]] may be a value that is not concealed (hereinafter referred to as a public value). For example, a configuration can be adopted in which, on the assumption that β and γ are public values, [[x]] and β are used as input and [[x+β]] is output or γ and [[y]] are used as input and [[γ×y]] is output.
As specific methods of addition and multiplication, there are methods described in Reference Non-patent Literature 3 and Reference Non-patent Literature 4.
[hpsum]
Secure computation hpsum([[x→]], [[y→]]) uses [[x→]] and [[y→]] (where x→=(x0, . . . , xn-1) and y→=(y0, . . . , yn-1)) as input and outputs [[Σj=0n-1xjyj]], that is, the sum of products of the corresponding elements of the two vectors.
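In plaintext, hpsum is simply an inner product; a minimal sketch:

```python
def hpsum_plain(x: list[float], y: list[float]) -> float:
    # Sum of products of corresponding elements of two equal-length vectors.
    return sum(xj * yj for xj, yj in zip(x, y))
```

For example, hpsum_plain([1, 2, 3], [4, 5, 6]) evaluates 1×4+2×5+3×6 = 32.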
(Secure Batch Mapping)
In embodiments of the present invention, secure batch mapping is used to perform secure computation of a sigmoid function σ(x) which is a monotonically increasing function as shown in
Secure batch mapping is a function for computing a look-up table and is a technique whose domain of definition and range can be defined arbitrarily. As an example, take a case where a function that multiplies input by 10 is calculated by secure batch mapping. Assume that a domain of definition X={1, 3, 5} and a range Y={10, 30, 50}, which is the set of values obtained by multiplying the elements of X by 10, are prepared. In secure batch mapping, for input x that does not belong to the domain of definition, the range value corresponding to the maximum element of the domain of definition that is less than or equal to x is output. Therefore, 30 is output when 4 is input. However, by setting the domain of definition X and the range Y more finely, such as X={1, 2, 3, 4, 5} and Y={10, 20, 30, 40, 50}, 40 is output when 4 is input, which makes it possible to perform calculation with high precision. By using this property, it is possible to set appropriate precision that does not exceed the upper limit of a data type even when a fixed-point number is used, and also to keep calculation errors small. It is to be noted that the processing time of secure batch mapping depends on the sizes of the domain of definition and the range; therefore, there is a trade-off between calculation precision and processing speed.
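The look-up rule described above can be illustrated in plaintext as follows (the secure protocol performs the same mapping on shares; this sketch only shows the mapping rule, with illustrative function names):

```python
import bisect

def batch_map_plain(xs, domain, rng):
    # For each x, output rng[j] for the j with domain[j] <= x < domain[j+1]
    # (assumes domain is sorted ascending and every x is >= domain[0]).
    return [rng[bisect.bisect_right(domain, x) - 1] for x in xs]

coarse = batch_map_plain([4], [1, 3, 5], [10, 30, 50])              # -> [30]
fine = batch_map_plain([4], [1, 2, 3, 4, 5], [10, 20, 30, 40, 50])  # -> [40]
```

The two calls reproduce the example in the text: with the coarse table, input 4 maps to 30; with the finer table, it maps to 40.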
[Algorithm]
The following description deals with an algorithm (a secure sigmoid function calculation algorithm) that performs secure computation of a sigmoid function using secure batch mapping. For example, an algorithm for secure batch mapping described in Reference Non-patent Literature 5 can be used.
Secure batch mapping uses a share [[x→]]=([[x0]], . . . , [[xm-1]]) of a vector x→=(x0, . . . , xm-1) as input and, using parameters (a0, . . . , ak-1) representing the domain of definition of a function and parameters (b0, . . . , bk-1) representing the range of the function (where a0, . . . , ak-1 and b0, . . . , bk-1 are real numbers and satisfy a0< . . . <ak-1), outputs the shares obtained by mapping the elements of the vector, that is, ([[y0]], . . . , [[ym-1]]) such that yi=bj holds for the j that makes aj≤xi<aj+1 hold (0≤i≤m−1).
The secure sigmoid function calculation algorithm is an algorithm in which parameters (a0, . . . , ak-1) representing the domain of definition of a function and parameters (b0, . . . , bk-1) representing the range of the function in secure batch mapping are selected so as to satisfy bj=σ(aj) (0≤j≤k−1) (hereinafter, the secure batch mapping is represented as mapσ).
That is, the secure sigmoid function calculation algorithm uses a share [[x→]]=([[x0]], . . . , [[xm-1]]) of a vector x→=(x0, . . . , xm-1) as input and outputs ([[σ(af(0))]], . . . , [[σ(af(m-1))]]), in which f(i) is j that makes aj≤xi<aj+1 hold for 0≤i≤m−1, by using the secure batch mapping mapσ defined by parameters (a0, . . . , ak-1) representing the domain of definition of the sigmoid function σ(x) and parameters (σ(a0), . . . , σ(ak-1)) representing the range of the sigmoid function σ(x) (where k is an integer greater than or equal to 1 and a0, . . . , ak-1 are real numbers that satisfy a0< . . . <ak-1).
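A plaintext sketch of this selection rule, with the range fixed to sigmoid values, is shown below (the actual algorithm applies mapσ to shares; the function names here are illustrative, and each xi is assumed to satisfy a0≤xi):

```python
import bisect
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def map_sigma_plain(xs, a):
    # a = (a0, ..., ak-1), sorted ascending; range points are sigmoid(aj).
    # f(i) is the j with a[j] <= x_i < a[j+1].
    return [sigmoid(a[bisect.bisect_right(a, x) - 1]) for x in xs]
```

With a grid of step h, the output differs from the true sigmoid value by at most about h/4, since the derivative of the sigmoid never exceeds 1/4; refining the grid therefore tightens precision at the cost of a larger table.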
As mentioned earlier, since a domain of definition and a range in secure batch mapping are values that can be arbitrarily set, they can be determined in accordance with necessary precision and processing speed. Thus, unlike a case where approximation is performed using a polynomial, an arbitrary precision value can be set. Therefore, it is also possible to achieve precision equivalent to plaintext, for example.
Moreover, performing secure computation of a sigmoid function using secure batch mapping has another advantage. This advantage will be described below. When calculation is performed using a fixed point (not a floating point) from the viewpoint of processing cost, numerical precision sometimes increases every time a multiplication is performed and exceeds the upper limit of a data type, which sometimes makes it necessary to intentionally perform truncate in the middle of processing. However, since using secure batch mapping allows a domain of definition and a range to be set independently, it is possible to calculate a sigmoid function and, at the same time, adjust numerical precision, which is efficient.
It is to be noted that, in the following description, the secure sigmoid function calculation algorithm is expressed as Sigmoid. Therefore, Sigmoid([[x→]])=([[σ(af(0))]], . . . , [[σ(af(m-1))]]) holds.
(Logistic Regression Analysis)
A model f(x→) (where x→=(x1, . . . , xn)) of a logistic regression analysis is expressed by the following formula using an n+1-dimensional vector w→=(w0, . . . , wn) as a model parameter.
f(x→)=σ(−w→·(1, x→))=σ(−(w0+w1x1+ . . . +wnxn)) (2)
Here, (1, x→) represents an n+1-dimensional vector (1, x1, . . . , xn).
As a method for learning the model parameter w→, there is the steepest descent method which is a learning method for searching for a minimum value of a function. In the steepest descent method, learning is performed using the following input and parameters.
(Input) Data xi→ on an explanatory variable and data yi on a response variable (0≤i≤m−1, where m is an integer greater than or equal to 1 and represents the number of pieces of learning data)
(Parameters) A learning rate η (0<η<1) and the number of learning processes T
It is assumed that appropriate values are set as the learning rate η and the number of learning processes T.
wt→=(w0, t, . . . , wn, t) is learned by the following formula as the model parameter obtained as a result of t (0≤t≤T−1) updates.

wj, t+1=wj, t−η(1/m)Σi=0m-1(ci−yi)xi, j (where ci=σ(wt→·(1, xi→)))
That is, an update is performed for each j-th element wj of the model parameter w→ using the learning data xi→ and yi. It is assumed that an appropriate value is set as an initial value w0→ of the model parameter w→.
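The steepest descent learning described above can be summarized by the following plaintext reference sketch, which mirrors the per-element update (it is an illustration only, not the secure protocol; parameter defaults are arbitrary):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def train_plain(X, y, eta=0.1, T=100):
    # X: m rows of n features; y: m labels in {0, 1};
    # w[0] is the weight multiplying the leading 1 of (1, x).
    m, n = len(X), len(X[0])
    w = [0.0] * (n + 1)
    for _ in range(T):
        c = [sigmoid(w[0] + sum(w[j + 1] * X[i][j] for j in range(n)))
             for i in range(m)]
        d = [c[i] - y[i] for i in range(m)]          # errors
        for j in range(n + 1):
            e = sum(d[i] * (1.0 if j == 0 else X[i][j - 1]) for i in range(m))
            w[j] -= eta * (1.0 / m) * e              # per-element update
    return w
```

On a toy one-dimensional data set such as X=[[0], [1], [2], [3]] with y=[0, 0, 1, 1], the learned w separates the two classes around x=1.5.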
[Algorithm]
The following description deals with an algorithm (a secure logistic regression calculation algorithm) that performs secure computation of a model parameter of a logistic regression model. The secure logistic regression calculation algorithm uses a share [[xi→]] of the data xi→ on the explanatory variable and a share [[yi]] of the data yi on the response variable as input, calculates a share [[w→]] of the model parameter w→ using the parameters η and T, which are public values, and outputs the share [[w→]]. A specific procedure is shown in
Using the secure sigmoid function calculation algorithm Sigmoid enhances the precision of calculation of the sigmoid function, which also enhances the precision of calculation of logistic regression as compared with the existing methods (the methods described in Non-patent Literatures 1 to 3). Moreover, in the methods described in Non-patent Literatures 2 and 3, some of the values cannot be concealed in the course of calculation, which causes a security problem; by contrast, it is clear from each step of the secure logistic regression calculation algorithm that secrecy is maintained in the course of calculation, which makes it possible to perform calculations securely without leaking any information to the outside.
Furthermore, when the sigmoid function is calculated using secure batch mapping, by using the property of being able to adjust the accuracy of approximation concurrently with this calculation, it is possible to perform even calculation of the logistic regression analysis, in which a multiplication has to be performed repeatedly, without exceeding the upper limit of a data type.
As mentioned earlier, precision can be arbitrarily set in secure batch mapping; however, since overhead proportional to the sizes of the domain of definition and the range is incurred, it is necessary to consider the balance between precision and overhead in order to perform secure computation of the sigmoid function more efficiently. For example, by setting precision of the order of 10^−4, it is possible to perform calculation with a high degree of precision while keeping the sizes of the domain of definition and the range small as compared with plaintext.
Moreover, in the logistic regression analysis, a value near x=0 is often used as a threshold at the time of final binary classification; therefore, the accuracy of approximation of the sigmoid function near x=0 is preferably high. As is clear from
Hereinafter, a secure sigmoid function calculation system 10 will be described with reference to
As shown in
By cooperative computations which are performed by the W secure sigmoid function calculation apparatuses 100i, the secure sigmoid function calculation system 10 implements the secure sigmoid function calculation algorithm which is a multi-party protocol. Thus, a secure batch mapping calculating means 110 (which is not shown in the drawing) of the secure sigmoid function calculation system 10 is configured with the secure batch mapping calculating units 1101, . . . , 110W.
The secure sigmoid function calculation system 10 calculates, from a share [[x→]]=([[x0]], . . . , [[xm-1]]) of an input vector x→=(x0, . . . , xm-1), a share [[y→]]=([[y0]], . . . , [[ym-1]]) of a value y→=(y0, . . . , ym-1) of the sigmoid function for the input vector x→. In general, it is assumed that calculating the sigmoid function for an input vector means calculating the value of the sigmoid function for each element of the input vector. Hereinafter, an operation of the secure sigmoid function calculation system 10 will be described in accordance with
The secure batch mapping calculating means 110 calculates mapσ([[x→]])=([[σ(af(0))]], . . . , [[σ(af(m-1))]]) (where f(i) (0≤i≤m−1) is j that makes aj≤xi<aj+1 hold) from the share [[x→]]=([[x0]], . . . , [[xm-1]]) of the input vector x→=(x0, . . . , xm-1) and calculates the share [[y→]] by ([[y0]], . . . , [[ym-1]])=([[σ(af(0))]], . . . , [[σ(af(m-1))]]) (S110).
According to the invention of the present embodiment, it is possible to perform secure computation of the sigmoid function with high speed and precision. In particular, it is possible to perform secure computation of the sigmoid function with precision equivalent to plaintext.
The invention of the present embodiment implements calculation of the sigmoid function, which is a nonlinear function whose secure computation is not easy, by using secure batch mapping. Therefore, using the feature of secure batch mapping that can arbitrarily define a domain of definition makes it possible to implement secure computation of the sigmoid function with higher precision by giving higher priority thereto than processing speed when necessary.
In the first embodiment, when a domain of definition and a range are set, an inverse function σ−1 of the sigmoid function may be used.
That is, an inverse sigmoid function σ−1 may be used when parameters (a0, . . . , ak-1) representing the domain of definition of a function and parameters (b0, . . . , bk-1) representing the range of the function are set. For example, after desired parameters (b0, . . . , bk-1) are set, the value of ai corresponding to each bi (0≤i≤k−1) is calculated using the inverse sigmoid function σ−1. In this way, the calculated (a0, . . . , ak-1) may be used as parameters representing the domain of definition of a function and the desired parameters (b0, . . . , bk-1) may be used as parameters representing the range of the function.
The inverse sigmoid function σ−1 is as follows, for example.
σ−1(x)=−ln((1/x)−1)
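For instance, equally spaced range values bi can be chosen first and the corresponding domain points derived with σ−1, as in the following sketch (the equal spacing and the value of k are illustrative choices):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_inv(y: float) -> float:
    # sigma^{-1}(y) = -ln(1/y - 1), defined for 0 < y < 1
    return -math.log(1.0 / y - 1.0)

k = 9
b = [(i + 1) / (k + 1) for i in range(k)]  # range: 0.1, 0.2, ..., 0.9
a = [sigmoid_inv(bi) for bi in b]          # derived domain points
```

The derived domain points a are densest near x=0, where the sigmoid is steepest, which matches the observation that the accuracy of approximation near x=0 matters most.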
Using the inverse function makes it possible to easily calculate the domain of definition and the range of a function whose gradient changes sharply like the sigmoid function.
Hereinafter, a secure logistic regression calculation system 20 will be described with reference to
As shown in
By cooperative computations which are performed by the W′ secure logistic regression calculation apparatuses 200i, the secure logistic regression calculation system 20 implements the secure logistic regression calculation algorithm which is a multi-party protocol. Thus, an initializing means 210 (which is not shown in the drawing) of the secure logistic regression calculation system 20 is configured with the initializing units 2101, . . . , 210W′, an error calculating means 220 (which is not shown in the drawing) is configured with the error calculating units 2201, . . . , 220W′, a model parameter updating means 230 (which is not shown in the drawing) is configured with the model parameter updating units 2301, . . . , 230W′, and a convergence condition judging means 240 (which is not shown in the drawing) is configured with the condition judging units 2401, . . . , 240W′.
The secure logistic regression calculation system 20 calculates the share [[w→]] of the model parameter w→ of the logistic regression model from the share [[xi→]] (0≤i≤m−1, where m is an integer greater than or equal to 1) of the data xi→ on the explanatory variable and the share [[yi]] (0≤i≤m−1) of the data yi on the response variable (see
The initializing means 210 sets a share [[w0→]] of the initial value w0→ of the model parameter w→ (S210). Specifically, the initializing means 210 only has to set the share [[w0→]] of the appropriate initial value w0→ recorded on the recording unit 290i in advance. This corresponds to Step 1 of the secure logistic regression calculation algorithm of
The error calculating means 220 calculates, for i=0, . . . , m−1, [[bi]] by [[bi]]=hpsum([[wt→]], [[(1, xi→)]]) from a share [[wt→]] of a value wt→ of the model parameter w→ obtained as a result of t updates and the share [[xi→]], calculates ([[c0]], . . . , [[cm-1]]) by ([[c0]], . . . , [[cm-1]])=Sigmoid(([[b0]], . . . , [[bm-1]])) from [[bi]] (0≤i≤m−1), and calculates, for i=0, . . . , m−1, an error [[di]] by [[di]]=[[ci]]−[[yi]] from the share [[yi]] and an i-th element [[ci]] of ([[c0]], . . . , [[cm-1]]) (S220). This corresponds to Steps 5 to 15 of the secure logistic regression calculation algorithm of
For j=0, . . . , n, the model parameter updating means 230 calculates [[e]] by [[e]]=Σi=0m-1[[di]][[xi, j]] from the error [[di]] (0≤i≤m−1) calculated in S220 and a j-th element [[xi, j]] (0≤i≤m−1) of the share [[xi→]] and calculates, from a j-th element [[wj, t]] of the share [[wt→]] and [[e]] by [[wj, t+1]]=[[wj, t]]−η(1/m)[[e]], a j-th element [[wj, t+1]] of a share [[wt+1→]] of a value wt+1→ of the model parameter w→ obtained as a result of t+1 updates (S230). This corresponds to Steps 16 to 21 of the secure logistic regression calculation algorithm of
The convergence condition judging means 240 judges whether or not the preset repetition condition t<T for model parameter update is satisfied. If the condition is satisfied, the processing from S220 to S230 is repeated; if the repetition condition is not satisfied (that is, when the number of learning processes reaches the predetermined number of learning processes T), a share [[wT-1→]] is output as the share [[w→]] of the model parameter w→ and the processing is ended (S240).
According to the invention of the present embodiment, it is possible to perform secure computation of a model parameter of a logistic regression model with high speed and precision.
The logistic regression analysis may be conducted using secure batch mapping and a right shift. Secure batch mapping has the property of being able to adjust numerical precision. However, when it is necessary to perform calculation with higher precision and/or when large learning data is used, precision may be further adjusted using a right shift. It is to be noted that, when simply written as a right shift, it means an arithmetic right shift.
It is to be noted that, in a first modification of the second embodiment, secure computation is performed on a fixed-point number to reduce processing cost. That is, it is assumed that, when a value such as w, which is an object on which secure computation is to be performed, contains a decimal fraction, secure computation is performed on a value obtained by multiplying the above value by 2b. In this case, by calculating a value obtained by dividing the result of the secure computation by 2b, the result of secure computation corresponding to the value on which secure computation was to be originally performed is obtained. b in this case is referred to as precision.
That is, in the first modification of the second embodiment, when an arbitrary value, which is an object on which secure computation is to be performed, is assumed to be p and the precision of p is written as b_p [bit], it means that a share [[p]] of p is actually a share [[p×2b_p]] of a fixed-point number.
Moreover, when an arbitrary vector, which is an object on which secure computation is to be performed, is assumed to be q→, an element of q→ is assumed to be qi, and the precision of q→ is written as b_q [bit], it means that a share [[q→]] of q→ is actually made up of a share [[qi×2b_q]] of a fixed-point number.
It is to be noted that multiplication by 2b is performed by, for example, a secure logistic regression calculation apparatus 200 at an appropriate time. An example of an appropriate time is when data, which is an object on which secure computation is to be performed, is registered on the secure logistic regression calculation apparatus 200, when an object on which secure computation is to be performed is converted to a share from plaintext, or before secure computation.
Furthermore, division by 2b is performed by, for example, the secure logistic regression calculation apparatus 200 at an appropriate time. An example of an appropriate time is when the result of secure computation is returned to a client who has made a request to perform secure computation, when a share is converted to plaintext, or after secure computation.
For this reason, in the following description, multiplication by 2b and division by 2b are not mentioned in some cases.
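The fixed-point convention above can be illustrated by a minimal plaintext sketch. The function names `encode` and `decode` are illustrative only and do not appear in the embodiment; the point is simply that a value is multiplied by 2b before computation and divided by 2b afterwards:

```python
def encode(value: float, b: int) -> int:
    """Multiply by 2^b and round: the integer that secure computation actually handles."""
    return round(value * (1 << b))

def decode(fixed: int, b: int) -> float:
    """Divide by 2^b: recover the value on which computation was originally to be performed."""
    return fixed / (1 << b)

b = 10                      # precision b [bit]
w = 0.3125
fw = encode(w, b)           # 0.3125 * 2^10 = 320
print(fw, decode(fw, b))    # 320 0.3125
```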
The precision of w→, w0→, wt→, wt+1→, and eta_grad_ave_shift is written as b_w [bit], the precision of xi→ (0≤i≤m−1) is written as b_x [bit], the precision of yi (0≤i≤m−1), ci (0≤i≤m−1), and di (0≤i≤m−1) is written as b_y [bit], the precision of η is written as b_η [bit], the precision of the reciprocal 1/m of the number of pieces of learning data is written as b_m+H [bit], the precision of bi (0≤i≤m−1) is written as b_w+b_x [bit], the precision of e is written as b_y+b_x [bit], the precision of eta_grad is written as b_y+b_x+b_η [bit], the precision of eta_grad_shift is written as b_tmp [bit], and the precision of eta_grad_ave is written as b_tmp+b_m+H [bit]. b_w, b_x, b_y, b_η, b_m, H, and b_tmp are positive integers which are determined in advance in accordance with the performance of a computer that performs secure computation. Here, H satisfies H≥log2(m).
Moreover, since it is not easy to perform division in secure computation, calculation of 1/m is performed in advance using plaintext by using the fact that the number of pieces of data m is obtained as plaintext. By doing so, it is possible to perform calculation as multiplication by 1/m in secure computation. In doing so, to guarantee the precision b_m [bit] of 1/m, it is necessary to multiply 1/m by 2b_m+H.
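Because m is available as plaintext, 1/m can be encoded in the clear before any secure computation takes place. A minimal sketch of this precomputation, with illustrative parameter values:

```python
import math

m = 100_000                  # number of pieces of learning data (plaintext)
b_m = 17                     # desired precision of 1/m
H = math.ceil(math.log2(m))  # H >= log2(m), here 17

# 1/m is encoded with b_m + H bits so that b_m bits of accuracy survive
# the sum over m terms in the later multiplication.
inv_m = round((1 / m) * (1 << (b_m + H)))
print(inv_m)                 # the fixed-point integer used for 1/m
```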
Hereinafter, a difference from the secure logistic regression calculation apparatus 200 of the second embodiment will be mainly described.
The processing which is performed by the initializing means 210 in S210 and the processing which is performed by the error calculating means 220 in S220 are the same as those described above. In the first modification of the second embodiment, the model parameter updating means 230 performs the following processing.
For j=0, . . . , n, the model parameter updating means 230 calculates [[e]] by [[e]]=Σi=0m-1[[di]][[xi, j]] from the error [[di]] (0≤i≤m−1) calculated in S220 and a j-th element [[xi, j]] (0≤i≤m−1) of the share [[xi→]].
Next, the model parameter updating means 230 calculates [[eta_grad]]=η[[e]] from η and [[e]]. When a maximum value of i is m−1 (that is, the number of pieces of data is m), the amount of memory required by eta_grad is b_y+b_x+b_η+ceil(log2(m)) [bit]. Since this value can grow large enough to overflow, rshift is performed in the next processing. Here, ceil means rounding up.
Then, the model parameter updating means 230 calculates [[eta_grad_shift]]=rshift([[eta_grad]], b_y+b_x+b_η−b_tmp) from [[eta_grad]]. Here, rshift(a, b) means shifting a value a to the right by b [bit] by performing an arithmetic right shift. Since eta_grad requires b_y+b_x+b_η+ceil(log2(m)) [bit] of memory (for example, 55 bits when b_y=14, b_x=17, b_η=7, and m=100,000), if eta_grad_ave were calculated without performing a right shift, this value would further grow by b_m+H [bit] (for example, b_m=17 and H=17), which may cause the value to overflow. In general, depending on the performance of a machine, the value overflows when it exceeds 64 bits. It is to be noted that sometimes only a smaller amount of memory can be handled in secure computation.
Next, the model parameter updating means 230 calculates [[eta_grad_ave]]=(1/m)[[eta_grad_shift]] from [[eta_grad_shift]].
Then, the model parameter updating means 230 calculates [[eta_grad_ave_shift]]=rshift([[eta_grad_ave]], b_tmp+b_m+H−b_w) from [[eta_grad_ave]].
Finally, the model parameter updating unit 230 calculates, from a j-th element [[wj, t]] of the share [[wt→]] and [[eta_grad_ave_shift]] by [[wj, t+1]]=[[wj, t]]−[[eta_grad_ave_shift]], a j-th element [[wj, t+1]] of a share [[wt+1→]] of a value wt+1→ of the model parameter w→ obtained as a result of t+1 updates.
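The bookkeeping of the update sequence above (multiplication by η, a right shift down to b_tmp, multiplication by 1/m, and a right shift down to b_w) can be traced in the clear. The sketch below uses Python integers as stand-ins for shares and purely illustrative parameter values; it models the fixed-point arithmetic only, not the multi-party protocol itself:

```python
# Illustrative precisions and data (not the values of the embodiment).
b_w, b_x, b_y, b_eta, b_m, b_tmp = 20, 17, 14, 7, 17, 20
m, H = 4, 2                                  # H >= log2(m)

eta_fp   = round(0.5  * (1 << b_eta))        # eta = 0.5,   precision b_eta
inv_m_fp = round((1/m) * (1 << (b_m + H)))   # 1/m,         precision b_m + H
e_fp     = round(0.25 * (1 << (b_y + b_x)))  # e = 0.25,    precision b_y + b_x
w_fp     = round(1.0  * (1 << b_w))          # w_{j,t} = 1, precision b_w

eta_grad           = eta_fp * e_fp                            # b_y+b_x+b_eta
eta_grad_shift     = eta_grad >> (b_y + b_x + b_eta - b_tmp)  # down to b_tmp
eta_grad_ave       = inv_m_fp * eta_grad_shift                # b_tmp+b_m+H
eta_grad_ave_shift = eta_grad_ave >> (b_tmp + b_m + H - b_w)  # down to b_w
w_next = w_fp - eta_grad_ave_shift                            # w_{j,t+1}

print(w_next / (1 << b_w))   # 0.96875 = 1 - 0.5 * (1/4) * 0.25
```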
The processing which is performed by the condition judging unit 240 is the same as that described above.
It is to be noted that the following processing, for example, only has to be performed to implement an arithmetic right shift by means of a logical right shift. It is to be noted that, in the following description, rlshift(a, b) means shifting a value a to the right by b [bit] by performing a logical right shift. In the following description, it is explicitly indicated that secure computation has been performed on a value obtained by multiplication by 2b. Therefore, for example, [[B×2n]] literally means that a value B×2n has been shared.
When a right shift of a value a whose precision is n [bit] is performed to change the precision to m [bit], addition of a value A, whose precision is n [bit] and which makes |a|≤A hold, is performed. Assume that n>m holds.
B×2n=a×2n+A×2n
A logical right shift is performed to shift B×2n to the right by n-m [bit].
B×2m=rlshift(B×2n,n−m)
Subtraction of A added before the shift is performed.
a×2m=B×2m−A×2m
That is, since the precision of eta_grad is b_y+b_x+b_η, the model parameter updating unit 230 first sets a value A whose precision is b_y+b_x+b_η and which makes |eta_grad|≤A hold. Then, the model parameter updating unit 230 calculates a share [[B×2n]] of B×2n that satisfies B×2n=eta_grad×2n+A×2n. The model parameter updating unit 230 then calculates a share [[B×2m]] of B×2m that satisfies B×2m=rlshift([[B×2n]], b_y+b_x+b_η−b_tmp) based on the calculated share [[B×2n]]. Then, the model parameter updating unit 230 calculates a share [[a×2m]] of a×2m that satisfies a×2m=B×2m−A×2m based on the calculated share [[B×2m]]. The calculated [[a×2m]] is [[eta_grad_shift]] in the above description.
Moreover, since the precision of eta_grad_ave is b_tmp+b_m+H, the model parameter updating unit 230 first sets a value A whose precision is b_tmp+b_m+H and which makes |eta_grad_ave|≤A hold. Then, the model parameter updating unit 230 calculates a share [[B×2n]] of B×2n that satisfies B×2n=eta_grad_ave×2n+A×2n. The model parameter updating unit 230 then calculates a share [[B×2m]] of B×2m that satisfies B×2m=rlshift([[B×2n]], b_tmp+b_m+H−b_w) based on the calculated share [[B×2n]]. Then, the model parameter updating unit 230 calculates a share [[a×2m]] of a×2m that satisfies a×2m=B×2m−A×2m based on the calculated share [[B×2m]]. The calculated [[a×2m]] is [[eta_grad_ave_shift]] in the above description.
As described above, by adopting a right shift, it is possible to perform calculation with higher precision and perform processing of large-scale learning data.
It can be said that, since batch mapping, which is not common in plaintext architectures, is used also in the first modification, its specific processing differs from the processing that would be obtained by simply carrying out the plaintext processing as it is by secure computation; in this sense, the first modification is not obvious.
Moreover, specific fixed-point logistic regression including adjustment of precision has never been configured under conditions where the number of bits, which is determined by the performance of a computing machine used, is limited.
Since batch mapping has the effect of making a precision adjustment, this precision adjustment is not the same as a precision adjustment which is performed in the case of plaintext, and has a property that is unique to secure computation.
These points apply to second and subsequent modifications, which will be described below.
A right shift (rshift) may be replaced by approximation of η/m by a power of 2. In the first modification of the second embodiment, it is possible to make fine settings by increasing the precision of the learning rate η. However, in the first modification of the second embodiment, two right shift operations are necessary and the values of b_y+b_x+b_η and b_tmp+b_m+H tend to be large, which may reduce the efficiency of calculation. For this reason, in a second modification of the second embodiment, processing is performed using η/m, combining multiplication by η and multiplication by 1/m, so that the number of processing operations is reduced and calculation efficiency is increased.
Although the second modification of the second embodiment achieves higher processing efficiency than the first modification of the second embodiment, it does not allow the learning rate η to be finely set. However, this poses little problem because a difference in learning rate does not affect very much the quality of the parameter obtained as the final result.
It is to be noted that, in the second modification of the second embodiment as well, secure computation is performed on a fixed-point number to reduce processing cost. That is, it is assumed that, when a value such as w, which is an object on which secure computation is to be performed, contains a decimal fraction, secure computation is performed on a value obtained by multiplying the above value by 2b. In this case, by calculating a value obtained by dividing the result of the secure computation by 2b, the result of secure computation corresponding to the value on which secure computation was to be originally performed is obtained. b in this case is referred to as precision.
That is, in the second modification of the second embodiment, when an arbitrary value, which is an object on which secure computation is to be performed, is assumed to be p and the precision of p is written as b_p [bit], it means that a share [[p]] of p is actually a share [[p×2b_p]] of a fixed-point number.
Moreover, when an arbitrary vector, which is an object on which secure computation is to be performed, is assumed to be q→, an element of q→ is assumed to be qi, and the precision of q→ is written as b_q [bit], it means that a share [[q→]] of q→ is actually made up of a share [[qi×2b_q]] of a fixed-point number.
It is to be noted that multiplication by 2b is performed by, for example, a secure logistic regression calculation apparatus 200 at an appropriate time. An example of an appropriate time is when data, which is an object on which secure computation is to be performed, is registered on the secure logistic regression calculation apparatus 200, when an object on which secure computation is to be performed is converted to a share from plaintext, or before secure computation.
Furthermore, division by 2b is performed by, for example, the secure logistic regression calculation apparatus 200 at an appropriate time. An example of an appropriate time is when the result of secure computation is returned to a client who has made a request to perform secure computation, when a share is converted to plaintext, or after secure computation.
For this reason, in the following description, multiplication by 2b and division by 2b are not mentioned in some cases.
The precision of w→, w0→, wt→, wt+1→, and eta_grad_ave is written as b_w [bit], the precision of xi→ (0≤i≤m−1) is written as b_x [bit], the precision of yi (0≤i≤m−1), ci (0≤i≤m−1), and di (0≤i≤m−1) is written as b_y [bit], the precision of η is written as b_η [bit], the precision of bi (0≤i≤m−1) is written as b_w+b_x [bit], and the precision of e is written as b_y+b_x [bit]. b_w, b_x, b_y, and b_η are positive integers which are determined in advance in accordance with the performance of a computer that performs secure computation.
Hereinafter, a difference from the secure logistic regression calculation apparatus 200 of the second embodiment will be mainly described.
The processing which is performed by the initializing means 210 in S210 and the processing which is performed by the error calculating means 220 in S220 are the same as those described above. In the second modification of the second embodiment, the model parameter updating means 230 performs the following processing.
For j=0, . . . , n, the model parameter updating means 230 calculates [[e]] by [[e]]=Σi=0m-1[[di]][[xi, j]] from the error [[di]] (0≤i≤m−1) calculated in S220 and a j-th element [[xi, j]] (0≤i≤m−1) of the share [[xi→]]. Since the memory required by e is b_y+b_x+ceil(log2(m))=48 [bit] when b_y=14, b_x=17, and m=100,000, there is more room in memory as compared with the first modification. That is, larger learning data can be handled because overflow does not occur even when the number of pieces of data m is increased.
Next, the model parameter updating means 230 calculates [[eta_grad_ave]]=rshift([[e]], X+b_y+b_x−b_w) from [[e]]. Here, X is a value such that division by 2X approximates multiplication by η/m, and is calculated as follows, for example. floor is a function representing rounding down. X is calculated by the secure logistic regression calculation apparatus 200 before secure computation.
X=−(floor(log2(η/m)))
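A small plaintext sketch of this approximation, with illustrative values: X is chosen so that 2−X is the largest power of two not exceeding η/m, after which a single arithmetic right shift by X+b_y+b_x−b_w replaces the two multiplications of the first modification:

```python
import math

eta, m = 0.01, 100_000
X = -math.floor(math.log2(eta / m))       # eta/m = 1e-7 -> X = 24

b_w, b_x, b_y = 20, 17, 14
e_fp = round(500.0 * (1 << (b_y + b_x)))  # gradient sum e, precision b_y + b_x
# One shift both rescales to precision b_w and applies the factor 2^-X.
eta_grad_ave = e_fp >> (X + b_y + b_x - b_w)
print(eta_grad_ave)                       # precision b_w; approx. 500 * 2^-24
```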
Finally, the model parameter updating unit 230 calculates, from a j-th element [[wj, t]] of the share [[wt→]] and [[eta_grad_ave]] by [[wj, t+1]]=[[wj, t]]−[[eta_grad_ave]], a j-th element [[wj, t+1]] of a share [[wt+1→]] of a value wt+1→ of the model parameter w→ obtained as a result of t+1 updates.
The processing which is performed by the condition judging unit 240 is the same as that described above.
As in the case of the first modification of the second embodiment, the following processing, for example, only has to be performed to convert a logical right shift to an arithmetic right shift. It is to be noted that consideration has to be given to X in the second modification of the second embodiment. In the following description, it is explicitly indicated that secure computation has been performed on a value obtained by multiplication by 2b. Therefore, for example, [[B×2n]] literally means that a value B×2n has been shared.
When a right shift of a value a whose precision is n [bit] is performed to change the precision to m [bit] and, at the same time, division by 2X is performed, addition of a value A, whose precision is n [bit] and which makes |a|≤A hold, is performed.
B×2n=a×2n+A×2n
A logical right shift is performed to shift B×2n to the right by n-m+X [bit].
B×2m/(2X)=rlshift(B×2n, n−m+X)
Subtraction of A added before the shift is performed.
a×2m/2X=B×2m/2X−A×2m/2X
Since the precision of e is b_y+b_x, the model parameter updating unit 230 first sets a value A whose precision is b_y+b_x and which makes |e|≤A hold. Then, the model parameter updating unit 230 calculates a share [[B×2n]] of B×2n that satisfies B×2n=e×2n+A×2n. The model parameter updating unit 230 then calculates a share [[B×2m/2X]] of B×2m/2X that satisfies B×2m/2X=rlshift([[B×2n]], X+b_y+b_x−b_w) based on the calculated share [[B×2n]]. Then, the model parameter updating unit 230 calculates a share [[a×2m/2X]] of a×2m/2X that satisfies a×2m/2X=B×2m/2X−A×2m/2X based on the calculated share [[B×2m/2X]]. The calculated [[a×2m/2X]] is [[eta_grad_ave]] in the above description.
As described above, by adopting a right shift, it is possible to perform calculation with higher precision and perform processing of large-scale learning data.
When logistic regression of the second embodiment is performed, normalization may be performed using a right shift with the data kept concealed. Normalization, which is preprocessing that makes data with different ranges, such as height and age, fall within a specific range such as from 0 to 1, is widely used in machine learning in general. Therefore, in order to make machine learning on secure computation practical, normalization needs to be able to be performed within secure computation.
When a certain data string x→=x0, x1, . . . , xm is normalized such that 0≤^xi≤1 holds (^xi is an arbitrary element of ^x→), a shift amount S having the following property is calculated using a maximum value maxx→ and a minimum value minx→ of x→.
2S≥max(|maxx→|, |minx→|) . . . (A)
max(a, b) represents a value a or a value b, whichever is greater. By performing the following calculation using the shift amount S obtained by the above-described calculation, normalization can be performed such that 0≤^xi≤1 holds.
^x→=rshift(x→, S)
When a model parameter ^w→=^w0, ^w1, . . . , ^wn obtained by performing learning using ^x→ is converted to a parameter w→ which would be obtained if calculation were performed without normalization, it is only necessary to perform the following calculation.
w→=^w→×2−S . . . (B)
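A plaintext sketch of this normalization, with illustrative data; division by 2S stands in for rshift on shares:

```python
import math

x = [170.0, 65.0, 23.0]                  # e.g. heights; illustrative values
# Formula (A): pick S with 2^S >= max(|max(x)|, |min(x)|).
S = math.ceil(math.log2(max(abs(max(x)), abs(min(x)))))
x_hat = [v / (1 << S) for v in x]        # ^x = rshift(x, S)
assert all(0 <= v <= 1 for v in x_hat)   # normalized into [0, 1]

w_hat = 0.8                              # a parameter learned on ^x
w = w_hat / (1 << S)                     # formula (B): w = ^w * 2^-S
print(S, w)
```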
Hereinafter, a difference from the secure logistic regression calculation apparatus 200 of the second embodiment will be mainly described.
The secure logistic regression calculation apparatus 200 of a third modification of the second embodiment first calculates, for each explanatory variable xj→, a shift amount Sj (j=0, 1, . . . , n) that satisfies the above-described formula (A). Then, the secure logistic regression calculation apparatus 200 calculates [[^xj→]]=rshift([[xj→]], Sj) using the calculated shift amount Sj and [[xj→]]. The secure logistic regression calculation apparatus 200 calculates a share [[^w→]] of the model parameter by performing the above-described processing which is performed by the initializing means 210, the error calculating means 220, and the model parameter updating means 230 using [[^xi→]] in place of [[xi→]]. Finally, the secure logistic regression calculation apparatus 200 calculates [[w→]]=[[^w→]]×2−S (applying the corresponding Sj to each element) using [[^w→]].
As described above, by performing normalization, it is possible to make machine learning on secure computation actually operational.
When logistic regression of the second embodiment is performed, the Nesterov accelerated gradient (NAG) method may be applied. Performing parameter learning using a simple gradient descent method results in low learning efficiency; therefore, in actuality, an optimization technique is often adopted to increase learning efficiency. However, since many optimization techniques include calculations, such as division, which are difficult for secure computation to perform, it is not easy to adopt an optimization technique when performing parameter learning in secure computation. Under these circumstances, the NAG method is an optimization technique that can be implemented only by addition/subtraction and multiplication, which makes it possible to adopt the NAG method into secure computation at low cost.
A parameter learning formula in the gradient descent method is as follows.
wj,t+1=wj,t−η(1/m)Σi=0m-1(f(xi→)−yi)xi,j
f(xi→)=σ(w→·(1, xi→))
On the other hand, a parameter learning formula which is used when the NAG method is applied is as follows.
vj,t+1=αvj,t−η(1/m)Σi=0m-1(f(xi→)−yi)xi,j
wj,t+1=wj,t−vj,t+1
f(xi→)=σ(θ→·(1, xi→))
θ→=w→−αv→
v→ is a weight vector that is newly added in a fourth modification of the second embodiment. α is a parameter (an attenuation factor) which makes 0≤α≤1 hold, and can be arbitrarily set. The fourth modification of the second embodiment, which is a modification that is obtained when the NAG method is applied to the second modification of the second embodiment, is as follows.
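A plaintext sketch of one NAG update as written above, with the sigmoid and data in the clear; shares and fixed-point scaling are omitted, and all concrete values are illustrative:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

eta, alpha, m, n = 0.1, 0.9, 2, 2
x = [[1.0, 2.0], [0.5, -1.0]]            # m samples, n features each
y = [1.0, 0.0]
w = [0.0] * (n + 1)                      # model parameter (bias first)
v = [0.0] * (n + 1)                      # weight vector of the NAG method

# theta = w - alpha*v, then errors d_i = f(x_i) - y_i with f evaluated at theta.
theta = [w[j] - alpha * v[j] for j in range(n + 1)]
f = [sigmoid(sum(t * z for t, z in zip(theta, [1.0] + xi))) for xi in x]
d = [f[i] - y[i] for i in range(m)]

for j in range(n + 1):
    e = sum(d[i] * ([1.0] + x[i])[j] for i in range(m))
    v[j] = alpha * v[j] - eta * (1.0 / m) * e   # v_{j,t+1}
    w[j] = w[j] - v[j]                          # w_{j,t+1}, per the formulas above

print(v, w)
```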
Hereinafter, a difference from the secure logistic regression calculation apparatus 200 of the second modification of the second embodiment will be mainly described. Overlapping explanations of portions similar to those of the second modification of the second embodiment will be omitted.
The precision of w→, w0→, wt→, wt+1→, and eta_grad_ave is written as b_w [bit], the precision of xi→ (0≤i≤m−1) is written as b_x [bit], the precision of yi (0≤i≤m−1), ci (0≤i≤m−1), and di (0≤i≤m−1) is written as b_y [bit], the precision of bi (0≤i≤m−1) is written as b_w+b_x [bit], the precision of e is written as b_y+b_x [bit], the precision of the attenuation factor α is written as b_α [bit], the precision of v→, v0→, vt→, and vt+1→ is written as b_v [bit], and the precision of alpha_v is written as b_v+b_α [bit]. b_w, b_x, b_y, and b_α are positive integers which are determined in advance in accordance with the performance of a computer that performs secure computation. It is to be noted that b_v+b_α may be set such that b_v+b_α=b_w holds. This makes it easy to perform processing.
In addition to the share [[w0→]] of the initial value w0→ of the model parameter w→, the initializing means 210 sets a share [[v0→]] of an initial value v0→ of the weight vector v→ (S210). Specifically, the initializing means 210 only has to set the share [[w0→]] of the appropriate initial value w0→ recorded on the recording unit 290i in advance and the share [[v0→]] of the appropriate initial value v0→ recorded on the recording unit 290i in advance.
The error calculating means 220 first calculates θ→=w→−αv→. Then, the error calculating means 220 performs calculation of an error using θ→ in place of w→ in the same manner as that described above.
In the fourth modification of the second embodiment, the model parameter updating means 230 performs the following processing.
For j=0, . . . , n, the model parameter updating means 230 calculates [[e]] by [[e]]=Σi=0m-1[[di]][[xi, j]] from the error [[di]] (0≤i≤m−1) calculated in S220 and a j-th element [[xi, j]] (0≤i≤m−1) of the share [[xi→]].
The model parameter updating means 230 calculates [[eta_grad_ave]]=rshift([[e]], X+b_y+b_x−b_w) from [[e]] in the same manner as that of the second modification of the second embodiment.
The model parameter updating means 230 calculates [[alpha_v]]=α[[vj, t]] from α and a j-th element [[vj, t]] (0≤j≤n) of a share [[vt→]].
The model parameter updating means 230 calculates, by [[vj, t+1]]=[[alpha_v]]−[[eta_grad_ave]], a j-th element [[vj, t+1]] of a share [[vt+1→]] of a value vt+1→ of the weight vector v→ obtained as a result of t+1 updates.
The model parameter updating unit 230 calculates, from a j-th element [[wj, t]] of the share [[wt→]] and [[vj, t+1]] by [[wj, t+1]]=[[wj, t]]−[[vj, t+1]], a j-th element [[wj, t+1]] of a share [[wt+1→]] of a value wt+1→ of the model parameter obtained as a result of t+1 updates.
The model parameter updating means 230 calculates [[vj, t+1]]=rshift([[vj, t+1]], b_α) from [[vj, t+1]]. That is, the model parameter updating means 230 uses, as a new share [[vj, t+1]], a share [[vj, t+1]] of a value obtained by shifting vj, t+1 to the right by b_α.
The processing which is performed by the condition judging unit 240 is the same as that described above.
As described above, by using the optimization technique based on the NAG method, it is possible to increase learning efficiency.
Each apparatus according to the present invention has, as a single hardware entity, for example, an input unit to which a keyboard or the like is connectable, an output unit to which a liquid crystal display or the like is connectable, a communication unit to which a communication device (for example, communication cable) capable of communication with the outside of the hardware entity is connectable, a central processing unit (CPU, which may include cache memory and/or registers), RAM or ROM as memories, an external storage device which is a hard disk, and a bus that connects the input unit, the output unit, the communication unit, the CPU, the RAM, the ROM, and the external storage device so that data can be exchanged between them. The hardware entity may also include, for example, a device (drive) capable of reading and writing a recording medium such as a CD-ROM as desired. A physical entity having such hardware resources may be a general-purpose computer, for example.
The external storage device of the hardware entity has stored therein programs necessary for embodying the aforementioned functions and data necessary in the processing of the programs (in addition to the external storage device, the programs may be prestored in ROM as a storage device exclusively for reading out, for example). Also, data or the like resulting from the processing of these programs are stored in the RAM and the external storage device as appropriate.
In the hardware entity, the programs and data necessary for processing of the programs stored in the external storage device (or ROM and the like) are read into memory as necessary to be interpreted and executed/processed as appropriate by the CPU. As a consequence, the CPU embodies predetermined functions (the components represented above as units, means, or the like).
The present invention is not limited to the above embodiments, but modifications may be made within the scope of the present invention. Also, the processes described in the embodiments may be executed not only in a chronological sequence in accordance with the order of their description but may be executed in parallel or separately according to the processing capability of the apparatus executing the processing or any necessity.
As already mentioned, when the processing functions of the hardware entities described in the embodiments (the apparatuses of the present invention) are to be embodied with a computer, the processing details of the functions to be provided by the hardware entities are described by a program. By the program then being executed on the computer, the processing functions of the hardware entity are embodied on the computer.
The program describing the processing details can be recorded on a computer-readable recording medium. The computer-readable recording medium may be any kind, such as a magnetic recording device, an optical disk, a magneto-optical recording medium, or a semiconductor memory. More specifically, a magnetic recording device may be a hard disk device, flexible disk, or magnetic tape; an optical disk may be a DVD (digital versatile disc), a DVD-RAM (random access memory), a CD-ROM (compact disc read only memory), or a CD-R (recordable)/RW (rewritable); a magneto-optical recording medium may be an MO (magneto-optical disc); and a semiconductor memory may be EEP-ROM (electronically erasable and programmable-read only memory), for example.
Also, the distribution of this program is performed by, for example, selling, transferring, or lending a portable recording medium such as a DVD or a CD-ROM on which the program is recorded. Furthermore, a configuration may be adopted in which this program is distributed by storing the program in a storage device of a server computer and transferring the program to other computers from the server computer via a network.
The computer that executes such a program first, for example, temporarily stores the program recorded on the portable recording medium or the program transferred from the server computer in a storage device thereof. At the time of execution of processing, the computer then reads the program stored in the storage device thereof and executes the processing in accordance with the read program. Also, as another form of execution of this program, the computer may read the program directly from the portable recording medium and execute the processing in accordance with the program and, furthermore, every time the program is transferred to the computer from the server computer, the computer may sequentially execute the processing in accordance with the received program. Also, a configuration may be adopted in which the transfer of a program to the computer from the server computer is not performed and the above-described processing is executed by so-called application service provider (ASP)-type service by which the processing functions are implemented only by an instruction for execution thereof and result acquisition. Note that a program in this form shall encompass information that is used in processing by an electronic computer and acts like a program (such as data that is not a direct command to a computer but has properties prescribing computer processing).
Further, although the hardware entity was described as being configured via execution of a predetermined program on a computer in this form, at least some of these processing details may instead be embodied with hardware.
Number | Date | Country | Kind
2018-189297 | Oct 2018 | JP | national
2019-003285 | Jan 2019 | JP | national

Filing Document | Filing Date | Country | Kind
PCT/JP2019/038966 | 10/2/2019 | WO | 00