This application is a National Stage Entry of PCT/JP2020/001631 filed on Jan. 20, 2020, the contents of all of which are incorporated herein by reference, in their entirety.
The present invention relates to a secure computation system, a secure computation server apparatus, a secure computation method, and a secure computation program.
In recent years, research and development on a technique referred to as secure computation has been actively carried out (for example, see Patent Literature (PTL) 1 or Non-Patent Literature (NPL) 1). The secure computation is a technique for performing a predetermined processing while concealing the computation processes and a result thereof from a third party. As a typical technique for realizing the secure computation, a Multi-Party Computation (MPC) technique is known. According to the Multi-Party Computation technique, secret data are distributedly arranged (i.e., shared) at multiple servers (secure computation servers), and arbitrary operations can be performed while keeping the data concealed. Hereinafter, unless otherwise noted, the term “secure computation” means the “Multi-Party Computation technique”.
NPL 2 and NPL 3 disclose type conversion processings using the secure computation, such as a bit-decomposition and a bit-recomposition.
Each disclosure of the above literatures of Citation List is to be incorporated herein by reference thereto. The following analysis is given by the present inventor.
Incidentally, while arbitrary operations can be performed in the secure computation, there are some processings that are unique to the secure computation due to the special nature of sharing data among multiple secure computation servers. A “bit-injection (or padding)”, which is one of the type conversions disclosed in the above NPL 3, is also a processing unique to the secure computation. In the secure computation, the bit-injection (or padding) may be performed as a subroutine for realizing a specific application.
For example, in the type conversions such as the “bit-recomposition”, the bit-injections (or paddings) may be performed in parallel. Then, as the degree of parallelism of the bit-injections (or paddings) increases, the communication traffic (volume) among the secure computation servers also increases. For example, when the bit-injections (or paddings) are performed k times in parallel, the communication traffic becomes O(k^2), which grows much faster than the degree of parallelism itself. This has a significant impact on the communication traffic when the degree of parallelism of the bit-injections (or paddings) is large.
It is an object of the present invention to provide a secure computation system, a secure computation server apparatus, a secure computation method, and a secure computation program, which contribute to efficient processings, in view of the above circumstances.
According to a first aspect of the present invention, there is provided a secure computation system including at least three or more secure computation server apparatuses connected to each other through a network, wherein
According to a second aspect of the present invention, there is provided a secure computation server apparatus that is one of at least three or more secure computation server apparatuses connected to each other through a network, including:
According to a third aspect of the present invention, there is provided a secure computation method using at least three or more secure computation server apparatuses connected to each other through a network, including:
According to a fourth aspect of the present invention, there is provided a non-transitory computer-readable medium storing therein a secure computation program that causes at least three or more secure computation server apparatuses connected to each other through a network to execute processes, including:
According to each aspect of the present invention, there is provided a secure computation system, a secure computation server apparatus, a secure computation method, and a secure computation program, which contribute to efficient processings.
Hereinafter, example embodiments of the present invention will be described with reference to the drawings. However, the present invention is not limited to the example embodiments which will be described in the following. Also, it should be noted that the drawings are schematic drawings, and dimensional relationships of respective elements, ratios of respective elements, etc. may be different from those in reality. Dimensional relationships and ratios between elements may also differ from one drawing to another.
[Preparation]
Hereinafter, for explaining the present example embodiment, a notation will be defined and operation elements will be explained. The notation and the operation element(s) explained below will be commonly used in the explanations of each example embodiment.
A residue class ring modulo 2 is notated as Z_2, and a residue class ring modulo 2^k is notated as Z_{2^k}. Here, k is a natural number not smaller than 2. The secure computation servers may be referred to as P_i for indices i=1, 2, 3. XOR means an exclusive OR.
When a share for an arithmetic operation is denoted by [x] := x_1 + x_2 + x_3 mod 2^k (x, x_i ∈ Z_{2^k}), each secure computation server P_i has shared data [x]_i (i=1, 2, 3) as follows.
When a share for a logical operation is denoted by [[x]] := x_1 XOR x_2 XOR x_3 mod 2 (x, x_i ∈ Z_2), each secure computation server P_i has shared data [[x]]_i (i=1, 2, 3), as follows.
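For illustration only, the following is a minimal Python sketch of the two sharing types defined above (three additive shares over Z_{2^k} and three XOR shares over Z_2). The bit width k = 32 and the use of fresh random splitting are assumptions for this sketch; in the actual scheme each server P_i may additionally hold its share in a replicated form, which is not shown here.

```python
import secrets

K = 32            # bit width k (assumed value for illustration)
MOD = 1 << K      # modulus 2^k

def arithmetic_share(x):
    """Split x in Z_{2^k} into [x] = (x_1, x_2, x_3) with x_1 + x_2 + x_3 = x mod 2^k."""
    x1 = secrets.randbelow(MOD)
    x2 = secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    return (x1, x2, x3)

def arithmetic_reconstruct(shares):
    return sum(shares) % MOD

def xor_share(bit):
    """Split a bit x in Z_2 into [[x]] = (x_1, x_2, x_3) with x_1 XOR x_2 XOR x_3 = x."""
    x1 = secrets.randbelow(2)
    x2 = secrets.randbelow(2)
    x3 = bit ^ x1 ^ x2
    return (x1, x2, x3)

def xor_reconstruct(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

if __name__ == "__main__":
    assert arithmetic_reconstruct(arithmetic_share(123456)) == 123456
    assert xor_reconstruct(xor_share(1)) == 1
```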
A correlated randomness α_i (i=1, 2, 3) will be generated as follows. α_i = H(k_i, vid) XOR H(k_{i+1}, vid)
The α_i generated in this manner can be regarded as random numbers, and the following relationship holds.
α_1 XOR α_2 XOR α_3 = 0
Note that a call for a processing of the correlated randomness is expressed by α_i ← CR(P_i, (k_i, k_{i+1}), vid).
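As a concrete illustration of the correlated randomness, the following Python sketch derives α_i = H(k_i, vid) XOR H(k_{i+1}, vid) with cyclic indices and checks that the three values XOR to zero. The instantiation of the pseudo random number generator H (a truncated SHA-256) and the example seed values are assumptions for this sketch only.

```python
import hashlib

def H(seed: bytes, vid: int, bit_len: int = 32) -> int:
    """Hypothetical instantiation of the shared pseudo random number generator H
    (SHA-256 truncated to bit_len bits); the concrete H is not specified in the text."""
    digest = hashlib.sha256(seed + vid.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << bit_len)

def correlated_randomness(seeds, vid):
    """alpha_i = H(k_i, vid) XOR H(k_{i+1}, vid), with the index i+1 taken cyclically."""
    return [H(seeds[i], vid) ^ H(seeds[(i + 1) % 3], vid) for i in range(3)]

seeds = [b"k1", b"k2", b"k3"]              # each P_i holds the pair (k_i, k_{i+1})
alphas = correlated_randomness(seeds, vid=7)
assert alphas[0] ^ alphas[1] ^ alphas[2] == 0   # alpha_1 XOR alpha_2 XOR alpha_3 = 0
```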
[Random Share]
A random share for the logical operation, [[r]] (r = r_1 XOR r_2 XOR r_3), will be generated as follows.
First, each r_i = H(k_i, vid) is generated using the seeds k_i (i=1, 2, 3) and the vid, which is a publicly opened value such as a counter, both of which are also used in the explanation of the correlated randomness. Each of the secure computation servers P_i holds the generated r_i as shared data [[r]]_i, as follows.
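The following short Python sketch illustrates the random-share generation, assuming a 1-bit output of H (an assumption; the concrete H and seeds are not specified in the text). Each party derives its own component r_i = H(k_i, vid) locally, and the XOR of the three components defines the random bit r, which is never opened in the protocol.

```python
import hashlib

def H1(seed: bytes, vid: int) -> int:
    """Hypothetical 1-bit output of the shared pseudo random number generator H."""
    return hashlib.sha256(seed + vid.to_bytes(8, "big")).digest()[0] & 1

seeds = [b"k1", b"k2", b"k3"]                # seed k_i held by P_i
r_shares = [H1(s, vid=42) for s in seeds]    # [[r]]_i = r_i = H(k_i, vid)
r = r_shares[0] ^ r_shares[1] ^ r_shares[2]  # r = r_1 XOR r_2 XOR r_3 (kept secret)
```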
[Bit-Injection (or Padding)]
A bit-injection (or padding) is a processing that receives the share [[x]] for the logical operation as an input and outputs the share [x] for the arithmetic operation. A call for a processing of the bit-injection (or padding) is expressed by [x] ← BitInjection([[x]]). The method(s) described in NPL 2, NPL 3 and/or NPL 4, for example, can be used as a concrete processing for the bit-injection (or padding). However, other appropriate processing(s) of the bit-injection (or padding) can be used in the example embodiment(s) of the invention.
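For clarity of the input/output relationship only, the following Python sketch shows the functionality realized by BitInjection: an XOR sharing of a bit is turned into an additive sharing of the same bit over Z_{2^k}. This reference reconstructs the bit in the clear, which a real protocol (e.g., those of NPL 2 to NPL 4) must never do; it is therefore only a specification of the functionality, not a secure implementation, and the value k = 32 is an assumption.

```python
import secrets

K = 32
MOD = 1 << K

def bit_injection_functionality(xor_shares):
    """Reference functionality of BitInjection: reconstruct the bit x from [[x]]
    and output fresh additive shares [x] over Z_{2^k}. A secure protocol computes
    the same output without ever reconstructing x."""
    x = 0
    for s in xor_shares:
        x ^= s
    a = secrets.randbelow(MOD)
    b = secrets.randbelow(MOD)
    return (a, b, (x - a - b) % MOD)

shares = (1, 0, 0)                       # [[x]] with x = 1
out = bit_injection_functionality(shares)
assert sum(out) % MOD == 1
```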
[Inner Product Calculation]
An inner product calculation is a processing that receives two vectors ([x_1], . . . , [x_n]), ([y_1], . . . , [y_n]) of the shares for the arithmetic operations related to two vectors x = (x_1, . . . , x_n), y = (y_1, . . . , y_n), as inputs, and outputs [Σ_{i=1}^{n} x_i y_i]. A call for the processing is expressed by [Σ_{i=1}^{n} x_i y_i] ← InnerProduct(([x_1], . . . , [x_n]), ([y_1], . . . , [y_n])). The method(s) described in NPL 1 and/or NPL 4, for example, can be used as a concrete processing for the inner product calculation. However, other appropriate processing(s) of the inner product calculation can be used in the example embodiment(s) of the invention.
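The following Python sketch illustrates one way such an inner product can be computed locally by three parties in the style of the semi-honest protocol of NPL 1, under the assumption that party i holds the share components with indices i and i+1 and that an additive zero-sharing (correlated randomness summing to 0 mod 2^k) is available. The index convention and the value k = 32 are assumptions for illustration, not a reproduction of the cited protocol.

```python
import secrets

K = 32
MOD = 1 << K

def share_vec(vec):
    """Additively share each entry into three components (x_1, x_2, x_3)."""
    comps = []
    for v in vec:
        a = secrets.randbelow(MOD)
        b = secrets.randbelow(MOD)
        comps.append((a, b, (v - a - b) % MOD))
    return comps

def local_inner_product(xc, yc, i, alpha_i):
    """Party i's local result: sum over entries of
    x_i*y_i + x_i*y_{i+1} + x_{i+1}*y_i, re-randomized with alpha_i."""
    acc = alpha_i
    for xs, ys in zip(xc, yc):
        xi, xi1 = xs[i], xs[(i + 1) % 3]
        yi, yi1 = ys[i], ys[(i + 1) % 3]
        acc = (acc + xi * yi + xi * yi1 + xi1 * yi) % MOD
    return acc

x = [3, 5, 7]
y = [2, 4, 6]
xc, yc = share_vec(x), share_vec(y)
# additive correlated randomness with alpha_1 + alpha_2 + alpha_3 = 0 mod 2^k
a1 = secrets.randbelow(MOD)
a2 = secrets.randbelow(MOD)
a3 = (-a1 - a2) % MOD
d = [local_inner_product(xc, yc, i, a) for i, a in enumerate((a1, a2, a3))]
assert sum(d) % MOD == sum(xv * yv for xv, yv in zip(x, y)) % MOD
```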
[Subtraction Between Arithmetic Shares]
A subtraction between arithmetic shares is a processing that receives two shares for the arithmetic operation, [a] and [b], as inputs, and outputs [a − b]. A call for a processing of the subtraction between arithmetic shares is expressed by [a − b] ← Sub([a], [b]). The method(s) described in NPL 1 and/or NPL 4, for example, can be used as a concrete processing for the subtraction between arithmetic shares. However, other appropriate processing(s) of the subtraction between arithmetic shares can be used in the example embodiment(s) of the invention.
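Because the additive sharing defined above is linear, a subtraction needs no communication: each party subtracts its own components. The following minimal Python sketch (k = 32 assumed) only illustrates this linearity.

```python
MOD = 1 << 32   # 2^k with k = 32 assumed

def sub_shares(a_shares, b_shares):
    """[a − b]_i = [a]_i − [b]_i mod 2^k; purely local, no communication needed."""
    return tuple((ai - bi) % MOD for ai, bi in zip(a_shares, b_shares))

# quick check: shares of 10 and 3 yield shares of 7
a = (4, 5, 1)
b = (1, 1, 1)
assert sum(sub_shares(a, b)) % MOD == 7
```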
[Resharing]
A resharing is a processing that receives a share for the logical operation [[x]] as an input, and outputs ([x_1], [x_2], [x_3]), where x = x_1 XOR x_2 XOR x_3 mod 2. A call for a processing of the resharing is expressed by ([x_1], [x_2], [x_3]) ← LocalReshare([[x]]). The method described in NPL 2, for example, can be used as a concrete processing for the resharing. However, other appropriate processing(s) of the resharing can be used in the example embodiment(s) of the invention.
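The following Python sketch only illustrates why such a resharing can be done without communication, under a simplified additive model in which the party holding the component x_j contributes it directly and the other parties contribute 0; the actual LocalReshare of NPL 2 operates over the replicated sharing and differs in detail.

```python
MOD = 1 << 32

def local_reshare(xor_shares):
    """For each XOR-share component x_j (known locally to its holder), output an
    arithmetic sharing [x_j] with no communication: the holder contributes x_j
    and the other parties contribute 0."""
    out = []
    for j, xj in enumerate(xor_shares):
        arith = [0, 0, 0]
        arith[j] = xj           # party j's additive component is x_j itself
        out.append(tuple(arith))
    return out

shares = (1, 1, 0)              # [[x]] with x = 1 XOR 1 XOR 0 = 0
for j, arith in enumerate(local_reshare(shares)):
    assert sum(arith) % MOD == shares[j]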
[Generating Random Numbers for Fraud Detection Related to Arithmetic Operation]
Generating random numbers for fraud detection related to the arithmetic operation is a processing that outputs [a], [b], [c], or ⊥, which means that a fraud has been detected. Here, a, b, and c are random values that satisfy a, b, c ∈ Z_{2^k} and c = ab. The processing is used for a fraud detection related to an arithmetic multiplication among shares. A call for the processing is expressed by ([a], [b], [c]) ← A-TripleGen. The method(s) described in NPL 5 and/or NPL 6, for example, can be used as a concrete processing. However, other appropriate processing(s) of the generating random numbers for fraud detection related to the arithmetic operation can be used in the example embodiment(s) of the invention.
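For context, the following Python sketch shows one standard way such a multiplication triple (a, b, c = ab) is consumed to check a shared product (Beaver's technique): the differences x − a and y − b are opened, and a linear combination of shares is checked to open to zero. This is a plaintext-level sketch over the additive shares and an assumption about usage; the fraud-detection protocols of NPL 5 and NPL 6 contain additional steps and operate over replicated shares.

```python
import secrets

MOD = 1 << 32

def share(v):
    a = secrets.randbelow(MOD)
    b = secrets.randbelow(MOD)
    return (a, b, (v - a - b) % MOD)

def open_(sh):
    return sum(sh) % MOD

def lin(*terms):
    """Linear combination of sharings: each term is a (coefficient, sharing) pair."""
    return tuple(sum(c * sh[i] for c, sh in terms) % MOD for i in range(3))

# multiplication triple c = a*b (produced by A-TripleGen in the protocols)
a, b = 11, 29
ta, tb, tc = share(a), share(b), share((a * b) % MOD)

# a multiplication result [z] = [x*y] to be checked
x, y = 123, 456
tx, ty, tz = share(x), share(y), share((x * y) % MOD)

rho = open_(lin((1, tx), (-1, ta)))      # open x − a
sigma = open_(lin((1, ty), (-1, tb)))    # open y − b

# z − c − sigma*a − rho*b − rho*sigma opens to 0 iff z = x*y
diff = lin((1, tz), (-1, tc), ((-sigma) % MOD, ta), ((-rho) % MOD, tb))
assert (open_(diff) - rho * sigma) % MOD == 0
```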
[Generating Random Numbers for Fraud Detection Related to Matrix Operation]
Generating random numbers for fraud detection related to a matrix operation is a processing that outputs [A], [B], [C], or ⊥, which means that a fraud has been detected. Here, A, B, and C are random matrices having values on Z_{2^k} as elements and satisfy C = AB. This processing is used for a fraud detection related to a matrix product operation. Here, a matrix product includes an inner product operation. A call for the processing is expressed by ([A], [B], [C]) ← M-TripleGen. The method described in NPL 4, for example, can be used as a concrete processing. However, other appropriate processing(s) of the generating random numbers for fraud detection related to the matrix operation can be used in the example embodiment(s) of the invention.
[Fraud Detectable (or Maliciously Secure) Bit-Injection (or Padding)]
A fraud detectable (or maliciously secure) bit-injection (or padding) is a processing that receives [[x]], ([a_j], [b_j], [c_j]), and ([a′_j], [b′_j], [c′_j]) as inputs, and outputs [x] or ⊥, which means that a fraud has been detected. A call for the processing is expressed by [x] ← m-BitInjection([[x]], ([a_j], [b_j], [c_j]), ([a′_j], [b′_j], [c′_j])). The concrete processing can be achieved, for example, by combining the method(s) described in NPL 2, NPL 3 and/or NPL 4 with the method(s) described in NPL 5 and/or NPL 6. However, other appropriate processing(s) of the fraud detectable (or maliciously secure) bit-injection (or padding) can be used in the example embodiment(s) of the invention.
[Fraud Detectable (or Maliciously Secure) Inner Product Calculation]
A fraud detectable (or maliciously secure) inner product calculation is a processing that receives two vectors ([x_1], . . . , [x_n]), ([y_1], . . . , [y_n]) of the shares for the arithmetic operations related to two vectors x = (x_1, . . . , x_n), y = (y_1, . . . , y_n), and ([A], [B], [C]) as inputs, and outputs [Σ_{i=1}^{n} x_i y_i] or ⊥, which means that a fraud has been detected. A call for the processing of the fraud detectable (or maliciously secure) inner product calculation is expressed by m-InnerProduct(([x_1], . . . , [x_n]), ([y_1], . . . , [y_n]), ([A], [B], [C])). The method described in NPL 4, for example, can be used as a concrete processing. However, other appropriate processing(s) of the fraud detectable (or maliciously secure) inner product calculation can be used in the example embodiment(s) of the invention.
Hereinafter, referring to
As illustrated in
In the secure computation system 100, provided with the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) of the above configuration, for a value(s) of x_0, . . . , x_{k−1} (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) inputted from one of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from the inputted value and/or a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 106_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), respectively.
In addition, in the secure computation system 100, provided with the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) of the above configuration, for the shares [[x_0]], . . . , [[x_{k−1}]] (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) stored in each of the share value storage parts 106_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 106_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), respectively.
Furthermore, in the secure computation system 100, provided with the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) of the above configuration, for shares [[x_0]], . . . , [[x_{k−1}]] (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) inputted from an apparatus other than the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 106_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), respectively.
It is noted that the share of the above computation result may be restored by transmitting and receiving the share among the first to the third secure computation server apparatuses 100_1 to 100_3. Alternatively, the share may be restored by transmitting the share to an outside other than the first to the third secure computation server apparatuses 100_1 to 100_3.
Next, a secure computation method according to the first example embodiment of the present invention will be described in detail. That is, an operation of the secure computation system 100 provided with the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), as described above, will be described.
(Step A1)
The secure computation system 100 stores seeds (ki, ki+1) in each of the seed storage parts 105_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), respectively. It is noted that the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) share a pseudo random number generator H in each of the random number generation parts 104_i.
(Step A4)
Next, the secure computation system 100 generates random numbers for the logical operation. When the secure computation server apparatus 100_i is represented as P_i, each of the secure computation server apparatuses 100_i performs a processing represented as (r_{j,i}, r_{j,i+1}) ← RandGen(P_i, (k_i, k_{i+1}), vid) (i=1, 2, 3 and j = 0, . . . , k−1). Assuming that r_j = r_{j,1} XOR r_{j,2} XOR r_{j,3} mod 2, each of the secure computation server apparatuses 100_i (i=1, 2, 3) stores [[r_j]]_i in the pre-generated random number storage part 107_i.
(Step A5)
Further, the secure computation system 100 generates random numbers for the arithmetic operation. Each of the arithmetic operation parts 101_i in the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) performs a processing of the bit-injection (or padding) represented as [r_j] ← BitInjection([[r_j]]), using [[r_j]]_i which is stored in the pre-generated random number storage part 107_i (j = 0, . . . , k−1). Then, the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) store the computed shares [r_j]_i of the random numbers (or random shares) in the pre-generated random number storage parts 107_i.
(Step A6)
Here, the secure computation system 100 uses the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition and are stored in each of the share value storage parts 106_i, for the first time. That is, in the secure computation method according to the first example embodiment of the present invention, it is not necessary to use the targets of the bit-recomposition in the processings from step A1 to step A5. In the secure computation method according to the first example embodiment of the present invention, the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition, may have been already stored in each of the share value storage parts 106_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3), or the secure computation system may be configured to accept, in step A6, an input that is to be the targets of the bit-recomposition.
The secure computation system 100 restores a carry. Concretely, each of the logical operation parts 102_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) computes a carry C_{j,i} using the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition, and the shares [[r_0]]_i, . . . , [[r_{k−1}]]_i of the random numbers for the logical operation, as follows;
Next, the logical operation part 102_3 of the third secure computation server apparatus 100_3 transmits C_{j,3} (j = 0, . . . , k−1) to the logical operation part 102_1 of the first secure computation server apparatus 100_1 and the logical operation part 102_2 of the second secure computation server apparatus 100_2. On the other hand, the logical operation part 102_1 of the first secure computation server apparatus 100_1 transmits C_{j,2} (j = 0, . . . , k−1) to the logical operation part 102_2 of the second secure computation server apparatus 100_2, and the logical operation part 102_2 of the second secure computation server apparatus 100_2 transmits C_{j,1} (j = 0, . . . , k−1) to the logical operation part 102_1 of the first secure computation server apparatus 100_1.
Then, the logical operation part 102_1 of the first secure computation server apparatus 100_1 and the logical operation part 102_2 of the second secure computation server apparatus 100_2 compute C_j XOR r_j (j = 0, . . . , k−1) as follows;
After the computation, the logical operation part 102_1 of the first secure computation server apparatus 100_1 and the logical operation part 102_2 of the second secure computation server apparatus 100_2 transmit each C_j to the arithmetic operation part 101_1 of the first secure computation server apparatus 100_1 and the arithmetic operation part 101_2 of the second secure computation server apparatus 100_2, respectively. Then, each of the arithmetic operation parts 101_i obtains [C_j XOR r_j] as follows;
Further, each of the arithmetic operation parts 101_i transmits [C_j XOR r_j] (j = 0, . . . , k−1) to each share value storage part 106_i, respectively, and each share value storage part 106_i stores [C_j XOR r_j] (j = 0, . . . , k−1) therein.
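The concrete formula used to obtain [C_j XOR r_j] is not reproduced above. For reference, one standard way to obtain an arithmetic sharing of a public bit XOR a shared bit without communication uses the identity C XOR r = C + (1 − 2C)·r; the Python sketch below only verifies this identity over additive shares and is an assumption about the step, not a reproduction of the protocol.

```python
import secrets

MOD = 1 << 32

def share_const(v):
    """Canonical sharing of a public constant: (v, 0, 0)."""
    return (v % MOD, 0, 0)

def xor_public_bit(C, r_shares):
    """[C XOR r] = C + (1 − 2*C)*[r] mod 2^k for a public bit C and a shared bit r,
    using C XOR r = C + r − 2*C*r; purely local, no communication."""
    coef = (1 - 2 * C) % MOD
    const = share_const(C)
    return tuple((const[i] + coef * r_shares[i]) % MOD for i in range(3))

for C in (0, 1):
    for r in (0, 1):
        a = secrets.randbelow(MOD)
        b = secrets.randbelow(MOD)
        rs = (a, b, (r - a - b) % MOD)
        assert sum(xor_public_bit(C, rs)) % MOD == (C ^ r)
```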
(Step A8)
The secure computation system 100 performs a subtraction between the carry and the random numbers. Concretely, the arithmetic operation parts 101_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) compute [(C_j XOR r_j) − r_j] (j = 0, . . . , k−1) using [C_j XOR r_j] and [r_j] (j = 0, . . . , k−1), as follows;
After the computation, each of the arithmetic operation parts 101_i transmits [(C_j XOR r_j) − r_j] (j = 0, . . . , k−1) to each share value storage part 106_i, and each share value storage part 106_i stores [(C_j XOR r_j) − r_j] (j = 0, . . . , k−1) therein.
(Step A9)
The secure computation system 100 performs a computation to remove a mask from the carry using an inner product. Concretely, the inner product calculation parts 103_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) perform the following computation using [(C_j XOR r_j) − r_j] (j = 0, . . . , k−1). Here, [y] = [Σ_{j=0}^{k−1} 2^j·(−2)·c_j].
After the computation, each of the inner product calculation parts 103_i transmits [y]_i to each share value storage part 106_i, respectively, and each share value storage part 106_i stores [y]_i therein.
(Step A10)
The secure computation system 100 performs resharing. Concretely, each of the arithmetic operation parts 101_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) performs the following computation using [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2, x_j = x_{j,1} XOR x_{j,2} XOR x_{j,3} mod 2);
(Step A11)
The secure computation system 100 erases the carry. Concretely, the arithmetic operation parts 101_i of the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) perform the following computation using the reshared arithmetic shares ([x_{j,1}], [x_{j,2}], [x_{j,3}]) (j = 0, . . . , k−1) and [y]_i.
After the computation, each of the arithmetic operation parts 101_i transmits [x]_i to each share value storage part 106_i, respectively, and each share value storage part 106_i stores [x]_i therein. Thus, the first to the third secure computation server apparatuses 100_i (i=1, 2, 3) obtain [x]_i after the bit-recomposition from the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition.
The first example embodiment of the present invention described above has advantageous effects which will be described in the following.
According to the first example embodiment of the present invention, efficiency is improved in a processing, such as the bit-recomposition, in which the bit-injections (or paddings) are performed in parallel. As explained above, according to the first example embodiment of the present invention, the processings of steps A1 to A5 can be performed independently of the input; therefore, only the processings of steps A6 to A11 are computed accompanying the input, and the order of the communication traffic in the processings of steps A6 to A11 is suppressed to O(k). As mentioned above, when the bit-injections (or paddings) are performed k times in parallel, the order of the communication traffic is O(k^2); therefore, according to the first example embodiment of the present invention, when compared in terms of the communication traffic after the input, the communication traffic is improved in terms of the order. In other words, the first example embodiment of the invention is remarkably efficient.
It should be noted that the first example embodiment of the present invention is not limited to the bit-recomposition but can also be applied to type conversions with a modulus conversion. The “modulus” here refers to the modulus when a residue class ring modulo 2 is notated as Z_2 and a residue class ring modulo 2^k is notated as Z_{2^k}, as described above. Therefore, the first example embodiment of the present invention can be applied to a PopCount (a processing to count the number of bits with a value of 1).
Concretely, the first example embodiment of the present invention can be applied to PopCount by modifying the processing in step A9 and the processing in step A11 above as follows, respectively.
(Step A11)
[x] = Σ_{j=0}^{k−1}([x_{j,1}] + [x_{j,2}] + [x_{j,3}]) − [y] = Σ_{j=0}^{k−1}([x_{j,1}] + [x_{j,2}] + [x_{j,3}] − 2·c_j) = Σ_{j=0}^{k−1}[x_j] [Math 4]
Even if the processing in step A9 and the processing in step A11 of the first example embodiment of the present invention are modified as described above, the communication traffic in steps A6 to A11 can be suppressed to O(k).
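The arithmetic fact underlying the carry erasure in [Math 4] is that, for bits, x_{j,1} + x_{j,2} + x_{j,3} − 2·c_j = x_{j,1} XOR x_{j,2} XOR x_{j,3} when c_j is read as the carry of adding the three bit components. The following Python sketch exhaustively checks this identity; it is only an arithmetic sanity check and not part of the secure protocol itself.

```python
from itertools import product

def carry(b1, b2, b3):
    """Carry of adding three bits: 1 iff at least two of them are 1."""
    return (b1 + b2 + b3) >> 1

# identity underlying step A11 / [Math 4]:
#   x_{j,1} + x_{j,2} + x_{j,3} - 2*c_j = x_{j,1} XOR x_{j,2} XOR x_{j,3} = x_j
for b1, b2, b3 in product((0, 1), repeat=3):
    assert b1 + b2 + b3 - 2 * carry(b1, b2, b3) == b1 ^ b2 ^ b3
```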
Hereinafter, referring to
As illustrated in
In the secure computation system 200, provided with the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) of the above configuration, for a value(s) of x_0, . . . , x_{k−1} (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) inputted from one of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from the inputted value and/or a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 206_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3), respectively.
In addition, in the secure computation system 200, provided with the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) of the above configuration, for the shares [[x_0]], . . . , [[x_{k−1}]] (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) stored in each of the share value storage parts 206_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 206_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3), respectively.
Furthermore, in the secure computation system 200, provided with the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) of the above configuration, for shares [[x_0]], . . . , [[x_{k−1}]] (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) inputted from an apparatus other than the first to the third secure computation server apparatuses 200_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 206_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3), respectively.
It is noted that the share of the above computation result may be restored by transmitting and receiving the share among the first to the third secure computation server apparatuses 200_1 to 200_3. Alternatively, the share may be restored by transmitting the share to an outside other than the first to the third secure computation server apparatuses 200_1 to 200_3.
Next, a secure computation method according to the second example embodiment of the present invention will be described in detail. That is, an operation of the secure computation system 200 provided with the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) and the auxiliary server apparatus 208, as described above, will be described.
(Step B1)
The secure computation system 200 performs the same processing as step A1 described above. That is, the secure computation system 200 stores seeds (k_i, k_{i+1}) in each of the seed storage parts 205_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3), respectively. It is noted that the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) share a pseudo random number generator H in each of the random number generation parts 204_i.
(Step B2)
Next, the secure computation system 200 generates random numbers for fraud detection. Concretely, each of the random number generation parts 204_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) performs generation of random numbers for fraud detection related to the arithmetic operation and the matrix operation, as follows;
It is noted that if fraud is detected during the process of generating the random numbers for fraud detection (step B2; ⊥), ⊥ is output and the process is aborted. Then, each of the random number generation parts 204_i stores ([aj], [bj], [cj]), ([a′j], [b′j], [c′j]) and ([Aj], [Bj], [Cj]) in the pre-generated random number storage parts 207_i. Note that abort means to stop processing based on the judgment that an abnormality has been detected.
(Step B4)
The secure computation system 200 performs generation of random numbers for the logical operation using the same processing as Step A4 described above. That is, the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) generate random numbers [[rj]]i for the logical operation and store [[rj]]i in the pre-generated random number storage parts 207_i.
(Step B5)
Next, the secure computation system 200 performs generation of random numbers for the arithmetic operation. Concretely, the arithmetic operation parts 201_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) perform the following processing using [[r_j]]_i, ([a_j], [b_j], [c_j]) and ([a′_j], [b′_j], [c′_j]). [r_j] ← m-BitInjection([[r_j]], ([a_j], [b_j], [c_j]), ([a′_j], [b′_j], [c′_j])) (for j = 0, . . . , k−1)
It is noted that if a fraud is detected during the above processing (step B5; ⊥), ⊥ is output and the processing is aborted. Then, each of the arithmetic operation parts 201_i stores [r_j]_i in each of the pre-generated random number storage parts 207_i.
(Step B6)
The secure computation system 200 performs the restoration of the carry using the same processing as step A6 described above. It is noted that in the secure computation method according to the second example embodiment of the present invention, the secure computation system 200 uses the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition and are stored in each of the share value storage parts 206_i, for the first time in step B6.
(Step B7)
Next, the secure computation system 200 verifies whether there is a fraud in the carries C_{j,i} (i=1, 2, 3) transmitted in step B6.
First of all, verification of a carry C_{j,3} received at the logical operation part 202_1 will be described. First, the logical operation part 202_1 transmits a value obtained by masking the received carry C_{j,3} with random numbers and other value(s) to the auxiliary server apparatus 208. The auxiliary server apparatus 208 determines whether the value of C_{j,3}, which is received at the logical operation part 202_1 and the logical operation part 202_2, has been tampered with, by using the values transmitted from the logical operation part 202_1 and the logical operation part 202_2, respectively. If a verification equation (*) described below is valid, the auxiliary server apparatus 208 continues the processings thereafter. If it is not valid (step B7; ⊥), the auxiliary server apparatus 208 outputs ⊥ and aborts the processings.
Subsequently, the logical operation part 202_1 computes the following. After the computation, the logical operation part 202_1 transmits m′_j, m_{j,1,1}, m_{j,1,2} to the auxiliary server apparatus 208.
m′_j = c_{j,3} XOR H(k_2, vid′_1) XOR H(k_2, vid′_2)
m_{j,1,1} = H(k_2, vid_{j,1}) XOR x_{j,1}
m_{j,1,2} = H(k_2, vid′_1) XOR (H(k_2, vid′_{j,1})·H(k_2, vid_{j,2})) XOR (H(k_2, vid_{j,2})·x_{j,1}) XOR H(k_1, vid_{j,α}) [Math 5]
On the other hand, the logical operation part 202_2 computes the following. After the computation, the logical operation part 202_2 transmits m_{j,2,1}, m_{j,2,2} to the auxiliary server apparatus 208.
m_{j,2,1} = H(k_2, vid_{j,2}) XOR x_{j,3}
m_{j,2,2} = H(k_2, vid′_2) XOR (H(k_2, vid_{j,1})·x_{j,3}) XOR H(k_3, vid_{j,α}) XOR H(k_3, vid_j) [Math 6]
The auxiliary server apparatus 208 determines whether or not the following equation holds. If the equation holds, the auxiliary server apparatus 208 continues the processings, and if it does not hold (step B7; ⊥), the auxiliary server apparatus 208 outputs ⊥ and aborts the processings.
[Math 7]
m′_j = m_{j,1,1}·m_{j,2,1} XOR m_{j,1,2} XOR m_{j,2,2} (*)
It is noted that the verification of C_{j,1} and C_{j,2}, and of the carry C_{j,3} received at the logical operation part 202_2, is performed in the same manner.
(Step B8)
The secure computation system 200 performs subtraction between the carry and the random numbers using the same processing as step A8 described above.
(Step B9)
Next, the secure computation system 200 performs a computation to remove the mask from the carry using an inner product. Concretely, the inner product calculation parts 203_i of the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) perform the following computation using [(C_j XOR r_j) − r_j] (j = 0, . . . , k−1). Here, [y] = [Σ_{j=0}^{k−1} 2^j·(−2)·c_j]. It is noted that, here, the inner product calculation parts 203_i perform the fraud detectable (or maliciously secure) inner product calculation, and if a fraud is detected (step B9; ⊥), ⊥ is output and the processings are aborted.
After the computation, each of the inner product calculation parts 203_i transmits [y]_i to each share value storage part 206_i, respectively, and each share value storage part 206_i stores [y]_i therein.
(Step B10)
The secure computation system 200 performs resharing using the same processing as step A10 described above.
(Step B11)
The secure computation system 200 erases the carry using the same processing as step A11 described above. Thus, the first to the third secure computation server apparatuses 200_i (i=1, 2, 3) can obtain [x]_i after the bit-recomposition from the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition.
The second example embodiment of the present invention described above has advantageous effects which will be described in the following.
According to the second example embodiment of the present invention, efficiency is improved in a processing, such as the bit-recomposition, in which the bit-injections (or paddings) are performed in parallel. As explained above, according to the second example embodiment of the present invention, the processings of steps B1 to B5 can be performed independently of the input; therefore, only the processings of steps B6 to B11 are computed accompanying the input, and the order of the communication traffic in the processings of steps B6 to B11 is suppressed to O(k). As mentioned above, when the bit-injection (or padding) is performed k times in parallel, the order of the communication traffic is O(k^2); therefore, according to the second example embodiment of the present invention, when compared in terms of the communication traffic after the input, the communication traffic is improved in terms of the order. In other words, the second example embodiment of the invention is fraud detectable (or maliciously secure) and remarkably efficient.
It should be noted that the second example embodiment of the present invention is not limited to the fraud detectable (or maliciously secure) bit-recomposition but can also be applied to processing, such as fraud detectable (or maliciously secure) PopCount (counting the number of bit(s) that has a value of 1). In that case, the processing in Step B9 and the processing in Step B11 above should be modified as in the first example embodiment.
Hereinafter, referring to
As illustrated in
In the secure computation system 300, provided with the first to the third secure computation server apparatuses 300_i (i=1, 2, 3) of the above configuration, for a value(s) of x_0, . . . , x_{k−1} (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) inputted from one of the first to the third secure computation server apparatuses 300_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from the inputted value and/or a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 306_i of the first to the third secure computation server apparatuses 300_i (i=1, 2, 3), respectively.
In addition, in the secure computation system 300, provided with the first to the third secure computation server apparatuses 300_i (i=1, 2, 3) of the above configuration, for the shares [[x_0]], . . . , [[x_{k−1}]] (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) stored in each of the share value storage parts 306_i of the first to the third secure computation server apparatuses 300_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 306_i of the first to the third secure computation server apparatuses 300_i (i=1, 2, 3), respectively.
Furthermore, in the secure computation system 300, provided with the first to the third secure computation server apparatuses 300_i (i=1, 2, 3) of the above configuration, for shares [[x_0]], . . . , [[x_{k−1}]] (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2) inputted from an apparatus other than the first to the third secure computation server apparatuses 300_i (i=1, 2, 3), a share [x] is computed without the value(s) of x_0, . . . , x_{k−1} becoming known from a value(s) generated in the computation processes, and the share [x] is stored in each of the share value storage parts 306_i of the first to the third secure computation server apparatuses 300_i (i=1, 2, 3), respectively.
It is noted that the share of the above computation result may be restored by transmitting and receiving the share among the first to the third secure computation server apparatuses 300_1 to 300_3. Alternatively, the share may be restored by transmitting the share to an outside other than the first to the third secure computation server apparatuses 300_1 to 300_3.
Next, a secure computation method according to the third example embodiment of the present invention will be described in detail. That is, an operation of the secure computation system 300 provided with the first to the third secure computation server apparatuses 300_i (i=1, 2, 3), the first auxiliary server apparatus 308_1, and the auxiliary server apparatus 308_2, as described above, will be described.
(Step C1)
The secure computation server apparatuses 300_1 to 300_3 in the secure computation system 300 perform operations equal to those performed in Step A1. Then, the first auxiliary server apparatus 308_1 and the second auxiliary server apparatus 308_2 share a seed′ and a pseudo random number generator H.
(Step C2)
Next, the first auxiliary server apparatus 308_1 and the second auxiliary server apparatus 308_2 generate random numbers a, b ∈ Z_{2^k} using the shared seed′ and the shared pseudo random number generator H. In addition, the first auxiliary server apparatus 308_1 and the second auxiliary server apparatus 308_2 generate ([a], [b], [ab]) using the shared seed′ and the shared pseudo random number generator H, and share (or distribute) them to the first to the third secure computation server apparatuses 300_1 to 300_3.
(Step C3)
The first to the third secure computation server apparatuses 300_1 to 300_3 determine whether or not the values received in step C2 from the first auxiliary server apparatus 308_1 and from the second auxiliary server apparatus 308_2 match. If the values match, the first to the third secure computation server apparatuses 300_1 to 300_3 continue with the subsequent processings, and if the values do not match (step C3; ⊥), they output ⊥ and abort the processings.
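The following Python sketch illustrates one possible reading of steps C2 and C3: because both auxiliary server apparatuses derive the triple ([a], [b], [ab]) deterministically from the same seed′, they produce identical shares, and each computation server can simply compare the two copies it receives. The PRG construction, the labels, and the value k = 32 are assumptions for illustration only.

```python
import hashlib

K = 32
MOD = 1 << K

def prg(seed: bytes, label: str) -> int:
    """Hypothetical deterministic PRG derived from the shared seed' (SHA-256 based)."""
    return int.from_bytes(hashlib.sha256(seed + label.encode()).digest(), "big") % MOD

def deal_triple(seed: bytes):
    """Derive a multiplication triple ([a], [b], [ab]) deterministically from seed',
    so that both auxiliary servers produce identical shares (step C2)."""
    a, b = prg(seed, "a"), prg(seed, "b")
    c = (a * b) % MOD

    def split(v, tag):
        s1, s2 = prg(seed, tag + "1"), prg(seed, tag + "2")
        return (s1, s2, (v - s1 - s2) % MOD)

    return split(a, "a"), split(b, "b"), split(c, "c")

seed_prime = b"shared seed'"
deal1 = deal_triple(seed_prime)   # copy sent by the first auxiliary server apparatus
deal2 = deal_triple(seed_prime)   # copy sent by the second auxiliary server apparatus
# step C3: each computation server compares the two copies of its share
assert deal1 == deal2
```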
Subsequently, the secure computation system 300 performs the same processings as steps B4 to B11 in steps C4 to C11. That is, the secure computation system 300 performs generating random numbers for the logical operation in step C4, generating random numbers for the arithmetic operation in step C5, restoring the carry in step C6, verifying whether there is a fraud in the carry in step C7, subtracting between the carry and the random numbers in step C8, removing the mask from the carry using the inner product in step C9, resharing in step C10, and erasing the carry in step C11. Thus, the first to the third secure computation server apparatuses 300_i (i=1, 2, 3) obtain [x]_i after the bit-recomposition from the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition.
It is noted that in the secure computation method according to the third example embodiment of the present invention, the secure computation system 300 uses the shares [[x_0]]_i, . . . , [[x_{k−1}]]_i (x = Σ_{j=0}^{k−1} 2^j x_j, x_j ∈ Z_2), which are the targets of the bit-recomposition and are stored in each of the share value storage parts 306_i, for the first time in step C6.
The third example embodiment of the present invention described above has advantageous effects which will be described in the following.
According to the third example embodiment of the present invention, efficiency is improved in a processing, such as the bit-recomposition, in which the bit-injections (or paddings) are performed in parallel. As explained above, the processings of steps C1 to C5 can be performed independently of the input; therefore, only the processings of steps C6 to C11 are computed accompanying the input, and the order of the communication traffic in the processings of steps C6 to C11 is suppressed to O(k). As mentioned above, when the bit-injection (or padding) is performed k times in parallel, the order of the communication traffic is O(k^2); therefore, according to the third example embodiment of the present invention, when compared in terms of the communication traffic after the input, the communication traffic is improved in terms of the order. In other words, the third example embodiment of the invention is fraud detectable (or maliciously secure) and remarkably efficient.
However, unlike the second example embodiment, the third example embodiment of the present invention allows for a definitive fraud detection. In the second example embodiment, only a probabilistic fraud detection could be performed; therefore, the communication traffic would increase when trying to improve the probability of the fraud detection. In contrast, the fraud detection is performed decisively in the third example embodiment.
It should be noted that the third example embodiment of the present invention is not limited to the fraud detectable (or maliciously secure) bit-recomposition but can also be applied to a processing such as a fraud detectable (or maliciously secure) PopCount (counting the number of bit(s) having a value of 1). In that case, the processing in step C9 and the processing in step C11 above should be modified as in the first example embodiment. A PopCount to which the third example embodiment of the present invention is applied also enables a definitive fraud detection, as does the bit-recomposition.
[Hardware Configuration]
It should be noted that the hardware configuration illustrated in
The hardware configuration 10 that can be employed by the secure computation server apparatuses 100_i, 200_i, 300_i (i=1, 2, 3) is provided with a CPU (Central Processing Unit) 11, a main storage device 12, an auxiliary storage device 13 and an IF (Interface) part 14, which are interconnected by an internal bus, as illustrated in
The CPU 11 executes each instruction included in the secure computation program executed by the secure computation server apparatuses 100_i, 200_i, 300_i (i=1, 2, 3). The main storage device 12 has, for example, a RAM (Random Access Memory), and temporarily stores various programs, such as the secure computation program executed by the secure computation server apparatuses 100_i, 200_i, 300_i (i=1, 2, 3), for processing by the CPU 11.
The auxiliary storage device 13 has, for example, an HDD (Hard Disk Drive), and can store various programs, such as the secure computation program executed by the secure computation server apparatuses 100_i, 200_i, 300_i (i=1, 2, 3), over the medium to long term. The various programs such as the secure computation program may be provided as a program product recorded in a non-transitory computer-readable storage medium. The auxiliary storage device 13 can be used to store the various programs, such as the secure computation program, recorded in the non-transitory computer-readable storage medium over the medium to long term.
The IF part 14 provides an interface for an input/output between the secure computation server apparatuses 100_i, 200_i, 300_i (i=1, 2, 3). The IF part 14 also can be used as an interface for an input/output between apparatuses including the auxiliary server apparatuses 208, 308_1, 308_2.
An information processing apparatus employing the hardware configuration illustrated above can realize each function of the secure computation server apparatuses 100_i, 200_i, 300_i by executing the secure computation method described above as a program.
A part or a whole of the above-mentioned example embodiments may be described as, but not limited to, the following supplementary notes.
[Supplementary Note 1]
A secure computation system, including at least three or more secure computation server apparatuses connected to each other through a network, wherein each of the secure computation server apparatuses includes:
The secure computation system described in the supplementary note 1, wherein
The secure computation system described in supplementary note 2, wherein
The secure computation system described in any one of supplementary notes 1 to 3, wherein
The secure computation system described in supplementary note 4, wherein
The secure computation system described in supplementary note 4, wherein
The secure computation system described in any one of supplementary notes 1 to 6 including:
The secure computation system described in any one of supplementary notes 1 to 7 including:
A secure computation server apparatus that is one of at least three or more secure computation server apparatuses connected to each other through a network, including: a random number generation part that shares a pseudo random number generator, the pseudo random generator being shared among the secure computation server apparatuses;
A secure computation method using at least three or more secure computation server apparatuses connected to each other through a network, including:
The secure computation method described in supplementary note 10, wherein
The secure computation method described in supplementary note 11, wherein
The secure computation method described in any one of supplementary notes 10 to 12, wherein
The secure computation method described in any one of supplementary notes 10 to 13, wherein
The secure computation method described in any one of supplementary notes 10 to 14, wherein
The secure computation method described in any one of supplementary notes 10 to 15, including:
A non-transitory computer-readable medium storing therein a secure computation program that causes at least three or more secure computation server apparatuses connected to each other through a network to execute processes, including:
It should be noted that, each disclosure of the PTLs and NPLs cited above is incorporated herein by reference thereto. It is to be noted that it is possible to modify or adjust the example embodiments or examples within the whole disclosure of the present invention (including the Claims) and based on the basic technical concept thereof. Further, it is possible to variously combine or select (or partially delete) a wide variety of the disclosed elements (including the individual elements of the individual claims, the individual elements of the individual example embodiments or examples, and the individual elements of the individual figures) within the scope of the whole disclosure of the present invention. That is, it is self-explanatory that the present invention includes any types of variations and modifications to be done by a skilled person according to the whole disclosure including the Claims, and the technical concept of the present invention. Particularly, any numerical ranges disclosed herein should be interpreted that any intermediate values or subranges falling within the disclosed ranges are also concretely disclosed even without specific recital thereof. In addition, as needed and based on the gist of the present invention, partial or entire use of the individual disclosed matters in the above literatures that have been referred to in combination with what is disclosed in the present application should be deemed to be included in what is disclosed in the present application, as a part of the disclosure of the present invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2020/001631 | 1/20/2020 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2021/149092 | 7/29/2021 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
8595275 | Nariyoshi | Nov 2013 | B2 |
9218159 | Seol | Dec 2015 | B2 |
9292259 | Ross | Mar 2016 | B2 |
11153104 | Fetterolf | Oct 2021 | B2 |
11290257 | Tanimoto | Mar 2022 | B2 |
11477135 | Pitio | Oct 2022 | B2 |
11537362 | Ross | Dec 2022 | B2 |
20160218862 | Ikarashi et al. | Jul 2016 | A1 |
20180270057 | Furukawa | Sep 2018 | A1 |
20190052327 | Motozuka | Feb 2019 | A1 |
Number | Date | Country |
---|---|---|
2008-020871 | Jan 2008 | JP |
2008-139996 | Jun 2008 | JP |
2011-250335 | Dec 2011 | JP |
2015053185 | Apr 2015 | WO |
2017038761 | Mar 2017 | WO |
Entry |
---|
International Search Report for PCT Application No. PCT/JP2020/001631, mailed on Mar. 3, 2020. |
Toshinori Araki, et al, “High-Throughput Semi-Honest Secure Three-Party Computation with an Honest Majority”, Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016. p. 805-817, 2016. |
Toshinori Araki, et al, “Generalizing the SPDZ Compiler For Other Protocols.”, Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2018. p. 880-895, 2018. |
Toshinori Araki, et al, “How to Choose Suitable Secure Multiparty Computation Using Generalized SPDZ”, CCS '18, Oct. 15-19, 2018. |
Payman Mohassel and Peter Rindal, “ABY3: A Mixed Protocol Framework for Machine Learning”, Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2018. p. 35-52, 2018. |
Jun Furukawa, et al, “High-Throughput Secure Three-Party Computation for Malicious Adversaries and an Honest Majority”, In J. Coron and J. B. Nielsen, editors, Advances in Cryptology—Eurocrypt 2017—36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, Apr. 30-May 4, 2017, Proceedings, Part II, vol. 10211 of Lecture Notes in Computer Science, pp. 225-255, 2017. |
Toshinori Araki, et al, “Optimized Honest-Majority MPC for Malicious Adversaries—Breaking the 1 Billion-Gate Per Second Barrier”, In the IEEE S&P, 2017. |
Dai Ikarashi et al., MEVAL2 vs. CCS Best paper on MPC-AES_SCIS2017, Preprints of 2017 Symposium on Cryptography and Information Security, Jan. 24, 2017, pp. 1-8. |
Jun Furukawa, “Multi-party calculation with high throughput”, SCIS2016, Proceedings of Computer Security Symposium 2016, Jan. 19, 2016, pp. 1-7. |
Number | Date | Country | |
---|---|---|---|
20230046000 A1 | Feb 2023 | US |