The present disclosure relates to verification of an inference model using zero-knowledge proofs.
AI inference techniques using neural networks have been used with great success in machine learning tasks such as data classification. AI is an abbreviation for artificial intelligence.
In order to perform data analysis using a neural network, it is necessary to train an inference model using a large amount of training data in advance. At this time, it may be difficult for a user to build an inference model in the user's own environment due to the difficulty of preparing training data, constraints of computational resources, and so on.
In this context, in recent years, there have been services that provide data analysis using neural networks on a cloud (MLaaS). These services allow clients to perform inference using provided inference models by uploading data to be analyzed onto the cloud. Therefore, the clients do not need to incur costs in building inference models.
MLaaS is an abbreviation for machine learning as a service.
In MLaaS, when a client sends data to be analyzed and entrusts inference processing to a provider of an inference model, the service provider needs to prove to the client that an inference result is actually a result of analysis performed by the inference model.
The simplest solution is for the service provider to publish the inference model itself. However, since the inference model is the intellectual property of the service provider, it is difficult to disclose the inference model to the client.
Non-Patent Literature 1 proposes a method that makes it possible to prove, using a zero-knowledge proof, that inference processing using an inference model has been actually performed.
This method allows the service provider to prove to the client that the inference result has been obtained by analysis processing by the inference model.
The method of Non-Patent Literature 1 can only handle integer values as parameters of the inference model due to constraints of protocols for zero-knowledge proofs that are used.
An object of the present disclosure is to make it possible to verify an inference model whose parameters are decimals.
An inference verification system according to the present disclosure includes an inference unit to obtain an inference result by executing an inference model by expressing a decimal value that is data on which inference processing is to be performed as an integer value and treating the integer value as a parameter of a convolutional neural network; a proving unit to obtain a proof by executing a proof generation algorithm using the inference result as input; and a verification unit to obtain a verification result by executing a verification algorithm using the proof as input.
According to the present disclosure, an inference model whose parameters are decimals can be verified.
In the embodiment and drawings, the same elements or corresponding elements are denoted by the same reference sign. Description of an element denoted by the same reference sign as that of an element that has been described will be omitted or simplified as appropriate. Arrows in figures mainly indicate flows of data or flows of processing.
An inference verification system 100 will be described based on
The inference verification system 100 is a system that can verify inference processing.
Based on
The inference verification system 100 includes devices such as a parameter generation device 200, a key generation device 300, an inference device 400, a proving device 500, and a verification device 600.
These devices communicate with one another through a network. A specific example of the network is the Internet.
Based on
The parameter generation device 200 is a computer that includes hardware such as a processor 201, a memory 202, an auxiliary storage device 203, a communication device 204, and an input/output interface 205. These hardware components are connected with one another through signal lines.
The processor 201 is the processor of the parameter generation device 200.
The processor is an IC that performs operational processing and controls other hardware components. For example, the processor is a CPU.
IC is an abbreviation for integrated circuit.
CPU is an abbreviation for central processing unit.
The memory 202 is the memory of the parameter generation device 200.
The memory is a volatile or non-volatile storage device. The memory is also called a main storage device or a main memory. For example, the memory is a RAM.
Data stored in the memory 202 is saved in the auxiliary storage device 203 as necessary.
RAM is an abbreviation for random access memory.
The auxiliary storage device 203 is the auxiliary storage device of the parameter generation device 200.
The auxiliary storage device is a non-volatile storage device. For example, the auxiliary storage device is a ROM, an HDD, a flash memory, or a combination of these.
Data stored in the auxiliary storage device 203 is loaded into the memory 202 as necessary.
ROM is an abbreviation for read only memory.
HDD is an abbreviation for hard disk drive.
The communication device 204 is the communication device of the parameter generation device 200.
The communication device is a receiver and a transmitter. For example, the communication device is a communication chip or a NIC.
Communication of the parameter generation device 200 is performed using the communication device 204.
NIC is an abbreviation for network interface card.
The input/output interface 205 is the input/output interface of the parameter generation device 200.
The input/output interface is a port to which an input device and an output device are connected. For example, the input/output interface is a USB terminal, the input device is a keyboard and a mouse, and the output device is a display.
Input to and output from the parameter generation device 200 are performed using the input/output interface 205.
USB is an abbreviation for Universal Serial Bus.
The parameter generation device 200 includes elements such as an acceptance unit 210, a generation unit 220, and an output unit 230. These elements are realized by software.
The auxiliary storage device 203 stores a parameter generation program to cause a computer to function as the acceptance unit 210, the generation unit 220, and the output unit 230. The parameter generation program is loaded into the memory 202 and executed by the processor 201.
The auxiliary storage device 203 further stores an OS. At least part of the OS is loaded into the memory and executed by the processor.
The processor 201 executes the parameter generation program while executing the OS.
OS is an abbreviation for operating system.
Input data and output data of the parameter generation program are stored in a storage unit 290.
The memory 202 functions as the storage unit 290. However, a storage device such as the auxiliary storage device 203, a register in the processor 201, and a cache memory in the processor 201 may function as the storage unit 290 in place of the memory 202 or together with the memory 202.
The parameter generation device 200 may include a plurality of processors as an alternative to the processor 201.
Based on
The key generation device 300 is a computer that includes hardware such as a processor 301, a memory 302, an auxiliary storage device 303, a communication device 304, and an input/output interface 305. These hardware components are connected with one another through signal lines.
The processor 301 is the processor of the key generation device 300.
The memory 302 is the memory of the key generation device 300.
The auxiliary storage device 303 is the auxiliary storage device of the key generation device 300.
The communication device 304 is the communication device of the key generation device 300.
The input/output interface 305 is the input/output interface of the key generation device 300.
The key generation device 300 includes elements such as an acceptance unit 310, a generation unit 320, and an output unit 330. These elements are realized by software.
The auxiliary storage device 303 stores a key generation program to cause a computer to function as the acceptance unit 310, the generation unit 320, and the output unit 330. The key generation program is loaded into the memory 302 and executed by the processor 301.
The auxiliary storage device 303 further stores an OS.
The processor 301 executes the key generation program while executing the OS.
Input data and output data of the key generation program are stored in a storage unit 390.
The memory 302 functions as the storage unit 390. However, a storage device such as the auxiliary storage device 303, a register in the processor 301, and a cache memory in the processor 301 may function as the storage unit 390 in place of the memory 302 or together with the memory 302.
The key generation device 300 may include a plurality of processors as an alternative to the processor 301.
Based on
The inference device 400 is a computer that includes hardware such as a processor 401, a memory 402, an auxiliary storage device 403, a communication device 404, and an input/output interface 405. These hardware components are connected with one another through signal lines.
The processor 401 is the processor of the inference device 400.
The memory 402 is the memory of the inference device 400.
The auxiliary storage device 403 is the auxiliary storage device of the inference device 400.
The communication device 404 is the communication device of the inference device 400.
The input/output interface 405 is the input/output interface of the inference device 400.
The inference device 400 includes elements such as an acceptance unit 410, an inference unit 420, and an output unit 430. These elements are realized by software.
The auxiliary storage device 403 stores an inference program to cause a computer to function as the acceptance unit 410, the inference unit 420, and the output unit 430. The inference program is loaded into the memory 402 and executed by the processor 401.
The auxiliary storage device 403 further stores an OS.
The processor 401 executes the inference program while executing the OS.
Input data and output data of the inference program are stored in a storage unit 490.
The memory 402 functions as the storage unit 490. However, a storage device such as the auxiliary storage device 403, a register in the processor 401, and a cache memory in the processor 401 may function as the storage unit 490 in place of the memory 402 or together with the memory 402.
The inference device 400 may include a plurality of processors as an alternative to the processor 401.
Based on
The proving device 500 is a computer that includes hardware such as a processor 501, a memory 502, an auxiliary storage device 503, a communication device 504, and an input/output interface 505. These hardware components are connected with one another through signal lines.
The processor 501 is the processor of the proving device 500.
The memory 502 is the memory of the proving device 500.
The auxiliary storage device 503 is the auxiliary storage device of the proving device 500.
The communication device 504 is the communication device of the proving device 500.
The input/output interface 505 is the input/output interface of the proving device 500.
The proving device 500 includes elements such as an acceptance unit 510, a storing unit 520, a proving unit 530, and an output unit 540. The storing unit 520 includes a key storing unit 521 and an inference result storing unit 522. These elements are realized by software.
The auxiliary storage device 503 stores a proving program to cause a computer to function as the acceptance unit 510, the storing unit 520, the proving unit 530, and the output unit 540. The proving program is loaded into the memory 502 and executed by the processor 501.
The auxiliary storage device 503 further stores an OS.
The processor 501 executes the proving program while executing the OS.
Input data and output data of the proving program are stored in a storage unit 590.
The memory 502 functions as the storage unit 590. However, a storage device such as the auxiliary storage device 503, a register in the processor 501, and a cache memory in the processor 501 may function as the storage unit 590 in place of the memory 502 or together with the memory 502.
The proving device 500 may include a plurality of processors as an alternative to the processor 501.
Based on
The verification device 600 is a computer that includes hardware such as a processor 601, a memory 602, an auxiliary storage device 603, a communication device 604, and an input/output interface 605. These hardware components are connected with one another through signal lines.
The processor 601 is the processor of the verification device 600.
The memory 602 is the memory of the verification device 600.
The auxiliary storage device 603 is the auxiliary storage device of the verification device 600.
The communication device 604 is the communication device of the verification device 600.
The input/output interface 605 is the input/output interface of the verification device 600.
The verification device 600 includes elements such as an acceptance unit 610, a storing unit 620, a verification unit 630, and an output unit 640. The storing unit 620 includes a key storing unit 621 and a proof storing unit 622. These elements are realized by software.
The auxiliary storage device 603 stores a verification program to cause a computer to function as the acceptance unit 610, the storing unit 620, the verification unit 630, and the output unit 640. The verification program is loaded into the memory 602 and executed by the processor 601.
The auxiliary storage device 603 further stores an OS.
The processor 601 executes the verification program while executing the OS.
Input data and output data of the verification program are stored in a storage unit 690.
The memory 602 functions as the storage unit 690. However, a storage device such as the auxiliary storage device 603, a register in the processor 601, and a cache memory in the processor 601 may function as the storage unit 690 in place of the memory 602 or together with the memory 602.
The verification device 600 may include a plurality of processors as an alternative to the processor 601.
The notation used in the following description is defined below.
When “A” is a distribution, “y←A” denotes that y is randomly selected from A according to that distribution.
When “A” is a set, “y←A” denotes that y is uniformly selected from A.
When “n” is a natural number, [n] denotes the set {1, . . . , n}.
The operation of the inference verification system 100 is equivalent to an inference verification method. A procedure for the operation of the inference verification system 100 is equivalent to a procedure for processing by an inference verification program.
The inference verification program includes the parameter generation program, the key generation program, the inference program, the proving program, and the verification program.
Each program can be recorded (stored) in a computer readable format in a non-volatile recording medium such as an optical disc or a flash memory.
Based on
In step S210, the acceptance unit 210 accepts parameters (λ, D) that are input into the parameter generation device 200.
For example, the parameters (λ, D) are input into the parameter generation device 200 by an administrator.
In step S220, the generation unit 220 executes a setup algorithm using the parameters (λ, D) as input.
This generates a public parameter pp.
The setup algorithm is an algorithm for generating the public parameter pp.
For example, the setup algorithm (Setup) is expressed as indicated below.
In step S230, the output unit 230 outputs the public parameter pp.
Specifically, the output unit 230 transmits the public parameter pp to the key generation device 300.
Based on
In step S310, the acceptance unit 310 accepts the public parameter pp that is input into the key generation device 300.
Specifically, the acceptance unit 310 receives the public parameter pp from the parameter generation device 200.
In step S320, the generation unit 320 executes a key generation algorithm using the public parameter pp as input.
This generates a public key pk and a secret key sk.
The key generation algorithm is an algorithm for generating the public key pk and the secret key sk.
For example, the key generation algorithm (Kg) is expressed as indicated below.
In step S330, the output unit 330 outputs the public key pk and the secret key sk.
Specifically, the output unit 330 transmits the public key pk and the secret key sk to the proving device 500. The output unit 330 transmits the public key pk to the verification device 600.
Based on
In step S410, the acceptance unit 410 accepts parameters (M, x) that are input into the inference device 400.
For example, an inference model M is input into the inference device 400 by a provider of the inference model M. Data x is transmitted to the inference device 400 by a client.
The inference model M is a model for inference processing.
The data x is data on which inference processing is to be performed.
In step S420, the inference unit 420 executes an inference processing algorithm using the inference model M and the data x as input.
This generates an inference result c.
The inference processing algorithm is an algorithm for generating the inference result c.
For example, the inference processing algorithm (Classify) is expressed as indicated below.
CNN is a convolutional neural network that is executed by the inference model M.
In step S430, the output unit 430 outputs the inference result c.
Specifically, the output unit 430 transmits the inference result c to the proving device 500.
Based on
In step S510, the acceptance unit 510 accepts the public key pk, the secret key sk, and the inference result c.
Specifically, the acceptance unit 510 receives the public key pk and the secret key sk from the key generation device 300. The acceptance unit 510 receives the inference result c from the inference device 400.
In step S520, the key storing unit 521 stores the public key pk and the secret key sk.
The inference result storing unit 522 stores the inference result c.
For example, the public key pk, the secret key sk, and the inference result c are stored in the auxiliary storage device 503.
In step S530, the proving unit 530 executes a proof generation algorithm using the public key pk, the secret key sk, and the inference result c as input.
This generates a proof P.
The proof generation algorithm is an algorithm for generating the proof P.
An example of the proof generation algorithm (Prove) will be described below.
The inference result c is expressed as indicated below, where ci is a computation result of the i-th layer of the CNN.
c = (c1, . . . , cn)
Provei is an algorithm for generating a proof Pi for the computation result ci.
For example, the proof generation algorithm (Prove) is expressed as indicated below.
The types (Pa to Pf) of Provei will be described later.
In step S540, the output unit 540 outputs the proof P.
Specifically, the output unit 540 transmits the proof P to the verification device 600.
Based on
In step S610, the acceptance unit 610 accepts the public key pk and the proof P.
Specifically, the acceptance unit 610 receives the public key pk from the key generation device 300. The acceptance unit 610 receives the proof P from the proving device 500.
In step S620, the key storing unit 621 stores the public key pk.
The proof storing unit 622 stores the proof P.
For example, the public key pk and the proof P are stored in the auxiliary storage device 603.
In step S630, the verification unit 630 executes a verification algorithm using the public key pk and the proof P as input.
This generates a verification result V.
The verification algorithm is an algorithm for generating the verification result V.
An example of the verification algorithm (Verify) will be described below.
Verifyi is an algorithm for verifying a proof Pi.
For example, the verification algorithm (Verify) is expressed as indicated below.
The types (Va to Vf) of Verifyi will be described later.
In step S640, the output unit 640 outputs the verification result V.
For example, the output unit 640 displays the verification result V on a display.
With regard to zero-knowledge proof protocols used in Embodiment 1, the following protocols will be described. In addition, representation of a decimal as an integer value in Embodiment 1 will be described.
(1) The Schnorr protocol will be described.
The Schnorr protocol is a protocol for proving that a prover knows x in y = g^x without providing a verifier with any information about x.
The Schnorr protocol is composed of a proof generation algorithm ProveSchnorr and a verification algorithm VerifySchnorr.
ProveSchnorr outputs the proof P.
VerifySchnorr outputs true or false.
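As a minimal illustration, the Schnorr protocol can be sketched as follows. The toy group parameters (p = 23, q = 11, g = 4), the Fiat-Shamir hash standing in for the interactive challenge, and all function names are assumptions of this sketch, not the deployed protocol.

```python
import hashlib

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup mod p.
p, q, g = 23, 11, 4

def challenge(*vals):
    # Fiat-Shamir: derive the challenge from a hash of the transcript.
    h = hashlib.sha256("|".join(map(str, vals)).encode()).digest()
    return int.from_bytes(h, "big") % q

def prove_schnorr(x, y, r):
    # r is the prover's random nonce in Z_q (fixed here for reproducibility).
    t = pow(g, r, p)            # commitment
    c = challenge(g, y, t)      # challenge
    s = (r + c * x) % q         # response
    return (t, s)

def verify_schnorr(y, proof):
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = 7                # secret exponent known only to the prover
y = pow(g, x, p)     # public value y = g^x
P = prove_schnorr(x, y, r=5)
```

The verification equation g^s = t·y^c holds exactly when s = r + c·x, which is why the verifier learns nothing about x beyond the truth of the statement.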
(2) The Schnorr protocol for multiple exponents will be described. The Schnorr protocol for multiple exponents is obtained by generalizing (1) the Schnorr protocol.
Note that g1, . . . , gn are generators of G. In this case, expression (2A) holds for x1 ∈ Zp, . . . , xn ∈ Zp.
The Schnorr protocol for multiple exponents is a protocol for proving that the prover knows (x1, . . . , xn) in expression (2A) without providing the verifier with any information about (x1, . . . , xn).
The Schnorr protocol for multiple exponents is composed of a proof creation algorithm ProveMultiSchnorr and a verification algorithm VerifyMultiSchnorr.
ProveMultiSchnorr outputs the proof P.
VerifyMultiSchnorr outputs true or false.
(3) The generalized Schnorr protocol will be described. The generalized Schnorr protocol is obtained by generalizing (2) the Schnorr protocol for multiple exponents.
Note that g1,1, . . . , g1,n, g2,1, . . . , gm,n are mn generators of G. In this case, expression (3A) holds for x1 ∈ Zp, . . . , xn ∈ Zp.
The generalized Schnorr protocol is a protocol for proving that the prover knows (x1, . . . , xn) in expression (3A) without providing the verifier with any information about (x1, . . . , xn).
The generalized Schnorr protocol is composed of a proof creation algorithm ProveGenSchnorr and a verification algorithm VerifyGenSchnorr.
ProveGenSchnorr outputs the proof P.
VerifyGenSchnorr outputs true or false.
(4) The OR Proof protocol will be described.
Note that g1,1, . . . , g1,n, g2,1, . . . , gm,n are mn generators of G. In this case, expression (4A) holds for x=(x1, . . . , xn).
For i=1, 2, the following expressions hold.
x(i) = (x1(i), . . . , xn(i)) ∈ (Zp)n
y(i) = f(x(i))
The OR Proof protocol is a protocol for proving that the prover knows x(1) in y(1)=f(x(1)) or x(2) in y(2)=f(x(2)) without providing the verifier with any information about x(1) and x(2).
The OR Proof protocol is composed of a proof creation algorithm ProveOR and a verification algorithm VerifyOR.
ProveOR outputs the proof P.
VerifyOR outputs true or false.
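The OR composition can be sketched for two Schnorr statements y1 = g^x1 and y2 = g^x2: the prover answers honestly for the branch whose witness it holds and simulates the other, splitting the challenge so that c1 + c2 equals the hash. The toy parameters and names below are assumptions of this sketch.

```python
import hashlib

p, q, g = 23, 11, 4   # toy group as in the Schnorr sketch

def challenge(*vals):
    h = hashlib.sha256("|".join(map(str, vals)).encode()).digest()
    return int.from_bytes(h, "big") % q

def prove_or(ys, known, x, r, c_sim, s_sim):
    # known: index (0 or 1) of the branch whose witness x we hold;
    # r: nonce for the real branch; c_sim, s_sim: simulated challenge/response.
    other = 1 - known
    t = [None, None]
    t[other] = (pow(g, s_sim, p) * pow(ys[other], (-c_sim) % q, p)) % p  # simulated
    t[known] = pow(g, r, p)                                             # honest
    c = challenge(ys[0], ys[1], t[0], t[1])
    c_real = (c - c_sim) % q          # the real branch absorbs the remainder
    s_real = (r + c_real * x) % q
    cs = [None, None]; ss = [None, None]
    cs[known], cs[other] = c_real, c_sim
    ss[known], ss[other] = s_real, s_sim
    return t, cs, ss

def verify_or(ys, proof):
    t, cs, ss = proof
    if (cs[0] + cs[1]) % q != challenge(ys[0], ys[1], t[0], t[1]):
        return False
    return all(pow(g, ss[i], p) == (t[i] * pow(ys[i], cs[i], p)) % p
               for i in range(2))

x1 = 7
ys = [pow(g, x1, p), pow(g, 9, p)]   # the prover only knows the witness for ys[0]
P = prove_or(ys, known=0, x=x1, r=5, c_sim=3, s_sim=8)
```

Because the verifier only checks that the two challenges sum to the hash, it cannot tell which branch was simulated, which is exactly the zero-knowledge property of the disjunction.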
(5) The nOR Proof protocol will be described. The nOR Proof protocol is obtained by generalizing (4) the OR Proof protocol.
Note that g1,1, . . . , g1,n, g2,1, . . . , gm,n are mn generators of G. In this case, expression (5A) holds for x=(x1, . . . , xn).
The following expressions hold for i ∈ [n].
x(i) = (x1(i), . . . , xn(i)) ∈ (Zp)n
y(i) = f(x(i))
The nOR Proof protocol is a protocol for proving that the prover knows at least one x(i) in y(i)=f(x(i)) without providing the verifier with any information about each x(i).
The nOR Proof protocol is composed of a proof creation algorithm ProvenOR and a verification algorithm VerifynOR.
ProvenOR computes three expressions s(j), C(j), and R(j) for all j ∈ [n]\{i}, and outputs the proof P.
VerifynOR outputs true or false.
(6) The representation of a decimal as an integer value will be described.
“l” and “d” are positive integers, and expression (6A) holds.
In this case, the fixed-point representation of a decimal x is expressed as an integer <x>.
This allows model parameters of the convolutional neural network, which are usually expressed as real numbers, to be treated as integer values.
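The encoding in (6) can be sketched as follows; the scale 2^d and the helper names are illustrative assumptions, not the notation of the disclosure.

```python
# Fixed-point sketch: a decimal x with d fractional bits is represented by
# the integer <x> = round(x * 2^d); decoding divides the scale back out.
def encode(x, d):
    return round(x * (1 << d))

def decode(ix, d):
    return ix / (1 << d)

d = 8
w = 0.318                 # e.g. a model weight
iw = encode(w, d)         # the integer the protocols operate on
```

With this representation, real-valued model parameters become integers, at the cost of a quantization error of at most 2^-(d+1) per value.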
(7) The Range Proofs protocol will be described.
“g” and “h” are generators of G.
“t” is an integer.
The Range Proofs protocol is a protocol for proving by the prover that the integer t is an integer equal to or greater than 0 and less than 2^m without providing the verifier with information about the integer t.
The Range Proofs protocol is composed of a proof creation algorithm ProveRange and a verification algorithm VerifyRange.
ProveRange computes a binary number representation of t.
Then, ProveRange calculates the proof P for the following four expressions using ProveGenSchnorr in (3) the generalized Schnorr protocol.
VerifyRange outputs the verification result V.
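The bit-decomposition step behind ProveRange can be sketched with Pedersen commitments in a toy group: t is split into m bits, each bit is committed, and the weighted product of the bit commitments recombines to a commitment to t. The group parameters are assumptions of this sketch, and the per-bit proofs that each committed value is 0 or 1 (the generalized Schnorr part) are omitted.

```python
p, q = 23, 11
g, h = 4, 9   # two generators of the order-11 subgroup (toy; log_g h assumed unknown)

def commit(v, r):
    # Pedersen commitment g^v * h^r mod p
    return (pow(g, v, p) * pow(h, r, p)) % p

m = 3
t = 5                                   # claim: 0 <= t < 2^m
bits = [(t >> i) & 1 for i in range(m)] # binary representation of t
rs = [2, 7, 4]                          # per-bit commitment randomness
C_bits = [commit(b, r) for b, r in zip(bits, rs)]

# Homomorphic recombination: prod_i C_i^(2^i) commits to sum_i b_i*2^i = t
acc, r_total = 1, 0
for i, Ci in enumerate(C_bits):
    acc = (acc * pow(Ci, 1 << i, p)) % p
    r_total = (r_total + rs[i] * (1 << i)) % q
```

The verifier checks that the recombined commitment matches the commitment to t, which forces t to lie in [0, 2^m) once every bit is proved to be binary.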
(8) The Multiplication Proofs protocol will be described.
“g” and “h” are generators of G. In this case, the following expressions hold for integers x, y, and z.
The Multiplication Proofs protocol is a protocol for proving by the prover that a relationship <z>=<xy> holds, up to truncation, for the integers x, y, and z without providing the verifier with information about z, x, and y.
The Multiplication Proofs protocol is composed of a proof creation algorithm ProveMult and a verification algorithm VerifyMult.
In ProveMult, the meaning of each symbol is as indicated below.
ProveMult calculates the proof P for the following nine expressions using (3) the generalized Schnorr protocol and (7) the Range Proofs protocol.
VerifyMult outputs the verification result V.
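The arithmetic fact that the Multiplication Proofs protocol certifies can be sketched plainly (the proof machinery itself is omitted): with d fractional bits, the raw product <x>·<y> carries 2d fractional bits, so <z> is that product truncated by d bits.

```python
def encode(x, d):
    # fixed-point encoding <x> = round(x * 2^d)
    return round(x * (1 << d))

def fxp_mul(ix, iy, d):
    # the product has 2d fractional bits; truncate back to d
    return (ix * iy) >> d

d = 8
ix, iy = encode(1.5, d), encode(0.25, d)
iz = fxp_mul(ix, iy, d)    # fixed-point representation of 1.5 * 0.25
```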
(9) The ReLU Layer protocol will be described.
A ReLU function is a function that is used in the Activation layer of the convolutional neural network.
The ReLU Layer protocol is a protocol for proving by the prover that a relation y=ReLU(x) holds for the integers x and y without providing the verifier with information about the integers x and y.
The ReLU Layer protocol is composed of a (Pa) proof generation algorithm ProveReLU and a (Va) verification algorithm VerifyReLU.
In ProveReLU, the meaning of each symbol is as indicated below.
ProveReLU calculates the proof P for the following expressions using (4) the OR Proof protocol, (7) the Range Proofs protocol, and (8) the Multiplication Proofs protocol.
The proof P proves that one of the following two expressions holds.
VerifyReLU outputs the verification result V.
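The disjunction that the OR proof establishes for the ReLU relation can be stated directly: either x ≥ 0 and y = x, or x < 0 and y = 0. A plain check of that statement on fixed-point integers (the function name is illustrative):

```python
def relu_statement_holds(x, y):
    # the two branches of the OR proof in the ReLU Layer protocol
    branch_pos = (x >= 0) and (y == x)   # x non-negative, y copies x
    branch_neg = (x < 0) and (y == 0)    # x negative, y is zero
    return branch_pos or branch_neg
```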
(10) The Affine Layer protocol will be described.
The Affine Layer protocol is a protocol for proving by the prover that y=Ax+b holds for A ∈ Rn×n, b ∈ Rn, and x, y ∈ Rn without providing the verifier with information about x, y, A, and b.
A, b, x, and y are as indicated below.
The Affine Layer Protocol is composed of a (Pb) proof generation algorithm ProveAffine and a (Vb) verification algorithm VerifyAffine.
In ProveAffine, the meaning of each symbol is as indicated below.
ProveAffine calculates the proof P for the following expressions using (3) the generalized Schnorr protocol, (7) the Range Proofs protocol, and (8) the Multiplication Proofs protocol.
VerifyAffine outputs the verification result V.
(11) The Convolution Layer protocol will be described.
The Convolution Layer protocol is a protocol for proving by the prover that y=Conv(x) holds for x, y ∈ Rm×m′ without providing the verifier with information about x, y, and Conv.
Conv is an operation that is performed on input data x in the Convolution layer of the convolutional neural network.
The Convolution Layer protocol is composed of a (Pc) proof generation algorithm ProveConv and a (Vc) verification algorithm VerifyConv.
(ai,j) is a weight parameter of Conv. In this case, for input x = (x1,1, . . . , x1,m′, . . . , xm,1, . . . , xm,m′) and output y = (y1,1, . . . , y1,m′, . . . , ym,1, . . . , ym,m′), yi,j can be expressed as the following expression.
Therefore, the proof P and the verification result V can be generated in a similar manner to that of (10) the Affine Layer protocol.
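The reduction to the Affine Layer protocol can be checked concretely: unrolling the filter window into a matrix A makes the convolution a matrix-vector product y = Ax. The toy sizes (2x2 filter, stride 1, no padding) and names below are assumptions of this sketch.

```python
def conv2d(x, w):
    # direct "valid" 2D convolution: x is m x m, w is k x k
    m, k = len(x), len(w)
    out = m - k + 1
    return [[sum(w[a][b] * x[i + a][j + b] for a in range(k) for b in range(k))
             for j in range(out)] for i in range(out)]

def conv_as_matrix(w, m):
    # build A so that the flattened output equals A @ flattened input
    k = len(w)
    out = m - k + 1
    A = [[0] * (m * m) for _ in range(out * out)]
    for i in range(out):
        for j in range(out):
            for a in range(k):
                for b in range(k):
                    A[i * out + j][(i + a) * m + (j + b)] = w[a][b]
    return A

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
w = [[1, 0], [0, -1]]
y = conv2d(x, w)
A = conv_as_matrix(w, 3)
x_flat = [v for row in x for v in row]
y_flat = [sum(Ar[c] * x_flat[c] for c in range(len(x_flat))) for Ar in A]
```

Since A is fixed by the filter weights, proving y = Ax with the Affine Layer protocol is exactly proving the convolution.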
(12) The Average Pooling Layer protocol will be described.
The Average Pooling Layer protocol is a protocol for proving by the prover that y=AP(x) holds for x, y ∈ Rm×m without providing the verifier with information about x, y, and AP.
AP is an operation that is performed on input data x in the Average Pooling layer of the convolutional neural network.
The Average Pooling Layer protocol is composed of a (Pd) proof generation algorithm ProveAP and a (Vd) verification algorithm VerifyAP.
For y and x, the following relationship holds.
“l” is the size of rows and columns and a stride in an AP filter.
Therefore, y can be expressed as a linear transformation of x.
Therefore, the proof P and the verification result V can be generated in a similar manner to (10) the Affine Layer protocol.
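The linearity of average pooling can likewise be checked directly: every output entry is the mean of an l x l block, i.e. a matrix row whose nonzero entries are all 1/l^2. A toy check with l = 2 (names are illustrative):

```python
def avg_pool(x, l):
    # average pooling with an l x l window and stride l
    m = len(x) // l
    return [[sum(x[i * l + a][j * l + b] for a in range(l) for b in range(l))
             / (l * l) for j in range(m)] for i in range(m)]

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
y = avg_pool(x, 2)   # each entry is the mean of one 2x2 block
```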
(13) The Max Pooling Layer protocol will be described.
The Max Pooling Layer protocol is a protocol for proving by the prover that y=MP(x) holds for x ∈ Rkm×km and y ∈ Rm×m without providing the verifier with information about x, y, and MP.
MP is an operation that is performed on input data x in the Max Pooling layer of the convolutional neural network.
The Max Pooling Layer protocol is composed of a (Pe) proof generation algorithm ProveMP and a (Ve) verification algorithm VerifyMP.
For y and x, the following relationship holds.
“k” is the size of rows and columns and a stride in an MP filter.
In order to prove that this relationship holds, it is proved that the following expression holds.
ProveMP performs the following computation for all (i, j) ∈ [m]×[m].
ProveMP calculates the proof P for the following formula using (7) the Range Proofs protocol and (5) the nOR Proof protocol.
VerifyMP generates the verification result V using VerifynOR.
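What ProveMP establishes per output entry can be stated plainly: y dominates every element of its block (the differences y − x are range-proved non-negative) and equals at least one of them (the nOR disjunction). A direct check of that pair of statements, with illustrative names:

```python
def mp_statement_holds(block, y):
    # block: one k x k window of the input, flattened
    dominates = all(y - v >= 0 for v in block)   # range-proved differences
    matches_one = any(y == v for v in block)     # the nOR disjunction
    return dominates and matches_one

block = [3, 7, 2, 5]
```

Together the two conditions hold only for y = max(block), which is why no separate comparison circuit is needed.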
(14) The SoftMax Layer protocol will be described.
The SoftMax Layer protocol is a protocol for proving by the prover that y=SoftMax(x) holds for x, y ∈ Rn without providing the verifier with information about x and y.
SoftMax is an operation that is performed on input data x in the SoftMax layer of the convolutional neural network.
For x=(x1, . . . , xn) and y=(y1, . . . , yn), yi is expressed as expression (13A).
The SoftMax Layer protocol performs computation by replacing the input x with x′ = x·log2e.
Therefore, a proof for expression (13A) can be constructed as described below. First, a protocol for proving that y′ = 2^x′ holds for x′, y′ ∈ R is constructed. By combining this protocol with (8) the Multiplication Proofs protocol, the proof can be constructed.
The protocol for proving that y′ = 2^x′ holds is composed of a proof generation algorithm Proveexp and a verification algorithm Verifyexp.
<x′> = x0′·2^d + x1′ holds. This protocol computes expression (13D) using the degree-8 polynomial (13C) instead of computing expression (13B).
Note that c0 to c8 are as indicated below.
The proof generation algorithm Proveexp is indicated below.
Note that x0′[i] is the i-th bit of x0′.
The verification algorithm Verifyexp is indicated below.
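The split <x′> = x0′·2^d + x1′ separates 2^x′ into an exact integer-power part 2^x0′ and a fractional part 2^(x1′/2^d) that the degree-8 polynomial approximates. Since the coefficients c0 to c8 are not reproduced here, the sketch below assumes the Taylor coefficients of 2^t = e^(t·log 2) on [0, 1); the function name is illustrative.

```python
import math

d = 8
# Assumed coefficients: degree-8 Taylor expansion of 2^t (not the source's c0..c8)
c = [math.log(2) ** i / math.factorial(i) for i in range(9)]

def pow2_approx(xp):
    # xp = <x'> in fixed point (xp >= 0); split into integer and fractional parts
    x0, x1 = xp >> d, xp & ((1 << d) - 1)
    frac = x1 / (1 << d)                         # x1'/2^d in [0, 1)
    poly = sum(c[i] * frac ** i for i in range(9))
    return (1 << x0) * poly                      # 2^x0 exact; 2^frac by polynomial
```

Restricting the polynomial's domain to [0, 1) is what keeps a fixed, low degree sufficient for the whole range of x′.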
The SoftMax Layer protocol is composed of a (Pf) proof generation algorithm ProveSoftMax and a (Vf) verification algorithm VerifySoftMax.
ProveSoftMax calculates the proof P for the following expressions using Proveexp and ProveMult.
VerifySoftMax outputs the verification result V.
Embodiment 1 has the following features.
Embodiment 1 converts model parameters of an inference model into integer values and verifies the inference model.
The algorithms of the zero-knowledge proof protocols of the respective layers are features of Embodiment 1.
The verification method of the inference model is realized by combining conversion of weight parameters (model parameters) into integer values and the zero-knowledge proof protocols.
The features of Embodiment 1 will be presented with reference signs indicated in parentheses.
An inference device (400) obtains an inference result (c) by executing an inference model (M) by expressing a decimal value that is data (x) on which inference processing is to be performed as an integer value and treating the integer value as a parameter of a convolutional neural network.
A proving device (500) obtains a proof (P) by executing a proof generation algorithm using the inference result as input.
A verification device (600) obtains a verification result (V) by executing a verification algorithm using the proof as input.
The inference result includes a computation result of each layer of the convolutional neural network.
For each layer of the convolutional neural network, the proving device (500) executes the proof generation algorithm of a protocol corresponding to the type of the layer, using the computation result of the layer as input.
The proof includes the execution result of the proof generation algorithm for each layer of the convolutional neural network.
For each layer of the convolutional neural network, the verification device (600) executes the verification algorithm of the protocol corresponding to the type of the layer, using the execution result of the proof generation algorithm for the layer as input.
The verification result includes the execution result of the verification algorithm for each layer of the convolutional neural network.
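The layer-wise flow described above can be sketched as follows. This is a hedged outline only: the layer-type strings and the toy protocol table are illustrative placeholders, not the document's actual per-layer algorithms.

```python
def prove_all(layers, results, protocols):
    """Proof P: one per-layer proof, chosen by layer type.

    `layers` is a list of layer-type strings, `results` holds the
    matching per-layer computation results, and `protocols` maps a
    layer type to its (prove, verify) pair.
    """
    return [protocols[kind][0](res) for kind, res in zip(layers, results)]

def verify_all(layers, proofs, protocols):
    """Verification result V: every per-layer proof must verify."""
    return all(protocols[kind][1](p) for kind, p in zip(layers, proofs))

# Toy stand-in protocol (illustration only): the "proof" is the
# doubled result, and verification merely checks that it is even.
toy = {"conv": (lambda r: 2 * r, lambda p: p % 2 == 0),
       "relu": (lambda r: 2 * r, lambda p: p % 2 == 0)}
```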
Embodiment 1 realizes zero-knowledge proof protocols that can handle decimal parameters by representing fixed-point representations of decimals as integer values.
This makes it possible to verify an inference model whose parameters are decimals.
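A minimal sketch of this fixed-point representation follows; the scale d = 16 is an assumed example, not a value from the document.

```python
def to_fixed(v, d=16):
    """Encode a decimal as the integer round(v * 2**d)."""
    return round(v * (1 << d))

def from_fixed(q, d=16):
    """Decode the scaled integer back to a decimal."""
    return q / (1 << d)

def mul_fixed(a, b, d=16):
    """Multiply two fixed-point values: the raw product of two
    scaled integers carries a factor 2**(2d), so rescale by 2**d."""
    return (a * b) >> d
```

Because every intermediate value is an integer, the arithmetic fits the integer-only zero-knowledge proof protocols while still representing decimal model parameters.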
Embodiment 1 has, for example, the following effects.
Consider a case in which data is analyzed by a third party that provides an inference service using a machine learning model.
Embodiment 1 makes it possible to prove that an inference result for the analyzed data was actually obtained by performing inference on the data using the machine learning model, without disclosing information about the inference model.
The method of Embodiment 1 represents fixed-point representations of decimals as integer values and uses zero-knowledge proofs based on the difficulty of the discrete logarithm problem. This realizes zero-knowledge proof protocols that can handle decimal parameters, making it possible to verify an inference model whose parameters are decimals.
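As a hedged illustration of the discrete-logarithm-based building blocks on which such proofs typically rest, a Pedersen commitment hides an integer value while allowing homomorphic checks. The toy group parameters below are for illustration only and are not the document's; real protocols use cryptographically large parameters.

```python
# Toy Schnorr group: p = 2q + 1 with p and q prime; g and h are
# squares mod p, hence generators of the order-q subgroup.
p, q = 2039, 1019
g, h = 4, 9

def commit(m, r):
    """Pedersen commitment g**m * h**r mod p: hides m (given a
    random r) and is binding under the discrete logarithm
    assumption."""
    return pow(g, m % q, p) * pow(h, r % q, p) % p
```

The commitment is additively homomorphic: commitments to m1 and m2 multiply into a commitment to m1 + m2, which is the kind of property that lets per-layer arithmetic be checked without revealing the operands.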
The parameter generation device 200, the key generation device 300, the inference device 400, and the proving device 500 may be combined with each other. That is, the inference verification system 100 may include one or more computers that function as the parameter generation device 200, the key generation device 300, the inference device 400, and the proving device 500.
The generation unit 220 of the parameter generation device 200 may include a random number generation function or the like for generating the public parameter pp.
The generation unit 320 of the key generation device 300 may include a random number generation function or the like for generating the public key pk and the secret key sk.
Based on the drawings, a hardware configuration example of the parameter generation device 200 will be described.
The parameter generation device 200 includes processing circuitry 209.
The processing circuitry 209 realizes the acceptance unit 210, the generation unit 220, and the output unit 230.
The processing circuitry may be dedicated hardware, or may be a processor that executes programs stored in a memory.
When the processing circuitry is dedicated hardware, the processing circuitry is, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC, an FPGA, or a combination of these.
ASIC is an abbreviation for application specific integrated circuit.
FPGA is an abbreviation for field programmable gate array.
The parameter generation device 200 may include a plurality of processing circuits as an alternative to the processing circuitry 209.
In the processing circuitry 209, some functions may be realized by dedicated hardware and the remaining functions may be realized by software or firmware.
As described above, the functions of the parameter generation device 200 can be realized by hardware, software, firmware, or a combination of these.
Based on the drawings, a hardware configuration example of the key generation device 300 will be described.
The key generation device 300 includes processing circuitry 309.
The processing circuitry 309 realizes the acceptance unit 310, the generation unit 320, and the output unit 330.
The key generation device 300 may include a plurality of processing circuits as an alternative to the processing circuitry 309.
In the processing circuitry 309, some functions may be realized by dedicated hardware and the remaining functions may be realized by software or firmware.
As described above, the functions of the key generation device 300 can be realized by hardware, software, firmware, or a combination of these.
Based on the drawings, a hardware configuration example of the inference device 400 will be described.
The inference device 400 includes processing circuitry 409.
The processing circuitry 409 realizes the acceptance unit 410, the inference unit 420, and the output unit 430.
The inference device 400 may include a plurality of processing circuits as an alternative to the processing circuitry 409.
In the processing circuitry 409, some functions may be realized by dedicated hardware and the remaining functions may be realized by software or firmware.
As described above, the functions of the inference device 400 can be realized by hardware, software, firmware, or a combination of these.
Based on the drawings, a hardware configuration example of the proving device 500 will be described.
The proving device 500 includes processing circuitry 509.
The processing circuitry 509 realizes the acceptance unit 510, the storing unit 520, the proving unit 530, and the output unit 540.
The proving device 500 may include a plurality of processing circuits as an alternative to the processing circuitry 509.
In the processing circuitry 509, some functions may be realized by dedicated hardware and the remaining functions may be realized by software or firmware.
As described above, the functions of the proving device 500 can be realized by hardware, software, firmware, or a combination of these.
Based on the drawings, a hardware configuration example of the verification device 600 will be described.
The verification device 600 includes processing circuitry 609.
The processing circuitry 609 realizes the acceptance unit 610, the storing unit 620, the verification unit 630, and the output unit 640.
The verification device 600 may include a plurality of processing circuits as an alternative to the processing circuitry 609.
In the processing circuitry 609, some functions may be realized by dedicated hardware and the remaining functions may be realized by software or firmware.
As described above, the functions of the verification device 600 can be realized by hardware, software, firmware, or a combination of these.
Embodiment 1 is an example of a preferred embodiment, and is not intended to limit the technical scope of the present disclosure. Embodiment 1 may be partially implemented or may be implemented in combination with another embodiment. The procedures described using flowcharts or the like may be suitably changed.
Each “unit” that is an element of the inference verification system 100 may be interpreted as “process”, “step”, “circuit”, or “circuitry”.
This application is a Continuation of PCT International Application No. PCT/JP2022/026498, filed on Jul. 1, 2022, which is hereby expressly incorporated by reference into the present application.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/JP2022/026498 | Jul 2022 | WO |
| Child | 18966341 | | US |