The invention relates to efficient protocols for verifying remote computations, with particular application to cloud-based services and mobile environments.
We consider a scenario where a trusted software author Alice wishes to enable a set of users to make use of her program P, which we treat as a (non-uniform) Boolean circuit. In particular, this program P may have embedded within it a large proprietary database that the program makes use of. However, Alice neither wants to release her program P nor does she want to host and execute the program herself. Instead, she wishes to delegate this computation to an untrusted Worker, and the User/Verifier wants to be certain that they are receiving an output obtained via a computation of Alice's actual program P.
What we have just described is one of the most challenging variants of the classical problem of publicly verifiable delegation, which has been the subject of intense work for decades, albeit largely for relaxed variants of the model described above.
Specifically, delegation schemes without public verification have been designed from standard assumptions for both deterministic and non-deterministic computations. Restricting verification to a designated verifier, however, means that the worker must produce a fresh proof, unique to each particular verifier, for every computation, which is certainly not ideal. Other prior work achieves public verification but does not achieve public delegation; in other words, the input provider needs to run a pre-processing algorithm corresponding to the program P before being able to delegate.
With regard to non-interactive publicly verifiable delegation, starting from the seminal work on computationally sound proofs by Micali in the random oracle model, there have been several constructions of publicly verifiable non-interactive delegation schemes based on the Random Oracle Model or non-standard knowledge assumptions. From more standard assumptions, there have been several works. An illustrative example is work that proposed the first publicly verifiable non-interactive delegation scheme from a falsifiable decisional assumption on groups with bilinear pairings. However, in contrast with the setting described above, such schemes can only achieve succinct delegation when the Verifier knows the program P. In the setting of Boolean circuits, this trivializes the delegation problem, since reading P's description takes as long as evaluating P. Indeed, the case that we consider (where Alice's program is large) is extremely well motivated: the program P could be an ML model with billions of painstakingly learned parameters.
The SNARGs for NP Barrier.
Constructing a protocol that caters to the fully non-interactive setting defined above has been elusive. Note that in this problem, the User/Verifier and Input Provider do not know the program P. Hence, from the User/Verifier's perspective, P is a witness. Thus, it certainly seems that finding a solution is intricately related to a major goal in the area of non-interactive succinct proof systems, i.e., SNARGs for NP. Unfortunately, the only known constructions of SNARGs for NP base their soundness on the Random Oracle Model or non-standard knowledge assumptions. Finding a solution relying solely on standard assumptions has been an open problem for over a decade. Current solutions only achieve SNARGs for P.
Thus, there is a need to enable Non-Interactive Publicly Verifiable Succinct Delegation for Committed Programs without having to use SNARGs for NP.
We present the first complete solution to achieving succinct non-interactive publicly verifiable delegation for committed programs. In our setting, only one-way communication is permitted between the parties, as can be seen in the acyclic communication graph of the accompanying drawings.
We show that many ideas from SNARGs for P can be applied here. Although P is unknown to the User/Verifier, we show that it suffices for Alice to communicate a tiny amount of information of size poly(log|P|) about the program P (referred to as HP), as shown in the accompanying drawings.
In some embodiments, the Worker is trusted with the program P by Alice, whereas it is not trusted by the verifier. This asymmetry of trust is inherent in our setup and is well motivated. In a typical real-world situation, the verifier is a user on the internet who takes part in a one-off interaction with a cloud service for some computation. The need to prove honest behavior in this situation is significant. Alternatively, Alice might be able to have an agreement with the cloud service before handing over her program, which would make it hard for the Worker to breach trust without consequences.
Assuming the hardness of the LWE problem and the existence of one-way functions, there exists a non-interactive publicly verifiable succinct delegation scheme for committed programs.
Finally, in order to get zero-knowledge, it suffices for Alice to commit to HP rather than sending it out in the open. Disclosed herein is a generic transformation that converts any delegation protocol of this form into one that attains zero-knowledge.
Assuming the hardness of the LWE problem and the existence of a succinct delegation scheme of the above form, there exists a non-interactive publicly verifiable succinct delegation scheme for committed programs with zero-knowledge.
Also disclosed herein is how to achieve zero-knowledge versions of our delegation scheme, meeting the same strong succinctness and efficiency goals, and under the same assumption (LWE).
Before this work, the only object known to imply this challenging form of delegation was a SNARG/SNARK for NP. This is because from the point of view of the user/verifier, the program P is an unknown witness to the computation. However, constructing a SNARG for NP remains a major open problem. Herein, it is shown how to achieve delegation in this challenging context assuming only the hardness of the Learning With Errors (LWE) assumption, bypassing the apparent need for a SNARG for NP.
Some embodiments of the invention include systems, methods, network devices, and machine-readable media for a non-interactive method for executing a program at a remote cloud processor such that the result of a computation at the remote cloud processor is trusted, including by, at a program provider module: creating and storing an arbitrary program; creating a succinct hash of the program and providing access to the hash to a remote verifier; transmitting the program to a remote cloud processor, and not transmitting the program to the verifier; at an input provider module: storing an input value for use with the program; transmitting the input value to the remote cloud processor and the verifier; at the remote cloud processor module: executing the program based on the input value to generate a result output; generating a proof, wherein the proof comprises a bitstring having a length that is polylogarithmic in the program size, wherein the generating of the proof can be accomplished in polynomial time in the program size, and wherein the generated proof is capable of being verified by any arbitrary verifier module; transmitting the result output and the proof to the verifier module; at the verifier module: executing a verifier routine based on the input value, the result output, the proof, and the hash of the program to output a binary value representing that the remote cloud processor has generated the result output by executing the program provided by the program provider on the input provided by the input provider; wherein the time required for the verifier routine is polylogarithmic in the program size; and wherein the verifier module does not have a copy of the program, the verifier module is distinct from the input provider module, and the only value provided from the input provider to the verifier module is the input value.
In some further embodiments, the input provider module and the verifier module may be configured for operation by the same entity or party. In some further embodiments, the verifier module is computationally and logically distinct from the input provider module. In some further embodiments, the verifier module receives only the hash from the program provider module. In some further embodiments, the size of the hash is either independent of the size of the program or is of a fixed size. In some further embodiments, the arbitrary program is represented as a non-uniform Boolean circuit. In some further embodiments, the program provider module is an honest party. In some further embodiments, the method is configured to generate the output binary value without executing any further communication steps. In some further embodiments, the method is non-interactive and comprises only unidirectional communications between the modules.
The accompanying drawings, which are included to provide further understanding and are incorporated in and constitute a part of this specification, illustrate disclosed embodiments, and together with the description, serve to explain the principles of the disclosed embodiments. In the drawings:
Delegation Scenario. With reference to the setup of our delegation scenario: there are four parties, namely, (1) Alice, the program author ProgAuth, who sends a program P and some computed state state to a Worker, (2) an Input Provider I that outputs some value x, (3) a Worker W that takes as input (P, state, x) and outputs P(x) and a proof Π, and (4) a User/Verifier V that gets as input (x, P(x), Π) and outputs 1 if and only if Π was a valid proof. Assume that all the parties get the security parameter λ as an input. An additional requirement is that |Π| and the runtime of V are poly(λ, log|P|, |x|), and W runs in time poly(λ, |x|, |P|). Thus, any non-interactive publicly verifiable succinct delegation scheme can be viewed as a collection of four algorithms, sDel=(ProgAuth, W, I, V), with the input-output behaviour and efficiency guarantees as specified. Note that this is a P computation for the Worker, but the primary challenge is that the verifier does not have knowledge of the “witness” P; hence this is an NP computation from the verifier's point of view. Herein, we observe that it is indeed feasible to achieve our delegation scenario for all circuits without having to go through SNARGs for NP. Some embodiments of the technique are based on SNARGs for P.
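To make the communication pattern concrete, the following minimal Python sketch records only the interfaces of the four algorithms and their efficiency contracts. The method names and types are hypothetical placeholders, not the disclosed construction itself.

```python
from typing import Protocol, Tuple

class SDel(Protocol):
    """Interface sketch for sDel = (ProgAuth, W, I, V); names hypothetical."""

    def prog_auth(self, P: bytes) -> Tuple[bytes, bytes]:
        """Alice: keeps P private; returns (state for the Worker, digest HP).
        Only HP, of size poly(lambda, log|P|), is ever published."""
        ...

    def worker(self, P: bytes, state: bytes, x: bytes) -> Tuple[bytes, bytes]:
        """Worker: returns (y = P(x), proof Pi); runs in poly(lambda, |x|, |P|)."""
        ...

    def verify(self, HP: bytes, x: bytes, y: bytes, pi: bytes) -> bool:
        """User/Verifier: runs in poly(lambda, log|P|, |x|); never sees P."""
        ...
```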
Challenges of Implementing Prior Techniques. Known Batch Arguments for NP (BARGs) can be built from LWE. BARGs allow an efficient prover to compute a non-interactive and publicly verifiable “batch proof” of many instances, with size poly(|w|, log T) for T many statements, each with a witness of size |w|. Prior work begins by looking at P as a Turing machine, and the steps of P's computation are interpreted as an index circuit Cindex. Say P terminates in T steps. Formally, they construct a BARG for the Index Language Lindex, where
Lindex={(Cindex,i)|∃wi such that Cindex(i,wi)=1},
where i∈[T] is an index. Let s0, s1, . . . , sT denote the encodings of the internal states of P along with its tape information, and let Step be its step function, such that Step(si−1)=si. The witness for the ith intermediate computation is then defined as wi=(si−1, si). The index circuit is built such that (Cindex, i)∈Lindex essentially implies that the Turing machine step function was correctly computed on si−1 to yield si. Note that this alone does not suffice as a proof, because the BARG only confirms that (si−1, si) and (s′i, si+1) are valid witnesses. If si−1, si, s′i, si+1 are generated by the step function of the same Turing machine P, then they must be consistent with each other, i.e., si=s′i. However, this is not guaranteed by a BARG.
To resolve this issue, the prover also sends a Somewhere Extractable (SE) hash of the witnesses (s0, {si−1, si}i∈[T]). The extraction property of this hash allows the verifier to check whether the witnesses of two consecutive BARG instances are indeed consistent with each other. At this stage, we remind the reader of the efficiency goals: crucially, the proof size and verification time must be poly(λ, log T). However, note that |Cindex| grows linearly with |si|, and the known constructions of SE hashes can only produce hashes of size poly(|si|). This means that the total communication and verifier run time will be at least poly(|si|), which is unacceptable if the Turing machine has massive states. To overcome this final barrier, prior work makes use of hash trees, which compress the states si to a short hash hi such that |hi|=poly(λ). Such trees also have a soundness property whereby a Prover must produce a succinct proof Πi that the hash tree was indeed updated correctly at the ith step of the Turing machine computation. Once the succinctness guarantee is ensured, the prover then produces SE hashes corresponding to (h0, Π0, {hi−1, Πi−1, hi, Πi}i∈[T]) along with the openings to these hashes. To summarise, the proof consists of two parts: (1) the BARG proof, and (2) a somewhere extractable hash of the witnesses. Relying on the soundness of the BARG, the extraction correctness property of the SE hash, and the soundness of the hash tree, a User/Verifier can check that each of these T intermediate steps is indeed the correct state for P, i.e., that the computation was done honestly.
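The following toy Python sketch illustrates the step-chaining view and the exact consistency gap that the SE hash closes; step() and the byte-string states are stand-in placeholders, not the real step function.

```python
from hashlib import sha256

def step(s: bytes) -> bytes:
    """Placeholder for the Turing machine step function Step."""
    return sha256(b"step" + s).digest()

T = 8
s = [b"initial-state-s0"]
for _ in range(T):                       # honest chain s_0, ..., s_T
    s.append(step(s[-1]))

# Witness for the i-th BARG instance: w_i = (s_{i-1}, s_i).
witnesses = [(s[i - 1], s[i]) for i in range(1, T + 1)]

# The index circuit checks each step locally...
assert all(step(prev) == nxt for prev, nxt in witnesses)

# ...but nothing in the BARG alone forces adjacent witnesses to reuse
# the SAME s_i. Extracting from the SE hash of (s_0, {s_{i-1}, s_i})
# lets the soundness reduction enforce this cross-instance check:
for (_, s_i), (s_i_next_copy, _) in zip(witnesses, witnesses[1:]):
    assert s_i == s_i_next_copy
```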
However, this approach only works if the User/Verifier can confirm that the inputs used for the computation by the Worker, i.e., (P, x), are indeed the correct starting values as provided by the Program Author and Input Provider. This works fine in some cases because, in the prior setting, the User/Verifier actually knows (P, x). Unfortunately, this is not at all true in our scenario. Thus, the prior techniques cannot be implemented directly, as the soundness of the BARG proof cannot provide any guarantees if there is no way for the User/Verifier to check that the initial inputs used by the Worker are correct.
Example Embodiment. Disclosed herein is an alternate way of interpreting the computation of P on input x as the following: Consider a circuit-universal Turing machine U which takes as input (P, x, y) and accepts (P, x, y) in T=Õ(|P|) steps if P(x)=y. We can assume without loss of generality that P∈{0, 1}m, x∈{0, 1}n, and y∈{0, 1}, where m, n≤2λ. Keeping this in mind, we introduce the notion of Semi-Trusted SNARGs for NP. This new kind of SNARG is one that will work for general computations, but only with a little bit of extra help from a trusted party that knows the witness: in our delegation scenario this is Alice, who knows the witness P!
A Semi-Trusted SNARG is a tuple of algorithms stSNARG=(Setup, TrustHash, P, V), where (1) Setup is a randomised algorithm that takes as input the security parameter and outputs a Common Random String (CRS), (2) TrustHash is a trusted deterministic algorithm that takes as input (CRS, P) and outputs a digest HP, (3) P is a deterministic prover which takes as input the CRS and (P, x, y) and outputs a proof Π, and (4) V is a deterministic verifier which gets the CRS and (HP, x, y, Π) as input and outputs 1 if Π is valid. It must be that |Π| and the run time of V are poly(λ, log T), and P runs in time poly(λ, |x|, |P|, T). A simple reduction shows that in the CRS model (or, alternatively, in a model where Alice chooses the CRS), the existence of stSNARG implies the existence of sDel. Hence, from here onwards, our goal is to construct a Semi-Trusted SNARG for NP.
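A minimal sketch of this interface and of the simple reduction follows; all names are hypothetical, and evaluate() stands in for evaluating the committed circuit.

```python
from typing import Protocol, Tuple

class StSNARG(Protocol):
    """Interface sketch for stSNARG = (Setup, TrustHash, P, V)."""
    def setup(self, lam: int, T: int) -> bytes: ...            # randomised CRS
    def trust_hash(self, crs: bytes, P: bytes) -> bytes: ...   # run by Alice
    def prove(self, crs: bytes, P: bytes, x: bytes, y: bytes) -> bytes: ...
    def verify(self, crs: bytes, HP: bytes, x: bytes, y: bytes,
               pi: bytes) -> bool: ...                         # poly(lam, log T)

# The reduction: each sDel algorithm is a thin wrapper around stSNARG.
def sdel_prog_auth(st: StSNARG, crs: bytes, P: bytes):
    return (P, b""), st.trust_hash(crs, P)   # (program + state, digest HP)

def sdel_worker(st: StSNARG, crs: bytes, P: bytes, x: bytes, evaluate):
    y = evaluate(P, x)                       # evaluate is a placeholder
    return y, st.prove(crs, P, x, y)

def sdel_verify(st: StSNARG, crs: bytes, HP: bytes, x: bytes,
                y: bytes, pi: bytes) -> bool:
    return st.verify(crs, HP, x, y, pi)
```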
We briefly provide an informal explanation of our construction.
Every intermediate state of the Universal Turing Machine U is encoded into a succinct hash (call these h0, . . . , hT) accompanied by succinct proofs {Πi}i∈[T]. The prover computes two independent copies of Somewhere Extractable (SE) hashes (c1, c2) of the encoding {h0, {(h1, Π1), . . . , (hT, ΠT)}}, along with their corresponding openings. Here h0=(st0, HP, Hx, Hwork), where st0 is the hash of U's starting state, which is publicly known, Hx denotes the hash of x, and Hwork is the hash of U's blank work tape. The use of two independent SE hashes is pivotal for soundness, on which we elaborate later.
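The shape of this encoding is sketched below; se_hash() and all byte-string values are placeholders (a real SE hash is keyed and supports openings and a trapdoor, per Section 1). The point is only the structure: one anchor h0 plus the per-step pairs, hashed under two independent keys.

```python
from hashlib import sha256

def se_hash(key: bytes, msg: bytes) -> bytes:
    """Placeholder standing in for SE.Hash under key `key`."""
    return sha256(key + msg).digest()

# h_0 binds the public start state, Alice's digest HP, the input hash
# Hx, and the hash of the blank work tape.
st0, HP, Hx, Hwork = b"st0", b"HP-from-Alice", b"Hx", b"Hwork"
h0 = st0 + HP + Hx + Hwork

T = 4
steps = [(sha256(b"h%d" % i).digest(), b"Pi%d" % i) for i in range(1, T + 1)]
encoding = h0 + b"".join(h + pi for h, pi in steps)   # {h0, {(h_i, Pi_i)}}

c1 = se_hash(b"independent-key-1", encoding)   # first SE hash copy
c2 = se_hash(b"independent-key-2", encoding)   # second, independent copy
```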
We point out that TrustHash computes HP using the same hash tree that is used by the Prover for hashing the Turing machine states. This is crucial for ensuring the soundness of the protocol, as we show in the accompanying drawings.
In our proof, we choose to omit explicit use of this notion; instead, we make direct use of two independent SE hashes as mentioned above. A simple hybrid argument then gives a straightforward proof of soundness. This shows that the “anchor and step” use of SE hashes, which dates to the introduction of somewhere-binding hashes in 2015, is directly sufficient for this proof of soundness.
Zero-Knowledge. Some embodiments of the delegation scenario can be configured so that no information about P is leaked to V during the delegation process. Hence, zero-knowledge guarantees can be added to our protocol. A generic transformation that modifies a semi-trusted SNARG to add zero-knowledge guarantees is disclosed. In order to do so, we make use of a statistically binding extractable commitment scheme and a NIZK, and make the following modifications:
We now define the underlying primitives that are used as building blocks in the succinct delegation scheme.
Definition 1.1 (Non-Interactive Zero Knowledge (NIZK) Arguments in the CRS model). A non-interactive zero-knowledge argument for a language L in the Common Reference String (CRS) model is defined by three PPT algorithms:
Soundness: for every x∉L and every PPT prover P*, there exists a negligible function ϵ such that Pr[crs←Setup(1n,1λ), π*←P*(crs,x) : V(crs,x,π*) accepts] ≤ ϵ(λ).
Known techniques show how to instantiate such NIZKs from LWE.
Definition 1.2 (Statistically Binding Extractable Commitment Scheme). A statistically binding commitment scheme Combind in the CRS model is a tuple of polynomial time algorithms (Gen, TGen, C, Ext), where,
They have the following properties:
Pr[(crs,td)←TGen(1λ), com←C(m,crs;r) : Ext(com,td)=m]=1.
Any public key encryption scheme from LWE can be used to construct a Statistically Binding Extractable Commitment Scheme.
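A minimal sketch of that folklore construction follows, assuming a generic PKE interface (the names are hypothetical): the CRS is a public key, a commitment is an encryption, the trapdoor is the secret key, and extraction is decryption. Statistical binding follows because a ciphertext decrypts to a unique message.

```python
from typing import Protocol, Tuple

class PKE(Protocol):
    """Any IND-CPA public-key encryption scheme, e.g. LWE-based Regev
    encryption; method names are hypothetical."""
    def keygen(self) -> Tuple[bytes, bytes]: ...                 # (pk, sk)
    def enc(self, pk: bytes, m: bytes, r: bytes) -> bytes: ...
    def dec(self, sk: bytes, ct: bytes) -> bytes: ...

def gen(pke: PKE) -> bytes:
    pk, _sk = pke.keygen()          # normal mode: the trapdoor is discarded
    return pk

def tgen(pke: PKE) -> Tuple[bytes, bytes]:
    pk, sk = pke.keygen()           # trapdoor mode: sk enables extraction
    return pk, sk

def commit(pke: PKE, crs: bytes, m: bytes, r: bytes) -> bytes:
    return pke.enc(crs, m, r)       # hiding from IND-CPA; binding from
                                    # unique decryption

def ext(pke: PKE, td: bytes, com: bytes) -> bytes:
    return pke.dec(td, com)         # Ext(com, td) = m, matching the
                                    # extraction property above
```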
Somewhere Statistically Binding (SSB) hashes were introduced in prior work and have been used extensively. An SSB hash works in two modes, namely, (1) a normal mode, where the key is generated uniformly at random, and (2) a trapdoor mode, where the key is generated according to a subset S denoting some bits of the message to be hashed. An extension of SSB hashes is the somewhere extractable (SE) hash. Formally, a somewhere extractable (SE) hash is a tuple of algorithms (Gen, TGen, Hash, Open, Verify, Ext), described below (an interface sketch follows the properties listed after this definition):
Furthermore, we need the SE Hash to have the following properties:
|Pr[A2(K)=1 | S←A1(1λ,1N), K←Gen(1λ,1N,1|S|)] − Pr[A2(K*)=1 | S←A1(1λ,1N), (K*,td)←TGen(1λ,1N,S)]| ≤ v(λ).
Pr[Verify(K,h,mi,i,πi)=1 | h←Hash(K,m), πi←Open(K,m,i)]=1.
Pr[Verify(K*,h,mi*,i*,πi*)=1 ⇒ Ext(h,td)i*=mi*]=1.
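For orientation, the six algorithms compose as in the following interface sketch (hypothetical method names and types); in trapdoor mode the key statistically binds the positions in S, and Ext recovers exactly those positions from a hash value.

```python
from typing import Protocol, Sequence, Tuple

class SEHash(Protocol):
    """Interface sketch for (Gen, TGen, Hash, Open, Verify, Ext)."""
    def gen(self, lam: int, N: int, s_size: int) -> bytes:
        """Normal mode: uniformly random key for N-bit messages."""
    def tgen(self, lam: int, N: int, S: Sequence[int]) -> Tuple[bytes, bytes]:
        """Trapdoor mode: (key K*, trapdoor td), binding on positions S."""
    def hash(self, K: bytes, m: bytes) -> bytes:
        """Short digest h; |h| independent of N."""
    def open(self, K: bytes, m: bytes, i: int) -> bytes:
        """Local opening pi_i for position i of m."""
    def verify(self, K: bytes, h: bytes, m_i: int, i: int, pi: bytes) -> bool:
        """Checks that position i of the hashed message equals m_i."""
    def ext(self, h: bytes, td: bytes) -> bytes:
        """Recovers the bits at the bound positions S from h alone."""
```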
Definition 1.3 (Circuit Satisfiability Language). We define the language LSAT={(C,x)|∃w such that C(x,w)=1}, where C:{0, 1}n×{0, 1}m→{0, 1} is a Boolean circuit and x∈{0, 1}n is an instance.
A non-interactive BARG for LSAT involves a prover and verifier having as common input a circuit C and a series of T instances x1, . . . , xT. The prover then sends a single message to the verifier with a proof that (C, x1), . . . , (C, xT)∈LSAT. In particular, a non-interactive BARG comprises a tuple of four algorithms (Gen, TGen, Prove, Verify), defined as follows:
CRS indistinguishability. For any non-uniform PPT adversary A:=(A1, A2) and any polynomial T=T(λ), there exists a negligible function v(λ) such that
|Pr[A2(crs)=1 | i*←A1(1λ,1T), crs←Gen(1λ,1T)] − Pr[A2(crs*)=1 | i*←A1(1λ,1T), crs*←TGen(1λ,1T,i*)]| ≤ v(λ).
Corollary 1.4. As a direct consequence of CRS indistinguishability, we have that for any non-uniform PPT adversary A:=(A1, A2), any polynomial T=T(λ), and any i≠j, there exists a negligible function v(λ) such that
|Pr[A2(crsi)=1 | i←A1(1λ,1T), crsi←TGen(1λ,1T,i)] − Pr[A2(crsj)=1 | j←A1(1λ,1T), crsj←TGen(1λ,1T,j)]| ≤ v(λ).
Pr[Verify(crs,C,x1, . . . ,xT,π)=1|crs←Gen(1λ,1T,1|C|),π←Prove(crs,C,x1, . . . ,xT,w1, . . . ,wT)]=1.
Pr[i*∈[T] ∧ (C,xi*)∉LSAT ∧ Verify(crs*,C,x1, . . . ,xT,π)=1 | i*←A1(1λ,1T), crs*←TGen(1λ,1T,i*), (C,x1, . . . ,xT,π)←A2(crs*)] ≤ v(λ).
|Pr[C(xi*,w)=1 | i*←A1(1λ,1T), (crs*,td)←TGen(1λ,1T,i*), (C,x1, . . . ,xT,π)←A2(crs*), w←E(td,C,x1, . . . ,xT,π)] − Pr[Verify(crs,C,x1, . . . ,xT,π)=1 | i*←A1(1λ,1T), crs←Gen(1λ,1T), (C,x1, . . . ,xT,π)←A2(crs)]| < v(λ).
In addition to this, crs* must be computationally indistinguishable from crs.
Definition 1.5 (Index Language). We define the Index Language Lindex as the following:
Lindex={(C,i)|∃w such that C(i,w)=1},
where C is a Boolean circuit and i is an index.
Note that a non-interactive batch argument for the index language is a special case of non-interactive BARGs for circuit satisfiability in which the instances (x1, . . . , xT) are the indices (1, . . . , T). In this case, one removes the instances from the input to the prover and verifier algorithms. Since the verifier does not read the instances, this reduces the verification time to poly(λ, log T, |C|).
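The specialization is visible directly in the interface: the verifier takes no instances at all, which is what makes poly(λ, log T, |C|) verification possible. A hypothetical sketch:

```python
from typing import Protocol, Sequence

class IndexBARG(Protocol):
    """Interface sketch for a BARG over the index language Lindex."""
    def gen(self, lam: int, T: int) -> bytes: ...
    def prove(self, crs: bytes, C: bytes,
              witnesses: Sequence[bytes]) -> bytes:
        """One proof for (C,1),...,(C,T); size poly(lam, |w|, log T)."""
    def verify(self, crs: bytes, C: bytes, pi: bytes) -> bool:
        """No x_1..x_T to read: time poly(lam, log T, |C|)."""
```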
To enable Turing machine delegation when the state space is unbounded, we use the notion of a hash tree. Formally, a hash tree scheme consists of a tuple of six algorithms:
Pr[HT.VerRead(dk,rt,l,b,π)=1∧D[l]=b|dk←HT.Gen(1λ),(tree,rt):=HT.Hash(dk,D), (b,π):=HT.Read(tree,l)]=1.
Pr[HT.VerWrite(dk,rt,l,b,rt′,π)=1 ∧ (tree′,rt′)=HT.Hash(dk,D′) | dk←HT.Gen(1λ), (tree,rt):=HT.Hash(dk,D), (tree′,rt′,π):=HT.Write(tree,l,b)]=1, where D′ denotes D with D[l] set to b.
Pr[b1≠b2 ∧ HT.VerRead(dk,rt,l,b1,π1)=1 ∧ HT.VerRead(dk,rt,l,b2,π2)=1 | dk←HT.Gen(1λ), (rt,l,b1,π1,b2,π2)←A(dk)] ≤ negl(λ).
Pr[rt1≠rt2 ∧ HT.VerWrite(dk,rt,l,b,rt1,π1)=1 ∧ HT.VerWrite(dk,rt,l,b,rt2,π2)=1 | dk←HT.Gen(1λ), (rt,l,b,rt1,π1,rt2,π2)←A(dk)] ≤ negl(λ).
Theorem 1.6 (Existence of hash trees). A hash tree scheme as defined above can be efficiently constructed from any collision resistant hash function.
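As a concrete illustration of Theorem 1.6, the sketch below builds a Merkle tree over the database with SHA-256 standing in for the collision-resistant hash; HT.Read returns an authentication path, and HT.VerRead recomputes the root from it. HT.Write/HT.VerWrite (omitted for brevity) would recompute the same path after updating the leaf. This is an assumption-level sketch, not the disclosed construction.

```python
import hashlib

def _node(dk: bytes, left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(dk + left + right).digest()

def ht_hash(dk: bytes, D: list):
    """HT.Hash: returns (all tree levels, root rt); pads |D| to a power of 2."""
    level = [hashlib.sha256(dk + leaf).digest() for leaf in D]
    while len(level) & (len(level) - 1):
        level.append(b"\x00" * 32)
    tree = [level]
    while len(level) > 1:
        level = [_node(dk, level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        tree.append(level)
    return tree, level[0]

def ht_read(tree, l: int):
    """HT.Read: authentication path (sibling hashes, leaf to root) for leaf l."""
    path, idx = [], l
    for level in tree[:-1]:
        path.append(level[idx ^ 1])   # sibling at this level
        idx //= 2
    return path

def ht_ver_read(dk: bytes, rt: bytes, l: int, b: bytes, path) -> bool:
    """HT.VerRead: recompute the root from leaf value b and the path."""
    cur, idx = hashlib.sha256(dk + b).digest(), l
    for sib in path:
        cur = _node(dk, cur, sib) if idx % 2 == 0 else _node(dk, sib, cur)
        idx //= 2
    return cur == rt

dk = b"hash-key"
tree, rt = ht_hash(dk, [b"a", b"b", b"c", b"d"])
assert ht_ver_read(dk, rt, 2, b"c", ht_read(tree, 2))
```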
We formally define the notion of Publicly Verifiable Non-Interactive Succinct Delegation (sDel). Such a delegation scheme in the CRS model involves the following PPT algorithms: (1) a Software/Program Author ProgAuth, (2) a Cloud Worker W, and (3) a Verifier V. An sDel comprises the following polynomial time algorithms:
A publicly verifiable succinct delegation scheme (sDel.Setup, sDel.ProgAuth, sDel.W, sDel.V) satisfies the following properties:
Pr[sDel.V(crs,x,y,HP,Π)=1 | crs←sDel.Setup(1λ), ((P,state),HP)←sDel.ProgAuth(1λ,crs), (y,Π)←sDel.W(crs,P,state,HP,x)]=1.
Pr[sDel.V(crs,x,y,HP,Π)=1 ∧ P(x)≠y | crs←sDel.Setup(1λ), ((P,state),HP)←sDel.ProgAuth(1λ,crs), (x,aux)←A1(1λ,crs), (y,Π)←A2(crs,P,state,HP,x,aux)] ≤ negl(λ).
To construct sDel, we introduce the notion of Semi-Trusted Succinct Non-Interactive Arguments (stSNARG), which we formally introduce and construct in Section 3. After that, we prove the following lemma, which shows how to construct sDel using stSNARG as a building block.
Lemma 2.1. Assuming T=poly(m, n) and T, m, n≤2λ, the stSNARG protocol disclosed herein yields a publicly verifiable non-interactive succinct delegation scheme sDel.
A publicly verifiable non interactive succinct delegation scheme with zero knowledge zk-sDel is defined by the following efficient algorithms:
(crs,x,y,CP,Π) | (crs,aux)←Sim1(1λ), ((P,state),CP)←zk-sDel.ProgAuth(1λ,crs,aux), (y,Π)←Sim2(aux,crs,x,CP)
and
(crs,x,y,CP,Π) | crs←zk-sDel.Setup(1λ), ((P,state),CP)←zk-sDel.ProgAuth(1λ,crs), (y,Π)←zk-sDel.W(crs,P,state,x,CP)
are indistinguishable.
In Section 4, we present a generic construction of a semi-trusted non-interactive succinct argument with zero-knowledge (ZKstSNARG) from any stSNARG.
Corollary 2.2. Assuming T=poly(m, n) and T, m, n≤2λ, the ZKstSNARG protocol disclosed herein yields a publicly verifiable non-interactive succinct delegation scheme with zero knowledge, zk-sDel.
We introduce a notion of “Semi-Trusted” SNARGs, which is similar to the general definition of SNARGs with an additional “trusted” polynomial time algorithm that outputs a hash of the witness. Further, we provide an explicit construction of an stSNARG for all of NP. Note that any SNARG for an arbitrary NP language L can be reformulated in terms of a Turing machine which takes as input an instance x along with a witness w and accepts (x, w) in T steps if x∈L. In this work, we modify the definition by using a Universal Turing Machine U which takes as input an instance (x, y) and a witness which is a program P, and accepts (P, x, y) in T steps if P(x)=y. We formalise this notion as follows:
Let U be a Universal Turing Machine which takes as input a program P∈{0, 1}m for some m<2λ, a string x∈{0, 1}n for some n<2λ, and y∈{0, 1}, where x and y serve as an input and output for P respectively. U accepts (P, x, y) in T steps if P(x)=y. A prover produces a proof Π to convince a verifier that U accepts (P, x, y) in T steps. A publicly verifiable semi-trusted SNARG (stSNARG) for U has the following polynomial time algorithms:
A Universal Turing Machine U on input (P, x, y) outputs 1 if it accepts (P, x, y) within T steps. We define the language LU as,
LU:={(P,x,y,T,HP,crs) | U(P,x,y)=1 ∧ stSNARG.TrustHash(crs,P)=HP}.
A publicly verifiable stSNARG scheme (stSNARG.Setup, stSNARG.TrustHash, stSNARG.P, stSNARG.V) satisfies the following properties:
Pr[stSNARG.V(crs,x,y,HP,Π)=1 | crs←stSNARG.Setup(1λ,1T), HP←stSNARG.TrustHash(crs,P), Π←stSNARG.P(crs,P,x,y,HP)]=1.
Pr[stSNARG.V(crs,x,y,HP,Π)=1 ∧ (P,x,y,T,HP,crs)∉LU | crs←stSNARG.Setup(1λ,1T), (P,aux)←A1(1λ,crs), HP←stSNARG.TrustHash(crs,P), (x,y,Π)←A2(crs,P,HP,aux)] ≤ negl(λ).
Herein, we use the notions of non-interactive BARGs for the index language and SE hash functions in our scheme.
Setup for the Universal Turing Machine. For a cleaner analysis, we assume without loss of generality that U consists of three tapes, namely, Tp1, Tp2, Tp3. Tp1 and Tp2 are read-only tapes that store x and P respectively. Tp3 is the work tape, which is initialized with □ to denote an empty string.
Transition steps for U. U's state information, along with the head locations of the three tapes, is encoded as st. To handle Turing machines with arbitrarily long tapes, we encode {Tpi}i∈[3] using three hash trees as defined herein, producing tree roots rt1, rt2, rt3 respectively.
Let each intermediate transition state of U be encoded as hi:=(sti, rti1, rti2, rti3) for i∈[T]. A single step of U can be interpreted in the manner described below. We break down the step function at the ith stage into two deterministic polynomial time algorithms:
Now, we translate the ith single step of U into the circuit ϕ, which is defined such that on input digests hi−1:=(sti−1, rti−11, rti−12, rti−13) and hi:=(sti, rti1, rti2, rti3), bits bi1, bi2, bi3, and proofs Πi1, Πi2, Πi3, Π′i, we have ϕ(hi−1, hi, bi1, bi2, bi3, Πi1, Πi2, Πi3, Π′i)=1 if and only if the following hold:
Here, dk denotes the hash keys used to build the three hash trees. Note that the efficiency of the hash tree implies that ϕ can be constructed such that it can be represented as a formula in L=poly(λ) variables. For the T steps of U, we have the following formula over M=O(L·T) variables:
We use a combination of the SE hash along with ϕ to produce the circuit Cindex for the index language (Section 1.2).
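The intended shape of the step circuit ϕ can be sketched as follows. The helpers step_r, step_w, ver_read, and ver_write are placeholders standing in for StepR, StepW, HT.VerRead, and HT.VerWrite; in the real construction ϕ is a poly(λ)-size formula, not Python.

```python
# Placeholders standing in for StepR, StepW and the hash-tree verifiers.
def step_r(st): raise NotImplementedError              # -> (l1, l2, l3)
def step_w(st, b1, b2, b3): raise NotImplementedError  # -> (st', l', b')
def ver_read(dk, rt, l, b, pi): raise NotImplementedError
def ver_write(dk, rt, l, b, rt_new, pi): raise NotImplementedError

def phi(dk, h_prev, h_cur, bits, read_pis, write_pi) -> bool:
    st_p, rt1_p, rt2_p, rt3_p = h_prev          # h_{i-1}
    st_c, rt1_c, rt2_c, rt3_c = h_cur           # h_i
    b1, b2, b3 = bits
    p1, p2, p3 = read_pis

    # (1) Read-only tapes Tp1, Tp2 keep their roots unchanged.
    if (rt1_c, rt2_c) != (rt1_p, rt2_p):
        return False
    # (2) The three claimed tape reads verify against the previous roots.
    l1, l2, l3 = step_r(st_p)
    if not (ver_read(dk, rt1_p, l1, b1, p1) and
            ver_read(dk, rt2_p, l2, b2, p2) and
            ver_read(dk, rt3_p, l3, b3, p3)):
        return False
    # (3) The work-tape write and the next state are consistent.
    st_next, l_w, b_w = step_w(st_p, b1, b2, b3)
    return st_next == st_c and ver_write(dk, rt3_p, l_w, b_w, rt3_c, write_pi)
```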
Our semi-trusted SNARG scheme is given in the accompanying drawings.
Theorem 3.2. Assuming the existence of Somewhere Extractable Hash functions, non-interactive Batch Arguments for Index Languages, and Collision Resistant Hash Trees as described herein, the scheme disclosed herein is a publicly verifiable semi-trusted SNARG for the language LU.
Pr[stSNARG.V(crs,x,y,HP,Π)=1 | crs←stSNARG.Setup(1λ,1T), HP←stSNARG.TrustHash(crs,P), Π←stSNARG.P(crs,P,x,y,HP)] = Pr[BARG.V(BARG.crs,Cindex,Π)=1 | crs←stSNARG.Setup(1λ,1T), HP←stSNARG.TrustHash(crs,P), Π←stSNARG.P(crs,P,x,y,HP)],
where Cindex is the index circuit as shown in the accompanying drawings.
If (P, x, y, T, HP, crs)∈LU, then (Cindex, 0)∈Lindex is trivially true by observation. Now, let us look at (Cindex, 1). We start by verifying that ϕ(h0, h1, {b1j, Π1j}j∈[3], Π′1)=1 holds. {rt1i=rt0i}i∈[2] follows from the read-only nature of tapes Tp1, Tp2. Since {(b1j, Π1j)←HT.Read(tree0j, l1j)}j∈[3], the hash tree completeness of read ensures that {HT.VerRead(dk, rt0i, l1i, b1i, Π1i)=1}i∈[3] and {Tpi[l1i]=b1i}i∈[3]. This, along with the correctness of the Turing machine StepR function, implies that b11, b12, b13 are indeed the correct inputs for the StepW function of U. Finally, (tree13, rt13, Π′1)←HT.Write(tree03, l′1, b′1) implies HT.VerWrite(dk, rt03, l′1, b′1, rt13, Π′1)=1 from the hash tree completeness of write property. The same property also ensures that Tp3 changes only at the l′1th memory location. When paired with the correctness of StepW, we get that st1=st′1.
The completeness of the SE hash implies that the verification algorithm certainly accepts all the local openings. Thus, (Cindex, 1)∈Lindex. Now, (Cindex, T)∈Lindex because U accepts (P, x, y) in T steps. We can show in a similar manner that (Cindex, i)∈Lindex for all other i. This proves the completeness of the scheme disclosed herein.
A publicly verifiable semi-trusted non-interactive argument with zero-knowledge scheme ZKstSNARG: (ZKstSNARG.Setup, ZKstSNARG.TrustHash, ZKstSNARG.P, ZKstSNARG.V) is defined for the following language:
LZK:={(P,x,y,T,POut,crs) | ∃(HP,r1) such that U(P,x,y)=1 ∧ (POut,(HP,r1))=ZKstSNARG.TrustHash(crs,P)}.
A ZKstSNARG satisfies the following properties:
Pr[ZKstSNARG.V(crs,x,y,POut,Π)=1 | crs←ZKstSNARG.Setup(1λ,1T), (POut,SOut)←ZKstSNARG.TrustHash(crs,P), Π←ZKstSNARG.P(crs,P,x,y,POut,SOut)]=1.
Pr[ZKstSNARG.V(crs,x,y,POut,Π)=1 ∧ (P,x,y,T,POut,crs)∉LZK | crs←ZKstSNARG.Setup(1λ,1T), (P,aux)←A1(1λ), (POut,SOut)←ZKstSNARG.TrustHash(crs,P), (x,y,Π)←A2(crs,P,POut,SOut,aux)] ≤ negl(λ).
(crs,x,y,POut,Π) | (crs,aux)←Sim1(1λ,1T), (POut,SOut)←ZKstSNARG.TrustHash(crs,P), Π←Sim2(aux,crs,(x,y),POut)
and
(crs,x,y,POut,Π) | crs←ZKstSNARG.Setup(1λ,1T), (POut,SOut)←ZKstSNARG.TrustHash(crs,P), Π←ZKstSNARG.P(crs,P,x,y,POut,SOut)
are indistinguishable.
To extend our delegation scheme to achieve non-interactive zero knowledge, we use the following additional primitives, namely, (1) a statistically binding extractable commitment scheme Combind as defined in Section 1, and (2) a non-interactive zero-knowledge argument NIZK:=(NIZK.Gen, NIZK.P, NIZK.V).
The protocol disclosed herein makes use of the following languages:
L1:={(P,x,y,T,CP,crs) | (∃(HP,r1) such that U(P,x,y)=1 ∧ (CP,(HP,r1))=ZKstSNARG.TrustHash(crs,P)) ∨ (∃r such that crs contains a commitment to 1 under randomness r)}.
L2:={(c.com,Π.com,(crs,x,y,T),CP) | (∃(r1,r2,r3,c,Π,HP) such that CP=Com.C(Combind.Ke1,HP;r1) ∧ c.com=Com.C(Combind.Ke2,c;r2) ∧ Π.com=Com.C(Combind.Ke3,Π;r3) ∧ stSNARG.V(crs,((x,y),T,HP),(c,Π))=1) ∨ (∃r4 such that crs contains Com.C(Combind.Ke4,1;r4))}.
Also, note that in this construction, the underlying stSNARG is built for the index circuit C′index.
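The prover side of this transformation can be sketched as follows, assuming objects implementing the earlier interface sketches (all names hypothetical). The prover commits to HP and to the two components (c, Π) of the underlying stSNARG proof, and proves with the NIZK that the committed values satisfy the real branch of L2; the “commitment to 1 in the CRS” branch is used only by the zero-knowledge simulator.

```python
def zk_trust_hash(st, com, crs, P, r1):
    """Alice: publish POut = Com(HP; r1); keep SOut = (HP, r1) private."""
    HP = st.trust_hash(crs.st_crs, P)         # underlying stSNARG.TrustHash
    return com.commit(crs.ke1, HP, r1), (HP, r1)

def zk_prove(st, nizk, com, crs, P, x, y, POut, HP, r1, r2, r3):
    c, pi = st.prove(crs.st_crs, P, x, y)     # (SE hash part, BARG proof)
    c_com = com.commit(crs.ke2, c, r2)        # hide c
    pi_com = com.commit(crs.ke3, pi, r3)      # hide pi
    stmt = (c_com, pi_com, (crs.st_crs, x, y), POut)
    witness = (r1, r2, r3, c, pi, HP)         # real branch of L2
    return c_com, pi_com, nizk.prove(crs.nizk_crs, stmt, witness)

def zk_verify(nizk, crs, x, y, POut, proof):
    c_com, pi_com, nizk_pi = proof
    stmt = (c_com, pi_com, (crs.st_crs, x, y), POut)
    return nizk.verify(crs.nizk_crs, stmt, nizk_pi)
```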
Theorem 4.1. Assuming the existence of semi-trusted SNARGs, Extractable Statistically Binding Commitment Schemes, and NIZKs as described in Sections 1 and 3, the ZKstSNARG scheme disclosed herein is a publicly verifiable semi-trusted non-interactive succinct argument with zero-knowledge.
Completeness. An honest prover ignores the additional commitment to 0 in the CRS and follows the protocol as prescribed; completeness then follows from the completeness of the underlying stSNARG, the commitment scheme, and the NIZK.
Efficiency. The following points follow from the above lemma, and the efficiency of the SE hash and hash tree construction.
Computer system 500 may include one or more processors (also called central processing units, processing devices, or CPUs), such as a processor 504. Processor 504 may be connected to a communication infrastructure 506 (e.g., such as a bus).
Computer system 500 may also include user input/output device(s) 503, such as monitors, keyboards, pointing devices, etc., which may communicate with communication infrastructure 506 through user input/output interface(s) 502. One or more of processors 504 may be a graphics processing unit (GPU). In an embodiment, a GPU may be a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc.
Computer system 500 may also include a main memory 508, such as random-access memory (RAM). Main memory 508 may include one or more levels of cache. Main memory 508 may have stored therein control logic (i.e., computer software, instructions, etc.) and/or data. Computer system 500 may also include one or more secondary storage devices or secondary memory 510. Secondary memory 510 may include, for example, a hard disk drive 512 and/or a removable storage device or removable storage drive 514. Removable storage drive 514 may interact with a removable storage unit 518. Removable storage unit 518 may include a computer-usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage drive 514 may read from and/or write to removable storage unit 518.
Secondary memory 510 may include other means, devices, components, instrumentalities, or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 500. Such means, devices, components, instrumentalities, or other approaches may include, for example, a removable storage unit 522 and an interface 520. Examples of the removable storage unit 522 and the interface 520 may include a program cartridge and cartridge interface, a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface.
Computer system 500 may further include communications interface 524 (e.g., network interface). Communications interface 524 may enable computer system 500 to communicate and interact with any combination of external devices, external networks, external entities, etc. (individually and collectively referenced as remote device(s), network(s), entity(ies) 528). For example, communications interface 524 may allow computer system 500 to communicate with external or remote device(s), network(s), entity(ies) 528 over communications path 526, which may be wired and/or wireless (or a combination thereof), and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system 500 via communications path 526.
Computer system 500 may also be any of a personal digital assistant (PDA), desktop workstation, laptop or notebook computer, netbook, tablet, smartphone, smartwatch or other wearable devices, appliance, part of the Internet-of-Things, and/or embedded system, to name a few non-limiting examples, or any combination thereof.
Computer system 500 may be a client or server computing device, accessing or hosting any applications and/or data through any delivery paradigm, including but not limited to remote or distributed cloud computing solutions; local or on-premises software (“on-premise” cloud-based solutions); “as a service” models (e.g., content as a service (CaaS), digital content as a service (DCaaS), software as a service (SaaS), managed software as a service (MSaaS), platform as a service (PaaS), desktop as a service (DaaS), framework as a service (FaaS), backend as a service (BaaS), mobile backend as a service (MBaaS), infrastructure as a service (IaaS), etc.); and/or a hybrid model including any combination of the foregoing examples or other services or delivery paradigms.
The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, a specialized application or network security appliance or device, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 900 includes a processing device 902, a main memory 904 (e.g., read-only memory (ROM), flash memory, dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM), etc.), a static memory 906 (e.g., flash memory, static random-access memory (SRAM), etc.), and a data storage device 918, which communicate with each other via a bus 930.
Processing device 902 represents one or more processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 902 may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 902 is configured to execute instructions 926 for performing the operations and steps discussed herein.
The computer system 900 may further include a network interface device 908 to communicate over the network 920. The computer system 900 also may include a video display unit 910, an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse), a graphics processing unit 922, a signal generation device 916 (e.g., a speaker), a video processing unit 928, and an audio processing unit 932.
The data storage device 918 may include a machine-readable medium 924 (also known as a computer-readable storage medium) on which is stored one or more sets of instructions 926 (e.g., software instructions) embodying any one or more of the operations described herein. The instructions 926 may also reside, completely or at least partially, within the main memory 904 and/or within the processing device 902 during execution thereof by the computer system 900, where the main memory 904 and the processing device 902 also constitute machine-readable storage media.
In an example, the instructions 926 include instructions to implement operations and functionality corresponding to the disclosed subject matter. While the machine-readable storage medium 924 is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 926. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions 926 for execution by the machine and that cause the machine to perform any one or more of the operations of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying” or “determining” or “executing” or “performing” or “collecting” or “creating” or “sending” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The operations and illustrations presented herein are not inherently related to any particular computer or other apparatus. Various types of systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the operations. The structure for a variety of these systems will appear as set forth in the description herein. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.
In some embodiments, a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system 500, main memory 508, secondary memory 510, and removable storage units 518 and 522, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system 500), may cause such data processing devices to operate as described herein.
Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems, and/or computer architectures other than those shown in the accompanying figures.
It is to be appreciated that the Detailed Description section, and not any other section, is intended to be used to interpret the claims. Other sections can set forth one or more but not all exemplary embodiments as contemplated by the inventor(s), and thus, are not intended to limit this disclosure or the appended claims in any way.
While this disclosure describes exemplary embodiments for exemplary fields and applications, it should be understood that the disclosure is not limited thereto. Other embodiments and modifications thereto are possible and are within the scope and spirit of this disclosure. For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein.
Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments can perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein.
References herein to “one embodiment,” “an embodiment,” “an example embodiment,” or similar phrases, indicate that the embodiment described can include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein. Additionally, some embodiments can be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments can be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, can also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
The breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments but should be defined only in accordance with the following claims and their equivalents. In the foregoing specification, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
This application claims the benefit of U.S. Provisional Application Ser. No. 63/414,694 filed Oct. 10, 2022, the content of which is incorporated by reference herein in its entirety.