Systems and Methods for Cryptography

Information

  • Patent Application
  • 20250030553
  • Publication Number
    20250030553
  • Date Filed
    May 10, 2024
  • Date Published
    January 23, 2025
  • Inventors
    • PAPAMANTHOU; Charalampos (Fairfield, CT, US)
    • SRINIVASAN; Shravan (Gaithersburg, MD, US)
    • GAILLY; Nicolas
    • HISHON-REZAIZADEH; Ismael (New York, NY, US)
    • SALUMETS; Andrus
    • GOLEMAC; Stjepan
  • Original Assignees
    • Lagrange Labs Inc. (New York, NY, US)
Abstract
A cryptographic method of performing a maintainable Merkle-based vector commitment is provided. The method comprises: a) computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive succinct non-interactive argument of knowledge (SNARK), where the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification, using operations comprising: i) verifying that the elements belong to the Merkle tree, ii) computing a batch hash for the elements in the batch using canonical hashing and iii) making the batch hash part of the public statement; and b) maintaining a data structure that stores previously computed recursive proofs; and c) updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time.
Description
BACKGROUND

A Merkle tree is a seminal cryptographic data structure that enables a party to commit to a memory M of slots via a succinct digest. A third party with access can verify correctness of any memory slot M[i] via a log-sized and efficiently-computable proof πi. Merkle trees can be used to verify untrusted storage efficiently, and have found many applications particularly in the blockchain space, such as in Ethereum state compression via Merkle Patricia Tries (MPTs), stateless validation, zero-knowledge proofs, as well as in verifiable cross-chain computation.


Many applications that work with Merkle trees require the use of a Merkle batch proof. A Merkle batch proof πI is a single proof that can be used to prove multiple memory slots {M[i]}i∈I at once. When the slots are consecutive, Merkle batch proofs (also known as Merkle range proofs) have logarithmic size, independent of the batch size |I|. In the general case of an arbitrary set, though, a Merkle batch proof comprises O(|I| log log n) hashes. For blockchain applications that have to prove thousands of transactions at once, the lack of succinctness of Merkle batch proofs can become an issue.


SUMMARY

A distinguishing feature of Merkle trees is their support for extremely fast updates: If a memory slot M[j] of the committed memory M changes, a batch proof πI (as well as the whole Merkle tree) can be updated in logarithmic time with a simple algorithm. This is particularly useful for applications. For instance, when new Ethereum blocks are created and new memory is allocated for use by smart contracts, Ethereum nodes can update their local MPTs (which are q-ary unbalanced Merkle trees) very fast. However, while Merkle trees support blazingly-fast updates of batch proofs, their batch proofs are not succinct, i.e., their size depends on |I|.


The present disclosure addresses the above issues of the conventional Merkle tree by building succinct Merkle batch proofs that are efficiently updatable. In some embodiments of the present disclosure, Reckle trees are provided, which are a new vector commitment scheme based on succinct RECursive arguments and MerKLE trees. The Reckle trees herein may be capable of supporting succinct batch proofs that are updatable. This beneficially allows for new applications in the blockchain setting where a proof needs to be computed and efficiently maintained over a moving stream of blocks.


In some cases, the methods herein may comprise embedding a computation of the batch hash inside the recursive Merkle verification via a hash-based accumulator (i.e., canonical hashing). The unique embedding beneficially allows for batch proofs being updated in logarithmic time, whenever a Merkle leaf (belonging to the batch or not) changes, by maintaining a data structure that stores previously-computed recursive proofs. In the case of parallel computation, the batch proofs are also computable in O(log n) parallel time, independent of the size of the batch.


In alternative embodiments, an extension of Reckle trees, or Reckle+ trees, is provided. Reckle+ trees provide updatable and succinct proofs for certain types of Map/Reduce computations. For instance, a prover can commit to a memory M and produce a succinct proof for a Map/Reduce computation over a subset of M. The proof can be efficiently updated whenever the subset or M changes.


In one aspect, disclosed herein is a computer-implemented cryptographic method of performing a maintainable Merkle-based vector commitment. The method comprises: a) computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive succinct non-interactive argument of knowledge (SNARK), where the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification, using operations comprising: i) verifying that the elements belong to the Merkle tree, ii) computing a batch hash for the elements in the batch using canonical hashing and iii) making the batch hash part of the public statement; and b) maintaining a data structure that stores previously computed recursive proofs; and c) updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time.


In some embodiments, the succinct batch proof is computed in O(log n) parallel time. In some embodiments, the succinct batch proof size is 112 KiB, independent of the batch size and the size of the vector. In some embodiments, the update to the succinct batch proof is performed in about 16.61 seconds, or less, for a tree height of 27. In some embodiments, the verification is performed in about 18.2 ms, or less.


In some embodiments, the element of the Merkle tree changed is in the batch. In some embodiments, the element of the Merkle tree changed is not in the batch. In some embodiments, the method is applied to dynamic digest translation. In some embodiments, the method is applied to updatable Boneh-Lynn-Shacham (BLS) key aggregation.


In a related yet separate aspect, disclosed herein is a computer-implemented system comprising at least one processor and instructions executable by the at least one processor to cause the at least one processor to perform a maintainable Merkle-based vector commitment by performing operations comprising: a) computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive succinct non-interactive argument of knowledge (SNARK), where the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification, using operations comprising: i) verifying that the elements belong to the Merkle tree, ii) computing a batch hash for the elements in the batch using canonical hashing and iii) making the batch hash part of the public statement; and b) maintaining a data structure that stores previously computed recursive proofs; and c) updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time.


In some embodiments, the succinct batch proof is computed in O(log n) parallel time. In some embodiments, the succinct batch proof size is 112 KiB, independent of the batch size and the size of the vector. In some embodiments, the update to the succinct batch proof is performed in about 16.61 seconds, or less, for a tree height of 27. In some embodiments, the verification is performed in about 18.2 ms, or less.


In some embodiments, the element of the Merkle tree changed is in the batch. In some embodiments, the element of the Merkle tree changed is not in the batch.


In some embodiments, the maintainable Merkle-based vector commitment is applied to dynamic digest translation. In some embodiments, the maintainable Merkle-based vector commitment is applied to updateable BLS aggregation.


In another related yet separate aspect, disclosed herein is one or more non-transitory computer-readable storage media encoded with instructions executable by one or more processors to create an application for performing a maintainable Merkle-based vector commitment, the application comprising: a) a software module computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, wherein the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification, by operations comprising: i) verifying that the elements belong to the Merkle tree, ii) computing a batch hash for the elements in the batch using canonical hashing and iii) making the batch hash part of the public statement; b) a software module maintaining a data structure that stores previously computed recursive proofs; and c) a software module updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time.


In some embodiments, the succinct batch proof is computed in O(log n) parallel time. In some embodiments, the succinct batch proof size is 112 KiB, independent of the batch size and the size of the vector. In some embodiments, the update to the succinct batch proof is performed in about 16.61 seconds, or less, for a tree height of 27. In some embodiments, the verification is performed in about 18.2 ms, or less.


In some embodiments, the element of the Merkle tree changed is in the batch. In some embodiments, the element of the Merkle tree changed is not in the batch.


In some embodiments, the application further comprises a software module performing dynamic digest translation. In some embodiments, the application further comprises a software module performing updateable BLS aggregation.


The method herein can be applied in the blockchain space. For example, the method for providing parallelizable and updatable batch proofs may be applied in a decentralized network. The method comprises: accessing a first block added to a blockchain; computing a proof of the first block by computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, where the subset of k leaves corresponds to a subset of attestors of the decentralized network; and upon proving the first block, issuing a token for the proof. The subset of attestors changes when a second block is added to the blockchain, and such a change can be updated with improved efficiency. In some cases, the batch proof is performed by log n different circuits and each of the different circuits has a fixed size corresponding to a level of the Merkle tree.


In some embodiments, a decentralized protocol, run by a plurality of computers, issues a cryptocurrency token via a smart contract, where the token is traded on an exchange. The decentralized protocol incorporates the above system or method so that the value of the token is influenced by market interest in the above system. For instance, the market interest includes usage of the above system.


In some embodiments, the token is used in transacting with the above system, in using the above system, in operating the computer(s) running the above system, and/or in rewarding the operators of the computers running the above system. For instance, the token is used to provide economic incentives to operators of the above system so as to enable its operation in a decentralized manner and/or for users of the system to pay for usage of the system.


Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.


INCORPORATION BY REFERENCE

All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference. To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the present disclosure are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, and the accompanying drawings (also “Figure” and “FIG.” herein), of which:



FIG. 1 depicts a non-limiting example of a Merkle tree.



FIG. 2 depicts a non-limiting example of a batch proof recursive circuit for RECKLE TREES.



FIG. 3 depicts a non-limiting example of an algorithm for implementation of RECKLE TREES.



FIG. 4 depicts a non-limiting example of optimized batch proof circuits for RECKLE TREES.



FIG. 5 depicts a non-limiting example of a circuit for batch proofs in a q-ary Reckle tree.



FIG. 6 depicts a non-limiting example of a Map/Reduce circuit for RECKLE+ TREES.



FIG. 7 depicts a non-limiting example of an optimized digest translation circuit.



FIG. 8 depicts a non-limiting example of an optimized BLS aggregation circuit.



FIG. 9 depicts a non-limiting example of a RECKLE+ TREES data structure to compute the aggregated BLS public key and cardinality of the attestor set.



FIG. 10 depicts a non-limiting example of a distributed system architecture.



FIG. 11 shows non-limiting graphs of proving times for several canonical methods vs Reckle methods.


FIG. 12 depicts a non-limiting example of prover and update cost for dynamic digest translation.



FIG. 13 depicts a non-limiting example of prover and update cost for a BLS application.


FIG. 14 depicts a non-limiting example of a cryptographic method.





DETAILED DESCRIPTION

While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed.


Certain Definitions


Unless otherwise defined, all technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.


Reference throughout this specification to “some embodiments,” or “an embodiment,” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase “in some embodiments,” or “in an embodiment,” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.


As utilized herein, terms “component,” “system,” “interface,” “unit,” “module” and the like are intended to refer to a computer-related entity, hardware, software (e.g., in execution), and/or firmware. For example, a component can be a processor, a process running on a processor, an object, an executable, a program, a storage device, and/or a computer. By way of illustration, an application running on a server and the server can be a component. One or more components can reside within a process, and a component can be localized on one computer and/or distributed between two or more computers.


Further, these components can execute from various computer readable media having various data structures stored thereon. The components can communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network, e.g., the Internet, a local area network, a wide area network, etc., with other systems via the signal).


As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry; the electric or electronic circuitry can be operated by a software application or a firmware application executed by one or more processors; the one or more processors can be internal or external to the apparatus and can execute at least a part of the software or firmware application. As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts; the electronic components can include one or more processors therein to execute software and/or firmware that confer(s), at least in part, the functionality of the electronic components. In some cases, a component can emulate an electronic component via a virtual machine, e.g., within a cloud computing system.


Whenever the term “at least,” “greater than,” or “greater than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “at least,” “greater than” or “greater than or equal to” applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.


Whenever the term “no more than,” “less than,” or “less than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “no more than,” “less than,” or “less than or equal to” applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.


As used herein a processor encompasses one or more processors, for example a single processor, or a plurality of processors of a distributed processing system for example. A controller or processor as described herein generally comprises a tangible medium to store instructions to implement steps of a process, and the processor may comprise one or more of a central processing unit, programmable array logic, gate array logic, or a field programmable gate array, for example. In some cases, the one or more processors may be a programmable processor (e.g., a central processing unit (CPU) or a microcontroller), digital signal processors (DSPs), a field programmable gate array (FPGA) and/or one or more Advanced RISC Machine (ARM) processors. In some cases, the one or more processors may be operatively coupled to a non-transitory computer readable medium. The non-transitory computer readable medium can store logic, code, and/or program instructions executable by the one or more processors for performing one or more steps. The non-transitory computer readable medium can include one or more memory units (e.g., removable media or external storage such as an SD card or random access memory (RAM)). One or more methods or operations disclosed herein can be implemented in hardware components or combinations of hardware and software such as, for example, ASICs, special purpose computers, or general purpose computers.


Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.


As described above, a Merkle tree is a seminal cryptographic data structure that enables a party to commit to a memory M of slots via a succinct digest. A third party with access can verify correctness of any memory slot M[i] via a log-sized and efficiently-computable proof πi. Merkle trees can be used to verify untrusted storage efficiently, and have found many applications particularly in the blockchain space, such as in Ethereum state compression via Merkle Patricia Tries (MPTs), stateless validation, zero-knowledge proofs, as well as in verifiable cross-chain computation. A distinguishing feature of Merkle trees is their support for extremely fast updates: If a memory slot M[j] of the committed memory M changes, a batch proof πI (as well as the whole Merkle tree) can be updated in logarithmic time with a simple algorithm. This is particularly useful for applications. For instance, when new Ethereum blocks are created and new memory is allocated for use by smart contracts, Ethereum nodes can update their local MPTs (which are q-ary unbalanced Merkle trees) very fast. However, while Merkle trees support blazingly-fast updates of batch proofs, their batch proofs are not succinct, i.e., their size depends on |I|.


However, while Merkle trees support blazingly-fast updates of batch proofs, their batch proofs are not succinct, i.e., their size depends on |I|. Current approaches used to build batch proofs and Merkle computation proofs suffer from drawbacks. For instance, recursive batch proofs can be computed via a “tree of proofs” approach, an application of recursive SNARKs to computing succinct Merkle batch proofs: Suppose one wants to compute a Merkle batch proof πI for an index set I. In the first step, a SNARK circuit is built for verifying a single Merkle proof. This SNARK is executed |I| times, outputting a SNARK proof pi for every index i ∈ I. Then a binary tree is built with all proofs pi as leaves. For every node υ of the binary tree, a recursive SNARK is executed (outputting a proof pυ) that verifies the proofs coming from υ's children. This process continues up to the root, and the batch proof is defined as the final recursive proof pr of the root. Unfortunately, the produced batch proof is not updatable. If an element of the tree or the batch changes, all proofs at the leaves will be affected and therefore the whole procedure must be executed from scratch, requiring computational work that is proportional to the size of the batch. In addition, in such a “tree of proofs” approach, if two indices share common structure in their Merkle proofs (e.g., successive indices), the same hash computations will be repeated within the two circuits corresponding to those leaves, unnecessarily consuming computational resources.


Another example approach for succinct batch proofs is the vector-commitment approach. Succinct batch proofs can be computed using vector commitments. Vector commitments are typically algebraic constructions, as opposed to hash-based Merkle trees. With vector commitments, a batch proof for |I| elements has size either optimal O(1) or logarithmic, but always independent of the batch size |I|. However, while vector commitments achieve optimal batch proof sizes, they face other challenges. In particular, the majority of vector commitments are not updatable: As opposed to Merkle trees, whenever a single memory slot changes, Ω(n) time is required to update all individual proofs, which can be a bottleneck for many applications. While there are some vector commitments that can update proofs in logarithmic time (while having succinct batch proofs), those suffer from increased concrete batch proof sizes, large public parameters, and high aggregation and verification times (at the same time, their batch proofs are not updatable). For example, a Hyperproofs batch proof for a thousand memory slots requires access to gigabytes of data to be generated and approximately 17 seconds to be verified.


A third example for Merkle computation proofs is the Merkle-SNARK approach. Vector commitments can handle only memory content. To enable arbitrary computation over Merkle leaves, one can of course build a SNARK that verifies Merkle proofs and then performs computation via a monolithic circuit, but this is typically very expensive (such an approach also leaves very little space for massive parallelism, since proof computation can be parallelized only to the extent that the prover of the underlying SNARK can be parallelized). For instance, computing a SNARK-based proof that verifies a thousand memory slots on a Poseidon-based 30-deep Merkle tree can take up to 20 minutes, and this excludes any computation that one might wish to perform on the leaves. To make things worse, any update to the memory M would require recomputing this proof from scratch.


The present disclosure addresses the above issues of the conventional Merkle tree or Merkle computation proof approaches by building succinct Merkle batch proofs that are efficiently updatable. In some embodiments of the present disclosure, Reckle trees are provided, which are a new vector commitment scheme based on succinct RECursive arguments and MerKLE trees. The Reckle trees herein may be capable of supporting succinct batch proofs that are updatable. This beneficially allows for new applications in the blockchain setting where a proof needs to be computed and efficiently maintained over a moving stream of blocks.


Map/Reduce proofs with RECKLE+ TREES. Merkle trees may not support updatable verifiable computation over Merkle leaves. Merkle trees can only be used to prove memory content, but not computation over it. However, in certain applications (e.g., smart contracts), it is desirable to compute an updatable proof for some arbitrary computation over the Merkle leaves, e.g., counting the number of Merkle leaves υ satisfying an arbitrary function f(υ)=1. For example, smart contracts can benefit by accessing historic chain MPT data to compute useful functions such as price volatility or BLS aggregate keys. Instead, such applications are currently enabled by the use of blockchain “oracles” that have to be blindly trusted to expose the correct output to the smart contract. In some embodiments, the present disclosure provides RECKLE+ TREES, an extension of RECKLE TREES that supports updatable verifiable computation over Merkle leaves. With RECKLE+ TREES, a prover can commit to a memory M, and provide a proof of correctness for a Map/Reduce computation on any subset of M.


In some embodiments, the RECKLE+ TREES may be technically achieved by encoding, in the recursive circuit, not only the computation of the canonical hash and the Merkle hash (as we would do in the case of batch proofs), but also the logic of the Map and the Reduce functions. The final Map/Reduce proof can be easily updated whenever the subset changes, without having to recompute it from scratch.


In some cases, the methods herein may comprise embedding a computation of the batch hash inside the recursive Merkle verification via a hash-based accumulator (i.e., canonical hashing). The unique embedding beneficially allows for batch proofs being updated in logarithmic time, whenever a Merkle leaf (belonging to the batch or not) changes, by maintaining a data structure that stores previously-computed recursive proofs. In the case of parallel computation, the batch proofs are also computable in O(log n) parallel time, independent of the size of the batch.


In alternative embodiments, an extension of Reckle trees, or Reckle+ trees, is provided. Reckle+ trees provide updatable and succinct proofs for certain types of Map/Reduce computations. For instance, a prover can commit to a memory M and produce a succinct proof for a Map/Reduce computation over a subset of M. The proof can be efficiently updated whenever the subset or M changes.


Whether succinct Merkle batch proofs can be built that are efficiently updatable relates to the notion of updatable SNARKs, which are disclosed herein. An updatable SNARK is a SNARK that is equipped with an additional algorithm π′ ← Update((x, w), (x′, w′), π), where the Update function takes as input a true public statement x along with its witness w and its verifying proof π, as well as an updated true public statement x′ along with the updated witness w′. It outputs a verifying proof π′ for x′ without running the prover algorithm from scratch and ideally in time proportional to the distance (for some definition of distance) of (x, w) and (x′, w′).
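As an illustration of this interface only, an updatable SNARK can be sketched as a protocol carrying the extra Update algorithm; the Python names and types below are hypothetical and are not part of the disclosure:

```python
# A minimal sketch of the updatable-SNARK interface described above.
# All names here (UpdatableSNARK, Proof, etc.) are illustrative only.
from dataclasses import dataclass
from typing import Any, Protocol, Tuple


@dataclass
class Proof:
    data: bytes


class UpdatableSNARK(Protocol):
    def setup(self, relation: Any) -> Tuple[bytes, bytes]:
        """Return (proving_key, verification_key) for the relation R."""
        ...

    def prove(self, pk: bytes, x: Any, w: Any) -> Proof:
        """Produce a proof for statement x with witness w."""
        ...

    def verify(self, vk: bytes, x: Any, proof: Proof) -> bool:
        """Accept (True) or reject (False) a proof for statement x."""
        ...

    def update(self, old: Tuple[Any, Any], new: Tuple[Any, Any],
               proof: Proof) -> Proof:
        """Given (x, w), (x', w') and a verifying proof for x, return a
        verifying proof for x' without re-running the prover from scratch,
        ideally in time proportional to the distance between the pairs."""
        ...
```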


RECKLE TREES

In some embodiments of the present disclosure, a reckle tree is provided. The reckle tree may be a unique vector commitment scheme that supports updatable and succinct batch proofs using RECursive SNARKs and MerKLE trees. The term “vector commitment” as utilized herein generally refers to a cryptographic abstraction for “verifiable storage” and which can be implemented, for example, by Reckle trees or Merkle trees.


A recursive SNARK is a SNARK that can call its verification algorithm within its circuit. Some of the SNARKs may be optimized for recursion, e.g., via the use of special curves. The framework disclosed herein may be compatible with any recursive SNARK (e.g., Plonky2).


RECKLE TREES may work under a fully-dynamic setting for batch proofs: Assume a Reckle batch proof πI for a subset I of memory slots has been computed. RECKLE TREES can support the following updates to πI in logarithmic time: (a) change the value of element M[i], where i ∈ I; (b) change the value of element M[j], where j ∉ I; (c) extend the index set I to I ∪ {w} so that M[w] is also part of the batch; (d) remove index w from I so that M[w] is not part of the batch; (e) remove or add an element from the memory altogether. In this case, RECKLE TREES can rebalance following the same rules of standard data structures such as red-black trees or AVL trees.


In some cases, updating a batch proof πI in RECKLE TREES is achieved through a batch-specific data structure ΛI that stores recursively-computed SNARK proofs. RECKLE TREES can be naturally parallelizable. Assuming enough parallelism, any number of updates T>1 can be performed in O(log n) parallel time. For instance, a massively-parallel RECKLE TREES implementation can achieve up to 270× speedup over a sequential implementation.


RECKLE TREES may have particularly low memory requirements: While they can make statements about n leaves, their memory requirements (excluding the underlying linear-size Merkle tree) scale logarithmically with n. Additionally, Reckle trees have the flexibility of not being tied to a specific SNARK: If a faster recursive SNARK implementation is introduced in the future (e.g., folding schemes), Reckle trees can use the faster technology seamlessly.


In some embodiments, the method may comprise: letting I be the set of Merkle leaf indices for which to compute the batch proof, and starting from the leaves l1, . . . , l|I| that belong to the batch I, RECKLE TREES run the SNARK recursion on the respective Merkle paths p1, . . . , p|I|, merging the paths when common ancestors of the leaves in I are encountered. While the paths are being traversed, RECKLE TREES may verify that the elements in I belong to the Merkle tree as well as compute a “batch” hash for the elements in I, eventually making this batch hash part of the public statement.


In some cases, the batch hash is computed via canonical hashing, a deterministic and secure way to represent any subset of |I| leaves succinctly. FIG. 1 shows an example of a Merkle tree 100 of 16 leaves. The subset of Merkle leaf indices for which to compute the batch proof may be the subset {2, 4, 5, 15} as shown in FIG. 1. At every node, the canonical hash of that node with respect to the subset {2, 4, 5, 15} 101-1, 101-2, 101-3, 101-4, 101-5, 101-6, 101-7, 101-8, 101-9 is shown. For the nodes with no hash for the subset, their canonical hash is 0. Details about the canonical hash are described later herein.


It should be noted that any number-theoretic accumulator (or even the elements in the batch themselves) can be utilized. In some cases, the method herein may select canonical hashing to avoid encoding algebraic statements within the circuits. This beneficially ensures that the circuits' size does not depend on the size and topology of the batch. If the final recursive proof verifies, that means that the batch hash corresponds to a valid subset of elements in the tree, and can be recomputed using the actual claimed batch as an input. The Reckle tree herein differs from the conventional Merkle tree construction in that every node υ in a Reckle tree, in addition to storing a Merkle hash Cυ, can also store a recursively-computed SNARK proof πυ, depending on whether any of υ's descendants belongs to the batch I in question (for nodes that have no descendants in the batch, there is no need to store a SNARK proof). The approach or methods herein can be easily extended to unbalanced q-ary Merkle trees (that model Ethereum MPTs) as described later herein.


Methods to Build Batch Proofs and Merkle Computation Proofs
Definitions and Notations for SNARKs, Merkle Trees and Vector Commitments:

Let λ be the security parameter and H: {0, 1}* → {0, 1}^2λ denote a collision-resistant hash function. Let [n]=[0, n)={0, 1, . . . , n−1}, and let r ∈R S denote picking an element from S uniformly at random. Bolded, lower-case symbols such as a=[a0, . . . , an−1] typically denote a vector of binary strings, where ai ∈ {0, 1}^2λ, ∀i ∈ [n]. If the ai's are arbitrarily long, we use the H function to reduce them to a fixed size. |a| denotes the size of the vector a.


Succinct Non-Interactive Arguments of Knowledge. Let R be an efficiently computable binary relation that consists of pairs of the form (x, w), where x is a statement and w is a witness.


Definition 2.1. A SNARK is a triple of PPT algorithms Π=(Setup, Prove, Verify) defined as follows:


Setup(1λ, R)→(pk, vk): On input security parameter λ and the binary relation R, it outputs a common reference string consisting of the prover key and the verifier key (pk, vk).


Prove(pk, x, w)→π: On input pk, a statement x and the witness w, it outputs a proof π.


Verify(vk, x, π)→1/0: On input vk, a statement x, and a proof π, it outputs either 1 indicating accepting the statement or 0 for rejecting it.


It also satisfies the following properties:


Completeness: For all (x, w) ∈ R, the following holds:


Pr[Verify(vk, x, π) = 1 : (pk, vk) ← Setup(1λ, R), π ← Prove(pk, x, w)] = 1.




Knowledge Soundness: For any PPT adversary A, there exists a PPT extractor XA such that the following probability is negligible in λ:






Pr[(pk, vk) ← Setup(1λ, R); ((x, π); w) ← A||XA((pk, vk)) : Verify(vk, x, π) = 1 ∧ (x, w) ∉ R]








(The notation ((x, π); w)←A||XA ((pk, vk)) means the following: After the adversary A outputs (x, π), we can run the extractor XA on the adversary's state to output w. The intuition is that if the adversary outputs a verifying proof, then it must know a satisfying witness that can be extracted by looking into the adversary's state.)


Succinctness: For any x and w, the length of the proof π is given by |π| = poly(λ)·polylog(|x|+|w|).


Merkle Trees. Let M be a memory of n = 2^ℓ slots. A Merkle tree is an algorithm to compute a succinct, collision-resistant representation C of M (also called a digest) so that one can provide a small proof for the correctness of any memory slot M[i], while at the same time being able to update C in logarithmic time whenever a slot changes. Assume the memory slot values and the output of the H function have size 2λ bits each; the Merkle tree on an n-sized memory M can be constructed as follows: Without loss of generality, assume n is a power of two, and consider a full binary tree built on top of memory M. For every node υ in the Merkle tree, the Merkle hash of υ, Cυ, is computed as follows:

    • 1. If υ is a leaf node, then Cυ = index(υ)||value(υ) (For simplicity of notation, we use λ bits for index and λ bits for value.)
    • 2. If υ's left child is L and υ's right child is R, then Cυ=H(CL ||CR).


The digest of the Merkle tree is the Merkle hash of the root node. The proof for a leaf comprises hashes along the path from the leaf to the root. It can be verified by using hashes in the proof to recompute the root digest C.
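The two construction rules above, together with proof generation and verification, can be illustrated by the following minimal sketch; SHA-256 stands in for the generic collision-resistant hash H, and the code is an illustration under those assumptions rather than the disclosure's implementation:

```python
# A minimal sketch of the Merkle construction above: leaves encode
# index||value (rule 1), internal nodes hash their children (rule 2), a proof
# is the sibling hashes on the leaf-to-root path, and verification recomputes
# the root. SHA-256 stands in for the generic hash H.
import hashlib


def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def leaf_hash(index: int, value: bytes) -> bytes:
    return index.to_bytes(16, "big") + value          # C_v = index(v)||value(v)


def build_tree(values):
    """Return the list of levels: levels[0] are the leaves, levels[-1] = [root]."""
    level = [leaf_hash(i, v) for i, v in enumerate(values)]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels


def open_proof(levels, i):
    """Sibling hashes along the path from leaf i to the root."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[i ^ 1])                    # sibling at this level
        i //= 2
    return proof


def verify(root, i, value, proof):
    node = leaf_hash(i, value)
    for sibling in proof:
        node = H(node + sibling) if i % 2 == 0 else H(sibling + node)
        i //= 2
    return node == root


values = [bytes([x]) * 16 for x in range(8)]          # memory M with n = 8 slots
levels = build_tree(values)
root = levels[-1][0]                                  # digest C
pi = open_proof(levels, 2)                            # log-sized proof for M[2]
assert verify(root, 2, values[2], pi)
```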


BLS signatures. BLS signatures are signatures that can be easily aggregated. For BLS signatures we need a bilinear map e: G×G→GT over elliptic curve groups and a hash function H: {0, 1}*→G. Groups G and GT have prime order p. The secret key is sk ∈ Zp and the public key is g^sk ∈ G, where g is the generator of G. To sign a message m ∈ {0, 1}*, we output the signature H(m)^sk ∈ G.


To verify a signature s ∈ G on message m ∈ {0, 1}* given public key PK ∈ G, the verifier checks whether e(s, g)=e(H(m), PK). Given public keys {g^ski}i=1,...,t, one can compute an aggregate public key APK = Πi=1,...,t g^ski. To verify a t-multisignature S on a message m given aggregate public key APK, the verifier checks whether e(S, g)=e(H(m), APK). If APK is provably the product of t individual BLS public keys PKi, it can be assured that t parties (in particular the owners of the respective secret keys) signed the message.


BLS aggregation can be performed by aggregating all keys involved in a BLS multisignature to create an “aggregate key.” The aggregate key is created from scratch at each signature because the signer set can change. Recproofs as described herein beneficially allow for updating the aggregate key only for the signers that changed between two consecutive signing operations. The cost to prove the aggregate public key is only proportional to the number of signers that got added or removed from one multisignature to another, i.e., the cost is proportional to the update. This allows building a fast BLS public key aggregation layer. This layer can improve decentralized applications such as a notary service that regularly signs common information. For example, Ethereum validators use a BLS multisignature to sign each block. The network needs to verify that the multisignature is correct and comes from valid authorized signers. This step of aggregating public keys together is a significant improvement to efficiency when the number of keys scales to thousands or millions.
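The incremental aggregation described in the preceding paragraph follows from the standard algebra of BLS key aggregation; written out here only for illustration (and not in the disclosure's notation), for a signer set S changing to S':

```latex
\[
  \mathrm{APK}_S \;=\; \prod_{i \in S} \mathrm{PK}_i \;=\; g^{\sum_{i \in S} sk_i},
  \qquad
  \mathrm{APK}_{S'} \;=\; \mathrm{APK}_S
    \cdot \prod_{i \in S' \setminus S} \mathrm{PK}_i
    \cdot \prod_{j \in S \setminus S'} \mathrm{PK}_j^{-1},
\]
```

so the number of group operations needed for the update is proportional to the number of signers added or removed, while the pairing check against the updated aggregate key is unchanged.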


Vector Commitments (VCs). A vector commitment is a set of algorithms that allow one to commit to a vector of n slots so that later one can open the commitment to values of individual memory slots. One of the most popular implementations of vector commitments is the Merkle tree. Vector commitments typically enable more advanced properties than Merkle trees, such as batch proofs (succinct proofs of multiple values). VCs are formalized below, similarly to Catalano and Fiore. The definition is extended with batch-proof updatability, to capture the new properties introduced by RECKLE TREES.

    • Definition 2.2 (VC). A VC scheme is a set of PPT algorithms:
    • Gen(1λ, n)→pp: Outputs public parameters pp.
    • Compp(a)→C: Outputs digest C.
    • Openpp(i, a)→πi: Outputs proof πi.
    • OpenAllpp(a)→(π0, . . . , πn−1): Outputs all proofs πi.
    • Aggpp((ai, πi)i∈I)→(πI, ΛI): Outputs a batch proof πI and a batch-proof data structure ΛI.
    • Verpp(C, (ai)i∈I, πI)→{0, 1}: Verifies batch proof πI against C.
    • UpdDigpp(u, δ, C, aux)→C′: Updates digest C to C′ to reflect position u changing by δ given auxiliary input aux.
    • UpdProofpp(u, δ, πi, aux)→πi′: Updates proof πi to πi′ to reflect position u changing by δ given auxiliary input aux.
    • UpdBatchProofpp(u, δ, πI, ΛI, aux)→(πI′, ΛI′): Updates batch proof πI to πI′ and the batch-proof data structure ΛI to ΛI′ to reflect position u changing by δ given auxiliary input aux.
    • UpdAllProofspp(u, δ, π0, . . . , πn−1)→(π0′, . . . , πn−1′): Updates all proofs πi to πi′ to reflect position u changing by δ.
    • VC correctness is defined in Definition 2.3 and VC soundness in Definition 2.4.
    • Definition 2.3 (VC Correctness). A VC scheme is correct if, for all λ∈N and n=poly(λ), for all pp←Gen(1λ, n), for all vectors a=[a0, . . . , an−1], if C=Compp(a), πi=Openpp(i, a), ∀i∈[0, n) (or from OpenAllpp(a)), and (πI, ΛI)=Aggpp((ai, πi)i∈I), ∀I⊆[n], then, for any polynomial number of updates (u, δ) resulting in a new vector a′, if C′ is the updated digest obtained via calls to UpdDigpp, πi′ are proofs obtained via calls to UpdProofpp or UpdAllProofspp for all i, and πI′ are batch proofs obtained via calls to UpdBatchProofpp for all subsets I, then:








Pr[1 ← Verpp(C′, {i}, ai′, πi′)] = 1, ∀ i ∈ [n], and


Pr[1 ← Verpp(C′, I, (ai′)i∈I, πI′)] = 1, ∀ I ⊆ [n].







At a high-level, correctness ensures that proofs created via Open or OpenAll verify successfully via Ver, even in the presence of updates and aggregated proofs.


Definition 2.4 (VC Soundness). ∀ PPT adversaries A,







Pr[pp ← Gen(1λ, n), (C, {ai}i∈I, {aj′}j∈J, πI, πJ) ← A(1λ, pp) : 1 ← Verpp(C, {ai}i∈I, πI) ∧ 1 ← Verpp(C, {aj′}j∈J, πJ) ∧ ∃ k ∈ I ∩ J s.t. ak ≠ ak′] ≤ negl(λ).





At a high level, soundness ensures that no adversary can output two inconsistent proofs for different values ak≠ak′ at position k with respect to an adversarially-produced C.


RECKLE TREES

Methods herein provide RECKLE TREES, a vector commitment scheme which extends Merkle trees to provide updatable batch proofs. Assuming n = 2^ℓ, where ℓ is the height of the tree, computing the commitment C of vector a=[a0, . . . , an−1] in RECKLE TREES may comprise computing the Merkle tree digest of a as described above (e.g., an example basic implementation may use the Poseidon hash function). The proof of opening for an index i in the vector is the Merkle membership proof of leaf ai. RECKLE TREES have a new algorithm to aggregate Merkle proofs and compute a batch proof, using recursive SNARKs. The algorithm outputs batch proofs that can be updated in logarithmic time.


Canonical Hashing

Canonical hashing is a deterministic algorithm to compute a digest of a subset I of k leaves from a set of 2^ℓ leaves. The canonical hash of a node υ of a Merkle tree T with respect to a subset I, denoted d(υ, I), is defined recursively as follows:

    • 1. If υ is a leaf node we distinguish two cases: If υ's index is in I, then d(υ, I) := index(υ)||value(υ) = Cυ; otherwise d(υ, I) := 0.
    • 2. If υ has left child L and right child R, then







d(υ, I) := H(d(L, I) || d(R, I)), if d(L, I) · d(R, I) ≠ 0; otherwise, d(υ, I) := d(L, I) + d(R, I).







Thus, the canonical digest of subset I (denoted as dI or, simply, d) is the canonical hash of the root node of T for the subset I. Note that when the subset I is unambiguous from the context, d(υ, I) is denoted as dυ. FIG. 1 shows an example of a Merkle tree 100 of 16 leaves. The subset of Merkle leaf indices for which to compute the batch proof may be the subset {2, 4, 5, 15} as shown in FIG. 1. At every node, the canonical hash of that node with respect to the subset {2, 4, 5, 15} 101-1, 101-2, 101-3, 101-4, 101-5, 101-6, 101-7, 101-8, 101-9 is shown. For the nodes with no hash for the subset, their canonical hash is 0. In order to update a batch proof (of a subset I), when some leaves are changing, the batch data structure may be used. The batch data structure for a subset I, ΛI, consists of all the Merkle hash values, canonical hash values, and recursive SNARK proofs (as computed before) along the nodes (and their siblings) from the leaf nodes in I to the root.
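A minimal sketch of the recursive definition above, with SHA-256 standing in for H, 32 zero bytes standing in for the canonical-hash value 0, and an illustrative full-binary-tree layout:

```python
# A minimal sketch of canonical hashing over a full binary tree of 2**height
# leaves. SHA-256 stands in for H, and 32 zero bytes stand in for the
# canonical-hash value 0; the layout is illustrative only.
import hashlib

ZERO = b"\x00" * 32


def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def canonical_hash(node, level, height, leaves, I):
    """d(node, I) for the node at the given level and position; level ==
    height addresses a leaf, level == 0 the root of a 2**height-leaf tree."""
    if level == height:                          # rule 1: leaf node
        if node in I:
            return node.to_bytes(16, "big") + leaves[node]   # index||value
        return ZERO
    dL = canonical_hash(2 * node, level + 1, height, leaves, I)
    dR = canonical_hash(2 * node + 1, level + 1, height, leaves, I)
    if dL != ZERO and dR != ZERO:                # rule 2: both subtrees touch I
        return H(dL + dR)
    return dL if dR == ZERO else dR              # the "sum": the non-zero child, or 0


leaves = [bytes([x + 1]) * 16 for x in range(16)]
d_root = canonical_hash(0, 0, 4, leaves, I={2, 4, 5, 15})   # subset from FIG. 1
```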


Recursive Circuit

A batch proof is a single short proof that simultaneously proves that a specific subset I of elements belongs in the vector. In the scheme herein, the method implements the batch proof using recursion. In particular, the precise circuit verifies, recursively, that the following NP statement (C, d) is true: “d is the root canonical digest with respect to some set of leaves of some Merkle tree whose root Merkle digest is C.”


Note that if there is a proof for the statement above, then it can easily prove that a specific subset of elements {ai}i∈I belongs to the Merkle tree by just locally recomputing the root canonical digest. FIG. 2 shows an example of a recursive circuit B 200 for the statement above. The example shows what the public input of the circuit is and what the (private) witness is. Note that because the circuit is recursive (calling itself), the public input needs to contain a verification key that will be used by the verification call inside the circuit; otherwise, the prover could potentially use an arbitrary verification key (the verification key cannot be hardcoded either, since it leads to circularity issues).


The circuit B works for arbitrary Merkle trees, both balanced and unbalanced. The recursive circuit may take as public input the verifier key (vk), a succinct, collision-resistant representation C of M, and d, a canonical digest with respect to a subset of the leaves committed in C. The private input, called the witness, comprises a proof for each of the left and right children (πL and πR, respectively), the left and right hand sides of C (CL and CR, respectively), and the left and right hand sides of d (dL and dR, respectively). C is checked to ensure it matches the hash of its left and right hand sides combined. The product of dL and dR is checked in a binary fashion: if it is not equal to 0, the equivalence of d and the combined hash of dL and dR is checked; if it is 0, the equivalence of d with the sum of dL and dR is checked. Either subsequently or in a parallel fashion, dL is checked in a binary fashion: if it is not equal to 0, a verification algorithm is run on a verification key, the handed proof, and a combination of the verification key and the handed values of C and d; dR is checked analogously. If no checks fail, the circuit returns true. Due to the recursive nature of the circuit, the public input may contain a verification key that may be used by the verification call inside the circuit; otherwise, the prover could potentially use an arbitrary verification key (the verification key cannot be hardcoded either, since it leads to circularity issues).
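The check logic just described can be written out, purely as a sketch outside any proof system, as the following Python function; verify, leaf, H, and ZERO are illustrative stand-ins (for the underlying SNARK verifier, the leaf-format predicate, the hash function, and the canonical-hash value 0) rather than a concrete API:

```python
# A sketch, outside any proof system, of the checks circuit B performs on its
# public input (vk, C, d) and witness ((C_L, d_L, pi_L), (C_R, d_R, pi_R)).
# `verify`, `leaf`, `H`, and `ZERO` are illustrative stand-ins for the SNARK
# verifier, the leaf-format predicate, the hash function, and the value 0.
def circuit_B(vk, C, d, CL, dL, piL, CR, dR, piR, verify, leaf, H, ZERO):
    # (1) the node's Merkle hash binds its two children
    assert C == H(CL + CR)

    # (2) the node's canonical hash follows the canonical-hashing rule
    if dL != ZERO and dR != ZERO:
        assert d == H(dL + dR)
    else:
        assert d == (dL if dR == ZERO else dR)    # the "sum" of dL and dR

    # (3)-(4) every child with a non-zero canonical hash carries either a
    # verifying recursive proof or is itself a batch leaf (base case)
    if dL != ZERO:
        assert verify(vk, (vk, CL, dL), piL) or (CL == dL and leaf(CL))
    if dR != ZERO:
        assert verify(vk, (vk, CR, dR), piR) or (CR == dR and leaf(CR))
    return True
```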


Batch Proof

The method may comprise aggregating individual Merkle proofs πi for an arbitrary set of indices i ∈ I so as to compute the batch proof. In some cases, a prover may compute the witnesses that are required to run the SNARK proof. For instance, the prover performs the following:

    • 1. The prover computes the tree TI formed by the proofs {πi}i∈I. For each node υ of TI, the method stores the respective value Cυ that can be found in or computed from the proofs {πi}i∈I. For example, in FIG. 1, where I={2, 4, 5, 15}, the nodes of tree TI′ are drawn as rectangles.
    • 2. Let now TI′ ⊂ TI be the subtree of TI that contains just the nodes on the paths from I to the root. The prover computes the canonical hash d(υ, I) for all nodes υ ∈ TI′ with respect to I. By the definition of the canonical hash, all nodes not included in TI′ will have canonical hash equal to 0. For example, in FIG. 1 the canonical hashes 101-1, 101-2, 101-3, 101-4, 101-5, 101-6, 101-7, 101-8, 101-9 are computed. All nodes without a canonical hash for the subset have a canonical hash equal to 0.


After computing the witness, the method may proceed with computing the recursive SNARK proofs that eventually will output the batch proof. In some cases, the method may first produce the public parameters by running the setup of the SNARK, (pkB, vkB) ← Setup(1λ, B), for the circuit B. Consider now the set of indices I for which the batch proof is calculated and let TI′ be the subtree as defined above. Let Vl be the nodes of TI′ at level l = 1, . . . , ℓ. To compute the batch proof, the method follows the procedure below:





for all levels l = 1, . . . , ℓ: for all nodes υ ∈ Vl:


Let L be υ's left child and R be υ's right child in TI;


Set πυ to be the output of









Prove(pkB, (Cυ, dυ, vkB), ((CL, dL, πL), (CR, dR, πR)))     (1)







The final batch proof for index set I will be πr, where r is the root of TI. Note that πr proves that the statement (C, d(r, I)) is true, where (C, d(r, I)) is defined above.


Computing the batch proof has parallel complexity O(log n), independent of |I|. This is because the canonical hashing computation is performed inside the Merkle verification.
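The bottom-up procedure above can be sketched as follows; prove stands in for the underlying recursive SNARK prover, and the (level, position) indexing of nodes is an illustrative assumption, not the disclosure's data layout:

```python
# A sketch of the bottom-up aggregation: one recursive proof per node of
# T_I' (the batch paths), computed level by level toward the root. `prove`
# stands in for the underlying recursive SNARK prover (e.g., Plonky2).
def aggregate(nodes, on_path, height, pk_B, vk_B, prove):
    """nodes: {(level, pos): {"C": ..., "d": ...}} for every node of T_I,
    i.e. the batch paths plus their siblings (siblings carry d == 0).
    on_path: the (level, pos) keys of T_I', the nodes on paths from I to
    the root; only these receive a recursive SNARK proof."""
    proofs = {}
    for level in range(height - 1, -1, -1):           # internal levels, bottom-up
        for (lvl, pos) in sorted(on_path):
            if lvl != level:
                continue
            left, right = (level + 1, 2 * pos), (level + 1, 2 * pos + 1)
            node = nodes[(level, pos)]
            proofs[(level, pos)] = prove(
                pk_B,
                statement=(node["C"], node["d"], vk_B),
                witness=((nodes[left]["C"], nodes[left]["d"], proofs.get(left)),
                         (nodes[right]["C"], nodes[right]["d"], proofs.get(right))),
            )
    return proofs[(0, 0)]                             # batch proof pi_r at the root
```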


How to Ensure Merkle Leaves Are Used as Witness

In some cases, the circuit illustrated in FIG. 2 may not necessarily consider the leaves of the Merkle tree. For example, given a Merkle tree of 10 levels, it can compute a valid batch proof using the circuit in FIG. 2 even if it discards some of the levels of the tree and uses only, for example, the first three levels. Note that this is not an attack since it is consistent with the public statement that is put forth. However, in the case when the proof is desired to always consider the leaves of the tree, the method may need to assume there is a function leaf( ) that distinguishes between a Merkle leaf and a Merkle hash. For example, this function may check whether the node in question has a particular format, e.g., a bit indicating whether it is a leaf or not (for instance, in Ethereum MPTs, leaves always have a particular format). In the presence of the leaf( ) function, the method can force a prover to consider leaves by changing lines (3) and (4) in FIG. 2 as follows.


If dL≠0 check Verify(vk, (vk, CL, dL), πL) ∨ (CL=dL ∧ leaf(CL));


and


If dR≠0 check Verify(vk, (vk, CR, dR), πR) ∨ (CR=dR ∧ leaf(CR));


Ensuring the leaves are used as witness may provide advantages in applications that compute some function on the leaves and prove the result of this computation.


Updating the Batch Proof

In order to update a batch proof (of a subset I) when some leaves are changing, the method may use the batch data structure. The batch data structure for a subset I, ΛI, consists of all the Merkle hash values, canonical hash values, and recursive SNARK proofs (as computed before) along the nodes (and their siblings) from the leaf nodes in I to the root, as depicted in FIG. 1. The method may maintain a dynamic batch proof as follows: Whenever a leaf value aj changes to aj′, the method distinguishes two cases:

    • 1. If the leaf's index j belongs to I, then all canonical hashes, Merkle hashes and SNARK proofs of all nodes from leaf j to the root must be updated;
    • 2. If the leaf's index j does not belong to I, then at least one Merkle hash of a node υ′ of the batch data structure will change, which will cause other Merkle hashes and related SNARK proofs to change along the path from υ′ to the root.


A similar approach can be used when the size of the batch changes, either by adding or removing elements. In both cases, all updates can be performed in O(log n) time since the height of the batch data structure is log n. Note that the update time is independent of |I|, as opposed to previous approaches where the update requires work proportional to |I|. FIG. 3 shows an example of implementation of the RECKLE TREES algorithms. The implementation uses a collision-resistant hash function H and a SNARK (Setup, Prove, Verify) from Definition 2.1 as black boxes.
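A sketch of the update walk described above, under the same illustrative assumptions as the aggregation sketch; it additionally assumes that nodes covers the whole Merkle tree, with canonical hash 0 stored for nodes outside the batch paths:

```python
# A sketch of the logarithmic-time update: only nodes on the path from the
# changed leaf to the root have their Merkle hash, canonical hash, and (where
# the path meets the batch data structure) recursive proof recomputed.
# `prove`, H, ZERO and the (level, position) layout are illustrative only.
def update_batch_proof(nodes, on_path, proofs, height, leaf_pos, new_leaf,
                       in_batch, pk_B, vk_B, prove, H, ZERO):
    pos = leaf_pos
    nodes[(height, pos)]["C"] = new_leaf          # new_leaf encodes index||value
    if in_batch:                                  # case 1: leaf index is in I
        nodes[(height, pos)]["d"] = new_leaf
    for level in range(height - 1, -1, -1):       # walk the path up to the root
        pos //= 2
        left, right = (level + 1, 2 * pos), (level + 1, 2 * pos + 1)
        node = nodes[(level, pos)]
        node["C"] = H(nodes[left]["C"] + nodes[right]["C"])
        dL, dR = nodes[left]["d"], nodes[right]["d"]
        node["d"] = H(dL + dR) if (dL != ZERO and dR != ZERO) \
            else (dL if dR == ZERO else dR)
        if (level, pos) in on_path:               # refresh stored SNARK proofs
            proofs[(level, pos)] = prove(
                pk_B,
                statement=(node["C"], node["d"], vk_B),
                witness=((nodes[left]["C"], dL, proofs.get(left)),
                         (nodes[right]["C"], dR, proofs.get(right))),
            )
    return proofs[(0, 0)]                         # updated batch proof pi_I
```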


Security proof of the algorithm in FIG. 3. The following proves that the RECKLE TREES VC scheme, whose detailed implementation is described in FIG. 3, satisfies soundness according to Definition 2.4; correctness, as described above, follows easily by inspection.


THEOREM 3.1 (SOUNDNESS OF RECKLE TREES). RECKLE TREES from FIG. 3 are sound per Definition 2.4 assuming collision resistance of the hash function H and knowledge soundness of the underlying SNARK (Setup, Prove, Verify).


PROOF.

Gen (1λ, n)→pp: Let pp contain the following:

    • The size of the vector n.
    • A collision-resistant hash function H.
    • (pk, vk)←Setup(1λ, B), where B is the circuit in FIG. 2.


Compp (a)→C: Return the Merkle root.


Openpp (i, a)→πi: Return the Merkle proof for element ai.


OpenAllpp (a)→(π0, . . . , πn−1): Return the Merkle tree.


Aggpp ((ai, πi)i∈I)→(πI, ΛI):

    • Using {πi}i∈I and I, compute the Merkle hash Cυ and the canonical digest d(υ, I) for every node υ ∈ TI, where TI is defined in Section 3.3.
    • Let TI′ be the subtree of TI, as defined in Section 3.3.
    • For all nodes υ in TI′ compute πυas in Equation 1.
    • Return πI as πr, where r is the root of TI, and ΛI as TI along with the values





{(Cυ, d(υ, I), πυ)}υ∈TI


Verpp (C, (ai)i∈I, πI)→{0, 1}:

    • Using (ai)i∈I, compute dI as in Section 3.1.
    • Return Verify(vkB, (vkB, C, dI), πI).


UpdDigpp (u, δ, C, aux)→C′:

    • Parse aux as πu.
    • Return the Merkle root after updating to au+δ.


UpdProofpp (u, δ, πi, aux)→πi′:

    • Parse aux as πu.
    • Recompute the Merkle path after updating to au+δ.
    • Update affected portions of πi.


UpdBatchProofpp (u, δ, πI, ΛI, aux)→πi′, ΛI′:

    • Parse aux as πu.
    • Recompute the Merkle path after updating to au+δ.
    • For every node in ΛI affected by the update, do the following.
      • Update the Merkle hash value.
      • Recompute the canonical hash d(υ, I) if necessary.
      • Recompute the SNARK proof πυ, as in Equation 1.
      • Return the updated πI and ΛI.


UpdAllProofspp (u, δ, π0,. . . , πn−1)→(π0′, . . . , πn−1′):

    • Parse aux as πu.
    • Recompute the Merkle path after updating to au+δ.


Optimized RECKLE TREES Circuit

In some embodiments, the present disclosure provides an improved circuit which beneficially addresses issues that create significant overhead during implementation.


For instance, the circuit (as illustrated in FIG. 2) takes as input SNARK proofs πR and πL that need to be verified. When using a SNARK whose proof size depends on the size of the computation (such as Plonky2, which is used in the implementation), the circuit size changes during each recursive call. One method to address this issue is by having an upper bound on the input of the circuit, but this can create additional overhead. For a tree of n leaves, the circuit in FIG. 2 needs to be called n−1 times, equal to the number of internal nodes, leading to n−1 recursive calls. However, note that there is no need to recurse on the leaf nodes: this is the base case of the recursion. Given that recursion is the most expensive operation, it is desirable to reduce the number of recursive calls.


The present disclosure may provide an improved circuit addressing the aforementioned issues. The circuit may comprise log n different circuits, one per each level of the tree. In some cases, the size of the circuit is fixed at each level, which beneficially ensures that the leaf circuit is simple and does not contain any recursive calls (the number of recursive calls will be n/2−1). FIG. 4 shows an example of the batch proof circuits for the Reckle tree. The different circuits may have a fixed size for each level of the tree. Note that the circuit Bi may hardcode the key vki−1 of circuit Bi−1.
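The per-level circuit family just described can be sketched as follows; setup_circuit is a hypothetical stand-in for compiling and keying the circuit of one level, with the child level's verification key hardcoded into it:

```python
# A sketch of the per-level setup: one circuit per tree level, where each
# circuit hardcodes the verification key of the level below it.
# `setup_circuit` is an illustrative stand-in, not a concrete SNARK API.
def setup_per_level(height, setup_circuit):
    keys = {}
    child_vk = None
    for level in range(height - 1, -1, -1):       # start next to the leaves
        # when child_vk is None, this is the simple leaf-adjacent circuit
        # that contains no recursive verification call
        pk, vk = setup_circuit(level, hardcoded_child_vk=child_vk)
        keys[level] = (pk, vk)
        child_vk = vk                             # hardcoded into the next level up
    return keys
```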


Reducing Recursive Calls with Bucketing

Let r be the concrete cost of recursion and h be the concrete cost of hashing. The approximate concrete parallel complexity (both aggregation and update) of RECKLE TREES is






f = (l − 1)·2r + l·2h






since every level of the tree requires two recursive calls and two hashes, except for the last level, where only two hashes are performed (not accounting for conditionals).


In the implementation above, the overhead of recursion is the dominant cost in the circuits, in terms of the number of constraints in Plonky2.


To reduce the number of recursive calls as much as possible, the method buckets p=2^q leaves together into a monolithic circuit. In this way, the following concrete parallel complexity is achieved:






g = (l − q)·(2r + 2h) + (2^q − 1)·2h.







This is because for the last q levels of the tree, the bucketing approach has to compute more hashes (2^q − 1, as opposed to q), since a monolithic circuit is utilized.


The method may identify the optimal q such that the difference between these parallel complexities is maximum, i.e., it maximizes the function |f−g| with respect to q. In the example implementation, where r=450 ms and h=15 ms, the optimal value q=5.48 (for l=27) is identified.
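
By way of a non-limiting illustration, the following sketch evaluates the two cost formulas above with the per-operation costs reported in the text (r=450 ms per recursion, h=15 ms per hash, l=27 levels) and scans integer values of q for the largest saving f−g; the function names are placeholders used only for this calculation.

    // Parallel cost without bucketing: f = (l − 1)·2r + l·2h.
    fn f(l: f64, r: f64, h: f64) -> f64 {
        (l - 1.0) * 2.0 * r + l * 2.0 * h
    }

    // Parallel cost with buckets of 2^q leaves: g = (l − q)·(2r + 2h) + (2^q − 1)·2h.
    fn g(l: f64, q: f64, r: f64, h: f64) -> f64 {
        (l - q) * (2.0 * r + 2.0 * h) + (2.0f64.powf(q) - 1.0) * 2.0 * h
    }

    fn main() {
        let (l, r, h) = (27.0, 450.0, 15.0); // costs in milliseconds
        // Scan integer q; the continuous optimum reported in the text is q = 5.48.
        let best = (1..l as u32)
            .map(|q| (q, f(l, r, h) - g(l, q as f64, r, h)))
            .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
            .unwrap();
        println!("best integer q = {}, saving about {:.0} ms", best.0, best.1);
    }

For these parameters, the scan returns q=5, which is consistent with the bucket size of 2^5 used in the experiments described below.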


q-ary Reckle Trees

In some embodiments, the method herein can extend RECKLE TREES to q-ary trees. q-ary Reckle trees model batch proof computation for MPT trees (Merkle Patricia Tries) that are used in Ethereum and other blockchain projects. Every node in a q-ary tree has degree at most q (in the Ethereum MPT, q=16). As with Merkle trees, the Merkle hash Cυ of a node in a q-ary tree is defined as





H(C1 || . . . || Cq),


where Ci is the Merkle hash of its i-th child. If some child is missing (and therefore the degree is less than q), the Merkle hash of this child is set to null (alternatively, the Merkle hash of a node can be defined as the hash of the sorted list that contains the hashes of only those children that are present). The canonical hash dυ of a node υ with respect to a subset of leaves I is naturally defined as the hash of the sorted list of children that are ancestors of leaves in I. The circuit Qk for batch proofs in a q-ary Reckle tree is shown in FIG. 5. Note that the circuit Qk is parameterized for k=0, . . . , q.
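
By way of a non-limiting illustration, the following sketch implements the two hashing rules above with a stand-in hash over 64-bit values: a null marker for missing children in the Merkle hash, and the sorted list of active children in the canonical hash. The identifiers NULL_HASH, qary_merkle_hash, and qary_canonical_hash are hypothetical and do not correspond to the actual circuit implementation.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Stand-in hash over a list of child digests; the real scheme uses H (e.g., Keccak).
    fn h(children: &[u64]) -> u64 {
        let mut s = DefaultHasher::new();
        children.hash(&mut s);
        s.finish()
    }

    const NULL_HASH: u64 = 0; // marker for a missing child

    // Merkle hash of a q-ary node: H(C1 || ... || Cq), padding missing children with null.
    fn qary_merkle_hash(children: &[Option<u64>], q: usize) -> u64 {
        let padded: Vec<u64> = (0..q)
            .map(|i| children.get(i).copied().flatten().unwrap_or(NULL_HASH))
            .collect();
        h(&padded)
    }

    // Canonical hash of a node with respect to batch I: hash of the sorted list of the
    // digests of those children that are ancestors of leaves in I ("active" children).
    fn qary_canonical_hash(active_children: &[u64]) -> u64 {
        let mut sorted = active_children.to_vec();
        sorted.sort_unstable();
        h(&sorted)
    }

    fn main() {
        let children = [Some(11u64), None, Some(33)];
        let c = qary_merkle_hash(&children, 16); // q = 16 as in the Ethereum MPT
        let d = qary_canonical_hash(&[11, 33]);  // only the active children
        println!("Merkle hash: {c}, canonical hash: {d}");
    }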


Note that Qk is parameterized by k=0, . . . , q, leading to a total of q+1 circuits. A node may have 0, 1, . . . , q active children with respect to the batch I. This makes it possible to avoid executing q recursive calls when a node has fewer than q active children. Note also that the circuit takes as input the set of all verification keys V={νk0, . . . , νkq} to ensure that the prover always uses, as the verification key in a recursive call, a key from a correct, predefined set of keys. In some cases, this can be implemented efficiently by providing a Merkle digest d(V) as public input along with Merkle proofs for the verification keys.


In Qk, Witness 1 contains the Merkle hashes and canonical hashes of ν's children, Witness 2 is the alleged subset of Witness 1 consisting of the active nodes with respect to the batch, and Witness 3 contains the SNARK proofs and verification keys that will be used in the recursive calls. By applying circuit Qk from the leaves of the batch I to the root, and deciding which k to use based on the number of active children that the specific node has, the following statement can be proven: “d is the root canonical digest with respect to some set of leaves of some q-ary Merkle tree whose root Merkle digest is C.”


As described above, circuit Qk takes as input SNARK proofs that need to be verified. Since the underlying SNARK is Plonky2, these proofs will have different sizes. One may address this issue either by placing an upper bound on the input or, in the case where the q-ary tree has a fixed shape (e.g., a full q-ary tree), by hardcoding the respective verification keys, as was done for the full binary tree.


RECKLE+ TREES FOR VERIFIABLE MAP/REDUCE

RECKLE+ TREES, an extension of RECKLE TREES, can be used to prove the correctness of Map/Reduce-style computations on data committed to by a Merkle tree. RECKLE+ TREES provide Map/Reduce proofs that are easily updatable and can be computed in parallel. Let D and R denote the domain of the input and output of the computation, respectively. Consider the following abstraction for Map/Reduce:

    • Map: D→R
    • Reduce: R×R→R


The Reduce operation can also take just a single input. In some cases, the Map/Reduce computation is executed on a subset I⊆[n] of the memory slots which have d as their canonical digest. The recursive algorithm herein checks the validity of the following NP statement: “out is the output of the Map/Reduce computation on some set of leaves which (i) have d as their root canonical digest; and (ii) belong to some Merkle tree whose Merkle root is C.”
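
By way of a non-limiting illustration, the following sketch evaluates the Map/Reduce abstraction bottom-up over a complete binary tree of leaves; in the actual scheme, each Reduce step is additionally accompanied by a recursive SNARK proof tying the result to the Merkle root, which is omitted here. The generic function map_reduce and the toy instantiation are illustrative only.

    // Apply Map to every leaf and fold the results pairwise with Reduce up to the root.
    fn map_reduce<D, R, M, Rd>(leaves: &[D], map: M, reduce: Rd) -> R
    where
        D: Copy,
        R: Copy,
        M: Fn(D) -> R,
        Rd: Fn(R, R) -> R,
    {
        let mut level: Vec<R> = leaves.iter().map(|&d| map(d)).collect();
        while level.len() > 1 {
            let next: Vec<R> = level
                .chunks(2)
                .map(|c| if c.len() == 2 { reduce(c[0], c[1]) } else { c[0] })
                .collect();
            level = next;
        }
        level[0]
    }

    fn main() {
        // Toy instantiation: sum of squared leaf values.
        let out = map_reduce(&[1u64, 2, 3, 4], |x| x * x, |a, b| a + b);
        assert_eq!(out, 30);
        println!("Map/Reduce output: {}", out);
    }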



FIG. 6 shows an example of a Map/Reduce circuit for RECKLE+ TREES. In FIG. 6, the circuit M verifies the correctness of the Map/Reduce computation. There are various applications of RECKLE+ TREES. The optimized circuit can easily be transformed into log n circuits in the same way that B (FIG. 2) was transformed into B0 and Bi (FIG. 4).


Applications

RECKLE+ TREES can enable powerful applications by allowing for proving the correctness of Map/Reduce computation over large amounts of dynamic on-chain state data. Recproofs are generalizable to the Merkle-Patricia Tries used by popular blockchains, such as Ethereum, to store smart contract states. The applications may include, without limitation, digest translation and BLS key aggregation.


Dynamic Digest Translation

Most blockchains, such as Ethereum, employ MPTs that use hash functions such as SHA-256 or Keccak. Unfortunately, these hash functions are particularly SNARK-unfriendly, meaning that they generate a large number of constraints when turned into circuits so that they can be used by SNARKs. Due to this, it becomes difficult and slow to prove any meaningful computation (e.g., to a smart contract that cannot execute the computation due to limited computational resources) over Ethereum MPT data. On the other hand, there are particularly SNARK-friendly hash functions, such as Poseidon, that can generate up to 100× fewer constraints than SHA-256 or Keccak, leading to tremendous savings in prover time. A need exists to enable a digest translation service that can provide proofs of equivalence between a Keccak-based digest and a Poseidon-based digest, i.e., that both digests are computed over the same set of leaves. These proofs of equivalence should be easily updatable when new blocks are generated. Once an equivalence proof exists, a SNARK proof can then be computed on the Poseidon-hashed data much faster.


RECKLE+ TREES can be used for digest translation. The “Map” computation is applied at the leaves as the identity function, and the “Reduce” computation is applied at the internal nodes, producing both Keccak and Poseidon hashes of their children (note that for this application, all the leaves are used as the batch index set I). As new blocks are produced and some of the Merkle leaves change, this equivalence proof can be updated fast, always being ready to be consumed by a SNARK working with Poseidon-hashed data.


In digest translation it is desirable to compute a cryptographic proof for the following public statement (Ck, Cp).


“(Ck, Cp) is the pair of Keccak/Poseidon Merkle digests on some same set of leaves.”


This is useful since it allows for extensively working with SNARK-friendly hashes (such as Poseidon) to compute SNARK proofs, while still being able to verify such Poseidon-based proofs (over dynamic data) against legacy SNARK-unfriendly hashes, such as SHA-256 or Keccak. This can be achieved by attaching the proof of equivalence for the statement on (Ck, Cp) above.


In particular, since digest translation is an application of the general Map/Reduce framework, the method can instantiate the Map/Reduce functions as follows:

    • Map: It is the identity function. It takes as input a leaf and outputs the same leaf.
    • Reduce: Takes as input two children and outputs the Poseidon hash of these children (assume the initial Merkle tree is built with the Keccak digest.)


Note that while the digest translation for Keccak and Poseidon is described herein, the method has the flexibility to implement it for an arbitrary pair of hash functions. Additionally, note that since for digest translation the “batch” is the set of all the leaves, there is no need to compute the canonical digest. FIG. 7 shows the optimized circuit for digest translation.
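
By way of a non-limiting illustration, the following sketch walks the tree once while computing each internal node under two stand-in hash functions playing the roles of Keccak and Poseidon, and returns the digest pair (Ck, Cp) over the same leaves. The functions keccak_like and poseidon_like are placeholders (not the real primitives), and no SNARK proof is produced here.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Stand-ins for the two hash functions, domain-separated by a tag.
    fn keccak_like(l: u64, r: u64) -> u64 {
        let mut s = DefaultHasher::new();
        ("K", l, r).hash(&mut s);
        s.finish()
    }

    fn poseidon_like(l: u64, r: u64) -> u64 {
        let mut s = DefaultHasher::new();
        ("P", l, r).hash(&mut s);
        s.finish()
    }

    // Map is the identity on leaves; Reduce hashes the children under both functions,
    // so every internal node carries a (Keccak-style, Poseidon-style) pair of digests.
    fn translate(leaves: &[u64]) -> (u64, u64) {
        assert!(leaves.len().is_power_of_two());
        let mut level: Vec<(u64, u64)> = leaves.iter().map(|&x| (x, x)).collect();
        while level.len() > 1 {
            let next: Vec<(u64, u64)> = level
                .chunks(2)
                .map(|c| (keccak_like(c[0].0, c[1].0), poseidon_like(c[0].1, c[1].1)))
                .collect();
            level = next;
        }
        level[0] // the pair (Ck, Cp)
    }

    fn main() {
        let (ck, cp) = translate(&[10, 20, 30, 40]);
        println!("Keccak-style digest: {ck}, Poseidon-style digest: {cp}");
    }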


Alternative ways to implement digest translation. The present disclosure provides other methods to implement digest translation. One way is to build a monolithic circuit that builds both a Poseidon and a Keccak tree. Another approach is to have a recursive SNARK proof that takes the proof of digest translation from the previous state, inclusion proofs under both hash functions, and the updated values and digest, and computes an updated proof of digest translation using recursive SNARKs. However, this approach is not parallelizable, as each update to the tree has to be computed sequentially in the proof, one at a time. At the same time, updating the proofs cannot be done with a circuit whose size is proportional to the number of updates, since the shape of the circuit depends on which leaves are updated (anticipating all possible updates by computing different verification keys would lead to an exponential number of keys). Due to the exponential issue of the second approach, the monolithic approach is used as the baseline in the implementation. However, the monolithic approach may have significant memory issues.


Another use case for dynamic digest translation is a new cryptographic tool: multiset hashing, which is unique and specific to the Lagrange technology stack. Unlike standard hash functions, which take strings as input, multiset hash functions operate on multisets (or sets). They map multisets of arbitrary finite size to strings (hashes) of fixed length. They are incremental in that, when new members are added to the multiset, the hash can be updated in time proportional to the change. The functions may be multiset-collision resistant, in that it is difficult to find two multisets which produce the same hash, or just set-collision resistant, in that it is difficult to find a set and a multiset which produce the same hash. The dynamic digest translation provided herein beneficially improves the efficiency of the multiset hashing.
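
By way of a non-limiting illustration, the following sketch shows the incremental-update property of an additive multiset hash: the digest is the sum of per-element hashes, so adding or removing a member costs one hash and one addition. Wrapping 64-bit addition and the standard-library hasher stand in for a cryptographically large group and a collision-resistant per-element hash; this toy construction is for illustration only and offers no security.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Stand-in per-element hash; a real instantiation uses a collision-resistant hash
    // mapping into a large group.
    fn elem_hash(e: &str) -> u64 {
        let mut s = DefaultHasher::new();
        e.hash(&mut s);
        s.finish()
    }

    struct MultisetHash {
        acc: u64, // running digest; wrapping addition stands in for the group operation
    }

    impl MultisetHash {
        fn new() -> Self {
            MultisetHash { acc: 0 }
        }
        // Adding a member updates the digest in time proportional to the change.
        fn add(&mut self, e: &str) {
            self.acc = self.acc.wrapping_add(elem_hash(e));
        }
        // Removing a member is the inverse update.
        fn remove(&mut self, e: &str) {
            self.acc = self.acc.wrapping_sub(elem_hash(e));
        }
        fn digest(&self) -> u64 {
            self.acc
        }
    }

    fn main() {
        let mut m = MultisetHash::new();
        m.add("leaf-2");
        m.add("leaf-4");
        m.add("leaf-4"); // multiset: duplicates are allowed
        m.remove("leaf-4");
        println!("digest: {}", m.digest());
    }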


Updatable BLS Key Aggregation.

Light clients are used to check the “local correctness” of a block. That is, given a block header h and an alleged set of transactions, a light client would check that these transactions indeed correspond to the specific header h. However, a light client does not have the security of a full node, since it does not check that the specific block header h is correct by going back to genesis. Therefore, the block header that a light client is checking may in principle be bogus. One way to address this issue is to have a certain number of (Ethereum) signers, picked from a fixed set of validators, sign the block header (these signers can be staked and slashed in case they sign a bogus header, offering cryptoeconomic security).


In this scenario, the light clients, instead of just receiving the header h, may receive an aggregate signature sig(h) allegedly signed by a set of at least t signers from a Merkle-committed set of validators with digest d. To verify this aggregate signature, a light client may need an aggregate key apk and a proof for the following public statement (apk, t, d): “apk is the aggregate BLS public key from ≥t BLS keys derived from the set of BLS keys committed to by d.”


RECKLE+ TREES can be used to produce highly parallelizable and updatable proofs for the statement (apk, t, d). Here, the “Map” function selects the leaf public key and sets its counter to 1 if the specific leaf participates in the set of signers, and the “Reduce” function multiplies the aggregate keys of its children, adding their counters accordingly. Note here that the updatability property of RECKLE+ TREES becomes crucial: as new blocks are produced, the set of signers (as well as the set of validators) may change, in which case updating the proof to the new public statement (apk′, t, d′) could be much faster than recomputing the proof from scratch, especially when the delta between the old and new signer/validator sets is small.


Consider the following setting: the BLS public keys of n validators are stored in the memory of a smart contract. The goal is to calculate the aggregated public key of a subset of validators, denoted as I, and the cardinality of this subset I, to establish the fraction of validators that have signed the message. However, the subset I can change across blocks. The problem of computing aggregated public keys and the cardinality on-chain is useful in emerging real-world blockchain systems (e.g., proofs of Ethereum Beacon Chain consensus or Eigen Layer restaking). In these systems, validators attest to the results of some specific computational task, and new tasks can arrive periodically. An existing approach is to attach a SNARK proof along with the aggregated public key and the cardinality of the attestor subset. However, this requires recomputing the SNARK proof from scratch every time the subset changes. Additionally, this also requires computing a new proof from scratch whenever the initial set changes, even if the subset stays the same.


RECKLE+ TREES allow proving that an alleged aggregate BLS public key apk is the product of a subset of t individual BLS keys from a set of BLS keys stored at the leaves of a Merkle tree whose digest is d. At the same time, this proof can be updated in case this subset changes. For the BLS aggregation application, the Map/Reduce functions are defined as follows:

    • Map: Takes a public key gsk as input, and returns (1, gsk) if sk has signed the message, else (0, 1G), where 1G is the identity of group G.


    • Reduce: Takes two elements (cntL, gskL) and (cntR, gskR), and returns (cntL+cntR, gskL×gskR).


An example of a circuit for BLS key aggregation is shown in FIG. 8, and an example of the RECKLE+ TREES data structure for BLS key aggregation is shown in FIG. 9. The example shows a RECKLE+ TREES data structure used to compute the aggregated BLS public key and the cardinality of the attestor set. Consider a vector of size 8 and attestor subset I={2, 4, 5}. Every node υ on the paths from the root to the leaves of I stores the Merkle hash, the recursive SNARK proof, and the result of the Map or Reduce operation, and every sibling of υ stores just the Merkle hash. Note that, similar to digest translation, there is no need to compute canonical hashing for the BLS key aggregation.
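
By way of a non-limiting illustration, the following sketch runs the (counter, aggregate key) Map/Reduce over the example above (a vector of size 8 with attestor subset I={2, 4, 5}). Wrapping 64-bit addition and u64 placeholders stand in for multiplication of BLS public keys in the pairing group; the names map, reduce, and Agg are illustrative only.

    #[derive(Clone, Copy)]
    struct Agg {
        cnt: u64, // number of contributing signers
        apk: u64, // stand-in for the aggregate public key in group G
    }

    // Map: (1, pk) if the leaf's key signed the message, else (0, identity of G).
    fn map(pk: u64, signed: bool) -> Agg {
        if signed { Agg { cnt: 1, apk: pk } } else { Agg { cnt: 0, apk: 0 } }
    }

    // Reduce: add the counters and combine the children's aggregate keys.
    fn reduce(l: Agg, r: Agg) -> Agg {
        Agg { cnt: l.cnt + r.cnt, apk: l.apk.wrapping_add(r.apk) }
    }

    fn main() {
        let keys: Vec<u64> = (100..108).collect(); // placeholder public keys
        let signed = [false, false, true, false, true, true, false, false]; // I = {2, 4, 5}
        let mut level: Vec<Agg> = keys.iter().zip(signed.iter()).map(|(&k, &s)| map(k, s)).collect();
        while level.len() > 1 {
            let next: Vec<Agg> = level.chunks(2).map(|c| reduce(c[0], c[1])).collect();
            level = next;
        }
        assert_eq!(level[0].cnt, 3); // three attestors contributed to the aggregate key
        println!("cardinality = {}, aggregate (stand-in) = {}", level[0].cnt, level[0].apk);
    }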


Other applications. The Map/Reduce framework as described herein can be applied to real-world applications involving decentralized finance (DeFi) that require computation over on-chain states spanning multiple concurrent blocks. Examples of these applications include calculating moving averages of asset prices, lending market deposit rates, credit scores, or airdrop eligibility. In these applications, a computation must be applied across the states of a contract or group of contracts for the n most recent blocks of a given blockchain, where n is a fixed number. These computations then have to be updated continually whenever new blocks are added to the blockchain. The natural updatability of RECKLE TREES allows them to be extended to use cases where a proof needs to be generated across tens of thousands of consecutive blocks of historical data.


For example, an on-chain options protocol may want to price an option using the volatility of an asset over the past n blocks on a decentralized exchange on Ethereum. The proof of the volatility has to be updated every 12 seconds as new blocks are added. The naive approach may require computing a proof of the volatility across the past n blocks from scratch every 12 seconds. With RECKLE TREES, this computation may only require replacing the portion of the proof associated with the computation done on the oldest block with a proof of the computation done on the new block.


Application in Verifiable Database for Smart Contract: A database may be built for each contract containing the values of the contract at each block, i.e., historical data. Contracts do not have the ability to access historical values, such as “what was the price of this pair 100 blocks ago”. Applications that require such data need to handle complicated Merkle proof generation and verification, which is often error-prone and limited in the type of computation one can run, and running computation on-chain is expensive. Methods and systems herein provide the ability to directly extract all the relevant data, i.e., historical data, and run the computation on those data at the same time. The smart contract only receives the result of the computation and a proof that guarantees that the result is computed from the right data. This allows a smart contract to compute, for example, a historical average for the price of tokenized assets, in the realm of “DeFi” applications.


Evaluation

The performance of RECKLE TREES and the applications enabled by RECKLE+ TREES is evaluated. Reckle trees are naturally parallelizable regardless of the underlying proof system used to compute recursive SNARK proofs. To realize the benefits of the construction, a large-scale distributed system for distributed proof generation is disclosed herein.


Distributed Proof Generation


FIG. 10 shows an example architecture of a distributed system designed to enable efficient proof generation. The distributed system is driven by real-time events, which enables the seamless addition of computation resources without degrading performance. In some embodiments, the system may have five main components: Planner, Executor, Worker, Message Queue, and Storage. The Planner is responsible for dividing a task into subtasks and generating a computational graph structure that describes the dependencies between the subtasks. Using the computational graph from the Planner, the Executor schedules the subtasks for execution by injecting them into the Message Queue and retrieving the results of the completed subtasks. The Workers are responsible for fetching subtasks from the message queue, completing the subtasks, and placing the results in the message queue.
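
By way of a non-limiting illustration, the following sketch mimics this flow on a single machine: a Planner splits a job into subtasks with a dependency graph, the Executor injects the subtasks with no pending dependencies into a queue, and a Worker drains the queue and reports results. Standard-library channels stand in for the Redis-backed message queue, the payloads are trivial, and the identifiers are illustrative rather than the production components.

    use std::sync::mpsc;
    use std::thread;

    struct Subtask {
        id: usize,
        depends_on: Vec<usize>, // edges of the computational graph
    }

    fn main() {
        // Planner: split a proof job into subtasks with a binary-tree dependency shape
        // (subtask i depends on subtasks 2i+1 and 2i+2).
        let subtasks: Vec<Subtask> = (0..7)
            .map(|i| Subtask {
                id: i,
                depends_on: if i < 3 { vec![2 * i + 1, 2 * i + 2] } else { vec![] },
            })
            .collect();

        // Executor: inject the dependency-free (leaf-level) subtasks into the queue.
        let (task_tx, task_rx) = mpsc::channel::<Subtask>();
        let (result_tx, result_rx) = mpsc::channel::<usize>();
        for t in subtasks.into_iter().filter(|t| t.depends_on.is_empty()) {
            task_tx.send(t).unwrap();
        }
        drop(task_tx); // close the queue so the worker loop terminates

        // Worker: fetch subtasks, "complete" them, and place results back on a queue.
        let worker = thread::spawn(move || {
            for t in task_rx {
                result_tx.send(t.id).unwrap();
            }
        });
        worker.join().unwrap();

        let done: Vec<usize> = result_rx.iter().collect();
        println!("completed subtasks: {:?}", done);
    }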


The implementation is in the Rust language and uses a Redis server for messaging. The distributed system is deployed on a Kubernetes cluster inside an AWS datacenter. The system also relies heavily on AWS S3 storage to store intermediate job results. The tests use AWS c7i.8xlarge EC2 instances (32 vCPU, 64 GiB memory) as worker nodes.


Batch Updates and Aggregation

The performance of aggregation, batch update, and batch proof verification is evaluated. The results show that batch updates in the construction are 11× to 15× faster than the baselines, and that the distributed system can achieve up to a 270× performance improvement over the sequential implementation of RECKLE TREES.


The evaluation also compares the performance of the batch operation with previously known Merkle proof aggregation using SNARKs and with the inner-product-argument-based aggregation in Hyperproofs, which improves upon the Merkle SNARK aggregation techniques.


The implementation is based on Rust, and Plonky2 is used for the batch proof construction. Thus, a field element corresponds to a 64-bit value from the Goldilocks field. All experiments were run on an Amazon EC2 c7i.8xlarge instance. In all the experiments, the parallelism offered by the underlying framework is utilized. The Merkle tree in the construction and the baseline uses the Poseidon hash function.


Experimental setup. The experiments set the vector size to n=2^27 and study the performance of the scheme for varying batch sizes k∈{2^2, 2^4, . . . , 2^12}. In each run, a Merkle tree was randomly generated and a random set of leaves was selected to batch/update.


For the baseline experiments, the following are used:


    • Merkle proof aggregation using Groth16 SNARKs: the fork of the Rust implementation that was used in Hyperproofs to benchmark Merkle SNARKs is utilized.
    • Inner-product-argument-based aggregation in Hyperproofs: the Golang-based implementation provided with Hyperproofs is used.


Note that both of these constructions are not easily amenable to distributed proving. Thus, the baseline experiments were run on a single machine, but they exploited all the parallelism offered by the underlying framework.


The experiments implemented the bucket variant of RECKLE TREES with bucket size 2^5 and compare its performance against the above baselines in the following settings:

    • Standard: In this implementation of aggregation, each node of the batch proof data structure is constructed sequentially on the same machine.
    • Distributed proof generation: In this implementation of aggregation, the distributed system described above is used to compute the SNARK proofs in the batch tree data structure. Specifically, in the experiments, a cluster of 192 workers is used, where each worker is responsible for computing at most three Plonky2 proofs simultaneously.
    • Public parameters. The proving keys in Hyperproofs and Merkle SNARKs are approximately 12 GiB and 6 GiB, respectively. This is because, in Hyperproofs, the size of the public parameters is linear in the size of the vector, whereas in Merkle SNARKs it is linear in the size of the batch and the depth of the Merkle tree. In RECKLE TREES, however, the proving key is 3.62 GiB in total, as the prover key in the scheme herein depends only on the depth of the Reckle tree.


The verification key in the approach herein is at most 1.85 KiB. However, in Hyperproofs and Merkle SNARKs, the verification keys are on the order of 10 MiB.

    • Prover time. FIG. 11a shows that the distributed implementation of RECKLE TREES is 3.15× to 270× faster than the sequential implementation.


The distributed version of RECKLE TREES has proving time comparable to Groth16-based Merkle SNARK aggregation and Hyperproofs, while the sequential implementation of RECKLE TREES is substantially slower than the baselines. However, this is a one-time cost, and it allows fast updates, enabling many potential applications.


The aggregation of Hyperproofs outperforms the other approaches in FIG. 11a. However, Hyperproofs trades this for a large proof size and increased verification times.

    • Verification time and proof size. The batch proof in the scheme herein is 112 KiB. To verify a batch proof, the verifier first needs to compute the canonical digest of the batch. Then the verifier invokes the Plonky2 verifier with the computed canonical digest and the digest of the vector to check the validity of the proof. In the experiments, for a batch size of 2^12 proofs, it takes 14.29 ms to compute the canonical digest and 3.91 ms to verify the Plonky2 proof. The cost of computing the canonical digest is shown in FIG. 11b. In contrast, a batch proof in Hyperproofs is 52 KiB and takes around 27.03 seconds to verify.
    • Batch updates. In the experiments, for each sample, an element is randomly selected from the batch to update. The prover cost to update a batch proof is shown. Updating a batch proof in RECKLE TREES involves re-computing the recursive SNARK proofs just along the path from the leaf to the root. Thus, it is possible to update the batch proof in logarithmic time. It is shown that the cost of updating a single proof within a batch of 2^12 values is 16.61 seconds.


In contrast, both Merkle SNARKs and Hyperproofs do not support updatable batch proofs. Thus, both of these schemes have to recompute the batch proof from scratch whenever an element in the batch changes. For a batch size of 2^12 values, Merkle SNARKs and Hyperproofs require 3.15 and 4.44 minutes, respectively, to recompute the batch proof. Thus, as shown in FIG. 11c, RECKLE TREES is around 11× and 15× faster than Hyperproofs and Merkle SNARKs, respectively, when the batch size is 2^12. In FIG. 11, the x-axis is the number of proofs being aggregated, and the experiments used the 128-bit variant of Poseidon.


Besides updating an element inside the batch, the construction herein can also efficiently update the size of the batch. In contrast, both Merkle SNARKs and Hyperproofs require an a priori bound on the maximum size of the batch. Additionally, Merkle SNARKs incur a proving cost proportional to the maximum batch size regardless of the number of elements in the batch. Whenever the batch size is insufficient, Merkle SNARKs require a setup with new “powers-of-tau” and circuit-specific parameters. RECKLE TREES does not suffer from this limitation, allowing for flexibility in adjusting the batch size as required.


Applications of Reckle+ Trees

The performance of RECKLE+ TREES for digest translation and BLS key aggregation is evaluated. Experimental setup. For both digest translation and BLS public key aggregation, the experiments implemented the RECKLE+ TREES circuits and the baseline circuits in Plonky2. The experiments used the distributed system with identical configuration, instance types, and number of workers.


For digest translation, the baseline circuit is a monolithic circuit that recomputes the Merkle tree inside the circuit using both the Poseidon and Keccak hash functions. Since Plonky2 does not support distributed proving, the baseline is run on a single machine. It is shown that even for trees with 2^8 leaves, the baseline implementation runs out of memory (64 GiB) while computing the Plonky2 proof. Thus, the performance of the baseline implementation is extrapolated for larger tree sizes. In contrast, the distributed variant of RECKLE+ TREES scales even to 8 million leaves.


For the BLS public key aggregation (RECKLE+ TREES and the baseline), it is assumed that the public keys are stored in a Merkle tree of height 21. To implement RECKLE+ TREES, the experiments use the Plonky2-BN254 library, which implements the BN254 group operations non-natively over the Goldilocks field. To implement the baseline, the experiments repurpose the RECKLE+ TREES circuit and additionally include the cost of hashing operations to simulate the membership proof verification. Similar to digest translation, the baseline implementation runs out of memory while computing a Plonky2 proof for modestly sized trees. Thus, the experiments extrapolate values for comparison.


Prover. Distributed RECKLE+ TREES takes around 1.25 hours to compute the batch data structure for the digest translation application, and FIG. 12a shows that RECKLE+ TREES is 200× faster than the baseline implementation. Similarly, for BLS public key aggregation, RECKLE+ TREES is shown to be faster than the baseline for larger batch sizes. FIG. 12 shows the prover and update cost for dynamic digest translation. For tree heights 1 through 7, the baseline proving times are 0.34, 1.36, 2.80, 5.86, 12.15, 25.59, and 54.21 seconds, respectively.


Updates. For the digest translation application, the cost of a single update is logarithmic in the capacity of the tree, whereas the baseline implementation requires work linear in the capacity of the tree. In the experiments, it is shown that for a tree height of 7, the baseline approach requires 54.21 seconds to update a single leaf, whereas RECKLE+ TREES requires 4.49 seconds, which is 12× faster than the baseline. Similarly, in the BLS application, RECKLE+ TREES is shown to be 6× faster than the baseline. Comparable verification times (3-6 ms) and proof sizes (110-120 KiB) are observed for the baseline and RECKLE+ TREES in both applications.


SOUNDNESS PROOF

PROOF OF THEOREM 3.1. Following the notation from Definition 2.4, suppose the adversary outputs a commitment C, two element sets {ai}i∈I and {aj′}j∈J, and two batch proofs πI and πJ such that the canonical digest of {ai}i∈I is d(I), the canonical digest of {aj′}j∈J is d(J), and





1←SNARK.Verify(νkB, (νkB, C, d(I)), πI)


and





1←SNARK.Verify(νkB, (νkB, C, d(J)), πJ)


while there exists k∈I∩J such that ak≠ak′.


Due to SNARK knowledge soundness, we can extract the batch-proof data structures ΛI and ΛJ. Since k∈I∩J, ΛI and ΛJ will both contain the path pk in common. Since both recursive proofs verify, and unless the adversary is able to break collision resistance, all nodes υ on these paths (along with their sibling nodes) must have the same Merkle hash values Cυ. However, the last node that is extracted on ΛI's pk path has to be ak (by collision resistance with respect to the canonical digest d(I)), and the last node that is extracted on ΛJ's pk path has to be ak′ (by collision resistance with respect to the canonical digest d(J)). By assumption, ak≠ak′. This is a contradiction. Therefore, soundness holds.



FIG. 14 depicts an example of a cryptographic method 1400. In some embodiments, the cryptographic method is for performing a maintainable Merkle-based vector commitment. The method comprises: computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, where the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification. The method comprises: verifying that the elements belong to the Merkle tree, computing a batch hash for the elements in the batch using canonical hashing, and making the batch hash part of a public statement. The method further comprises maintaining a data structure that stores previously computed recursive proofs and updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time. The method can be used to produce highly parallelizable and updatable proofs for the public statement.


The method herein can be applied in the blockchain space. For example, the method for providing parallelizable and updatable batch proofs may be applied in a decentralized network. The method comprises: accessing a first block added to a blockchain; computing a proof of the first block by computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, where the subset of k leaves corresponds to a subset of attestors of the decentralized network; and upon proofing the first block, issuing a token for the proof. The subset of attestors may change when a second block is added to the blockchain, and such a change can be handled with improved efficiency. In some cases, the batch proof is performed by log n different circuits, and each of the different circuits has a fixed size corresponding to a level of the Merkle tree.


In some embodiments, a decentralized protocol, run by a plurality of computers, issues a cryptocurrency token via a smart contract, where the token is traded on an exchange. The decentralized protocol incorporates the above system or method so that the value of the token is influenced by the market interest in the above system. For instance, the market interest includes usage of the above system.


In some embodiments, the token is used in transacting with the above system, while using the above system, while operating the computer(s) running the above system, and/or in rewarding the operators of the computers running the above system. For instance, the token is used to provide economic incentives to operators of the above system so as to enable its operation in a decentralized manner and/or for users of the system to pay for usage of the system.


While preferred embodiments of the present subject matter have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the present subject matter. It should be understood that various alternatives to the embodiments of the present subject matter described herein may be employed in practicing the present subject matter.

Claims
  • 1. A computer-implemented cryptographic method of performing a maintainable Merkle-based vector commitment, the method comprising: a) computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, wherein the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification, using operations comprising: i. verifying that the elements belong to the Merkle tree,ii. computing a batch hash for the elements in the batch using canonical hashing, andiii. making the batch hash part of a public statement;b) maintaining a data structure that stores previously computed recursive proofs; andc) updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time.
  • 2. The method of claim 1, wherein the succinct batch proof is computed in parallel in a distributed computing network.
  • 3. The method of claim 1, wherein the succinct batch proof is computed in O(log n) parallel time.
  • 4. The method of claim 1, wherein the succinct batch proof size is 112 KiB, independent of the batch size and the size of the vector.
  • 5. The method of claim 1, wherein the update to the succinct batch proof is performed in about 16.61 seconds, or less, for a tree height of 27.
  • 6. The method of claim 1, wherein the verification is performed in about 18.2 ms, or less.
  • 7. The method of claim 1, wherein the element of the Merkle tree changed is in the batch.
  • 8. The method of claim 1, wherein the element of the Merkle tree changed is not in the batch.
  • 9. The method of claim 1, applied to dynamic digest translation.
  • 10. The method of claim 1, applied to updateable BLS aggregation.
  • 11. A computer-implemented system comprising at least one processor and instructions executable by the at least one processor to cause the at least one processor to perform a maintainable Merkle-based vector commitment by performing operations comprising: a) computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, wherein the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification, by performing at least: i. verifying that the elements belong to the Merkle tree,ii. computing a batch hash for the elements in the batch using canonical hashing, andiii. making the batch hash part of a public statement;b) maintaining a data structure that stores previously computed recursive proofs; andc) updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time.
  • 12. The system of claim 11, wherein the succinct batch proof is computed in O(log n) parallel time.
  • 13. The system of claim 11, wherein the succinct batch proof is computed in parallel in a distributed computing network.
  • 14. The system of claim 11, wherein the succinct batch proof size is 112 KiB, independent of the batch size and the size of the vector.
  • 15. The system of claim 11, wherein the update to the succinct batch proof is performed in about 16.61 seconds, or less, for a tree height of 27.
  • 16. The system of claim 11, wherein the verification is performed in about 18.2 ms, or less.
  • 17. The system of claim 11, wherein the element of the Merkle tree changed is in the batch.
  • 18. The system of claim 11, wherein the element of the Merkle tree changed is not in the batch.
  • 19. The system of claim 11, wherein the maintainable Merkle-based vector commitment is applied to dynamic digest translation.
  • 20. The system of claim 11, wherein the maintainable Merkle-based vector commitment is applied to updateable BLS aggregation.
  • 21. One or more non-transitory computer-readable storage media encoded with instructions executable by one or more processors to create an application for performing a maintainable Merkle-based vector commitment, the application comprising: a) a software module computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, wherein the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification, by operations comprising: i. verifying that the elements belong to the Merkle tree,ii. computing a batch hash for the elements in the batch using canonical hashing, andiii. making the batch hash part of a public statement;b) a software module maintaining a data structure that stores previously computed recursive proofs; andc) a software module updating the succinct batch proof, upon change of an element of the Merkle tree, in logarithmic time.
  • 22. The non-transitory computer-readable storage media of claim 21, wherein the succinct batch proof is computed in O(log n) parallel time.
  • 23. The non-transitory computer-readable storage media of claim 21, wherein the succinct batch proof is computed in parallel in a distributed computing network.
  • 24. The non-transitory computer-readable storage media of claim 21, wherein the application further comprises a software module performing dynamic digest translation.
  • 25. The non-transitory computer-readable storage media of claim 21, wherein the application further comprises a software module performing updateable BLS aggregation.
  • 26. A computer-implemented method for providing parallelizable and updatable batch proofs in a decentralized network, the method comprising: a) accessing a first block added to a blockchain;b) computing a proof of the first block by computing a succinct batch proof of a subset of k leaves in a Merkle tree of n leaves using a recursive SNARK, wherein the subset of k leaves correspond to a subset of attestors of the decentralized network; andc) upon proofing the first block, issuing a token for the proof.
  • 27. The computer-implemented method of claim 26, wherein the subset of attestors change when a second block is added to the blockchain.
  • 28. The computer-implemented method of claim 26, wherein the recursive SNARK is run directly on Merkle paths belonging to elements in the batch to perform the computation of the subset hash inside the computation of the Merkle verification.
  • 29. The computer-implemented method of claim 26, wherein the batch proof is performed by log n different circuits.
  • 30. The computer-implemented method of claim 29, wherein each of the different circuits has a fixed size corresponding to a level of the Merkle tree.
Priority Claims (1)
Number Date Country Kind
20240100201 Mar 2024 GR national
CROSS-REFERENCE

This application is a by-pass continuation of PCT International Application No. PCT/US2024/028760, filed May 10, 2024, which claims the priority and benefit of U.S. Provisional Application No. 63/465,872, filed May 11, 2023, U.S. Provisional Application No. 63/465,874, filed May 12, 2023, U.S. Application No. 63/533,919, filed Aug. 22, 2023, and U.S. Provisional Application No. 63/551,475, filed Feb. 8, 2024, and Greek Patent Application Ser. No. 20240100201, filed Mar. 15, 2024, each of which is incorporated herein by reference in its entirety.

Provisional Applications (4)
Number Date Country
63551475 Feb 2024 US
63533919 Aug 2023 US
63465874 May 2023 US
63465872 May 2023 US
Continuations (1)
Number Date Country
Parent PCT/US2024/028760 May 2024 WO
Child 18661204 US