This application relates to the field of electronic transactions and more particularly to the field of verifying electronic transactions using cryptography.
Money has been around for thousands of years. In the past, it was very physical, as in the case of gold bars or coins. However, with the emergence of computer and network technologies, electronic forms of money and payment systems have been considered. (See, for example, D. L. Chaum, Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms, Commun. ACM, Volume 24, Number 2, Pages 84-90, 1981). In principle, money can be made totally electronic. If each money transaction goes through a single, trusted, central authority A, this authority could keep track and publish, at each time t, how much money everyone owns and who has paid whom. On one hand, this approach to money has the big advantage of being very efficient for users, in that the public record kept by A can be very compact and easy to consult, and yet sufficient to enable users to make payments to one another with confidence. On the other hand, however, this centralized approach also has limitations. In particular, it is hard for a large community of users to find an entity A that everyone trusts. This, in the eyes of many, continues to be true even if A were chosen to be a government. For example, the authority A could wipe out a user U's ability to pay by simply publishing that he/she/it no longer owns any money, or could make it appear that U has made payments to someone else that U never made. Thus, this centralized approach could dramatically fail if A becomes corrupt, or is compromised by adversaries, or otherwise operates improperly.
To avoid deficiencies associated with using a single, trusted, central authority, cryptocurrencies like Bitcoin have been created that are very decentralized. However, these systems require a public file (“ledger”) that is very big and very inefficient to maintain and update. Moreover, Bitcoin requires a huge amount of computation and can be subverted if the majority of the computational power falls into the wrong hands. As a result, systems like Bitcoin may not be too useful, particularly if the number of users and transactions grows.
It is thus desirable to provide an electronic money system that has the advantages of the centralized approach, but without needing to trust a central authority that maintains a public record of transactions, and without suffering the inefficiencies of known decentralized approaches.
According to the system described herein, electronic payments are verified in an electronic payment system in which at each of multiple rounds there is a set of players V, such that a payment is valid if the payment is authenticated to be valid by a given majority of the players in V. Verifying the electronic payments includes having a player Vi in V receive authentication of multiple payments during one of the multiple rounds of the electronic payment system, having Vi determine which of the multiple payments are valid, having Vi authenticate a subset of the multiple payments that Vi determines valid to provide an authenticated payment record, and having Vi cause the authenticated payment record to become widely available to enable at least another entity to determine whether a given payment authenticated valid by Vi is authenticated to be valid by the given majority of the players in V. The authentication of at least one of the multiple payments may include a digital signature, determining which of the multiple payments are valid may include verifying the digital signature, authenticating the subset of multiple payments may include digitally signing data indicating the subset of multiple payments, and having Vi cause the authenticated payment record to become widely available may include posting the authenticated payment record on a website, sending the authenticated payment record to another entity that further disseminates the authenticated payment record, and/or sending the authenticated payment record to another entity that posts the authenticated payment record on a website. Digitally signing data indicating the subset of multiple payments may include using a single digital signature and the data may include information about the round, time information, and/or other additional information. Each valid payment may transfer money associated with a first public key to a second public key and each valid payment may be digitally signed relative to the first public key. Having Vi determine which of the multiple payments are valid may include determining if sufficient funds are available for each of the multiple payments. The set of players V may be randomly selected from a larger set of potential verifiers using a closeness-preserving selection process. The set of players V may be randomly selected from a larger set of potential verifiers using a natural and public random value associated with the one of the multiple rounds of the electronic payment system. Vi may be randomly selected from a set of potential verifiers by a special entity T that produces a digital signature showing that Vi has been selected and causes the signature to become widely available. The digital signature of T may authenticate information including a natural and public random value, information including time information, information including information about the one of the multiple rounds of the electronic payment system, and/or other information. Vi may be randomly selected from a set of potential verifiers by a set of special entities by combining digital signatures produced by the entities. Vi may be provided with a reward for authenticating the subset of the multiple payments that Vi determines valid. An amount of the reward may be based on a value of the multiple payments that Vi determines valid and/or a number of missed payments. The rewards may be paid by a portion of the valid payments and/or retailers that receive payments.
According further to the system described herein, verifying electronic payments in an electronic payment system includes receiving records for multiple payments from a plurality of players of the electronic payment system during a particular one of a plurality of rounds of the electronic payment system, determining which of the multiple payments are valid, authenticating valid payments to provide an authenticated payment record for each of the valid payments, and making the authenticated payment record available for accessing, where in the electronic payment system, a payment in a particular round is deemed valid if a given majority of a subset of the players authenticate as valid the particular payment to provide an authenticated payment record. Computer software, provided in a non-transitory computer readable medium, may verify electronic payments in an electronic payment system.
According further to the system described herein, facilitating verification of an electronic payment in an electronic payment system includes determining if authenticated payment records provided by a given majority of entities indicate validity of the electronic payment between a first player and a second player of the electronic payment system during a particular one of a plurality of rounds of the electronic payment system, generating an authenticated string vouching that the payment has been verified by the given majority of entities in response to the payment being verified by the given majority of entities, and causing the authenticated string to become widely available. The authenticated string may be a digital signature and causing the authenticated string to become widely available may include posting the authenticated string on a website, transmitting the authenticated string to another entity that causes the authenticated string to become widely available, and/or transmitting the authenticated string to another entity that posts the authenticated string on a website. The authenticated payment records may be digitally signed. Computer software, provided in a non-transitory computer readable medium, may facilitate verification of an electronic payment in an electronic payment system.
According further to the system described herein, a digital certificate is issued for a particular player in a set of players V that make electronic payments in an electronic payment system in which at each of multiple rounds a payment is valid if the payment is authenticated to be valid by a given majority of the players in V. Issuing the digital certificate includes obtaining a public key PKX to be used in connection with electronic payments by the particular player, obtaining additional information to be certified, and providing certification of PKX and of the additional information for the digital certificate by digitally signing, with a digital signature of a distinguished entity, PKX and the additional information, where certification by the distinguished entity is recognized by a significant number of the given majority of the players in V that determine validity of payments made by the players in the electronic transaction system. The additional information may include identity information about the distinguished entity, identity information about the particular player, qualifying information about the particular player, time information relating to the digital certificate, monetary information associated with PKX, territorial information, and/or transaction restrictions associated with PKX. The monetary information associated with PKX may include an amount of money in the electronic transaction system that is possessed by the particular player. The identity information about the particular player may include the name of the particular player, a hash of the name of the particular player, an encryption of the name of the particular player, and/or an index to a data structure containing information that identifies the particular player. The identity information about the particular player may be an encryption of the name of the particular player and a governmental entity may use a decryption key to determine the identity of the particular player. Issuing the digital certificate may also include performing additional actions and, in response to a result of the additional actions being satisfactory, issuing the digital certificate containing the certification of PKX and of the additional information. The additional actions may include verifying at least some of the additional information, confirming willingness of the particular player to use PKX in the electronic transaction system, confirming that the particular player knows a secret signing key associated with PKX, assisting the particular player to obtain PKX, providing the particular player with PKX, confirming that a signing key corresponding to PKX has been escrowed, providing an initial amount of money to the particular player to be used in the electronic transaction system, determining an identity of the particular player, escrowing the information used to identify the particular player, and/or confirming that the particular player is eligible to have PKX certified. The additional actions may include confirming that the particular player is eligible to be a member of the given majority of the players in V. The distinguished entity may be provided with a reward for issuing the digital certificate and/or each electronic payment made by the particular player. The reward may be provided as an electronic payment to the distinguished entity through the electronic payment system. The reward may be paid by a retailer, the particular player, and/or a recipient of an electronic payment made by the particular player.
The distinguished entity may be a financial institution. Computer software, provided in a non-transitory computer readable medium, may issue a digital certificate for a particular player in a set of players V that make electronic payments.
Embodiments of the system described herein will now be explained in more detail in accordance with the figures of the drawings.
The system described herein provides a mechanism for verifying electronic payments between parties that eliminates the need for a controlling central authority and also eliminates computationally intensive procedures.
Digital Signatures. A digital signature scheme consists of three fast algorithms: a probabilistic key generator G, a deterministic signing algorithm S, and a verification algorithm V.
Given a number k as an input (e.g., k=4,000), an entity x uses G to produce a pair of k-bit keys (i.e., strings): a “public” key PKX and a “secret” signing key SKX. A public key does not “betray” its corresponding secret key. That is, even given knowledge of PKX, no one other than x is able to compute SKX in less than an unrealistically large amount of time (e.g., thousands of years with the computing power of today's processors). The entity x uses SKX to digitally sign messages. For each possible message (binary string) m, x runs algorithm S on inputs m and SKX in order to produce a string, denoted by SIGX(m) or SIGPKX(m), referred to as x's digital signature of m, or a digital signature of m relative to PKX. It may be assumed that m is retrievable from SIGX(m) since a digital signature of m could always include m itself. The value of PKX can be used to verify the digital signatures produced by x. Specifically, on inputs (a) the public key PKX of an entity x, (b) a message m, and (c) an alleged digital signature of x for the message m, the verification algorithm V outputs either YES or NO, so as to satisfy the following properties:
1. Legitimate signatures are always verified: if s=SIGX(m), then V(PKX, m, s)=YES; and
2. Digital signatures are hard to forge: without knowledge of SKX, finding a string s such that V(PKX, m, s)=YES, for a message m that x has not signed, requires an unrealistically large amount of time.
Accordingly, to prevent anyone else from signing messages on its behalf, an entity x must keep the corresponding signing key SKX secret (hence the term “secret key”), and to enable anyone to verify the messages the entity does sign, x has an interest in publicizing key PKX (hence the term “public key”).
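By way of illustration, the following is a minimal sketch (in Python) of the three algorithms G, S, and V, instantiated with the Ed25519 signature scheme from the third-party cryptography package; the particular scheme and library are assumptions made only for the example, and any secure digital signature scheme may be used with the system described herein.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def G():
        # Probabilistic key generator: returns (public key PK_x, secret signing key SK_x).
        sk = Ed25519PrivateKey.generate()
        return sk.public_key(), sk

    def S(sk, m: bytes) -> bytes:
        # Signing algorithm: produces SIG_x(m) using the secret signing key SK_x.
        return sk.sign(m)

    def V(pk, m: bytes, sig: bytes) -> str:
        # Verification algorithm: outputs YES or NO given PK_x, m, and an alleged signature.
        try:
            pk.verify(sig, m)
            return "YES"
        except InvalidSignature:
            return "NO"

    pk, sk = G()
    assert V(pk, b"pay 10 units to PK'", S(sk, b"pay 10 units to PK'")) == "YES"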
Authentication. Digital signatures are a wonderful way to authenticate information, because public keys may be made widely available and thus the validity of the information may be widely ascertained. Information, however, could be authenticated in different ways. For instance, if two parties A and B are connected by a secure channel, then having A send some information I to B over that channel authenticates I to B, even though it will not enable B to convince someone else of the authenticity of I. As another example, if—say—everyone knows that only A has the ability to post information on a given website W, conceptual or real, then A could authenticate I by posting it on W. Alternatively yet, if A and B share a secret key s, A may authenticate I to B by also sending the value f(s,I) for a given function f, such as an encryption function or a hash function. These and all other forms of authentication can be used with the system described herein.
Certificates. Digital certificates may be used to certify public keys. A public key PK is certified by another public key PK′ via a digital signature relative to PK′. For instance, a certificate for PK may take the form
SIGPK′(PK,I)
where I is any additional information deemed useful. For instance, the certifying public key PK′ may belong to another entity, such as a Bank, and I may specify the date of issuance of the certificate, the date of expiration of the certificate (if any), information about the owner of PK or PK′, information about PK or PK′ (e.g., information about the amount of money available to PK at a given moment, the creditworthiness of PK or its owner, etc.), information about the transactions (e.g., the numbers, values, etc.) done using PK, and so on, including no information. One possible interpretation of a certificate is that (the owner of) PK′ guarantees that the information I is true about PK in a way that cannot be altered by anyone else. Even if I is empty, a certificate for PK can be useful, for instance to enable a bank to guarantee that PK is part of the payment system. For simplicity, PK should be understood to signify the public key PK itself, the certified PK (or a certified PK, since the same public key may have multiple certificates), or a certificate for PK.
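As an illustration, the following is a minimal sketch (in Python) of issuing a certificate of the form SIGPK′(PK, I); the Ed25519 keys, the JSON encoding of the field I, and the example contents of I are assumptions made only for the sketch.

    import json
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def key_bytes(pk) -> bytes:
        # Serialize a public key so it can be embedded in the signed certificate body.
        return pk.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    certifier_sk = Ed25519PrivateKey.generate()           # holder of PK' (e.g., a Bank)
    user_pk = Ed25519PrivateKey.generate().public_key()   # the key PK being certified

    info = {"issued": "round 0", "expires": None, "retailer": False}   # the field I (example only)
    body = key_bytes(user_pk) + json.dumps(info, sort_keys=True).encode()
    certificate = {"PK": key_bytes(user_pk), "I": info, "sig": certifier_sk.sign(body)}
    # Anyone holding PK' can verify the certificate by rebuilding body and verifying sig.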
Collision-Resilient Hashing. A collision-resilient function H quickly maps arbitrarily long strings to strings of preferably fixed length (e.g., 256-bit strings) so as to guarantee that finding two different strings X and Y, such that H(X)=H(Y), requires an unrealistically large amount of time. Collision-resilient hashing functions may be used within digital signature schemes. If, for instance, a scheme may only sign messages consisting of at most 4,000 bits, and an entity x wishes to sign a longer message m, then the entity may sign H(m) instead. That is, SIGX(m) could be defined to consist of SIGX(H(m)), or of the pair (m, SIGX(H(m))), in order to ensure that m is retrievable from SIGX(m).
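The following is a minimal hash-then-sign sketch (in Python), with SHA-256 standing in for the collision-resilient function H and Ed25519 standing in for the signature scheme; both choices are assumptions made only for the example.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def H(data: bytes) -> bytes:
        # SHA-256 used as a stand-in for a collision-resilient hash function.
        return hashlib.sha256(data).digest()

    sk = Ed25519PrivateKey.generate()
    m = b"x" * 1_000_000                      # a message far longer than a scheme's limit
    signed_record = (m, sk.sign(H(m)))        # the pair (m, SIG_x(H(m))), so m stays retrievable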
Implicit Hardware. As discussed elsewhere herein, a computational hardware device (e.g., an electronic chip) may be necessary to compute a digital signature. Thus, reference herein to an entity x (capable of computing digital signatures) should be understood to include x together with a hardware device used to compute digital signatures. When a function H maps a string s, with sufficient entropy, to a binary/alphanumeric string H(s), then each bit/character of H(s) is sufficiently random. By saying that H is a hash function (e.g., a collision-resilient or one-way hash function) it should be understood that H has such a randomizing property. Similarly, evaluating such a function H may be practically impossible by hand. Accordingly, it may be assumed that H(x) is computed via a computational hardware device, as discussed elsewhere herein, such as devices contained in a computer, laptop, cell phone, or other appropriate device.
Player/User. A player (or a user) is a collection of individuals, an entity, or a collection of entities. A player i may own one or more public keys that may be identified with the player. For instance, if PKi is the public key of a particular player i, then the particular player may be referred to using PKi and vice versa.
Money. Money can be denominated in US dollars, in another existing currency, or in a currency of its own. A monetary unit denoted by a symbol, “#”, is used herein.
Time. Time is set forth herein in a time sequence, T=0,1,2, . . . . The time interval [ti, ti+1] may be the same for all players (e.g., two minutes or one minute), but can be adjusted dynamically depending on a number of players, a number of transactions, etc. A time unit may be chosen so that, despite reasonable clock shifts, most (or all) players know the current time, t.
Payments. Money is associated with individual public keys. Initially, some public keys are publicly known to have some given amount of money. Money is transferred, via a digital signature, from one public key to another. Note however that it is possible to provide multiple money transfers, of different amounts, to different public keys by means of a single digital signature. A payment P of an amount A from a key PK to a key PK′ at a time t might be expressed as:
P = SIGPK(PK, PK′, #A, I)
where I represents any additional information deemed useful, such as an indication of the time of the payment, the reason for the payment, or possibly no information. PK (or the owner of PK) may be referred to as the payer and PK′ (or the owner of PK′) may be referred to as the payee. As discussed elsewhere herein, an issue is determining whether PK had #A to transfer to PK′.
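For concreteness, the following is a minimal sketch (in Python) of forming a payment P = SIGPK(PK, PK′, #A, I); the JSON encoding, the Ed25519 keys, and the example contents of I (here carrying a round number) are assumptions made only for the sketch.

    import json
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def key_hex(pk) -> str:
        return pk.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw).hex()

    payer_sk = Ed25519PrivateKey.generate()
    PK = key_hex(payer_sk.public_key())                              # paying key
    PK_prime = key_hex(Ed25519PrivateKey.generate().public_key())    # paid key

    def make_payment(sk, pk, pk_prime, amount, info):
        # The signed body corresponds to (PK, PK', #A, I).
        body = json.dumps({"payer": pk, "payee": pk_prime, "amount": amount, "I": info},
                          sort_keys=True).encode()
        return {"body": body, "sig": sk.sign(body)}

    P = make_payment(payer_sk, PK, PK_prime, 10, {"round": 7})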
At a high level, in Bitcoin and different variants of Bitcoin, at every point in time, (the owner of) a given public key PK owns a given amount of money. Some of this money may be transferred from PK to another public key PK′ via a digital signature, computed with a secret key corresponding to PK. Lacking trusted central authorities, this signed transaction, T, is broadcast over the web in a decentralized manner. That is, anyone who sees T will forward T to their neighbors, who will forward it to their neighbors, and so on. Everyone seeing T is responsible for verifying the validity of T. This verification may include verifying that the digital signature used to sign T is valid. However, verifying T also includes verifying that the key PK had sufficient money to perform the transfer indicated by T.
Verifying that the key PK had sufficient money to perform a transfer indicated by T may require validating an entire transaction history, which may not be a trivial task if the number of total transactions is large. Moreover, since one cannot guarantee that everybody has seen all transactions, it may become necessary to achieve consensus on what the current transaction history is. To simplify this task, transactions are aggregated into blocks, B1, B2, . . . . Each block includes a hash of a previous block, a collection of new transactions, and a solution to a cryptographic riddle. The riddle depends on the previous block and the new transactions.
A user seeing blocks B1 through Bk, and a collection of new (and valid) transactions, tries to aggregate the transactions into a new block Bk+1 by solving the proper riddle. The user is incentivized to do so because, if the user succeeds in generating the block Bk+1 before anyone else, then the user will gain a monetary reward. Cryptographic riddles are, by design, complex to solve. A single user may take a very long time to solve a given riddle. However, there are many users trying to produce a new block and thus to solve each riddle. Currently, the riddle complexity is chosen so that, in expectation, it takes 10 minutes for some user to find a solution. Ten minutes is ample time for everyone to see a new block and thus gain consensus on what the transaction history is. Nonetheless, the possibility exists that two users solve the riddle nearly simultaneously. For instance, having seen the same block chain B1, . . . , Bk, one user may succeed in producing a new block B′k+1 and, almost simultaneously, another user may succeed in producing a block B″k+1. In this example, each of the two users may broadcast their own new candidate block in an attempt to get the associated reward. At some point later, when even more transactions are being generated, a third user U may see two possible chains B1, . . . , Bk, B′k+1 and B1, . . . , Bk, B″k+1. To create a new block Bk+2 and collect the associated reward, U needs to decide whether to try to solve the riddle associated with the new transactions and either block B′k+1 or block B″k+1. Even if U does this in parallel, because solving a riddle requires a computational effort and because the reward goes to the first user who generates the block Bk+2, U will broadcast the first solution U finds in order to secure the reward. Accordingly, when the user does so, some of the other users will see a k+2-long block chain B1, . . . , B′k+1, Bk+2, and others will see a chain B1, . . . , B″k+1, Bk+2.
Because users are asked to append to the longest chain, the k+1st block should eventually become unique. In practice, although the last (or even the second last) block a user sees may change, a user may safely assume that the first k blocks in a chain of length k+2 will no longer change. Accordingly, if a transaction, belonging to the third last block, transfers an amount X from public key PK to public key PK′, then the owner of PK′ can consider himself paid.
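The following is a toy sketch (in Python) of the kind of cryptographic riddle involved, namely searching for a nonce that drives a hash below a difficulty target; the actual Bitcoin riddle operates on full block headers with a dynamically adjusted difficulty, and the SHA-256 choice and parameters here are assumptions made only for the example.

    import hashlib

    def solve_riddle(prev_block_hash: bytes, new_transactions: bytes, difficulty_bits: int = 16) -> int:
        # Find a nonce so that H(previous block, new transactions, nonce) starts with
        # difficulty_bits zero bits; the difficulty controls the expected solution time.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(prev_block_hash + new_transactions + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    nonce = solve_riddle(hashlib.sha256(b"block B_k").digest(), b"tx1|tx2|tx3")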
Bitcoin Statistics. Since Bitcoin is fully a peer-to-peer protocol, every participating entity must store the entire public ledger of transactions. As of February 2015, the size of the public ledger exceeded 28 GB, having grown from 5 GB two years prior to that. The number of public keys being used as of February 2015 was about 225,000, having grown from 50,000 two years prior to that. Moreover, the total number of transactions at every unit of time (i.e., 10 minutes) was about 650 as of February 2015, while two years prior to that the number of transactions at every unit of time was only around 450. Extrapolating, at some point in time the public ledger may not fit on even the most powerful cellphones.
Also, since in the Bitcoin protocol every entity must use computational cycles on solving certain cryptographic puzzles, the total computational power of all Bitcoin players combined recently broke 1 exaFLOPS. The measure exaFLOPS refers to the number of floating-point operations a computer can perform per second, or, more simply, how fast a computer can tear through math problems. An exaFLOPS is 10^18, or 1,000,000,000,000,000,000, operations per second. Note that the top 500 most powerful supercomputers combined can only muster a mere 12.8 percent of the total computational power of the Bitcoin entities.
Weaknesses of Bitcoin. The above discussion suffices to highlight some of the inefficiencies of Bitcoin (and its variants). These inefficiencies include: the need for every participating entity to store an ever-growing public ledger; the enormous amount of computation expended on solving cryptographic riddles; the delay of several ten-minute blocks before a payment can safely be considered final; and the possibility of subversion if a majority of the computational power falls into the wrong hands.
In Centralcoin, a special player, the central verifier, CV, is responsible for verifying which money transfers are valid and for compactly reporting the state of the system, and cannot cheat without producing a widely available proof of the dishonesty of CV. The public key of CV, PKCV, and the url of CV are publicly known.
Referring to the drawings, a central verifier 102 of the Centralcoin system communicates with a plurality of players 104-106.
The central verifier 102 and the players 104-106 may be implemented using any appropriate combination of computer hardware and software. In an embodiment herein, the central verifier 102 and the players 104-106 are implemented using computer workstations, although other implementations are possible, including one or more of the central verifier 102 and the players 104-106 being data sites containing multiple computers/processors, storage, etc.
Centralcoin works in rounds. Each round t conceptually consists of three phases (e.g., 20 seconds each), and is completed within the time interval [t, t+1] (e.g., in one minute). At the beginning of the protocol (that is, at time t=0) all players know a compact list of public keys with their initial amounts of money.
Phase 1—All players download the two CV-signed lists of the previous round, PAYt−1 and STATUSt−1; verify CV's digital signatures; and verify that STATUSt−1 was updated correctly from STATUSt−2 and PAYt−1. (Alternatively, a player may only verify a subset of the status report, corresponding to his own public keys.)
Phase 2—Each player generates his own round-t payments and causes them to become available to CV.
Phase 3—CV computes, digitally signs, and publishes (e.g., at a given url) the new lists of the current round: PAYt, which specifies all valid payments of round t, and STATUSt, which specifies the account information at the end of round t. For instance, CV may publicize PAYt = SIGCV(t, P1, P2, . . . ), where P1, P2, . . . are the valid round-t payments, and STATUSt = SIGCV(t, (PK1, #a1, I1), (PK2, #a2, I2), . . . ), where #aj is the amount of money available to public key PKj at the end of round t and Ij is any additional information deemed useful.
As discussed elsewhere herein, a payment P may be of a form P=SIGPK(PK, PK′, #A, I). The list may be ordered by the paying public key PK first, by the paid public key PK′ second, and by the amount, A, third. A payment is valid if the signature of the paying key is valid, and if the amount is valid relative to the amount of money that PK had at the end of round t−1 and all the preceding round-t payments of PK. For instance, if, according to STATUSt−1, PK has #A at the end of round t−1, the first k payments (sorted by time) of PK in round t have a valid signature of PK and have a total amount A′<A, while the amount of the k+1st payment of PK is greater than the remaining balance A−A′, then the k+1st payment of PK may not be considered valid.
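The following is a minimal sketch (in Python) of how CV (or any player checking CV's work) might derive PAYt and STATUSt from STATUSt−1, tracking running balances so that later payments of an overdrawn key are rejected; the dictionary representation and field names are assumptions made only for the sketch, and signature checking is abstracted into a flag.

    def compute_round(status_prev, candidate_payments):
        # status_prev: dict mapping public key -> balance (STATUS_{t-1}).
        # candidate_payments: round-t payments already sorted by paying key, paid key,
        # and amount; each has fields payer, payee, amount, sig_ok.
        balances = dict(status_prev)
        pay_t = []
        for p in candidate_payments:
            if not p["sig_ok"]:
                continue                                   # signature not valid relative to PK
            if balances.get(p["payer"], 0) < p["amount"]:
                continue                                   # insufficient funds after earlier payments
            balances[p["payer"]] -= p["amount"]
            balances[p["payee"]] = balances.get(p["payee"], 0) + p["amount"]
            pay_t.append(p)
        return pay_t, balances                             # (PAY_t, STATUS_t)

    PAY_t, STATUS_t = compute_round(
        {"PK": 100},
        [{"payer": "PK", "payee": "PK2", "amount": 60, "sig_ok": True},
         {"payer": "PK", "payee": "PK3", "amount": 60, "sig_ok": True}])
    # The second payment is rejected because it would overdraw PK.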
Centralcoin provides ultra compact records where a full status of the system is relatively compact. A new public key PK′ may enter the system via a payment (PK, PK′, #A, I) appearing in PAYt at some round t. Alternatively, CV, or a different entity, may register new keys that first appear with a balance of zero in STATUSt at some round t.
Authenticating the lists PAYt and STATUSt may be very efficient, since CV computes a single signature for each list. However, a player wishing to keep an authenticated record of only a single payment at round t would have to download the entire PAYt. Similarly, a player wishing to keep an authenticated record of the balance of money available to a given public key at the end of round t would have to download the entire STATUSt. To lessen the burden on such a player, CV may digitally sign each entry in PAYt or STATUSt. In this case, however, it may be challenging for CV to produce so many digital signatures within a single-round phase. Accordingly, it may be advantageous to have CV tree-hash (rather than simply one-way hash) the two lists, and then digitally sign just the root of each hash tree. Advantages of this approach are that CV can authenticate each entire list by means of a single digital signature and one ordinary hash for each entry in the list, the digital signature authenticating the list may be compact (i.e., independent of the number of entries in the list), and someone interested only in the authenticated record about a given item in either list may have to deal only with a minimal amount of data. Tree-hash-and-sign mechanisms are discussed in more detail elsewhere herein. See also U.S. Pat. No. 6,097,811, which is incorporated by reference herein.
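The following is a minimal sketch (in Python) of the tree-hash step, computing the root of a Merkle-style hash tree over the entries of a list; CV would then digitally sign only this root, and an individual entry can be authenticated with the entry itself plus its logarithmically many sibling hashes. SHA-256 and the handling of odd levels are assumptions made only for the sketch.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def tree_hash_root(entries):
        # Hash the entries of PAY_t or STATUS_t, then repeatedly hash pairs of nodes
        # until a single root remains; CV signs only the root.
        level = [h(e) for e in entries] or [h(b"")]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])    # duplicate the last node on an odd level
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    root = tree_hash_root([b"entry-1", b"entry-2", b"entry-3"])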
Note that, in Centralcoin, CV need not be totally trusted, because CV is transparent. When a player X is transparent, then if X acts dishonestly, X produces a widely available proof of the dishonesty. Since a provably dishonest party can be heavily punished (e.g., heavily fined or jailed, if a person), one can be relatively confident that a transparent player will act honestly (i.e., act in a prescribed way). Note that transparency is a useful, if not crucial, property in a financial system. Indeed, what one should really fear is dishonesty that goes undetected. In the system described herein, all players are essentially always transparent, or, in some cases, prevented from being dishonest.
Lacking knowledge of the secret key SK corresponding to a given public key PK, CV could not manufacture a payment from PK to some other public key PK′, even if CV wanted to do this and did not fear any punishment. In addition, CV cannot, without being caught, illegally diminish the amount of money PK has. Indeed, assume that CV does so, for the first time, at some round t. Then, in a previous round, CV has already correctly signed the correct STATUSt−1. Thus, the only legitimate way of decreasing the amount of money of PK in STATUSt consists of subtracting all the amounts that PK has paid to other keys in round t (and adding all the payments PK received in round t). Since these transfers are digitally signed by PK, when CV digitally signs a STATUSt in which CV incorrectly lowers the money available to PK, CV digitally signs something patently false, generating a public proof of the dishonesty of CV.
In addition, however, CV could still try to prevent money from being transferred from some key PK to some other key PK′. That is, CV may prevent the owner of PK (respectively, PK′) from using (respectively, receiving) any money. In fact, although timely receiving a valid round-t payment P=SIGPK(PK, PK′, #A, t, I), CV may not include P in PAYt. In such a case, it may be hard for the owner of PK (or PK′) to prove that he indeed did give P on time to CV. It is a question of whom one believes. One way to keep CV unambiguously accountable for this type of cheating is to utilize the technology of U.S. Pat. No. 5,666,420, Simultaneous Electronic Transactions, which is incorporated by reference herein. In essence, this technology guarantees the exchange of a message for a receipt so that (a) the recipient of the message learns the message simultaneously with (b) the sender getting a corresponding, very detailed, and digitally signed receipt. In the system described herein, the message consists of the payment P, the receiver is CV, and CV cannot learn P without also signing a receipt for P. In this fashion, whether the sender is the owner of PK or the owner of PK′, or someone acting on their behalf, CV could not, with impunity, ignore the payment P. Indeed, using Simultaneous Electronic Transactions, CV produces a digital signature proving that CV received P on time, which proves guilt if CV does not include P in PAYt, a list that CV digitally signs. Thus, with simultaneous electronic transactions, Centralcoin may be free of undetected cheating, and with tree-hash-and-sign, Centralcoin may guarantee efficient storage and retrieval of even individual authenticated records. However, Centralcoin may be prone to sabotage since CV is a unique point of vulnerability.
In Spreadcoin, there are multiple verifiers, V1, . . . , Vk. Each verifier Vi has a public key, VPKi, and a corresponding secret key, VSKi. In some embodiments, k is odd: e.g., k=11. Spreadcoin only relies on the opinion of a given majority of the verifiers. Spreadcoin works securely if such a majority of the verifiers are honest, or at least transparent. For instance, if k=15, then Spreadcoin relies on the opinion of at least 8 verifiers, if a simple majority is adopted, and on the opinion of at least 10 verifiers, if a ⅔ majority is adopted.
Referring to the drawings, a plurality of verifiers 302-304 of the Spreadcoin system communicate with other players 306-308.
The verifiers 302-304 and the other players 306-308 may be implemented using any appropriate combination of computer hardware and software. In an embodiment herein, the verifiers 302-304 and the other players 306-308 are implemented using computer workstations, although other implementations are possible, including one or more of the verifiers 302-304 and the other players 306-308 being data sites containing multiple computers/processors, storage, etc.
Spreadcoin works in rounds where, at each round t, a verifier Vi operates as follows:
Phase 1
Vi obtains a number of payments relative to round t.
For instance, if P is such a payment, then it may be the payer that sends P to Vi, or causes P to be obtained by Vi for further processing, as the payer may want to make it clear in the system that the payer has indeed made the payment P. Alternatively, after receiving P, it may be the payee who sends P to Vi, or causes P to be obtained by Vi, as it is in the interest of the payee to make it clear in the system that the payee has been paid.
Phase 2
Vi determines which of the received payments of round t Vi considers valid.
In particular, in identifying the received valid payments of round t, Vi may disregard duplicate payments. To identify as valid a round-t payment P, made by a public key PK to another public key PK′ for an amount A (e.g., P=SIGPK(PK, PK′, #A, I)), Vi may conduct several checks. For instance, Vi may verify that PK is properly certified, if certificates are used in the system, verify that the digital signature relative to PK is correct, verify (e.g., by consulting the proper information within STATUSt−1) that the amount of money A is indeed less than or equal to an amount of money available to PK in the previous round, and verify that P was received at the proper time for round t and/or P's time information is appropriate for round t. More generally, if Vi receives multiple round-t payments made by a public key PK, then Vi may verify that the sum of the amounts of all such payments is less than the amount available to PK in the previous round. When, in a given round, Vi receives multiple payments made by PK whose total amounts exceed the amount previously available to PK, a variety of choices could be made by Vi. In particular, Vi may not include any payment made by PK among the valid received payments of round t. Alternatively, Vi may include only a subset of the payments made by PK, where the total amount of the subset does not exceed the amount of money previously available to PK. For example, Vi may include a longest subsequence of payments of PK (e.g., in lexicographic order) in which the total amount paid does not exceed the amount of money available to PK in round t−1, as sketched below.
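A minimal sketch (in Python) of one such choice follows, keeping, in the given order, the longest prefix of PK's round-t payments whose total does not exceed the amount PK held at the end of round t−1; the representation of payments as dictionaries is an assumption made only for the sketch.

    def select_affordable_payments(prior_balance, payments_of_pk):
        # payments_of_pk: round-t payments of a single key PK, assumed to carry valid
        # signatures and listed in the chosen (e.g., lexicographic) order.
        kept, spent = [], 0
        for p in payments_of_pk:
            if spent + p["amount"] > prior_balance:
                break                      # this and later payments would overdraw PK
            spent += p["amount"]
            kept.append(p)
        return kept

    kept = select_affordable_payments(100, [{"amount": 40}, {"amount": 50}, {"amount": 30}])
    # kept contains only the first two payments.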
Phase 3
Vi authenticates, preferably together, the round-t payments that Vi determines to be valid, and causes such payments to be available to at least another entity E, and preferably actually causes such payments to be made widely available.
Letting P1i, P2i, . . . be the round-t payments that Vi determines valid, Vi preferably computes and makes widely available PAYti=SIGVPKi(P1i, P2i, . . . , I), where I is any additional information deemed useful, such as information about the round t, time information, or no information.
By so doing, Vi helps a user U to determine which round-t payments are deemed valid by a given majority of verifiers. For instance, assume that there are 100 verifiers, and that each verifier Vi posts PAYti on a website of the verifier or on a common/public website of, for example, Google or Amazon. Then, U can easily reconcile the authenticated information so as to determine PAYt, essentially the round-t payments deemed valid by a given majority of the verifiers.
Note that U may be able to compute PAYt without obtaining PAYti from all verifiers i, and that U can be assured that a payment P is valid without obtaining PAYti from all verifiers i. For instance, assume that there are 100 verifiers, and that a payment P is valid if at least a simple majority of the verifiers deem P valid. Assume further that the payee of a payment P tries to cause P to be obtained by all verifiers, but that only 90 verifiers obtain P and authenticate P to be valid, and that U obtains PAYti from only 80 of these 90 verifiers. Then, U is still guaranteed that P is valid. Similarly, if U obtains PAYti only from, for example, 80 verifiers i and 60 of them authenticate P as valid, then U is guaranteed that P is valid. This robustness is useful, because occasionally some of the verifiers may not be able to communicate a determination for any number of reasons, such as the verifiers being disconnected from the network or the computers of the verifiers being temporarily out of order.
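The following is a minimal sketch (in Python) of how a user U might reconcile the lists PAYti obtained from the responding verifiers under a simple-majority rule; representing each list as a set of hashable payment records, and the threshold parameter, are assumptions made only for the sketch.

    from collections import Counter

    def reconcile_pay_t(pay_lists, threshold):
        # pay_lists: one set of payment records per verifier that responded (PAY_t^i).
        # threshold: number of verifier authentications required by the majority rule
        # (e.g., 51 when there are 100 verifiers and a simple majority is adopted).
        votes = Counter()
        for payments in pay_lists:
            votes.update(set(payments))
        return {p for p, n in votes.items() if n >= threshold}

    PAY_t = reconcile_pay_t([{"P1", "P2"}, {"P1"}, {"P1", "P3"}], threshold=2)   # yields {"P1"}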
Note also that a user U may not, at some round, be able to learn PAYt (e.g., because the user was disconnected from the network). In any case, since the information from the verifiers is made widely available, U could always catch up, for example, one or two rounds later. When U is unable to compute STATUSt−1, U may ask another user not to make a payment to U in round t but, instead, in a round later than round t. Alternatively, U may prefer not to rely on a payment P made to U by another user X in round t. For instance, if U was supposed to provide X with some merchandise or service in return for P, U may wait one or more rounds to do so, when U finally learns STATUSt−1. In sum, the system is robust enough that everyone can, albeit slightly later, compute PAYt.
Also note that, from PAYt and STATUSt−1, U can then readily compute STATUSt. In fact, if U had computed (or otherwise knew) STATUSs, for a round s<t−1, then, given PAYs+1j, . . . , PAYtj for sufficiently many verifiers j (values that have been made available anyway), the user could compute STATUSt.
Referring to the drawings, a flow diagram illustrates processing performed in connection with confirming payments for a round of the system described herein.
Following the step 406 is a test step 408 where it is determined if a majority of verifiers (as specified for a particular implementation) agree, as described elsewhere herein. If so, then control transfers from the step 408 to a step 412 where the round is confirmed (i.e., t is incremented). Following the step 412, control transfers back to the step 402 for another iteration. If it is determined at the step 408 that a majority of verifiers do not agree, as described elsewhere herein, then control transfers from the step 408 to a step 414 where the round is rejected (i.e., none of the transactions in the round are confirmed), as described elsewhere herein. Following the step 414, control transfers back to the step 402 for another iteration.
Optional Verifier Phase
Vi uses the values PAYtj of the other verifiers j to determine, authenticate, and make widely available PAYt. For instance, Vi posts SIGVPKi(PAYt). From all authenticated information available to Vi, Vi may also compute, authenticate, and make widely available STATUSt. For instance, Vi posts SIGVPKi(STATUSt).
Note that, while in Bitcoin valid transactions are organized in blocks, and in Spreadcoin valid transactions are organized in rounds, it is possible to ensure that an expected time T to generate a block equals a duration of a round. However, an amount of computation expected during T is by design very large in Bitcoin and very modest in Spreadcoin. Note that, for convenience, in Phase 3, Vi authenticates the payments Vi considers valid all together, by means of a single digital signature. It is also possible, however, to have Vi authenticate the payments separately (e.g., one by one) using multiple digital signatures. Also, the payments that Vi obtains in Phase 1 could be all round-t payments, or part of the round-t payments. In particular, Vi could handle a given class of the round-t payments. For instance, Vi could handle round-t payments whose payer, or payee, belongs to a given set or is somehow associated with Vi. More generally, it is possible to apply a given function to a payment P in order to determine whether Vi should handle P. The time or the amount of a payment P may determine whether Vi handles the payment P, and so on. No matter what the case may be, if some verifiers handle only some of the payments, whether a payment P is valid may depend on the opinion of only the verifiers handling P, rather than all verifiers.
Optional Use of Facilitators
Spreadcoin may also use one or more special entities E, such as a Bank or a large firm such as Google or Amazon, to make it even easier for an ordinary user U to learn the valid transactions or the system status at a round t. In particular, in the system described herein, for each round t after the above phases, such an entity E could execute the following:
Transparent Facilitator Phase
Using the authenticated round-t information made available by sufficiently many (although not necessarily all) verifiers, E may compute, authenticate, and make widely available PAYt, STATUSt, or both, or other information deemed useful, either separately, together, or in combination.
Alternatively, E may relay, possibly re-organizing, combining, authenticating, or further authenticating, the round-t information of at least some verifiers.
Alternatively yet, E may compute, possibly authenticate, and cause to be made available information that, together with information made available by at least another entity E′, enables the computation, possibly in authenticated form, of at least one of: PAYt, STATUSt, both, or a combination thereof. For instance, E may post SIGE(PAYt, I), SIGE(STATUSt, I), SIGE(PAYt, STATUSt, I), SIGE(PAYt1, . . . , PAYtk, I), or SIGE(STATUSt, PAYt1, . . . , PAYtk, I), where I is any additional information, such as information about t, time information, or no information.
For instance, if there are 100 verifiers, a user (e.g., a new user) U may learn the information about a given round t directly from E, and in an authenticated form, without having to obtain relevant round-t information from sufficiently many verifiers. For instance, U may learn STATUSt from SIGE(STATUSt), or PAYt from SIGE(PAYt). Notice that E need not be trusted very much, because E is a transparent player. Indeed, E publicizes and authenticates the round-t information that E produces, and a player can use current and past verifier-authenticated information, which is made available anyway, to check that the information provided by E is correct. If the information provided by E is not correct, then E itself would be providing a proof of dishonesty by E. Thus, if E is a bank or a player with significant assets, E would have a lot to lose by being caught misbehaving. It is also possible to provide rewards to someone who catches E acting dishonestly.
Generally, the verifiers are responsible for determining which payments are valid, but, without being able to lie with impunity, E may facilitate dissemination of the determinations made by the verifiers. In some instances, separate entities may take the role of E. In this case (separately from possible fines), the current status may be determined from the information provided by each entity E. For instance, an entry (PKj, #Aj, Ij) may belong to STATUSt if and only if, for a given majority of the entities E, the entry belongs to the (preferably authenticated) round-t status reported by E. Note also that a verifier itself can take the role of E.
(In an embodiment herein, only one payment per player per round is allowed. This can be validated at any phase during the round by any of the players, verifiers, or by the facilitator. For instance, if an entity detects that a cheating player is trying to perform two or more transactions in a round, then the entity signals a special message to everyone in the system indicating the possible cheating. The special message may include the two payments from the cheating player, or any other information that can be used to prove the cheating attempt. The cheating player may be punished by penalties, or suspended from the system. In this case, the report STATUSt may be updated accordingly to reflect the punishment/penalties for all players involved in the cheating transactions—i.e., the cheating player and corresponding receiver(s) from the cheating player.)
With respect to efficiency, as with Centralcoin, the system Status generated by Spreadcoin is also relatively compact. Handling 650 transactions every 10 minutes, and 275,000 keys (as in Bitcoin), is not a problem for Centralcoin. Even with a million public keys and one million transactions every minute, Centralcoin would be much preferable to Bitcoin's public ledger.
With respect to security, it is noted that an individual verifier Vi in Spreadcoin need not be trusted. Indeed, Vi cannot fake from scratch a payment P of a different user. Each payment, in fact, must be digitally signed relative to the public key PK of a payer, and thus requires knowledge of the corresponding secret key SK, which only the owner of PK should have. For the same reason, Vi cannot even alter the amount, the payee, the time, or anything else about a legitimate payment P. One issue that may prevent Vi from being totally transparent is that (unless the previously discussed Simultaneous Electronic Transactions technology is used) Vi can avoid recognizing a legitimate payment P: e.g., by not including the payment in PAYti. Even if it is noticed that P is recognized by other verifiers but not by Vi, Vi could always credibly claim not to have received P. Such lack of total transparency is not, however, serious given the way the system operates. If sufficiently many other verifiers j include P in PAYtj, then by reconciling all the verifier data and taking the proper majority, anyone can correctly reconstruct PAYt, that is, all the legitimate payments of round t.
In particular, if a sufficient majority of the verifiers recognize legitimate payments, then the malicious verifiers, even if they collude and perfectly co-ordinate themselves, cannot subvert Spreadcoin.
In addition, as described in more detail elsewhere herein, rather than just relying on the abstract honesty of the majority of the verifiers, Spreadcoin may properly incentivize each verifier to recognize as legitimate all legitimate payments.
Spreadcoin also provides resilience to protect against verifiers that stop working. For instance, some verifiers may become disconnected from the network, and thus unreachable and unable to perform functions of the verifiers, or sites of some verifiers may be brought down by an adversary who wishes to subvert Spreadcoin. To subvert Spreadcoin, when simple majority is needed, an enemy would have to successfully bring down more than half of the sites, e.g., 6 of 11 sites, if the number of verifiers is k=11, or 51 out of 101, if k=101.
Spreadcoin can be enhanced by the use of facilitators. As discussed elsewhere herein, one or more entities E may be used, after the verifiers act at a given round, in order to facilitate the use of the system. Such facilitating entities may also help at the beginning of a round. For instance, to avoid sending a payment to multiple verifiers, a player may just send the payment to a facilitator E, who then distributes the payment to the verifiers, or posts the payment where the verifiers can pick it up. Of course, the players can always deal directly with the verifiers, if the facilitator(s) do not work satisfactorily. The same entity E can actually facilitate the system at the beginning and at the end of a round.
It is also possible to use a weighted majority. In the examples of majority discussed above (whether simple majority, ⅔ majority, and so on), the verifiers/verifier keys have been treated equally. However, a majority could also be a weighted majority, where it is possible to give more weight to some verifiers/verifier keys than to others. For instance, if there are 100 verifiers/verifier keys and simple majority is used, then it is possible to weight results from a particular verifier/verifier key, V/VPK, ten times more than the other verifiers, so that, while a given round-t payment P may be considered valid if 51 of the verifiers/verifier keys consider P valid, P may also be considered valid if V/VPK and some other 41 verifiers/verifier keys declare P valid. A verifier key considers a payment P valid if P is authenticated as valid relative to the verifier key.
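A minimal sketch (in Python) of such a weighted-majority test follows; the mapping representation and the choice of threshold are assumptions made only for the sketch.

    def weighted_majority_valid(votes, weights, threshold):
        # votes: verifier id -> True if that verifier authenticated payment P as valid.
        # weights: verifier id -> weight (e.g., 10 for V/VPK and 1 for every other verifier).
        # threshold: total weight required for P to be considered valid (e.g., 51).
        weight_for = sum(weights[v] for v, ok in votes.items() if ok)
        return weight_for >= threshold

    weights = {"V": 10, **{f"V{i}": 1 for i in range(1, 100)}}
    votes = {"V": True, **{f"V{i}": i <= 41 for i in range(1, 100)}}   # V/VPK plus 41 others
    assert weighted_majority_valid(votes, weights, threshold=51)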
As discussed elsewhere herein, in Spreadcoin, a dishonest verifier Vi cannot fake or modify legitimate payments, but could purposely fail to include in PAYti a valid payment P. If sufficiently many verifiers did so for a payment P made by a given public key PK, P will not enter PAYt and the payer would have to make the payment again in the next round, if the payer so chooses. But if sufficiently many verifiers consistently do this at each round, then, although the owner of PK will not lose his money, he will lose the ability to spend it (and might need to go to court to recover it). It is thus useful to ensure that, without involving courts or other expensive or inefficient processes, a verifier Vi is disincentivized from not identifying a valid round-t payment P. To this end, it is possible to reward Vi for work done by Vi, as follows:
Phase 4
Vi obtains a reward Rti that may depend on (a) the payments identified by Vi and/or other verifiers to be valid, and/or (b) other quantities Q, such as t, i, quantities of other rounds, etc. or no quantities.
For instance, letting At be the sum of the amounts of all payments in PAYt or in PAYti, Vi may get a percentage—e.g., 1%—of At. Thus, by not including a valid payment P in PAYti, Vi risks decreasing At and thus risks decreasing the reward for Vi. In particular, the total reward for all verifiers at a given round may be a given fraction c of At, and the reward due to verifier Vi may be c·At/k if there are k verifiers and the verifiers are treated equally. Alternatively, the reward of Vi may be a different portion of a total reward due to all of the verifiers.
It is also possible to use rewards that may cause Vi to incur a risk of receiving much less money even when Vi omits to identify as valid a single payment relative to a small amount of money. For instance, the reward of Vi may consist of a fraction ci·At if every payment in PAYt also appears in PAYti; of a strictly smaller amount if exactly one payment in PAYt does not appear in PAYti; of a yet smaller amount if exactly two payments in PAYt do not appear in PAYti; and so on. In any case, the reward of Vi at a round t may be determined so as to cause Vi to incur the risk of receiving strictly less money whenever Vi omits recognizing as valid a valid transaction, possibly not only in a given round, but also, for example, in future rounds.
The system provides automatic reward payments. No matter how rewards may be computed, a verifier Vi may receive a reward in any way deemed appropriate, such as via a separate payment made by some entity E. Alternatively, as an ordinary user of the system, Vi may have a public key PKi, capable of holding money and paying money to others, that may be separate from a public key Vi uses to digitally sign the payments Vi believes valid. In this case, a current Status of the system may contain information about the amount of money available to PKi: for instance, the current Status may contain an item of the form (PKi, #a, I). Thus, there is no need to rely on an external payment in order to reward Vi, nor on a separate payment to PKi within the system. For instance, it suffices that everyone automatically updates the amount of money PKi holds in, for example, STATUSt or in a given later round, so as to reflect the reward due to Vi for work by Vi in round t. For instance, assume that STATUSt−1 contained the item (PKi, #a, I), that the reward of Vi for round t simply consists of a fraction c of At, and that Vi does not use PKi to make any payment in round t. Consider any party X that uses the verifier-digitally signed quantities PAYt1, . . . , PAYtk in order to compute STATUSt from STATUSt−1. In such a situation, X can trivially compute c·At from PAYt and can trivially replace the entry (PKi, #a, I) in STATUSt−1 with (PKi, #a+c·At, I) in STATUSt.
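A minimal sketch (in Python) of such an automatic update follows, crediting the verifier key PKi with c·At when STATUSt is computed from STATUSt−1 and PAYt; the dictionary representation and the value c=0.01 are assumptions made only for the sketch.

    def credit_verifier_reward(status, verifier_pk, pay_t, c=0.01):
        # Any party computing STATUS_t can perform the same deterministic update,
        # so no separate payment to PK_i is needed.
        a_t = sum(p["amount"] for p in pay_t)            # A_t: total of the valid round-t payments
        updated = dict(status)
        updated[verifier_pk] = updated.get(verifier_pk, 0) + c * a_t
        return updated

    STATUS_t = credit_verifier_reward({"PK_i": 5}, "PK_i", [{"amount": 100}, {"amount": 300}])
    # STATUS_t["PK_i"] is now 5 + 0.01 * 400 = 9.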
The system provides for budget balance rewards. The automatic reward system described above generates some “inflation”, in that, at each round, the reward system causes the total amount of money available to all public keys in the system to increase. However, such “inflation” can be avoided by having the rewards due to the verifiers be paid by some/all/other users in the system. Such payments may also be automatic. For instance, assume, for simplicity, that the total reward of all verifiers Vi is 1% of the total amount of the valid payments of round t. Then, inflation may be avoided if, for each valid round-t payment P of an amount a, an additional payment of 1% of a is made to the verifiers by the payer or by the payee of P (or in part by the payer, and the balance by the payee). To make this payment automatic, assume that P is made at round t from a public key PK to another public key PK′. Then an additional 1% of a may be automatically subtracted from PK in STATUSt by whoever computes STATUSt. Alternatively, 1% of a may be subtracted automatically from PK′; or part of this amount can be subtracted from PK, and the balance from PK′. Alternatively yet, the payment, automatic or not, may be spread across other entities who have an interest in keeping the system inflation-free.
The system provides for rewards from retailers only. In some instances, having the payee (or the payer) of a payment P contribute a reward to the verifiers, or to some of the verifiers, may be viewed as a transaction fee. Paying a transaction fee may be acceptable to some users, such as retailers, but not to other users. For instance, an ordinary user U wishing to transfer $100 to another ordinary user U′ may find it unacceptable to pay $1 to the verifiers; and similarly U′ may find it unacceptable to receive de facto only $99. Accordingly, the rewards of the verifiers may be computed on the amounts paid to retailers, and come from the retailers themselves. For instance, if a public key PK is certified, then a certificate of PK can also specify (e.g., within the information field I) that PK indeed is owned by a retailer. This way it may be relatively easy to recognize whether a payment P is made to the public key of a retailer, and thus whether the verifiers can claim some fee on this transaction. In contrast, when a payment is from an ordinary user to another ordinary user, neither the payer nor the payee may be required to pay a reward to the verifiers. In principle, therefore, it is possible for a verifier Vi to avoid including such payments in PAYti without incurring any monetary loss.
It is possible, however, to determine the reward of a verifier Vi at round t from the quantities specified in Phase 4, so that Vi is incentivized to report as valid every round-t payment P that is indeed valid, whether or not, for example, the payee is a retailer. For instance, if all k verifiers are treated equally, and if the rewards of the verifiers are (automatically) paid by just the retailers, and the total rewards due are 1% of RAt (where, considering all payments in PAYt, RAt is the sum of all amounts paid to retailers), then it suffices for the reward of Vi to be, for example, (1%·RAt/k)·(Tti/Tt), where Tti is the total number of payments in PAYti that also appear in PAYt, and Tt is the total number of payments in PAYt.
Thus, although only the retailers de facto pay the reward of Vi, Vi risks losing money if Vi does not include in PAYti all the round-t payments Vi deems valid, including payments from ordinary users to ordinary users. To make it even riskier to omit a single payment, no matter to whom, the reward of Vi could be chosen to be a full share if every payment in PAYt occurs in PAYti; a strictly smaller share if exactly one payment in PAYt does not occur in PAYti; a yet smaller share if exactly two payments in PAYt do not occur in PAYti; and so on.
Democoin is a variation of Spreadcoin in which the verifiers of a given round are chosen at random, specifically by means of the following:
For each round t, the set of actual verifiers is randomly selected from a possibly larger set of potential verifiers by a selection process which is preferably closeness-preserving, that is, by a selection process that generates (sub)sets close to each other from sets close to each other.
Note that the term "random", as used herein, should be understood to include "sufficiently random" or "pseudo-random". Similarly, the term "randomly" may also be understood to include "sufficiently randomly" or "pseudo-randomly".
Two sets A and B are close to each other (or essentially identical) if the elements in A but not in B are relatively few with respect to the elements in both A and B; and vice versa for B. For instance, the set of all numbers between 1 and 1,000 and the set of all numbers between 10 and 1,012 may be considered close to each other. Of course, two identical non-empty sets are always close to each other. Assume, for a moment, that each user may have a single public key in the system. Then, identifying a key with a user of the key, it is possible to view AV as the set of all actual-verifier keys and PV as the set of all potential-verifier keys. The concept of a verifier being honest is discussed elsewhere herein. Notice that if a given majority of PV is honest and AV is randomly selected from PV, then, with high probability, when AV is large enough, a given majority of AV is also honest.
Closeness. It is useful to the proper running of the Democoin system that the valid payments of a given round should be correctly determined and agreed upon. Since such payments are determined by a given majority of the actual verifiers, if the majority of AV are honest and all users agree on AV, then the system will run properly. Note that, if the majority of PV are honest, then, with high probability, so are the majority of AV, when the number of actual verifiers is sufficiently high, because AV is randomly selected from PV. Also note that it is not necessary that all users agree on AV. The system runs well even if each user U has his own set AVU that the user believes to be AV, provided that each such AVU is close to AV, or that all AVU are close, since any one of them could be considered to be AV. In essence, this is so because, with very high probability, if the overwhelming majority of PV are honest, then so are a given majority of AV and of any subset of PV close to AV.
One way to ensure that each AVU is close to AV (or that all AVU are close) is to ensure that the set PV is fixed, and have each user U use a closeness-preserving selection process to select his own AVU from PV. Another way, as explained elsewhere herein, is to choose the set PV so that, even though PV may vary over time, and even though each user U may honestly conclude that his own set PVU is the set of potential verifiers, it must be that PVU and PV are close (or that all sets PVU are close), so that all sets AVU are also close.
Incentives. As Democoin consists of Spreadcoin with a special way of selecting the verifiers, the same reward schemes discussed for Spreadcoin can be used in Democoin to ensure that, at a round t, the actual verifiers authenticate as valid all the round-t payments that are seen by the verifiers, except perhaps for inoperative verifiers, such as verifiers that become disconnected from the network, or verifiers whose computer(s) malfunction, etc. Accordingly, an overwhelming majority of verifiers of a round t can be expected to report correctly the round-t valid payments.
Referring to the accompanying diagram, players 502-507 are shown coupled to a network 508.
As with Spreadcoin, the players 502-507 may be implemented using any appropriate combination of computer hardware and software. In an embodiment herein, the players 502-507 are implemented using computer workstations, although other implementations are possible, including one or more of the players 502-507 being data sites containing multiple computers/processors, storage, etc. Some or all of the players 502-507 may be physically co-located or located in different physical locations. The system described herein allows the players 502-507 to conduct financial transactions, without the need to be in the same location or to rely on a central authority, by communication between the players 502-507 (and possibly others) through the network 508.
Democracy and Budget Balance. Note that Democoin is very democratic in spirit. By choosing the set of potential verifiers to coincide with the set of all users, and by selecting, at every round, the actual verifiers at random among all users, everyone has a chance to serve as a verifier. This is similar to what happens in other civic tasks, like jury duty. Moreover, by choosing carefully the ratio of actual verifiers to all users (e.g., 1/1,000), every user will actually become a verifier quite a few times each year. Note that acting as an actual verifier is not much of a burden. Indeed, the verification procedure is quite automatic and can be carried out in the background, by a computer or cell phone of a user, without troubling the user at all. In addition, an actual verifier is monetarily incentivized to properly verify all the payments submitted to the verifier in a round. For example, the actual verifiers may be rewarded with 1% of the total amount of payments that is validated. Note that this aspect of Democoin is quite different, and in a more democratic direction, from what happens in a traditional system. For instance, in a credit-card system, the users (e.g., the merchants) must pay 2% or 3% of the money paid to them to outside entities responsible for overseeing the validity of the payments. By contrast, in Democoin, the users pay 1% of the money transferred to each other, and any one of the users has a chance of becoming an actual verifier at every round. For example, if the reward due to the verifiers for a given payment comes from the payee, and a user receives on average the same amount of money that the user pays, then, in expectation, no user in Democoin loses any money due to the system rewards.
Security. If the same set of initially honest people are always "in power", the temptation to collude may become too strong to resist. This, however, is not the case in Democoin. First, as discussed elsewhere herein, an actual verifier cannot fake a payment by another player (because the actual verifier does not know the private key of the player). Second, an actual verifier cannot declare valid an invalid payment with impunity. In fact, a declaration of a verifier is digitally signed, and so are all payments, valid or not. Thus, by declaring valid an invalid payment, a verifier must produce digitally signed and public evidence of his own guilt. Third, honest behavior of an actual verifier i at a round t is financially rewarded, and so it is not in the interest of the verifier to omit a valid payment from PAYti. Fourth, the system relies on the opinion of a given majority of the verifiers, and as long as, for example, 80% of all users are always honest, by selecting sufficiently many verifiers (e.g., 100) at each round, one is guaranteed that, with very high probability, the majority of the actual verifiers of a given round are honest. Finally, the set AV of the actual verifiers of a given round is randomly selected and becomes known at the start of the round. Thus, it is essentially useless for a dishonest verifier, upon learning the identity of the other members of AV, to try to convince the other members to collude with the dishonest verifier. The members of AV will not have the time needed to conspire; after, for example, just 10 minutes, a different random set of actual verifiers will be in charge of processing the payments of the next round.
One may try to subvert the Democoin system by packing the set of potential verifier keys with public keys of malicious actors. Several ways exist, however, to protect against this potential attack. One way is to charge an entry fee, or a (pro-rated) yearly fee, for each public key in the system. Such a fee could be paid outside the system or within the system (e.g., via an automatic payment from each public key to a given key—or some given keys—in the system). This way, packing the set of public keys with artificially numerous and potentially controllable keys would be very costly.
A second way is having the probability of selecting a public key PK to become an actual verifier key at some round t depend also on the amount of money that PK has at, for example, the start of the round, that is, on the second field of the item (PK, #A, I) in STATUSt−1. This way ensures that a user does not gain an advantage, for being selected as an actual verifier, by having many public keys in the system, because the probability of being so selected solely depends on the total amount of money the user has associated with all his public keys. The user thus may get the same probability by keeping all his money in the system associated with a single key. Note that this way users who are more heavily invested in the system also have more responsibility in running it (perhaps a virtue from some points of view).
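A minimal sketch of one possible stake-weighted selection rule follows; the exact rule is not fixed by the text above, and the choice of SHA-256 for the hash and of a threshold scaled by the key's balance are illustrative assumptions.

import hashlib

def hash_fraction(pk: bytes, v_t: bytes) -> float:
    """Interpret H(PK, v_t) as a number in [0, 1)."""
    digest = hashlib.sha256(pk + v_t).digest()
    return int.from_bytes(digest, "big") / 2**256

def selected(pk: bytes, amount: float, total_money: float, k_target: int, v_t: bytes) -> bool:
    # Each unit of money contributes probability k_target / total_money, so a user's
    # overall chance depends only on total money owned, not on how many keys it is split
    # across; the expected number of selected keys is about k_target.
    p = min(1.0, k_target * amount / total_money)
    return hash_fraction(pk, v_t) <= p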
A third way is having at least one special entity, deemed the verifier registration authority(s) (VRA), who certifies (anonymously or not) the public keys eligible to be selected as verifier keys. In this case, the VRA may easily ensure that each user has at most one key PK that can become a verifier key. This way, it becomes harder to pack the potential verifier keys with easily controllable keys. For instance, the VRA may ask a registrant to provide a proof of identity (and possibly insert some indication of the identity) in the certificate for the public key PK. Alternatively, the VRA could, from time to time, authenticate a single list of public keys eligible to become verifier keys.
Note that, when the number of players is large, if some of the above ways are adopted, subverting Democoin by controlling the majority of the potential verifier keys is very hard. (It would be much easier to subvert Bitcoin by controlling the set of miners, which already belong to a few consortia—"mining pools"—anyway.) Finally, another possibility is to have a mixture of verifiers, for example: (a) a fixed set of verifiers (possibly none); (b) a set of dynamically selected verifiers (possibly none); and (c) a set of registered over-time verifiers (possibly none).
Target Number of Actual Verifiers. As an example, but without any limitation intended, let k denote a target number of actual verifiers at a given round. The target number may be fixed or change from round to round, and may be approximate. For instance, k may depend on quantities such as the (possibly approximate) round number, the (possibly approximate) total amount of money in the system, the (possibly approximate) total number of users/public keys in the system, etc. For instance, the system may have 100 verifiers when the number of users, or the number of public keys, is around 100,000. Using approximate numbers in such dependency may be useful to make it clear what the target number of verifiers is even to those who are imperfectly informed (e.g., players who know only the system status of a few rounds before the current one).
Communications with Actual Verifiers. As with the verifiers of Spreadcoin, the actual verifiers of a round t of Democoin receive and send information. For instance, an actual verifier i needs to receive the payments of round t, and communicate PAYti and possibly STATUSti, both computed by verifier i. Communicating with a few, fixed verifiers may be trivial, because, for example, the network addresses of the verifiers may be publicly known. However, when a set of potential verifiers is very large, and when the set grows with time (as when the set of verifier keys consists of the set of all public keys in the system), determining how to communicate with a just-selected verifier may be somewhat inconvenient. Learning the composition of AV is one thing, and knowing how to communicate with the members of AV is another. To facilitate such communication it is possible to use an intermediary entity E who is in a better position to know how to reach each potential verifier. For instance, a user may send a given round-t payment P to E, who then forwards P to each actual round-t verifier i. Also, each actual verifier i may post in a widely available website PAYti or STATUSti computed by verifier i, and a user can then retrieve the posted information from the website, and verify authenticity of the information after the user learns the identity of the actual verifiers.
Alternatively, as discussed elsewhere herein, each record in STATUSt−1 consists of a tuple of the form (PK, #A, I), and thus the owner of PK might choose to include in the information field I information relative to how to communicate with the owner of PK: for instance, the URL at which PAYti and STATUSti can be found whenever PK becomes the key of verifier i. Also note that the information field I can be used to indicate whether PK is eligible to be selected as a verifier key.
Initial Implementations. The set AV at a given round t can be derived from PV in a pre-determined manner from a random value vt associated with the round t. In particular, vt may be a natural and public random value, meaning that vt is the widely available result of a random process that is hardly controllable by any given individual. For instance, vt may consist of the temperatures of various cities at a given time (e.g., at the start of round t, or at a given time of the previous round), or the number of shares of a given security traded at a given time at a given stock exchange, and so on, or a combination of such quantities q1, . . . , qm, or a combination of such quantities and other quantities of the system, such as the current round t. For instance, vt=H(q1, . . . , qm), or vt=H(t, q1, . . . , qm), or vt is the concatenation of H(t, q1, . . . , qm, 1), H(t, q1, . . . , qm, 2), . . . , H(t, q1, . . . , qm, s), where H is a collision-resistant hash function.
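A minimal sketch of such a derivation of vt follows, assuming SHA-256 as the collision-resistant hash H and purely hypothetical public quantities.

import hashlib

def round_randomness(t: int, quantities) -> bytes:
    # v_t = H(t, q1, ..., qm): fold the round number and the public quantities into one hash.
    h = hashlib.sha256()
    h.update(str(t).encode())
    for q in quantities:
        h.update(str(q).encode())
    return h.digest()

v_t = round_randomness(42, [17.3, -2.0, 1_250_000])  # hypothetical temperatures and traded volume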
One way to derive AV from vt in a pre-determined manner is the following: consider vt as a string of bits and let PV consist of a sequence of 2n potential verifiers; then the first log n bits of vt specify the first verifier (by specifying his corresponding number in the sequence PV), the second log n bits specify the second verifier, and so on. Note that, in the above way, anyone who knows vt will end up selecting the same set of actual verifiers AV, and that AV will be randomly selected from PV because vt is random. Also note that the above way chooses a number of verifiers that is approximately k, since some verifier may be chosen twice, as, for some i and j, the ith and jth log n-bit segments of vt may coincide. For instance, if one aims at selecting at random 100 actual verifiers from 100,000 potential verifiers, then it is possible to end up with just 96 verifiers, but this would be good enough. The probability of ending up selecting very few verifiers would be extremely small.
A more general way to derive AV from vt includes the steps of (1) obtaining in a pseudo-random manner a string R from vt—e.g., letting R=PRG(vt), where PRG is a pseudo-random generator—and then (2) deriving AV from R (and/or PV). For instance, letting H be a collision-resilient or one-way hash function, R can be generated from vt by concatenating, for example, the following j strings: H(1, vt), H(2, vt), . . . , H(j, vt). This way, the random value vt could be short, even though one needs a longer string R to derive AV.
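The following sketch combines the two ideas above: vt is stretched into a longer string R by concatenating hashes, and successive blocks of bits of R are read as indices into the sequence PV. SHA-256 stands in for H, and the modulo reduction used to handle a PV whose size is not a power of two is an illustrative assumption.

import hashlib

def stretch(v_t: bytes, blocks: int) -> bytes:
    # R = H(1, v_t) || H(2, v_t) || ... : a longer pseudo-random string derived from v_t.
    return b"".join(hashlib.sha256(str(i).encode() + v_t).digest()
                    for i in range(1, blocks + 1))

def derive_av(pv: list, v_t: bytes, k: int) -> set:
    n = max(1, (len(pv) - 1).bit_length())   # bits needed to index PV
    r = stretch(v_t, (k * n + 255) // 256)   # enough pseudo-random bits for k indices
    bits = int.from_bytes(r, "big")
    av = set()
    for _ in range(k):
        index = (bits & ((1 << n) - 1)) % len(pv)
        av.add(pv[index])
        bits >>= n
    return av  # duplicates collapse, so the number of verifiers is approximately k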
Additional Implementations. An alternative approach to selecting AV from PV involves one or more distinguished entities, referred to as trustees. If there is a trusted party T, then T, acting as the single trustee, may randomly choose the actual verifiers V1, . . . , Vk of a round t, and make the set AV clear to all, preferably in authenticated form, at a prescribed time (e.g., slightly before or at the start of the round). For instance, T may post, or cause to have posted, SIGT(AV) in a widely accessible website. This makes it clear who the actual verifiers of a given round are.
In a less direct, but still very clear way, T may choose a random value vt, from which AV is derived in a pre-determined manner (possibly by first producing a string R from vt, and then AV from R), and make vt widely available, preferably in an authenticated manner. For instance, T may post SIGT(vt) in a proper website, so as to enable others to retrieve vt and then derive AV from vt in a pre-determined manner.
If T is not trusted, however, vt may not be selected at random, and thus the actual verifiers may not be randomly selected. To avoid this problem, the value vt may be taken to be a natural public random value associated with the round t. For instance, if T publicizes SIGT(vt), then everyone immediately learns what vt is, and thus immediately computes AV, and then can check whether vt was indeed correct, by checking whether the agreed random process indeed produced vt. For instance, vt may be produced by checking what the number of shares of a given security traded at a given time in a given stock exchange was. Accordingly, T need not be trusted much. By acting dishonestly, T publicly incriminates himself by producing his own digital signature of the wrong value. Should this happen, T can be punished or fined, and the player who reports T's misbehavior may be rewarded. This system may work well when it is easier for at least some users to learn the value SIGT(vt) from a given website than to ascertain the actual value of vt.
Alternatively, vt may be produced by T in a way that does not enable T to have full control over the actual value of vt. For instance, vt may consist of SIGT(t), that is, vt may consist of the digital signature of T for the current round t, or of SIGT(t, I), or of SIGT(I), where I is some easy-to-determine information (including no information), and the underlying signature scheme used by T is deterministic. In fact, the digital signature of T may be de facto unpredictable to anyone but T, because only T knows the secret signing key associated with the public verification key of T, and thus, in particular, vt=SIGT(t) is sufficiently random, and also sufficiently authenticated. More generally, when the information I is not controllable or cannot be influenced by T and vt=SIGT(I), T cannot "shop around" for the vt that T wants, and thus, since AV is derived from vt in a pre-determined manner, T cannot shop around for the set of actual verifiers AV that T wants.
The mechanism discussed above, however, leaves open some possible cheating by a malicious T. Assume for simplicity that vt=SIGT(t). Then, T can compute his own digital signature of some future round t in advance. Accordingly, although T may not be able to choose AV as T pleases, T may: (a) realize what value AV will take in the future, (b) realize which players will be the verifiers of future rounds, and (c) alert a set of players that they will indeed be the verifiers of a given future round. If this is the case, the verifiers of some future round t may be given plenty of time (rather than just—say—10 minutes) to collude with each other. To protect against this possibility, it may be better to choose vt=SIGT(t, I), ensuring that I includes the result of a random process at time t, or close to t.
Further Implementations. Another alternative is to rely on multiple parties, T1, . . . , Tj, trusting that at least one of them is honest. After each party Ti computes a value vti, in any one of the ways described above, the value vt could be taken to be a predetermined combination of the various values vti (possibly with some additional information I). Note that it may still be possible for one party Ti to control arbitrarily the final value vt, even when all values vtx, for x≠i, have already become known. For instance, assume that each party Ti is allowed to choose vti without any constraint, and that the final vt is the sum, modulo some large integer N, of all the vti. Then, if party Ti announces his own value vti after Ti learns the values announced by all other parties, then Ti can choose vti so as to force vt to be any value mod N that Ti wants. Preferably, a combination of multiple parties will prevent one party Ti from being able to control arbitrarily the final value vt, even when all values vtx, for x≠i, have already become known. For instance, if vti=SIGi(t) for all i, then one can let vt=H(vt1, . . . , vtj, I), where H is a collision-free hash function.
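A minimal sketch of this combination step follows, assuming SHA-256 for H; the trustee values are supplied as opaque byte strings (e.g., deterministic signatures of the round number).

import hashlib

def combine_trustee_values(trustee_values, extra_info: bytes = b"") -> bytes:
    # v_t = H(v_t_1, ..., v_t_j, I): a trustee who announces last cannot steer the hash output.
    h = hashlib.sha256()
    for v in trustee_values:      # each v is some trustee's contribution, as bytes
        h.update(v)
    h.update(extra_info)          # optional additional information I
    return h.digest()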
Notice, however, that one of the Ti's might be unable, at some round t, to produce a (properly authenticated) value vti. For instance, Ti's computer may not be working, or Ti may be disconnected from the network, or otherwise unable to convey vti. When this is the case, the combined value vt, and thus the derived set of actual verifiers, may not be computable. To prevent this possibility, each vt could be produced so that not only is it sufficiently random and easy to verify, but it is also robust, in that, as long as a majority of the Ti's function properly, vt is easily and always computable and verifiably unique. For instance, it is possible to use multi-party secure computation, or threshold signatures, in order to produce and publicize a public verification key PK of a given deterministic digital signature scheme, and to ensure that each party Ti knows a "piece" of the matching secret signing key SK. This way, letting, for example (but without limitation), vt=SIGPK(t, I), it is possible to ensure that, for all λ<j: (1) vt can be easily computed with the help of any λ+1 parties Ti, and (2) vt is essentially unpredictable to any λ or fewer of the parties Ti. This way, as long as more than λ of the above parties are honest/work properly, everyone will correctly learn vt, and vt will continue to be unpredictable until round t (or until around round t).
Verifiers preferably authenticate information about payments via digital signatures using public keys of the verifiers, which are identified with owners of the public keys. It is possible to select, at a round t, a random set of actual verifier keys, AV, from a set of potential verifier keys, PV. When PV and vt are common knowledge (e.g., when the set PV is fixed), all users can derive the same random set AV, or close sets, of actual verifiers. For instance, AV can be taken to be the subset of PV determined by H(PV, vt), where H is a collision-resilient hash function. However, matters may be more complex when PV is not common knowledge, and different users may have different ideas of what PV is. For instance, assume that PV at a round t is taken to consist of all public keys currently in the system (or all public keys appearing in STATUSt−1, or all public keys in STATUSt−1 identified as potential verifier keys). Then, PV may continue to grow over time, as new keys are continually added to the system. At the same time, letting t be the current round, a user U1 (e.g., because U1 could not monitor the system for a while) may only know the status of the system of "5 rounds ago", STATUSt−5, while another user U2 knows STATUSt−1. Accordingly, U1 may believe that the set of potential verifier keys is PV1, while U2 may believe that the set of potential verifier keys is PV2, where PV2 contains, for example, 1% more keys than PV1. Then, even though PV1 and PV2 essentially coincide, the bits in H(PV1, vt) and H(PV2, vt) may differ in 50% of the positions, because a collision-resilient function H is sensitive to each bit of its input. Thus, the sets of actual verifier keys according to the two users, AV1 and AV2, may be dramatically different, so that the majority opinion in AV1 has very little to do with the majority opinion in AV2. This possibility is avoided, as discussed elsewhere herein, by choosing a selection procedure that, properly carried out by different users, produces random sets of actual verifiers that are close to each other.
For example, a robust selection procedure may assume, without loss of generality, that PV consists of all public keys in the system, that H is a (preferably collision-resilient) hash function, and that PK is a public key in the system. PK may become a verifier key at round t if and only if, for some given number p between 0 and 1, the following is true:
.H(PK, vt)≦p
Assume, without loss of generality, that H's outputs are 256 bits long. Then H(PK, vt) de facto is a random 256-bit number. Thus, putting a decimal point in front of this number, it is possible to interpret ".H(PK, vt)" as the binary expansion of a random number between 0 and 1, and thus let PK become a verifier key if and only if .H(PK, vt) is less than (or equal to) p, which happens with probability p. If it is desirable that one in one thousand keys become an actual verifier key at round t, then p may be set equal to 1/1,000. In such a case, if there are 100,000 public keys in STATUSt−1, then it is expected that 100 of the keys will be determined to be actual verifier keys. Assume that STATUSt−1 is known to a second user, while a first user only knows the status of 4 rounds before, when 1,000 fewer keys were present. Then, the expected number of verifier keys is 99 for the first user, and 100 for the second user. In any case, the sets of verifier keys essentially coincide for both users. In fact, if a key PK appears in STATUSt−4, the key PK also appears in STATUSt−1. Moreover, if PK is a verifier key according to the first user, then .H(PK, vt)≦p, and thus PK is also a verifier key according to the second user, because the values vt and p are the same for both users, and so is the result of comparing .H(PK, vt) and p. In sum, the verifier keys according to the first user coincide with those according to the second user, except that the latter is expected to consider one more key to be a verifier key. Assume now that a payment P is valid if it is deemed valid by 51% of the actual verifiers. Then, if 80% of the actual verifiers according to the second user act honestly and authenticate P as valid, then, with high probability, at least 51% of the actual verifiers according to the first user will also authenticate P as valid; and vice versa. The above-described verifier selection procedure is thus robust. In other words, despite the lack of synchrony and centralization, and despite the verifiers changing completely at each round, the verifier selection mechanism described above guarantees a remarkably accurate consensus in a very efficient manner.
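A minimal sketch of this robust selection rule follows, with SHA-256 standing in for the collision-resilient hash H; the sample keys and value of vt are hypothetical.

import hashlib

def is_verifier_key(pk: bytes, v_t: bytes, p: float) -> bool:
    # Interpret the 256-bit hash as the binary expansion of a number in [0, 1)
    # and select PK exactly when ".H(PK, v_t)" <= p.
    digest = hashlib.sha256(pk + v_t).digest()
    fraction = int.from_bytes(digest, "big") / 2**256
    return fraction <= p

# With p = 1/1,000, about one key in a thousand is selected; any two users who agree on
# v_t and p reach the same verdict for every key that both of them know about.
av = [pk for pk in (b"PK1", b"PK2", b"PK3") if is_verifier_key(pk, b"v_t", 1 / 1000)]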
Ideally, at each round t and for each round-t actual verifier j, every potential verifier i and every user i in the system may obtain (e.g., download) the authenticated lists PAYtj, and possibly STATUStj. In this case, in fact, each user i knows/can easily compute STATUSt for each round t. Alternatively, if PAYt and STATUSt are made directly available (e.g., computed and posted by one or more given entities, preferably in an authenticated manner), then a potential verifier j or a user i trivially obtains PAYt and STATUSt at each round t. As explained below, it is sufficient for a verifier j or a user i to obtain the status of the system only once in a while.
For concreteness, but without any limitation intended, assume that PV consists of all public keys in the system, that AV is computed using a robust selection procedure such as the mechanism discussed above, that a potential verifier j obtains the status of the system once a month, and that suddenly, at round t, the verifier j is selected to become a verifier, that is, that the public key PKj of the verifier j is included in AV. Notice that to realize that PKj becomes a member of AV at round t, the verifier j need not know any global status information, but only vt. Indeed, PKj is included in AV if and only if .H(PKj, vt)≦p.
To perform as an actual verifier, and receive the corresponding reward, the verifier j needs to know STATUSt−1. If a facilitator E were available, then the verifier j could immediately retrieve STATUSt−1 (e.g., by downloading SIGE(STATUSt−1)) the moment PKj is selected. In the absence of any facilitators, the verifier j could immediately retrieve the published and authenticated lists STATUSt−1i, for sufficiently many verifiers i of round t−1. To do this, however, the verifier j needs to determine what the verifier keys of round t−1 were.
Since each potential verifier key PK is chosen at round t−1 if .H(PK, vt−1)≦p, to make the determination the verifier j needs to know two pieces of information: (a) all public keys in STATUSt−1 and (b) the value vt−1. Note there is no problem for the verifier j to obtain the latter piece of information, because by definition the value vt−1 was made widely available, preferably in authenticated form. As for piece of information (a), the verifier j can compute it with sufficient precision from the status information of a month ago held by the verifier j. For instance, assume that the number of potential verifier keys grows at the rate of 20% per year, and that there were 500,000 such keys a month before, when the verifier j obtained the full status-list of the system. Then, roughly 10,000 new potential verifier keys have been added in the last month without the knowledge of the verifier j. Assume that the selection probability p was such that 101 verifiers were individually and randomly selected at round t−1. Then, the probability that any given one of the newly added 10,000 keys is actually selected as a verifier key is about 0.0002. (Note, by the way, that the probability that any given one of the new 10,000 keys is selected when the target number of verifiers is 11 would be even lower: namely, about 0.00002.) In any event, the set of actual verifiers of round t computed by the verifier j according to the one-month-old record coincides, with high probability, with the true list of verifiers of round t, computed based on all potential verifier keys of round t−1. Moreover, with overwhelming probability, the number of actual verifiers selected from the newly added 10,000 keys will be very few. Accordingly, when a reasonable majority of the potential verifiers are transparent, with overwhelming probability, the majority of both sets of verifiers (i.e., the actual verifiers, and the verifiers according to the one-month-old data of the verifier j) will be transparent. In sum, even if the verifier j obtains the full list of potential verifier keys only once a month, after being selected, the verifier j would be ready to perform the verification functions accurately.
Alternatively, rather than obtaining the full status once a month, a potential verifier j could obtain once a month just the lists PAYt of the rounds of the last 30 days, from which (given the previously computed full status information of a month ago held by the verifier j) the verifier j can easily reconstruct the current status information. Furthermore, if the tree-hash-and-sign method is used (see discussion below), the verifier j could easily check that the reconstructed status is correct by simply (i) locally computing the alleged root hash of the tree for round t−1, and (ii) verifying that each verifier authenticated the same root hash at round t−1.
Alternatively yet, in order to facilitate updates, each selected verifier j of a given round t, in addition to PAYtj and STATUStj, could also authenticate and publish NEWtj, the list of newly added potential verifier keys at round t. Notice that, by so doing, the verifier j remains fully accountable for all the data published by the verifier j.
A player i without verifying responsibilities may only care about whether a given payment P made to player i is valid. Such a player can certainly perform monthly updates as a potential verifier does. However, with no rewards and no liabilities, the player i may be satisfied with a much lighter check of the validity of P. For instance, if a facilitator is present, and tree-hash-and-sign is used, then the player i may obtain only the facilitator-authenticated information that enables the player i to determine whether P is a valid payment. Alternatively, if the player i obtains full status information once a month, then the player i may use the one-month-old information about the set of potential verifiers, and the new public value vt of round t, in order to compute only one alleged verifier j for round t, and obtain and verify only the compact j-authenticated record of payment P (assuming again that tree-hash-and-sign is used).
Centralcoin, Spreadcoin, and Democoin can (a) be used as a separate (and separately floating) digital currency, (b) be tied to (and float with) a national currency, or (c) both; and certificates can be effectively used in all three payment systems.
In particular, a distinguished entity D, possibly belonging to a set of multiple entities, can be used to certify a public key PKX of an ordinary user X, of a verifier X, of a potential verifier X, of a facilitator X, etc.
Since the public key PK of a distinguished entity D can be widely known in the system (e.g., because PK was certified via a public key universally known in the system), the certificates issued by D, and thus the public keys PKX certified by D, can be widely verified/recognized in the system. The certificates issued by D may be recognized by all or nearly all of the potential verifiers, or by at least a significant number of the potential verifiers (i.e., enough to allow a given majority to validate payments made by a player using a public key certified by D). Entity D may produce certificates for all kinds of players/public keys, or for some class of players/public keys. For instance, D may certify the public keys of ordinary users, but not, for example, public keys of verifiers or potential verifiers (which may be certified by another distinguished entity). In particular, D may be a bank (or other financial institution), and may certify the public keys of its customers X.
As discussed elsewhere herein, a certificate C issued by D for a public key PKX may be of the form C=SIGD(PKX, I),
where I is some arbitrary information, including no information. For instance, I may include:
To issue C, D may take or have some other entity take one or more of the following actions:
When ordinary-user public keys are certified, to verify that a payment P by a given public key PK1 to another public key PK2 is valid, in addition to all other checks already discussed, preferably one should also check that both keys have valid certificates issued by a proper entity. To enable the verification of such a payment, one (e.g., the payer or the payee) may also provide a proper certificate C1 for PK1, and a proper certificate C2 for PK2. In fact, it may be assumed that, when keys are certified, any payment P by public key PK1 to public key PK2 also includes C1 and C2.
Of particular interest is the case when D is authorized to certify public keys of verifiers or potential verifiers. To issue a certificate C identifying PKX as a verifier (or potential verifier) public key, D may check that player X is indeed eligible to be a verifier and does not possess another verifier public key. Alternatively, D may check that D has not personally certified another verifier key for X. When verifier keys are certified, to verify the digital signature s relative to a verifier key PKV, preferably one should also verify that PKV has a valid certificate issued by a proper entity.
The payer and the payee of a payment made via check are readily identifiable. Accordingly, the check payment system is rarely used for money laundering or other illegal activities. It may be desirable that the same holds for the systems described herein. For this purpose, it may be desirable that a special entity S (e.g., a governmental entity, the police, or a judicial entity) be able to trace a public key PKX to an owner X of PKX. To this end, certificates can be very useful. If PKX is certified by a distinguished entity D, e.g., C=SIGD(PKX, I), then D may be required to keep and to provide to S, upon proper request, the identity of X.
Alternatively, the information I in C may contain information IX enabling an easier identification of X, in particular IX may consist of the name of X. This way, however, makes PKX easily traceable to X by many more people than just S, D, and a few other entities.
Another solution is to have IX=H(i), where i is identifying information for X (e.g., X itself) and H is a collision-free hash function. This way, an entity that does not know X, and does not correctly guess who X is, cannot trace PKX to X. On the other hand, if D has correctly issued C, D cannot later on lie about the identity of X, and in case of an audit may trivially hand i to S, who can then hash i and compare the result with IX.
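A minimal sketch of this hashed-identity option follows, assuming SHA-256 for H and purely hypothetical identifying information.

import hashlib

def identity_commitment(identifying_info: bytes) -> bytes:
    # I_X = H(i): stored in the certificate in place of the identity itself.
    return hashlib.sha256(identifying_info).digest()

def audit_check(identifying_info: bytes, i_x: bytes) -> bool:
    # An auditor handed i recomputes the hash and compares it with the stored I_X.
    return hashlib.sha256(identifying_info).digest() == i_x

i_x = identity_commitment(b"Alice Example, passport 123")  # hypothetical identity i
assert audit_check(b"Alice Example, passport 123", i_x)    # audit succeeds given the right i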
Another solution is to have IX be an index to a table (or similar data structure) used to retrieve a value that identifies X (e.g., the name of X). The table may be held by D, by S, and/or by some other entity. In case of an audit, the entity having the table would be presented with IX and would produce the identity of X in response.
Another solution consists of having IX=Enc(i), that is, having IX consist of an encryption of i with a key of D. This way, if Enc is a sufficiently secure randomized encryption scheme, then even someone correctly guessing who X is cannot use IX to confirm the correctness of the guess. In case of an audit, D may also provide the random string used to produce Enc(i). However, S would still have to contact D in order to know who X is.
To avoid interacting with D, Enc(i) may be an encryption of i in a key of S, e.g., an encryption key shared by D and S, or a public encryption key of S, for which S, and preferably only S, knows the corresponding secret decryption key. Public key encryption is well known. This way S may automatically and directly trace PKX to X, and thus every payment to a particular payer and payee, very much as for check payments. However, while paying via a check makes the payer and the payee readily clear to anyone holding the check, the same is not true for the system described above.
The entity D issuing a certificate C=SIGD(PKX, I) may be rewarded in a number of ways. For instance, D may be paid by X for producing C. Also, D may receive a reward for each valid payment P (e.g., a reward of 0.1% of the amount A of P) if the payer/payee key of P is certified by D. For instance, such a reward may be paid to D by the payee (or the payer), if the payee key of P is certified by D. As another example, if rewards are paid only by retailers, and if the payee key of P is a retailer key certified by D, then the reward due to D for P is paid to D by the retailer. Rewards may be paid using the system described herein. For instance, if a public key of D can have money associated with it, or if D has another public key capable of having money associated with it, then D's reward may be a payment from the relevant public key of P (e.g., the payer or the payee key of P) to the relevant public key of D. Such payment to D can occur automatically, as discussed elsewhere herein. For instance, the reward may be paid when P becomes part of PAYt at some round t. The reward to D can also be made outside the system. For instance, if D is a bank, the reward to D for a payment P may include a payment from the payee X in P to D, and if X has a bank account with D, then when P enters PAYt, D is authorized to withdraw the reward from the account of X.
Democoin is very scalable, particularly with the help of a properly designed architecture for information flow. This is described below in connection with three sample instantiations referred to as “Urban”, “Regional”, and “National” that differ in the envisaged number of users and transactions. In all the examples, it is assumed that
Below are the criteria for analyzing the scalability of Democoin:
Network Bandwidth: the bytes per second (and per month) a player should be able to transfer. Note that several cell-phone or Internet providers cap the total amount of data that a user can exchange in a month.
Connection Capacity: the maximum number of simultaneous connections a player can have.
Storage and Computation: the resources needed for a user to participate in the system.
The Urban Instantiation is defined to have 300,000 users and 1,000 transactions every 10 minutes. Thus, in the Urban Instantiation, user and transaction numbers are slightly larger than those in the current usage of Bitcoin.
Since a single payment consists of 100 Bytes, the approximate size of the corresponding PAY report is only 100 KB, the size of a full status report is about 30 MB, and (using tree-hash-and-sign mechanism, described below) the size of a self-sufficient authenticating record for a single public key is about 2 KB. Altogether, these are very reasonable sizes (and a 30 MB status report is certainly preferable to a 15 GB public ledger in Bitcoin).
To reduce network bandwidth, connection capacity, and storage it is possible to provide a simple, generalizable, and effective information flow, as described below:
At each round t, there may be 110 verifiers (chosen as before), organized in a two-level, 11-node tree: a root with 10 children (and thus 10 leaves in this case). The root may be regarded as having level 1, and each leaf as having level 2. The verifiers may be partitioned into 11 groups/buckets of cardinality 10 each. Each group may be conceptually assigned to a separate node in the tree. The 10 verifiers assigned to the root may be deemed top verifiers, and the other 100 verifiers may be deemed helper verifiers.
The information flow is as follows: at the beginning of the round, the top verifiers obtain the full status report of the previous round, that is, STATUSt−1. Since the status report consists of about 30 MB, the top verifiers can obtain the full status report even with a cell phone.
There is a preferred information flow for each of the 1,000 payments of round t. Consider a payment from a (payer) public key Pi to a (payee) public key Pj. Then, since a tree-hash-and-sign algorithm is being used, (the owner of) Pi retrieves from the cloud the 2 KB proving the individual status of Pi and forwards the proof to (the owner of) Pj together with a 100-Byte payment from Pi; (the owner of) Pj preferably verifies both pieces of information and forwards the information to each of the 10 helper verifiers in a bucket associated to one of the 10 leaves. That is, the payee selects a bucket B and forwards the relevant payment and individual status to each verifier in B. Note that all other information flows that convey the essential information should be considered part of the architecture of the system described herein. For instance, the payer may only send the payment to Pj, while Pj obtains from the cloud the individual status of Pi, and forwards the individual status of Pi and the payment to the helper verifiers of the selected group. Alternatively yet, the payee may only forward the payment to the helper verifiers, who then obtain the individual status of Pi of the previous round from the cloud. Of course, other possibilities/combinations exist.
Note that the computation involved so far is trivial: (the owner of) Pi generates one digital signature; while (the owner of) Pj verifies one digital signature and computes one hash to select the bucket B. Also, the bandwidth is relatively low: (the owner of) Pi obtains 2K bytes from the cloud and forwards 2.1K bytes to Pj; and (the owner of) Pj forwards 2.1K bytes to each of the 10 helper verifiers in bucket B.
The bucket B may be selected at random. In particular, to ensure that lazy payees do not instead always select, for example, the first bucket, B can be selected via a given cryptographic hash function H. For instance, (the owner of) Pj may hash the payment (possibly with additional information, such as vt) and use the last decimal digit of the hashed payment to determine which of the 10 possible buckets of helper verifiers should be the selected B. In this case, a helper verifier in B, receiving and processing the payment information, may also use H to verify that the payment information was correctly sent to the bucket B to which the helper verifier belongs.
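A minimal sketch of this deterministic bucket choice follows, assuming SHA-256 for H; taking the hash modulo 10 yields its last decimal digit, and the sample payment string is hypothetical.

import hashlib

def select_bucket(payment: bytes, v_t: bytes, num_buckets: int = 10) -> int:
    # Hash the payment together with v_t and take the last decimal digit.
    digest = hashlib.sha256(payment + v_t).digest()
    return int.from_bytes(digest, "big") % num_buckets

bucket = select_bucket(b"100 from PK_i to PK_j, round t", b"v_t")
# A helper verifier in bucket B recomputes select_bucket(...) and accepts the payment
# only if the result equals B, so a lazy payee cannot always pick the same bucket.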
Since 1,000 payments are distributed randomly across 10 buckets, each bucket of verifiers may be selected for approximately 100 payments. Accordingly, each helper verifier must be able to receive 2.1K bytes from approximately 100 users. Therefore, even via a standard cellphone a helper verifier can receive the data within 1 minute (even when capped at 10 simultaneous connections).
Each helper verifier checks that, for each payment being handled, all the relevant information is correct. That is, for each payment of #X from a public key Pi to another public key Pj, the helper checks the digital signature of the payment and the digital signature(s) of the individual status report of Pi for the previous round, and checks that the amount X does not exceed the money attributed to Pi in that report. Then, the helper preferably compiles all valid payments into a single, preferably ordered, list L, and digitally signs L together with an indication of the round t (e.g., the current time). Finally, the helper sends to each of the 10 top verifiers the signed and dated list L, and preferably also the individual prior status of each paying public key.
To receive the information, each top verifier only needs to open 100 connections to download countersigned payments. As mentioned elsewhere herein, even using a standard cell phone, this can be done within 1 minute. Note that it may be tempting to store the reports produced by the helper verifiers on the cloud and ask the top-level verifiers to retrieve them from there. However, this is a bad design decision, since in such a design the cloud could deny a payment of an individual by erasing the corresponding reports of the helper verifiers. Each top verifier i uses the downloaded countersigned payments to produce the reports STATUSti and PAYti, and posts STATUSti and PAYti on the cloud. The size of both reports together is about 31 MB, and they can be uploaded to the cloud within 4 minutes by a cell phone. Additionally, the signed information received from the helper verifiers could be uploaded to the cloud, so as to keep everyone accountable.
It is possible to optimize the system described herein by uploading/downloading only self-contained records between the verifiers and the cloud. For instance, suppose that at round t−1 all verifiers and the cloud hold the up-to-date status report STATUSt−1. Suppose also that a top-level verifier i wants to efficiently communicate STATUSti to the cloud. Using a tree-hash-and-sign mechanism (described elsewhere herein), a hash of every record of the report corresponds to a leaf in the tree, and only the root of the tree needs to be signed by the verifier. There are at most 2,000 record changes between STATUSt−1 and STATUSt, given 1,000 payments. Hence, assuming the cloud knows STATUSt−1, to communicate STATUSti, the verifier only needs to communicate the changes in the tree (that is, up to 2,000 new records) and the signature of the new root. Given STATUSt−1 and the new records, the cloud can reconstruct the entire tree and obtain the hash of the root. The cloud can then use the signature of the root hash obtained from the verifier to reconstruct the full, authenticated version of STATUSti. Using this mechanism, only approximately 210K bytes of data need to be communicated between the verifiers and the cloud.
In sum, the Urban Instantiation can be run via a cell network with a network bandwidth and capacity of 1 Megabit per second (Mb/s), capped at 2 GigaBytes (GB) per month, and a connection capacity of 10.
The Regional Instantiation.
The Regional Instantiation is defined to have 3,000,000 users and 10,000 transactions every 10 minutes. That is, in the Regional Instantiation, user and transaction numbers are 10 times higher than in the Urban Instantiation. The Regional Instantiation may be run via laptops.
For the Regional Instantiation, the total size of the status report is about 300 MB, that is, 10 times bigger than the report for the Urban Instantiation. Thanks to the tree-hash-and-sign mechanism (described elsewhere herein), however, the size of a self-contained report about an individual public key is only 5K bytes. To maintain good performance, it is possible to increase the number of helper verifiers by a factor of 10, so that the total number of verifiers goes from 110 to 1,010. The verifiers may be partitioned into 101 groups (buckets) of 10, and conceptually each bucket may be assigned to a node of a 3-level, 10-degree tree T. That is, the root of T has, as before, 10 children (of level 2), but now each of the children has 10 children (of level 3). The 10 verifiers assigned to the root are deemed top verifiers, and all other verifiers are helper verifiers.
There are now 100 buckets on the third level. Each payee (or payer) of a given current payment chooses at random (using a cryptographic hash function) which bucket will handle the payment. Therefore, on average, each level-3 helper verifier only handles 100 payments, which can be downloaded within a minute by a laptop computer. Once verified, the payments are forwarded to a bucket of level-2 helper verifiers (again, chosen at random). Similarly, each level-2 verifier needs to open only 10 connections and download 1,000 payments, of size approximately 100 Kilobytes, which can be done easily within a minute. After verifying and combining the information received from children buckets, and signing the combined information, each level-2 helper verifier sends the proper information to each of the top verifiers, that is, to each verifier in the group associated to the root, the only node of level 1. After verifying, combining, and signing all the received information, that is, after generating STATUSti and PAYti, each top verifier i uploads STATUSti and PAYti to the cloud. This uploading could take about 4 minutes with a standard laptop. Thanks to the envisaged tree-hash-and-sign method, each top verifier i can use the same "More Efficient Update" method, already discussed for the Urban Instantiation, in order to dramatically cut down the amount of data to be uploaded, and thus the time of the uploading, without at all increasing the trust in, or the power of, the cloud. Any player may update its state by querying the storage provider for the relevant self-contained new records within a minute. The computational time needed from verifiers is also about a minute. Therefore, the expected duration of a round in this instantiation is approximately 8 minutes.
In sum, the Regional Instantiation can be run (within 10 minutes) with a network bandwidth and capacity of 10 Mb/s, capped by 80 GB per month, and a connections capacity of 10.
The National Instantiation is defined to have 30 million users and 100,000 transactions every 10 minutes. This scales the Urban Instantiation by a factor of 100. With 30 million users and 100,000 transactions every 10 minutes, the size of the status report grows to 3 GB, but the size of a self-contained authenticated individual record remains small—about 7 KB. It is also possible to add a 4th level of 1,000 buckets of helper verifiers. Therefore, there are now an additional 10,000 verifiers at the 4th level, each receiving about 100 payments for verification. All other parameters scale correspondingly and can be easily handled by a standard laptop. Moreover, the entire round can easily be performed within about 20 minutes using the efficient updates mechanism described for the Urban Instantiation (the communication overhead for 100K payments is insignificant, but the extra 10 minutes is added to account for the computational time to verify the payments).
Generalizing the architecture discussed above, it is possible to handle Continental and Planetary Instantiations, respectively capable of handling 300M and 3B users, and 10× and 100× as many transactions as the National Instantiation. The adoption of the More Efficient Updating is useful in such cases. The rounds may become longer, but feasibility will still be maintained.
Democoin is a democratic monetary system in that the responsibility of running Democoin resides with the users themselves. For efficiency reasons, however, Democoin is not run by all users simultaneously. Rather, at each round, only some users are randomly selected to act as verifiers, so as to guarantee the integrity of the system. The verifiers of a given round are rewarded for their effort and availability. Indeed, the verifiers stand to be collectively paid 1% of the total money that changes hands in a given round. No payment is made by the users to outside parties for running the system except for the payment due to the cloud for providing accessible storage. However, payment to the cloud is a negligible amount relative to amounts traditionally due to “trusted parties” to run a financial system, such as credit card issuers.
Democoin is fair. A verifier stands to make a lot of money, and, at every round, all users have the same probability of becoming verifiers. In addition, as explained below, each user in the three instantiations described elsewhere herein is extremely unlikely never to be selected as a verifier. Assume that the total 1% reward of a given round is distributed equally among all verifiers (i.e., that top verifiers and helper verifiers are treated alike). Then, because the ratio of the total number of users to the total number of verifiers is roughly the same in the Urban, Regional, and National Instantiations, in all these instantiations the probability for a user to become a verifier is the same at every round. Moreover, because in the first two instantiations each round consists of 8 minutes, it can easily be seen that in the Urban and Regional Instantiations a player is expected to become a verifier about 22 times a year, that is, just under 2 times a month. Altogether this is a non-trivial frequency. This frequency can be increased by increasing the verifiers-to-users ratio. Note also that Democoin is fair in a different sense when the probability of being selected is proportional to the money a user owns, possibly over different public keys.
Democoin also provides significant security. The system works as expected if the majority of the verifiers in every bucket are honest and execute correctly according to the system specifications. Suppose that 90% of the users are honest and that a status report (or payment) is considered valid if and only if 90% or more of the verifiers validate the status report (or payment). If a 90% or greater consensus is not reached in a given round, then de facto no money has changed hands in that round. In this case, the probability that a randomly selected player is malicious is 0.1, and the probability that a randomly selected pair of players are both malicious is 0.1*0.1=0.01. Continuing this calculation, it is possible to derive the probability that 9 or more of the 10 verifiers chosen for a given bucket are malicious, which is about 10^−8. That is, given ten selected players in a bucket, the probability that exactly one of them is not malicious and the rest are malicious is 10*(0.9*0.1^9). Accounting for the case when all chosen players are malicious, which has probability 0.1^10, and summing the two probabilities, it is possible to obtain that, out of the selected 10 players, the probability that 9 or more are malicious is about 10^−8.
In the Urban Instantiation scenario, there are 11 buckets of verifiers. Hence, the probability that at least one of the buckets is malicious is at most 1.1×10^−7. Hence, roughly once every 9 million rounds there is expected to be one bad selection of the verifiers. The 8-minute rounds result in about 65,744 rounds in a year. Hence, it is expected that there will be one bad selection of the verifiers about once every 137 years, which may be acceptable for practical purposes in the Urban Instantiation scenario.
However, suppose the number of verifiers in each bucket is increased to 50 and suppose 80% of the verifiers are honest. Then, the probability that a single bucket of verifiers is malicious (that is, that 40 or more of the selected verifiers are malicious) is about 1.3×10^−19. This probability p can be derived using the formula:
p = Σ_{i=40}^{50} (50 choose i)·0.2^i·0.8^(50−i),
where 0.2 is the probability of selecting a malicious verifier, 0.8 is the probability of selecting an honest verifier, and the sum is over all of the "bad" selections.
Given the 11 buckets in the Urban Instantiation, the probability that at least one of them is malicious is at most 1.43*10^−18, that is, one bad selection roughly every 7*10^17 rounds, or once every 10 trillion years. In the Regional and National Instantiations, the expected time between bad selections decreases somewhat, but it remains astronomically large, at approximately every 1 trillion years and 100 billion years, respectively. Moreover, note that in order to mount a successful attack, it is not enough that at least 40 of the 50 selected verifiers be malicious; these malicious verifiers must also succeed in coordinating their actions within a round, that is, within a few minutes. (Recall in fact that the set of verifiers of a future round t cannot be predicted long in advance, because it depends on a variable vt that is totally unpredictable until round t.) Since a round is very short, this coordination may not be practically feasible. Moreover, by slightly increasing the number of verifiers, it is possible to achieve virtually any level of security deemed useful.
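For concreteness, the probabilities quoted above can be reproduced with the short calculation below (in Python; the bucket sizes, honesty fractions, 8-minute rounds, and 11 buckets are taken from the text, while combining the per-bucket probabilities with a union bound is an assumption of this sketch):

from math import comb

def bad_bucket_probability(bucket_size, fraction_malicious, threshold):
    # Probability that at least `threshold` of `bucket_size` independently
    # selected verifiers are malicious (binomial tail).
    p = fraction_malicious
    return sum(comb(bucket_size, i) * p**i * (1 - p)**(bucket_size - i)
               for i in range(threshold, bucket_size + 1))

ROUNDS_PER_YEAR = 365.25 * 24 * 60 / 8        # 8-minute rounds: about 65,744 per year
BUCKETS = 11                                   # Urban Instantiation

# Case 1: buckets of 10 verifiers, 90% honest, bad if 9 or more are malicious.
p1 = bad_bucket_probability(10, 0.10, 9)       # about 9.1e-9, i.e. roughly 10^-8
# Case 2: buckets of 50 verifiers, 80% honest, bad if 40 or more are malicious.
p2 = bad_bucket_probability(50, 0.20, 40)      # about 1.3e-19

for label, p in (("10 verifiers / 90% honest", p1), ("50 verifiers / 80% honest", p2)):
    p_any_bucket = BUCKETS * p                 # union bound over the 11 buckets
    years_between_failures = 1 / (p_any_bucket * ROUNDS_PER_YEAR)
    print(f"{label}: per-bucket {p:.2e}, any-bucket {p_any_bucket:.2e}, "
          f"one bad selection expected every {years_between_failures:,.0f} years")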
Tree-Hashing
As discussed elsewhere herein, tree-hash-and-sign is an effective mechanism to authenticate a large list of records with a single signature, while supporting efficient “local” verification (without the need to download the entire list). This mechanism is used in many existing payment systems, such as Bitcoin. Tree-hash-and-sign works as follows:
Suppose a verifier V wants to tree-hash-and-sign a list of payments PAY=(P1, . . . , Pn). Note that tree-hash-and-sign may be used analogously on any list of records, such as a list of players' account information. First, the verifier builds a Merkle tree of the list PAY, inductively starting from the leaves and converging at the root. The leaves of the tree are associated with level 0 and the root of the tree is associated with level q=log n (for simplicity, suppose that n=2^q for some q). To compute the Merkle tree, the verifier first hashes the individual payments Pi and associates the hashes with the leaves of the tree: h^0_i=H(Pi). Then, inductively, the verifier computes the hash of a node by hashing the hashes of the two children of the node. In particular, the hash h^i_j of node j at level i (for i in the range 1, . . . , q) is h^i_j = H(h^(i−1)_c, h^(i−1)_c′), where c and c′ are the two children of node j.
The hash hT = h^q_1 (also written hPAY), associated with the root of the tree, is a “commitment” to the entire list of payments PAY. For example, suppose PAY=(P1, P2, P3, P4). Then the Merkle tree the verifier would compute consists of the leaves h^0_1=H(P1), h^0_2=H(P2), h^0_3=H(P3), and h^0_4=H(P4); the level-1 nodes h^1_1=H(h^0_1, h^0_2) and h^1_2=H(h^0_3, h^0_4); and the root hPAY=h^2_1=H(h^1_1, h^1_2).
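As a concrete illustration of the construction just described, the following sketch (in Python; the helper names, the byte-string placeholder payments, and the choice of SHA-512 over the concatenated child hashes are illustrative assumptions, not a specification of the system's actual encoding) builds the Merkle tree for a small list of payments:

import hashlib

def H(data: bytes) -> bytes:
    # SHA-512 over the concatenated inputs stands in for the hash function H.
    return hashlib.sha512(data).digest()

def build_merkle_tree(payments):
    # Level 0: the leaf hashes h^0_i = H(P_i).
    level = [H(p) for p in payments]
    levels = [level]
    # Level i: h^i_j = H(h^(i-1)_c, h^(i-1)_c'), hashing each pair of children.
    while len(level) > 1:
        level = [H(level[k] + level[k + 1]) for k in range(0, len(level), 2)]
        levels.append(level)
    return levels

PAY = [b"P1", b"P2", b"P3", b"P4"]   # placeholder payments; n is a power of two
levels = build_merkle_tree(PAY)
h_PAY = levels[-1][0]                # the root commitment hPAY = hT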
To authenticate the list, the verifier may publicize a single digital signature SIGV(t; hPAY), where t denotes the time. Now, a self-standing V-authenticated record about a payment Pi consists of: (1) Pi itself (and, optionally, the hashes along the path from the leaf H(Pi) to the root, which can in any case be recomputed), (2) the sibling hashes of the nodes along that path (together with the payment at the sibling leaf), and (3) the signature SIGV(t; hPAY).
For instance, in the above example, the self-standing V-authenticated record about the payment P1 consists of:
(1) P1 (and, optionally, h^0_1, h^1_1, and hPAY),
(2) the associated sibling hashes h^0_2 and h^1_2 and the payments P1 and P2 at the leaves, and
(3) SIGV(t; hPAY).
To verify this authenticating record of Pi one can:
(1) verify the signatures of the individual payments at the leaves,
(2) compute the hash of the root, hPAY, by recomputing, in a bottom-up fashion, the hash of every node along the path to the root (in fact, for each such node, the hash of one of its children, c, has already been computed, and the hash of the other child, that is, of the sibling of c, has been retrieved), and
(3) verify the signature of the root: SIGV(t; hPAY).
Again, referring to the above example, the owner would check that P1 and P2 are valid payments, that hT = H(H(H(P1), H(P2)), h^1_2), and that SIGV(t; hPAY) is a valid signature.
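Continuing the illustrative sketch above (repeated here so it is self-contained; the record layout is an assumption, and the signature checks of steps (1) and (3) are only noted in comments since the signature scheme is outside the scope of the sketch), the sibling hashes for P1 can be collected and the root recomputed bottom-up as follows:

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha512(data).digest()

def build_merkle_tree(payments):
    # levels[0] holds the leaf hashes h^0_i = H(P_i); levels[-1] holds the root.
    level = [H(p) for p in payments]
    levels = [level]
    while len(level) > 1:
        level = [H(level[k] + level[k + 1]) for k in range(0, len(level), 2)]
        levels.append(level)
    return levels

def authentication_path(levels, index):
    # Sibling hashes of the nodes on the path from leaf `index` to the root.
    siblings = []
    for level in levels[:-1]:
        siblings.append(level[index ^ 1])   # the other child of the same parent
        index //= 2
    return siblings

def recompute_root(payment, index, siblings):
    # Bottom-up recomputation of hPAY from P_i and the sibling hashes.
    node = H(payment)
    for sibling in siblings:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node

PAY = [b"P1", b"P2", b"P3", b"P4"]          # placeholder payments
levels = build_merkle_tree(PAY)
h_PAY = levels[-1][0]
siblings = authentication_path(levels, 0)   # for P1: [H(P2), h^1_2]
assert recompute_root(PAY[0], 0, siblings) == h_PAY
# A complete record would also carry P2 (so its signature can be checked) and
# SIGV(t; hPAY); verifying those signatures corresponds to steps (1) and (3) above.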
It is easy to see that the verifier's computation is very efficient. The verifier only needs to evaluate an efficient hash function (for example, SHA-512) to construct the Merkle tree. Hence, the total number of hashes the verifier needs to compute for n payments is 2n−1 (corresponding to the total number of nodes in the tree). Since hashing is very efficient, it would take less than a second for a standard computer to produce a Merkle tree for, say, a million payments. Then, the verifier needs to produce a single digital signature authenticating the entire list. For example, using one of the standard Elliptic-Curve Signature Algorithms, producing such a signature would take around 2 milliseconds, and the signature would be around 200 bytes (including any additional information deemed useful).
It is also easy to see that the player needs to download very little information from the verifier and that recomputing the relevant portion of the tree is efficient. In particular, the player downloads the path consisting of log n hashes and the log n sibling hashes (plus the two payments at the leaves in the clear). Since a logarithmic function remains very small even for astronomically large n, the total number of hashes the player needs to download and recompute remains very small. Moreover, the player only needs to perform a few (here, three) signature verifications.
Using a standard Elliptic-Curve Signature Algorithm (as used in Bitcoin and other payment systems), the efficiency of tree-hash-and-sign can be summarized as follows: even for 1 billion payments, only approximately 31 Kilobytes need to be downloaded by a player. Since even a cellphone with a weak internet connection can download at a rate of (at least) 1 Megabyte per second, this download can easily be handled. Moreover, the verification time remains very small, and is largely dominated by the verification of the signatures.
It should be realized by those skilled in the art that payments in the system described herein may transfer more than money from one user/public key to another, and may transfer something other than money.
In particular, rather than transferring money, a payment may transfer a given number of stock shares of a given security. For instance, at each round t, a key PK may be associated with/own a given amount of money, as well as a first number of stock shares of a first security, a second number of stock shares of a second security, etc.
For instance, an entry (PK, #A, I) in STATUS_t may specify the number of shares, #A, that PK owns of a given security specified in I. More generally, #A may specify (possibly together with an amount of money owned by PK) a sequence of numbers of stock shares, and I the corresponding sequence of securities owned by PK.
For instance, in a payment P=SIG_PK(PK, PK′, #A, I), #A may denote the number of shares transferred from PK to PK′, and the information I may also specify the security in question (and possibly the price paid by PK′ to PK). PK′ may countersign P to signal the consent of the owner of PK′ to pay the price to PK, or may separately digitally sign a separate money-transfer payment P′ from PK′ to PK, in which the information field I′ links the payment to the payment P, so that verifiers may verify whether one payment can be deemed valid alone, or whether both or neither can be deemed valid. More generally, #A may specify a sequence of numbers of stock shares, and I may specify the corresponding sequence of securities. Alternatively, I may also specify the number of stock shares of various securities to be transferred by PK′ to PK.
When a verifier Vi checks the validity of such a payment P in round t, the verifier also checks that in the previous round PK indeed owned a sufficient number of shares of the specified security. In sum, the payment system described herein should be understood to include a settlement system.
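A minimal sketch of such a check follows (in Python; the payment fields, the representation of the previous round's status as a dictionary keyed by public key and security, and the signature stub are hypothetical illustrations rather than the system's actual data formats):

from dataclasses import dataclass

@dataclass
class SharePayment:
    pk: str           # paying key PK
    pk_prime: str     # receiving key PK'
    amount: int       # number of shares transferred (#A)
    security: str     # identifier of the security (part of the information I)
    signature: bytes  # SIG_PK(PK, PK', #A, I)

def is_properly_signed(payment: SharePayment) -> bool:
    # Stub: in the real system this verifies the digital signature relative to PK.
    return True

def is_valid_share_payment(payment: SharePayment, prev_status: dict) -> bool:
    # prev_status maps (PK, security) -> number of shares owned in round t-1.
    if not is_properly_signed(payment):
        return False
    owned = prev_status.get((payment.pk, payment.security), 0)
    return 0 < payment.amount <= owned   # PK must own enough shares to transfer

# Example: PK owned 100 shares of security "XYZ" in the previous round and transfers 40.
prev_status = {("PK", "XYZ"): 100}
p = SharePayment("PK", "PK_prime", 40, "XYZ", b"")
assert is_valid_share_payment(p, prev_status)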
Bitcoin relies on a very complex and space- and time-inefficient mechanism to ensure that the status of the system cannot be subverted, and depends on the assumption that no sufficiently large fraction of the “mining computers” is controlled by a single entity, an assumption that may become more difficult to maintain as mining agents consolidate. By contrast, Democoin relies on a very simple and very efficient mechanism, and cannot be subverted if a reasonable majority of the users are honest. Moreover, even obsolete knowledge about the status of the system suffices to reconstruct the true current status of the system with sufficient accuracy.
Various embodiments discussed herein may be combined with each other in appropriate combinations in connection with the system described herein. Additionally, in some instances, the order of steps in the flowcharts, flow diagrams and/or described flow processing may be modified, where appropriate. Also, elements and areas of screens described in screen layouts may vary from the illustrations presented herein. Further, various aspects of the system described herein may be implemented using software, hardware, a combination of software and hardware and/or other computer-implemented modules or devices having the described features and performing the described functions.
Software implementations of the system described herein may include executable code that is stored in a computer readable medium. The computer readable medium may be non-transitory and include a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as a CD-ROM, a DVD-ROM, a flash drive, an SD card and/or other drive with, for example, a universal serial bus (USB) interface, and/or any other appropriate tangible or non-transitory computer readable medium or computer memory on which executable code may be stored and executed by a processor. The system described herein may be used in connection with any appropriate operating system.
Other embodiments of the invention will be apparent to those skilled in the art from a consideration of the specification or practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
This application claims priority to U.S. Prov. App. No. 62/117,138, filed Feb. 17, 2015, and entitled “A PUBLICLY VERIFIABLE AND JOINTLY SERVICED CRYPTOCURRENCY”, and to U.S. Prov. App. No. 62/120,916, filed Feb. 26, 2015, and entitled “DEMOCOIN: A PUBLICLY VERIFABLE AND JOINTLY SERVICED CRYPTOCURRENCY”, and to U.S. Prov. App. No. 62/142,318, filed Apr. 2, 2015, and entitled “DEMOCOIN: A PUBLICLY VERIFABLE AND JOINTLY SERVICED CRYPTOCURRENCY”, and to U.S. Prov. App. No. 62/218,817, filed Sep. 15, 2015, and entitled “ALTERNATIVE USES OF DEMOCOIN”, which are all incorporated by reference herein.
Filing Document: PCT/US16/18300; Filing Date: Feb. 17, 2016; Country: WO; Kind: 00.
Related U.S. provisional applications: 62/117,138 (Feb. 2015, US); 62/120,916 (Feb. 2015, US); 62/142,318 (Apr. 2015, US); 62/218,817 (Sep. 2015, US).