Unless specifically indicated herein, the approaches described in this section should not be construed as prior art to the claims of the present application and are not admitted to be prior art by inclusion in this section.
Confidential computing is an umbrella term used to describe a technology that enables a party or group of parties to reliably perform computation over secret (i.e., private) data. For example, assume there is a system comprising N input parties I1, I2, . . . , IN with private inputs x1, x2, . . . , xN respectively. The system is tasked with computing some function ƒ over these inputs and providing the result y=ƒ(x1, x2, . . . , xN) to an output party O. Confidential computing solves the following problem: how can the system transfer y to O, without O learning anything regarding x1, x2, . . . , xN (except what can be inferred from y)?
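By way of a concrete, non-cryptographic illustration of this setting (the function ƒ and the inputs below are arbitrary examples, not part of any particular embodiment), consider N=3 input parties whose private inputs are combined by an N-ary function:

```python
# Toy illustration of the confidential-computing problem statement (this is
# NOT a secure protocol): N input parties hold private inputs x1..xN, and
# output party O should learn only y = f(x1, ..., xN).

def f(*inputs):
    # Example N-ary function: here, simply the sum of all private inputs.
    return sum(inputs)

private_inputs = [3, 7, 5]   # x1, x2, x3, each known only to its own party
y = f(*private_inputs)       # y = f(x1, x2, x3) = 15

# Under a confidential-computing scheme, O receives y but learns nothing
# else about the individual inputs beyond what y itself implies.
```

Note that even in this trivial example, y=15 does leak some aggregate information (e.g., an upper bound on each input); the guarantee is only that nothing beyond what is inferable from y is revealed.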
Current confidential computing schemes differ based on the types of assumptions they rely on. For example, one scheme known as multi-party computation (MPC) assumes the existence of n computation parties of which at most t are corrupt (and thus at least n-t are honest). Another scheme known as fully homomorphic encryption (FHE) assumes that the mathematical parameters used in the scheme do indeed protect the private inputs.
In the following description, for purposes of explanation, numerous examples and details are set forth in order to provide an understanding of various embodiments. It will be evident, however, to one skilled in the art that certain embodiments can be practiced without some of these details or can be practiced with modifications or equivalents thereof.
Embodiments of the present disclosure are directed to an improved scheme for confidential computing that combines aspects of MPC and FHE. With this improved scheme, output party O cannot learn anything regarding the private inputs to function ƒ unless the assumptions underlying both the MPC and FHE schemes are broken/compromised.
Generally speaking, the goal of confidential computing system 100 is for computation parties C1, . . . , Cn to compute an N-ary function ƒ over a set of private inputs x1, . . . , xN held by input parties I1, . . . , IN respectively and transfer the result y=ƒ(x1, . . . , xN) to output party O in a manner that ensures O does not learn anything regarding x1, . . . , xN, except whatever can be inferred from y (shown via reference numerals 108-112). Each input xi is private in the sense that it is known only by its corresponding input party Ii and should be kept secret from everyone else (including the computation parties, assuming they are different from the input parties). Two existing schemes for achieving this goal are multi-party computation (MPC) and fully homomorphic encryption (FHE). Each of these schemes is described in turn below.
MPC assumes that at most t of the n computation parties C1, . . . , Cn are corrupt and may collude (and thus at least n-t computation parties are honest). As long as this assumption holds, MPC guarantees that any subset of t computation parties cannot learn anything regarding private inputs x1, . . . , xN, while output party O learns only y=ƒ(x1, . . . , xN).
In various embodiments, MPC employs a computation compilation algorithm (Π, Input, Output)←Compile (1κ, ƒ, n) where:
With this Compile algorithm in mind, MPC typically proceeds as follows:
3. Concurrently with (1) and (2), output party O initializes a vector of output values (out1, . . . , outn) to (⊥, . . . , ⊥) and, upon receiving an output message msgj from a computation party Cj, changes outj to msgj and runs output=Output(out1, . . . , outn). For each execution of the Output algorithm, O may decide to halt depending on the value of output. If O decides to halt, O uses output as the output of function ƒ (i.e., y) and the MPC process ends.
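The output party's behavior in step (3) can be sketched as follows (the Output algorithm and message transport are hypothetical stand-ins; only the control flow mirrors the description above):

```python
# Sketch of output party O's role in MPC. BOT stands in for the null
# value denoted ⊥ above; Output is a hypothetical algorithm supplied by
# the Compile step, not a function of any particular library.

BOT = None

def output_party_loop(n, incoming_messages, Output):
    # Initialize the vector (out1, ..., outn) to (⊥, ..., ⊥).
    outs = [BOT] * n
    for j, msg_j in incoming_messages:   # msg_j received from party Cj
        outs[j] = msg_j                  # change out_j to msg_j
        result = Output(outs)            # run Output(out1, ..., outn)
        if result is not BOT:            # O may decide to halt here
            return result                # use result as y = f(x1, ..., xN)
    return BOT                           # no halt decision yet

# Toy Output for illustration: succeeds (and thus triggers a halt) only
# once every slot has been filled, returning the sum of the messages.
toy_Output = lambda outs: sum(outs) if BOT not in outs else BOT
```

For example, `output_party_loop(3, [(0, 1), (1, 2), (2, 3)], toy_Output)` halts after the third message and returns 6.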
FHE assumes the existence of a secure homomorphic encryption scheme that consists of the following algorithms:
If it is configured correctly (i.e., employs correct/appropriate mathematical parameters), a secure homomorphic encryption scheme exhibits the following properties:
With the foregoing algorithms in mind, FHE typically proceeds as follows:
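By way of illustration only, the interface of a homomorphic encryption scheme can be sketched with a toy symmetric-key construction in the style of homomorphic encryption over the integers. The parameters below are deliberately tiny and NOT secure, and unlike the public-key scheme contemplated herein (which produces a key triple (pk, sk, ek)), this sketch uses a single secret key:

```python
import random

# Toy somewhat-homomorphic scheme over single bits (illustrative only,
# NOT secure). A ciphertext has the form c = p*q + 2*r + m, where p is
# the secret key, q is random, and r is small "noise".

def Gen():
    # Secret key: an odd integer p. Fixed here for illustration; a real
    # scheme derives it from the security parameter.
    return 10007

def Enc(p, m):
    # Encrypt bit m with fresh randomness q and small noise r << p.
    q = random.randrange(1, 1000)
    r = random.randrange(0, 10)
    return p * q + 2 * r + m

def Dec(p, c):
    # Remove the multiple of p, then remove the (even) noise term.
    return (c % p) % 2

# Homomorphic evaluation: adding ciphertexts XORs the plaintext bits and
# multiplying them ANDs the bits, as long as the noise stays below p.
def Eval_xor(c1, c2):
    return c1 + c2

def Eval_and(c1, c2):
    return c1 * c2
```

The homomorphic property can be checked directly: decrypting `Eval_xor(Enc(p, a), Enc(p, b))` yields a XOR b, and decrypting `Eval_and(...)` yields a AND b, for all bit pairs, without the evaluator ever seeing a or b.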
One issue with using either MPC or FHE in isolation to implement confidential computing is that the assumption(s) underlying the chosen scheme may be broken. For example, with respect to MPC, the assumption is that at most t computation parties are corrupt. However, an adversary that has knowledge of this may work hard to corrupt t+1 computation parties. If the adversary is successful in this endeavor, security is no longer guaranteed and the adversary may be able to learn the private inputs.
With respect to FHE, the assumption is that the mathematical parameters used by the FHE scheme ensure security and thus protect the private inputs. However, it is possible for one or more of these parameters to be configured incorrectly, thereby leading to leakage of the private inputs.
To address the foregoing, embodiments of the present disclosure provide an improved confidential computing scheme (referred to herein as “hybrid MPC/FHE”) that combines MPC and FHE in a novel way. As elaborated upon in section (2) below, hybrid MPC/FHE leverages the MPC framework but incorporates the following high-level changes:
With this improved scheme, the assumptions underlying both MPC and FHE (i.e., at most t corrupt computation parties and correct configuration of the FHE scheme) must be broken in order for private information to leak. If at least one of these two assumptions holds, hybrid MPC/FHE remains secure, which is a significantly stronger security guarantee than that provided by either MPC or FHE alone.
It should be appreciated that
In various embodiments, these workflows assume that output party O has executed the FHE algorithm (pk, sk, ek)←Gen(1κ) and has published pk to the input parties and ek to the computation parties. These workflows also assume that each party is associated with a message ingress queue, so that messages sent to a party are buffered until it is their turn to be processed.
Starting with workflow 200, at step 202 input party Ii can execute (Π, Input, Output)←Compile (1κ, Eval, n) where Eval is the Eval algorithm from the FHE scheme. Note that this is different from the standard MPC process in which Compile is provided function ƒ as one of the inputs. The outputs of this execution are the vector of next-message functions Π=(Π1, . . . , Πn), the Input algorithm, and the Output algorithm.
At step 204 input party Ii can execute cti←Enc(pk, xi), which takes as input the public key pk received from output party O and the input party's private input xi and outputs ciphertext cti.
At step 206 input party Ii can execute (msgi,1, . . . , msgi,n)←Input(cti), which takes as input ciphertext cti and outputs a vector of messages (msgi,1, . . . , msgi,n).
Finally, at step 208 input party Ii can send msgi,j to each computation party Cj.
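The steps of workflow 200 can be sketched as follows (Compile, Enc, Input, and the send primitive are hypothetical stand-ins, and the security parameter 1κ is omitted; only the control flow mirrors steps 202-208):

```python
# Sketch of input party Ii's steps in workflow 200.

def input_party(i, x_i, pk, Compile, Enc, Eval, n, send):
    # Step 202: compile the FHE Eval algorithm (rather than f itself, as
    # in standard MPC) into an MPC protocol, obtaining the next-message
    # functions Pi along with the Input and Output algorithms.
    Pi, Input, Output = Compile(Eval, n)
    # Step 204: encrypt the private input under output party O's public key.
    ct_i = Enc(pk, x_i)
    # Step 206: run Input on the ciphertext to obtain one message per
    # computation party.
    msgs = Input(ct_i)                  # (msg_{i,1}, ..., msg_{i,n})
    # Step 208: send msg_{i,j} to each computation party Cj.
    for j, msg in enumerate(msgs):
        send(j, msg)
```

Because Compile is given Eval rather than ƒ, the computation parties will jointly evaluate the FHE circuit over ciphertexts, never over the plaintext inputs.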
Turning now to workflow 300, at step 302 computation party Ci can execute (Π, Input, Output)←Compile(1κ, Eval, n) where Eval is the Eval algorithm from the FHE scheme. This step is identical to step 202 of workflow 200.
At step 304 computation party Ci can initialize its state sti to null or ⊥.
At step 306 computation party Ci can enter a loop for each message msgin received from an input party or another computation party. Within this loop computation party Ci can execute its next-message function (msg, st′)←Πi(st, ek, msgin), which takes as input state sti, the evaluation key ek received from output party O, and msgin and outputs an output message msg and an output state st′ (step 308). In some embodiments Ci may incorporate ek into its initial state st, in which case ek does not need to be provided as a separate input into its next-message function.
Upon executing the next-message function, computation party Ci can check whether output state st′ indicates halt (i.e., corresponds to a halt signal) (step 310). If the answer is yes, Ci can terminate its processing and workflow 300 can end.
However, if the answer at decision step 310 is no, computation party Ci can update its state sti with st′ (step 312) and interpret output message msg as a vector of output messages (msgout,1, . . . , msgout,n, msgout,O) (step 314). Ci can then send msgout,j to each other computation party Cj (step 316) and send msgout,O to output party O (step 318).
Finally, at step 320 computation party Ci can reach the end of the current loop iteration and return to the top of the loop to process the next incoming message.
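The message loop of workflow 300 can be sketched as follows (the next-message function Pi_i and the send primitives are hypothetical stand-ins; HALT marks the halt signal carried in the output state):

```python
# Sketch of computation party Ci's message loop in workflow 300.

HALT = object()   # sentinel standing in for a halt-indicating state

def computation_party(i, n, ek, Pi_i, inbox, send_to_party, send_to_O):
    st = None                               # step 304: st_i <- null (⊥)
    for msg_in in inbox:                    # step 306: per-message loop
        msg, st_new = Pi_i(st, ek, msg_in)  # step 308: next-message function
        if st_new is HALT:                  # step 310: halt check
            return                          # terminate processing
        st = st_new                         # step 312: update state st_i
        # Step 314: interpret msg as (msg_out,1, ..., msg_out,n, msg_out,O).
        *msgs_out, msg_O = msg
        for j, m in enumerate(msgs_out):    # step 316: to each party Cj
            send_to_party(j, m)
        send_to_O(msg_O)                    # step 318: to output party O
```

Note that ek is threaded through every invocation of the next-message function, reflecting that the compiled protocol is evaluating Eval homomorphically over ciphertexts.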
Turning now to workflow 400, at step 402 output party O can initialize its vector of output values (out1, . . . , outn) to all null or ⊥ values. Then, upon receiving a message msgi from a computation party Ci, O can change outi to msgi and execute intermediate_output←Output(out1, . . . , outn) (step 404).
If the execution of the Output algorithm is successful (or in other words, if the resulting intermediate_output is a valid value) (step 406), output party O can execute output←Dec(sk, intermediate_output), which takes as input the secret key sk and intermediate_output and generates a final output output (step 408). O can then output output as the result of function ƒ on the original private inputs x1, . . . , xN (step 410) and workflow 400 can end.
However, if the answer at decision step 406 is no, output party O can return to step 404 to process the next incoming message.
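Workflow 400 can be sketched as follows (Output and Dec are hypothetical stand-ins matching the interfaces described above, and BOT stands in for ⊥):

```python
# Sketch of output party O's loop in workflow 400. Unlike standard MPC,
# the value reconstructed by Output is a ciphertext that O must decrypt
# with its FHE secret key sk.

BOT = None

def output_party(n, sk, incoming, Output, Dec):
    outs = [BOT] * n                       # step 402: initialize to (⊥,...,⊥)
    for i, msg_i in incoming:              # message msg_i from party Ci
        outs[i] = msg_i                    # step 404: record the message...
        intermediate = Output(outs)        # ...and run the Output algorithm
        if intermediate is not BOT:        # step 406: valid value?
            return Dec(sk, intermediate)   # steps 408-410: decrypt and output
    return BOT                             # otherwise keep waiting (step 404)
```

The final Dec step is what distinguishes this workflow from step (3) of the standard MPC process: Output yields only an encryption of y, so even a fully reconstructed intermediate_output reveals nothing to a party lacking sk.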
Certain embodiments described herein can employ various computer-implemented operations involving data stored in computer systems. For example, these operations can require physical manipulation of physical quantities—usually, though not necessarily, these quantities take the form of electrical or magnetic signals, where they (or representations of them) are capable of being stored, transferred, combined, compared, or otherwise manipulated. Such manipulations are often referred to in terms such as producing, identifying, determining, comparing, etc. Any operations described herein that form part of one or more embodiments can be useful machine operations.
Further, one or more embodiments can relate to a device or an apparatus for performing the foregoing operations. The apparatus can be specially constructed for specific required purposes, or it can be a generic computer system comprising one or more general purpose processors (e.g., Intel or AMD x86 processors) selectively activated or configured by program code stored in the computer system. In particular, various generic computer systems may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. The various embodiments described herein can be practiced with other computer system configurations including handheld devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
Yet further, one or more embodiments can be implemented as one or more computer programs or as one or more computer program modules embodied in one or more non-transitory computer readable storage media. The term non-transitory computer readable storage medium refers to any storage device, based on any existing or subsequently developed technology, that can store data and/or computer programs in a non-transitory state for access by a computer system. Examples of non-transitory computer readable media include a hard drive, network attached storage (NAS), read-only memory, random-access memory, flash-based nonvolatile memory (e.g., a flash memory card or a solid state disk), persistent memory, an NVMe device, a CD (Compact Disc) (e.g., CD-ROM, CD-R, CD-RW, etc.), a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The non-transitory computer readable media can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention(s). In general, structures and functionality presented as separate components in exemplary configurations can be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component can be implemented as separate components.
As used in the description herein and throughout the claims that follow, "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
The above description illustrates various embodiments along with examples of how aspects of particular embodiments may be implemented. These examples and embodiments should not be deemed to be the only embodiments and are presented to illustrate the flexibility and advantages of particular embodiments as defined by the following claims. Other arrangements, embodiments, implementations, and equivalents can be employed without departing from the scope hereof as defined by the claims.