Embodiments of the invention relate generally to improving homomorphic encryption. In particular, embodiments of the invention allow efficient Single-Instruction/Multiple-Data (SIMD) operations in a homomorphic encryption space to be executed interactively across multiple parties using multiple respective computer processors.
Homomorphic encryption provides data security that allows computer program operations to be executed on encrypted data, without revealing its underlying unencrypted data, to yield an encrypted result. Homomorphically encrypted results can be decrypted, only with a secret key, to produce the same result obtained if the operations were performed on the plaintext, unencrypted data. Without the secret key, the underlying unencrypted data remains secret, allowing homomorphic encryption programs to be collaboratively executed interactively among multiple parties, even when one or more of those parties is insecure or untrusted. Homomorphic encryption, however, adds extra layers of computation on top of each operation, which can become computationally cumbersome and significantly decrease processing efficiency.
Single-Instruction/Multiple-Data (SIMD) operations address this problem by increasing efficiency using parallelized operations across multiple data points to improve program speed. Not all homomorphic encryption operations, however, can be executed using SIMD operations. Packed SIMD fully homomorphic encryption (FHE) protocols, such as Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski-Fan-Vercauteren (BFV), for example, typically only allow linear SIMD operations (SIMD additions and SIMD multiplications), while these schemes' native arithmetic has trouble computing even a simple non-linear operation (e.g., a SIMD comparison >, <, or =), which usually requires polynomial approximations.
To compute more non-linear operations in HE space, state-of-the-art homomorphic programs typically either use FHEW/TFHE schemes, which do not support SIMD processing and can further require multiple bootstrapping operations for each comparison, use polynomial approximations of the operations, or use other multi-party computation (MPC) methods for specialized security models, leading to inefficient execution and slow processing speeds.
State-of-the-art systems therefore either operate using SIMD but cannot compute non-linear operations or compute non-linear operations but cannot operate using SIMD. There is therefore a need in the art of encryption to use SIMD for executing non-linear operations under homomorphic encryption to increase computer speed and efficiency.
Embodiments of the invention overcome this need in the art of encryption to provide SIMD programming for executing interactive non-linear operations under homomorphic encryption to increase computer speed and efficiency. Embodiments of the invention take a SIMD FHE program with efficient parallelized execution, but with native capabilities unable to perform non-linear operations (e.g., BGV or BFV), and embed within the SIMD FHE program a garbled circuit able to perform the non-linear operation interactively between a garbler device and an evaluator device. This technique serves to wrap a garbled circuit that performs interactive operations in a SIMD FHE package, exploiting both the efficiency benefits of SIMD and the non-linear operation capabilities of garbled circuits.
In some embodiments, however, embedding a garbled circuit into a SIMD FHE program may significantly reduce processing efficiency. To combine SIMD FHE and garbled circuits efficiently, embodiments of the invention may linearize (e.g., using distributed rounding) data shares input by the garbler and evaluator into the garbled circuit to allow the size of the data shares manipulated by the SIMD FHE operation to be reduced to a compact size (e.g., by modular reduction). The garbler and evaluator each linearizes and reduces the size of its respective data share, in parallel and independently, without access to the other's data share, and thus without inefficient communication therebetween. Once each party's data is linearized and reduced in size, these compact SIMD FHE data shares may be transferred into the multi-party garbled circuit for executing the non-linear operation efficiently under the SIMD FHE protocol.
According to one or more embodiments, there is a multi-party system comprising a garbler device and an evaluator device, a computer implemented method, and a non-transitory computer-readable storage medium storing instructions for execution by one or more processors, each of which performs SIMD operations using garbled circuits under a FHE protocol interactively in the multi-party system. In some embodiments, the system, method and storage medium may provide the dual benefits of increased computational speed and efficiency due to SIMD parallelized operations and broad operational capabilities (allowing non-linear operations, such as comparisons) due to garbled circuit capabilities.
In some embodiments, the garbler device, in communication with an evaluator device, in a system of two or more parties, may store a unique garbler secret key share si, i=0 of a shared secret key s=Σisi, a ciphertext of one or more values encrypted in a SIMD FHE protocol, and a shared public key encrypting the ciphertext (e.g., the shared public key corresponding to and/or decrypted by the shared secret key). The garbler device may partially decrypt the ciphertext using the unique garbler secret key share s0 to generate a unique garbler data share, share0. The garbler device may linearize the unique garbler data share from a non-linear data share (e.g., using distributed rounding) and reduce the linear unique garbler data share (e.g., modulo p, where p is a plaintext modulus of each of the one or more unencrypted values) to be combined with other parties' shares at compact size. In some embodiments, maximal share size reduction (e.g., modulo p) may eliminate noise in the linear share to generate a noiseless (unencrypted) share. The garbler may thus generate and share an encrypted mask with the evaluator to combine both parties' shares under the mask. The garbler device may generate, and send to the evaluator, a garbled circuit defining an operation (e.g., a non-linear operation such as a comparison) on the one or more values (e.g., to each other or to a fixed value), a garbling of the linear unique garbler data share, a garbled mask, and garbled potential wires by which the evaluator is adapted to garble its linear unique evaluator data share by oblivious transfer. The garbler may define the garbled circuit, evaluated by the evaluator device, to execute a SIMD program to combine, in parallel, multiple indices of the linear garbler and evaluator unique data shares to generate an encrypted result of the garbled circuit operation on the one or more encrypted values.
In some embodiments, the evaluator device, in communication with a garbler device, in a system of two or more parties, may store a unique evaluator secret key share si, i=1 of a shared secret key s=Σisi, a ciphertext of one or more values encrypted in a SIMD FHE protocol, and a shared public key encrypting the ciphertext (e.g., the shared public key corresponding to and/or decrypted by the shared secret key). The evaluator device may partially decrypt the ciphertext using the unique evaluator secret key share s1 to generate a unique evaluator data share, share1. The evaluator device may linearize the unique evaluator data share from a non-linear data share (e.g., using distributed rounding) to enable the unique evaluator data share to be combined with other parties' shares at compact size (e.g., modulo p). The evaluator device may receive from the garbler device, a garbled circuit defining an operation on the one or more values, a garbling of the linear unique garbler data share, and garbled potential wires. The evaluator device may garble its linear unique evaluator data share by the garbled potential wires of the garbled circuit using oblivious transfer. The evaluator device may evaluate the garbled circuit using a SIMD execution to combine, in parallel, multiple indices of the linear garbler and evaluator unique data shares to generate an encrypted result of the garbled circuit operation on the one or more encrypted values.
In some embodiments, the multi-party system may include more than two parties. The operation may be iteratively executed pair-wise between pairs of parties in a linked sequence of the more than two parties. In each iteration of the iterative pair-wise execution, the evaluator device from a previous iteration may be reset to be the garbler device in a current iteration and a third party of the more than two parties may be set to be a new evaluator device in the current iteration.
According to one or more embodiments, there is a multi-party system, a computer implemented method, and a non-transitory computer-readable storage medium storing instructions for execution by one or more processors, for performing interactive bootstrapping in a FHE protocol. FHE schemes typically use noisy encryptions to provide security. Each time homomorphic operations are executed on ciphertexts, the noise increases and the ciphertext quality decreases. Bootstrapping may be used to convert an “exhausted” ciphertext (having relatively high and/or above-threshold noise preventing further homomorphic operations to be performed on it) to a substantially equivalent “refreshed” ciphertext (having relatively low and/or below-threshold noise enabling further homomorphic operations to be performed on it). Additionally or alternatively, bootstrapping may be used for other purposes including to turn approximate homomorphic encryption schemes into fully homomorphic encryption schemes, increase the plaintext modulus of a ciphertext, switch between different encryption keys, and/or switch between different FHE schemes.
Each party in the multi-party system may collaborate to interactively bootstrap in a FHE protocol. Each party may store a unique secret key share si of a shared secret key s=Σisi, an initial (e.g., exhausted) FHE ciphertext c of an encoded message (e.g., with insufficient computational depth to perform FHE operations thereon), and a shared public key P encrypting the initial ciphertext. Each of the parties may, in parallel and without communication therebetween: partially decrypt the initial ciphertext using the unique secret key share si to generate a unique data share, sharei; linearize (e.g., by distributed rounding) the unique data share (e.g., from a non-linear data share); eliminate noise in the linear share to generate a noiseless (unencrypted) share ci of compact size by reducing the share size, e.g., taking the linear share modulo p (where p is the unencrypted plaintext modulus, i.e., the modulus of the unencrypted encoded message); and re-encrypt the noiseless share using the shared public key to add noise (e.g., that enhances computational depth, encrypting and increasing the share size by modulus Q, where Q is the modulus of the FHE protocol, p<<Q), producing EncP(ci). Each party may then send its re-encrypted share to a different party, or receive re-encrypted shares from the other of the two or more parties, to bootstrap by linearly combining the re-encrypted shares of the two or more parties to generate an updated (e.g., refreshed) ciphertext c′=ΣiEncP(ci) of the same encoded message (e.g., with sufficient computational depth to perform FHE operations thereon).
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
Some SIMD FHE schemes (e.g., based on RLWE such as BGV or BFV), while efficient, only allow for linear SIMD additions and SIMD multiplications, but are unable to perform more complex non-linear SIMD operations, such as SIMD comparisons, that usually require polynomial approximations or switching to schemes which handle Boolean operations more easily, e.g., FHEW/TFHE schemes.
Garbled circuits, on the other hand, can handle non-linear operations, such as comparisons, e.g., since they work on the bit-level, while operating at faster run times than FHEW/TFHE since they usually only rely on block ciphers like AES and oblivious transfer. However, garbled circuits are typically inefficient for deep computations because, for example, the encoded circuits can become quite large, and/or each circuit conventionally requires the garbler and/or the evaluator to communicate, adding layers of processing to ensure inter-party security and causing significant delays in processing speed.
Embodiments of the invention significantly increase the speed and efficiency of executing FHE programs by integrating garbled circuits into SIMD FHE programs to exploit the dual benefits of garbled circuits' non-linear computing capabilities and SIMD FHE programs' efficient parallelized program execution. Systems and methods are provided for performing SIMD operations in homomorphic encryption space using garbled circuits interactively between two parties—a garbler device and an evaluator device. Embodiments of the invention may increase FHE program speed proportionally to the size of the multiple datablocks on which SIMD operations are parallelized. For example, an interactive SIMD FHE program that parallelizes operations on N indices i=1, . . . , N generates an N-times speed-up in program processing, e.g., per message.
Interactive computations between the two parties involve a server splitting a shared secret key, s=s0+s1, into two separate secret key shares, s0 and s1, and transmitting to each of the parties: (i) a ciphertext encrypted using an associated publicly available homomorphic encryption key pk; (ii) a unique share of the split secret key shares si; and (iii) the public encryption key pk. The garbler device is the only interactive party with possession of its unique garbler secret key share s0, the evaluator device is the only interactive party with possession of its unique evaluator secret key share s1, and both parties have possession of the shared public key pk. The ciphertext encrypts one or more values or messages (m and/or α) for a non-linear operation (e.g., comparison m>α, or equivalently the sign of the difference m−α). Each party computes a partial decryption of the ciphertext using its respective unique secret key share s0 or s1 to generate a unique data share sharei=i·b+a·si, respectively.
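The key split and partial decryption described above can be sketched in Python on a toy, scalar (N=1) instance. All parameters below (q, p, the noise range, and the BGV-style encryption relation a·s+b=m+p·e mod q) are illustrative assumptions, not the protocol's actual values:

```python
import random

def balance(x, q):
    """Reduce x mod q to the balanced representative in [-q/2, q/2)."""
    x %= q
    return x - q if x >= q // 2 else x

# Toy parameters (illustrative only; real FHE uses large polynomial rings).
q = 1 << 16            # ciphertext modulus
p = 17                 # plaintext modulus
m = 5                  # message, 0 <= m < p

random.seed(0)

# Server-side key split: s = s0 + s1 (mod q).
s = random.randrange(q)
s0 = random.randrange(q)
s1 = (s - s0) % q

# Encrypt so that a*s + b = m + p*e (mod q), with small noise e.
e = random.randrange(-10, 11)
a = random.randrange(q)
b = (m + p * e - a * s) % q

# Each party partially decrypts: share_i = i*b + a*s_i (mod q).
share0 = (0 * b + a * s0) % q
share1 = (1 * b + a * s1) % q

# Neither share alone reveals m; their sum reconstructs the noisy encoding.
assert balance(share0 + share1, q) == m + p * e
```

Note that each share individually is statistically close to uniform modulo q; only the sum of the two shares reconstructs the noisy encoded message m+p·e.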
For increased efficiency, initially and prior to executing the garbling circuit, each party may linearize its partially decrypted data share to convert the data share from a non-linear polynomial to linear data (e.g., by distributed rounding). After linearization, each party significantly reduces the size of its data share to a relatively more compact data size (e.g., reduced from mod q to mod p, where p is the plaintext modulus of the one or more unencrypted values and q is the modulus of the FHE scheme, q>>p, so that reducing the share size from mod q to mod p decreases the computational load and increases efficiency). Linearization and size reduction allow the data shares to be combined, later by the garbled circuit, efficiently at compact size. The parties linearize their respective data shares independently, in parallel, without communication therebetween. Avoiding inter-party communication to linearize data shares independently eliminates extra computational layers for secure interaction, including re-encryption and transmission, significantly reducing computational time and complexity. Once linearized and size reduced, the compact linear data shares may be efficiently combined (e.g., under the security of a mask) using SIMD computations by the garbled circuit. The garbled circuit is programmed to execute the non-linear operation on the message values (e.g., comparison m>α). Oblivious transfer in the garbled circuit allows the garbler and evaluator to combine their compact linear data shares without either party knowing what data the other party contributes. The evaluator evaluates the garbled circuit to execute the non-linear operation on the combined garbler and evaluator inputs. The evaluator then outputs a ciphertext encrypting the operation result c of the SIMD execution of the non-linear operation under homomorphic encryption.
Inter-party communication and combination may be executed under the security of a mask. The garbler may generate the mask and embed it in the garbled circuit that combines the two parties' unique data shares under the mask (e.g., see EQN 1). Once the circuit is complete, the evaluator device may unmask the encrypted result to generate the output ciphertext encrypted under the shared secret key.
An example SIMD program for executing interactive operations under homomorphic encryption is shown in Table 1. In Table 1, the left-hand side of the table shows operations executed by a first party server 0, S0 (e.g., the garbler device using computer processor 250-1 of
In the example of Table 1, BGV encryption is used, although other SIMD FHE protocols may be used such as BFV. For example, Table 1 can be adapted to support BFV encryption by switching a BFV ciphertext input to a BGV ciphertext homomorphically with standard techniques (e.g., scalar multiplications). Comments regarding correctness are provided and interactions are described by arrows in the middle of the table.
The garbler and evaluator may interactively execute the example SIMD program of Table 1 to compute an interactive functional bootstrapping of a comparison m>α, e.g., for some fixed value, e.g., scalar α∈Zp. The comparison operation is provided only as an example, and any other non-linear operation may be used. Table 1 describes the protocol version where S0 knows α. The protocol may be adapted to the case where only S1 knows the fixed value α (by having α subtracted from S1's share), where both S0 and S1 know the fixed value α (by having α encrypted by a third party), or where neither party knows the fixed value α (by having α subtracted homomorphically, with it encrypted as an input).
Each party S0 and S1 may store a unique secret key share si of a shared secret key s=Σisi, a ciphertext, ct=(a, b)∈Rq2, encrypting a message of one or more values under a shared secret key, s=s0+s1∈R (e.g., with small norm) in a SIMD FHE protocol, and a shared public key encrypting the ciphertext (e.g., the shared public key corresponding to/decrypted by the shared secret key). The next step can be for each party S0 and S1 to compute independently, without communication therebetween, a partial decryption of the ciphertext using its respective secret key share s0 or s1 to generate a data share share0 or share1, respectively. Each party S0 and S1 may then perform linearization (e.g., by distributed rounding) on its message share, respectively (line 1 of Table 1), independently, without communication therebetween. For example, party i computes sharei=i·b+a·si and then [sharei]dist, where [z]dist is the linearization operation (e.g., distributed rounding described herein). Linearization allows the parties to reduce the share size (e.g., mod p, where p is the plaintext modulus of each of the one or more unencrypted values) to be able to later add the shares in the garbled circuit over the integers at compact size (e.g., without taking the result modulo q, where q is the modulus of the FHE protocol, p<<q). Each party can independently, without communication therebetween, reduce its share modulo p (line 2 of Table 1) and can perform the modulo-p discrete Fourier transform (DFT) on its modulo-p share (line 3 of Table 1). At this point, the shares can be added modulo p to get the unpacked plaintext message.
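The modulo-p DFT of line 3, which unpacks (and, inverted, packs) the plaintext slots, can be illustrated with a naive number-theoretic transform. The parameters below (p=17, k=4 slots, and root of unity ω=4, which has multiplicative order 4 mod 17) are illustrative assumptions, not the protocol's actual parameters:

```python
# Naive modulo-p DFT over k slots, using a primitive k-th root of unity w.
p, k, w = 17, 4, 4

def dft(v):
    """Forward DFT mod p: V_j = sum_i v_i * w^(i*j) (mod p)."""
    return [sum(v[i] * pow(w, i * j, p) for i in range(k)) % p for j in range(k)]

def idft(V):
    """Inverse DFT mod p: v_i = k^(-1) * sum_j V_j * w^(-i*j) (mod p)."""
    w_inv = pow(w, p - 2, p)        # modular inverse of w (p is prime)
    k_inv = pow(k, p - 2, p)        # modular inverse of k
    return [k_inv * sum(V[j] * pow(w_inv, i * j, p) for j in range(k)) % p
            for i in range(k)]

v = [3, 0, 16, 7]
assert idft(dft(v)) == v   # packing then unpacking recovers the slots
```

The inverse DFT here corresponds to the packing step the garbler uses in line 6 of Table 1, and the forward DFT to the unpacking in line 3.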
The garbled circuit computation begins in line 4 of Table 1, where S0 is the garbler with inputs x∈Zpk and mask∈{0,1}k, where k is the number of plaintext slots, and S1 is the evaluator with input y∈Zpk. The garbled circuit may be performed interactively under the security of a mask encryption. The garbler, S0, generates a mask (lines 4-6 of Table 1), e.g., by first generating a binary mask randomly, in a cryptographically-strong manner (e.g., using a NIST-approved 128-bit secure PRNG) (line 4 of Table 1), converting it to a signed mask (encoding it in {±1}) (line 5 of Table 1), then creating a packed FHE plaintext by calling the inverse DFT modulo p on the resulting vector (line 6 of Table 1). Line 7 of Table 1 shows the garbler then encrypting this plaintext under the shared public key that corresponds to the shared secret key, s=s0+s1. Lines 8 and 9 of Table 1 show the garbler garbling the circuit, where the pairs of garbled potential wires encode the evaluator's input y∈Zpk, {tilde over (x)} represents the garbler's input, and C is a Boolean circuit which performs the following in parallel for each (xi, maski, yi)i=1, . . . ,k:
z″i←[(xi+yi mod p)>0]⊕maski∈{0,1}.  EQN. 1
(Table 1 abbreviates the vector of comparison bits ([(xi+yi mod p)>0])i∈{0,1}k as comp; tildes indicate garbled data.)
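The plaintext functionality of the Boolean circuit C in EQN. 1 can be sketched as follows; in the protocol itself, this function is of course evaluated in garbled form on garbled inputs, never in the clear:

```python
def circuit_C(x, y, mask, p):
    """Plaintext reference of EQN. 1, applied to all k slots in parallel:
    z''_i = [(x_i + y_i mod p) > 0] XOR mask_i."""
    return [int((xi + yi) % p > 0) ^ mi for xi, yi, mi in zip(x, y, mask)]

# Tiny illustration with p = 17 and k = 3 slots:
z2 = circuit_C([3, 0, 16], [14, 0, 2], [1, 0, 1], 17)
# z2 == [1, 0, 0]
```

Because each slot i is processed independently, a SIMD execution combining all k slots in one pass yields the parallel speed-up described above.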
Next, the garbler engages the evaluator in an oblivious transfer (OT) protocol (lines 10-12 of Table 1). The garbler sends over its garbled inputs to the evaluator—the garbled linear reduced data share {tilde over (x)}, the garbled mask, the garbled circuit {tilde over (C)}, and/or the encryption of the FHE-encoded mask c″ (line 10 of Table 1). The garbler provides the evaluator with pairs of garbled potential wires for the evaluator to garble its linear unique evaluator data share {tilde over (y)} by oblivious transfer (OT), without revealing to the garbler which element was queried (the value of the pre-garbled linear unique evaluator data share y) (lines 11-12 of Table 1).
The evaluator, S1, then evaluates the garbled circuit, operating on the garbled masked linear unique garbler and evaluator data shares {tilde over (x)} and {tilde over (y)}, to get an evaluation output that is a masked encryption of the combination of the linear garbler and evaluator unique data shares (e.g., vector z″∈{0,1}k in EQN. 1) (lines 13-14 of Table 1). The evaluator uses SIMD processing to combine, in parallel, multiple indices i=1, . . . , k of the linear garbler and evaluator unique data shares ((xi,maski,yi)i=1, . . . ,k) to efficiently generate the encrypted result z″ of the garbled circuit (EQN. 1). The evaluator may then switch from a binary mask {0,1}k to encode this vector in a signed mask {±1}k by executing z′i←(−1)^(z″i⊕1), in parallel, for each index i=1, . . . , k (line 15 of Table 1). Next, the evaluator may encode the previous vector, z′, in a FHE plaintext with the inverse DFT modulo p (line 16 of Table 1) and perform a plaintext-ciphertext multiplication with the result and the encrypted mask c″ (e.g., and re-randomize the ciphertext with an encryption of zero for security) (lines 17-18 of Table 1). Next, the evaluator performs an affine transformation to map the unmasked result from {±1}k to {0,1}k in the slots of the FHE ciphertext (lines 19-20 of Table 1). Finally, the evaluator sends a ciphertext c encrypting, under homomorphic encryption, the result of executing the SIMD program for executing interactive operations in the garbled circuit (line 21 of Table 1).
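The evaluator's post-processing of lines 15-20 can be sketched on unencrypted data. This is a simplifying assumption for illustration: in the protocol, the multiplication by the encrypted signed mask and the final affine map are performed homomorphically on the ciphertext, and the evaluator never sees the mask in the clear:

```python
def unmask(z2, mask):
    """Unencrypted sketch of the evaluator's post-processing (lines 15-20):
    recover the comparison bits from the masked circuit output z''."""
    z1 = [(-1) ** (b ^ 1) for b in z2]         # binary {0,1} -> signed {+-1} (line 15)
    smask = [(-1) ** (m ^ 1) for m in mask]    # the garbler's signed mask (line 5)
    prod = [a * b for a, b in zip(z1, smask)]  # mask removal; done under FHE (lines 17-18)
    return [(1 - t) // 2 for t in prod]        # affine map {+-1} -> {0,1} (lines 19-20)

# The mask cancels: unmask(comp XOR mask, mask) == comp for any mask.
mask = [1, 0, 1]
comp = [1, 0, 0]
z2 = [c ^ m for c, m in zip(comp, mask)]
assert unmask(z2, mask) == comp
```

The identity behind the sketch: with signed(b)=(−1)^(b⊕1)=2b−1, one has signed(comp⊕mask)·signed(mask)=1−2·comp for either mask bit, so the affine map t↦(1−t)/2 recovers comp.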
In some embodiments, the garbler device and the evaluator device each linearizes its respective share by distributed rounding. An example of distributed rounding may be performed as follows. An input to the distributed rounding algorithm, denoted as DistRound, may be a (e.g., BGV or BFV) ciphertext (a, b)∈Rq2, where, for example, each a and b are integer vectors of dimension N, N is a power of two, representing a polynomial in Rq=Zq[X]/(XN+1), where Zq is the set of integers modulo q. Further, the integers modulo q can be represented with balanced residues, e.g.,
if q is odd. Given these two polynomials, the following can be computed on each coefficient c∈Zq in parallel: a fixed value (e.g., q/2) is added if the absolute value of the coefficient of the partial decryption of the ciphertext is greater than a predefined value (e.g., |c|>q/4), and the (possibly adjusted) coefficient c is returned.
The significance of DistRound(·) is when |as+b mod q|«q, since it can allow for additive two-party FHE decryption over the integers. Distributed rounding may be performed as disclosed in U.S. patent application Ser. No. 17/964,335 filed Oct. 12, 202, published as U.S. Patent Application Publication No. 2023/0112840 on Apr. 13, 2023, the disclosure of which is incorporated herein by reference in its entirety.
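A hypothetical sketch of the coefficient-wise rounding rule described above, applied to toy scalar (N=1) shares, illustrates how it enables additive two-party decryption over the integers. The parameters and the band-exclusion check are illustrative assumptions (the protocol assumes shares avoid the band q/4±β, as noted in Example 1 below):

```python
import random

def balance(x, q):
    """Balanced representative of x mod q in [-q/2, q/2)."""
    x %= q
    return x - q if x >= q // 2 else x

def dist_round(share, q):
    """Coefficient-wise rule described above: add q/2 (keeping the balanced
    representation) whenever the share's absolute value exceeds q/4."""
    if abs(share) > q // 4:
        share = balance(share + q // 2, q)
    return share

# Toy demonstration on scalar (N = 1) shares.
q, p = 1 << 16, 17
c = 5 + p * 7          # noisy encoded message with |c| << q
random.seed(1)
for _ in range(1000):
    share0 = balance(random.randrange(q), q)
    share1 = balance(c - share0, q)       # shares sum to c modulo q
    # Skip shares in the excluded band q/4 +- |c| (the protocol avoids them).
    if any(abs(abs(sh) - q // 4) <= abs(c) for sh in (share0, share1)):
        continue
    r0, r1 = dist_round(share0, q), dist_round(share1, q)
    # After rounding, the shares add to c over the *integers* (no mod-q wrap),
    # so each party can reduce its share mod p and the sum still gives c mod p.
    assert r0 + r1 == c and (r0 + r1) % p == c % p
```

Without the rounding step, the shares' sum over the integers could differ from c by a multiple of q, which would corrupt the subsequent modulo-p reduction.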
Operations, equations and symbols are shown as examples, and other implementations of these as well as protocols, programs, and/or steps (or orders thereof) may also be used.
Bootstrapping can be an important operation in homomorphic encryption schemes and/or when performing operations on homomorphically encrypted data. Embodiments of the invention may use bootstrapping to turn approximate homomorphic encryption schemes into fully homomorphic encryption schemes, reduce the noise of the encryption, increase the plaintext modulus of a ciphertext, switch between different encryption keys, switch between different FHE schemes, and/or other purposes. Switching between different FHE schemes may require some additional operations to consider different encodings used by BGV, BFV, Cheon-Kim-Kim-Song (CKKS) and other schemes. Typically, bootstrapping is computationally intensive. As such, it can be desirable to perform computationally efficient bootstrapping in homomorphic encryption.
Embodiments of the invention provide interactive multi-party bootstrapping in a FHE protocol. In a system with multiple (N) parties, each party may store a unique secret key share si of a shared secret key s=Σisi, an exhausted FHE ciphertext c of an encoded message with insufficient computational depth to perform FHE operations thereon, and a shared public key encrypting the ciphertext. The party may partially decrypt the ciphertext using the unique secret key share si to generate a unique data share, sharei, linearize the unique data share to enable shares to be combined at compact size, eliminate noise in the linear share to generate a noiseless unencrypted share ci of compact size, and re-encrypt the noiseless share ci using the shared public key to add noise (e.g., that enhances computational depth), yielding EncP(ci). The party may then send the re-encrypted share to a different party, or receive re-encrypted shares from others of the two or more parties, to bootstrap by linearly combining the re-encrypted shares of the two or more parties, which generates a refreshed ciphertext c′=ΣiEncP(ci) of the encoded message with sufficient computational depth to perform FHE operations thereon.
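The bootstrapping flow above can be sketched end to end on a toy scalar instance. As a simplifying assumption, a BGV-style symmetric toy encryption under the full key s stands in for the public-key encryption EncP (a real protocol uses the shared public key precisely so that no single party needs s); all moduli and noise bounds are illustrative:

```python
import random

def balance(x, q):
    """Balanced representative of x mod q in [-q/2, q/2)."""
    x %= q
    return x - q if x >= q // 2 else x

def dist_round(share, q):
    # Linearization (distributed rounding), applied by each party independently.
    if abs(share) > q // 4:
        share = balance(share + q // 2, q)
    return share

random.seed(7)
q, Q, p = 1 << 16, 1 << 48, 17   # exhausted modulus q << refreshed modulus Q
m, e = 11, 3                     # message and small noise
s = random.randrange(q)          # shared secret key s = s0 + s1 (mod q)
beta = 100                       # noise bound; shares near q/4 are resampled

while True:
    s0 = random.randrange(q)
    s1 = (s - s0) % q
    a = random.randrange(q)
    b = (m + p * e - a * s) % q  # exhausted ciphertext: a*s + b = m + p*e (mod q)
    shares = [balance((i * b + a * si) % q, q) for i, si in ((0, s0), (1, s1))]
    if all(abs(abs(sh) - q // 4) > beta for sh in shares):
        break

# Each party independently linearizes and reduces its share modulo p.
c_i = [dist_round(sh, q) % p for sh in shares]

# Toy stand-in for EncP: BGV-style symmetric encryption under modulus Q.
def enc_P(msg):
    a_new = random.randrange(Q)
    return (a_new, (msg + p * random.randrange(-10, 11) - a_new * s) % Q)

# Bootstrapping: linearly combine the re-encrypted shares.
cts = [enc_P(ci) for ci in c_i]
a_ref = sum(ct[0] for ct in cts) % Q
b_ref = sum(ct[1] for ct in cts) % Q

# The refreshed ciphertext decrypts to the same message m, now modulo Q.
assert balance((a_ref * s + b_ref) % Q, Q) % p == m
```

The final ciphertext (a_ref, b_ref) carries only the fresh (small) noise added by enc_P, illustrating how the exhausted noise is discarded by the modulo-p reduction of the shares.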
In one example, a client (a first party S0) sends encrypted data in a message to a server (a second party S1) to run operations, for example, a machine learning (ML) program. The server outputs an encrypted result of the ML program as an exhausted ciphertext (a,b). Embodiments of the invention bootstrap the exhausted ciphertext (a,b) to generate a refreshed ciphertext (a′,b′). Some embodiments of the invention perform rerandomization to obfuscate the refreshed ciphertext (a′,b′), e.g., with an encryption of zero (0) or other mask or noise. Rerandomization provides a double-blind environment in which neither the client (providing the data) nor the server (providing the ML program operating on the data) has access to the other's unencrypted information. For example, the client may submit its encrypted data to the server, which cannot access the unencrypted data because it does not have the client's secret key. The server then applies its ML program to the ciphertext, bootstraps to refresh the ciphertext, and rerandomizes to obfuscate the refreshed ciphertext, ensuring the client cannot learn the server's ML program.
In some embodiments, the invention can involve an interactive method for bootstrapping ciphertexts encrypted using an FHE protocol, such as, BGV, BFV or CKKS, a homomorphic encryption scheme for approximate number arithmetic. In some embodiments, the invention can involve determining additive shares of a secret key to input to two or more semi-honest computer processors. The two or more semi-honest computer processors, having each received as input an additive share of the secret key, can each receive input of a common ciphertext and use its share of the secret key to partially decrypt the common ciphertext; combining the partial decryptions results in the same (e.g., approximately the same) plaintext message underlying the ciphertext. The two or more semi-honest computer processors can compute encryption of the same (e.g., approximately the same) plaintext message underlying the ciphertext, but with a modulus larger than the ciphertext modulus. This may have the effect of increasing a plaintext space and/or reducing a relative noise.
Typically, in homomorphic encryption, the set of integers modulo q is denoted as Zq, represented as integers in the range, e.g., {−q/2, . . . , q/2}. For any power of two N=2k (k being an integer), R=Z[X]/(XN+1) may denote the corresponding cyclotomic ring, and Rq may denote the quotient ring with coefficients reduced modulo q. The absolute value of a ring element |c| may be defined as the magnitude of the largest coefficient, e.g., the norm |{right arrow over (c)}|∞ of the corresponding vector. For example, a ciphertext with modulus q may include a pair of cyclotomic ring elements (a, b)∈Rq2. A decryption key may be a ring element s∈R with small (e.g., less than 64 bits) coefficients. Small keys may be useful to, for example, perform rounded division, sign evaluation and/or comparison operations. In some embodiments, the decryption key is any s∈Rq with possibly large (e.g., greater than 64 bits) entries.
Typically, in homomorphic encryption, upon input of a ciphertext (a, b) and a decryption key s, a BGV, BFV or CKKS approximate decryption function can be performed (e.g., by a computing device 100A as shown below in
Embodiments of the invention may use ciphertexts with different moduli q<Q. The encryption and decryption may also include a scaling factor Δ used to represent fixed point numbers. The first step (and typically not the remaining steps) of a decryption procedure for the encryption can depend on the modulus q. In the first step of the decryption, c can be recovered. Once c is recovered, then the remaining steps of the decryption procedure can be independent of the modulus q and/or the secret key s, which may allow the steps to be done in parallel.
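The two-phase decryption structure can be sketched as follows (a scalar toy; the BGV-style second step, reduction mod p, is an assumption for illustration, since CKKS would instead divide by the scaling factor Δ):

```python
def balance(x, q):
    """Balanced representative of x mod q in [-q/2, q/2)."""
    x %= q
    return x - q if x >= q // 2 else x

def decrypt_step1(a, b, s, q):
    """First, modulus-dependent step: recover the encoded value c = a*s + b (mod q)."""
    return balance((a * s + b) % q, q)

def decode(c, p):
    """Remaining step, independent of the modulus q and of the secret key s:
    BGV-style decoding recovers the message as c mod p."""
    return c % p

# Toy check: encrypt m = 9 with p = 17, q = 2**16, then decrypt in two steps.
q, p, m, s = 1 << 16, 17, 9, 12345
a = 54321
b = (m + p * 4 - a * s) % q          # noise e = 4
c = decrypt_step1(a, b, s, q)
assert decode(c, p) == m
```

Because decode depends only on c, once each party has its (partially decrypted) contribution to c, the remaining decoding work can proceed in parallel, as noted above.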
Embodiments of the invention may provide a system for distributing bootstrapping in homomorphic encryption schemes.
In an example FHE protocol, such as, BGV, BFV or CKKS, let s∈R be a secret key modulo q and P a public key modulo Q (here P may correspond to the same secret key s, or a different one (under key switching e.g., PRE): this makes no difference for the example or the applicability of embodiments of the invention).
Embodiments of the invention may solve the following example bootstrapping problem: Given an exhausted ciphertext (a, b)∈Rq2 which encrypts a message c=Decs(a,b)=as+b (with |c|<β), how to obtain a refreshed encryption EncP(c) of the same (encoded) message c under P. As will be known to those skilled in the art, β may represent a noise parameter typically fixed based on security standards.
Embodiments of the invention may solve the above example bootstrapping problem using a method (e.g., described in reference to
s=s0+s1 (mod q) EQN. 3
With reference to the method of
The servers may also receive the public key P modulo Q, and receive the input ciphertext (a, b), from a client C (in this example, server 210 of
The client C may compute a homomorphic sum according to EQN. 4 below:
EncP(c0)+EncP(c1)=EncP(c0+c1) EQN. 4
Here, EncP may be a BGV, BFV or CKKS encryption using a larger modulus Q, or any other linearly homomorphic public key encryption scheme, as the computation performed by the servers uses EncP as a black box.
Example 1: Protocol For BGV, BFV or CKKS Bootstrapping
Let i∈{0, 1} be the index of server Si. Each server may perform the following operations:
In this example, each coefficient of ci may satisfy |ci|∉(q/4±β), where |c|<β by assumption. When q is larger than β by approximately 40 bits, the protocol may reduce the total noise of the bootstrapping. For typical implementations of BGV, BFV or CKKS, which typically perform all operations in the residue number system (RNS), this relationship between q and β suggests an extra RNS modulus can be added to the modulus, typically resulting in two RNS limbs in the ciphertext before calling an interactive bootstrapping procedure such as described herein, as compared to a single RNS limb in the case of noninteractive BGV, BFV or CKKS bootstrapping.
Example 2: Protocol Based On Threshold FHE
In this example, it is assumed that there is an existing protocol (e.g., the method of
To perform interactive bootstrapping, the following protocol may be executed:
Example 3: Protocol Based on Threshold FHE with Rerandomization
This example is a modification of Example 2, where a ciphertext rerandomization may be added. One option is to do the rerandomization of the input ciphertext before starting the main bootstrapping protocol. The modified protocol may be executed as follows:
Another option is to rerandomize interactively using the secret shares of each party, similar to the protocol for generating the joint public key in threshold FHE.
Reference is made to
Multi-party system 200 may include a server 210 and two or more interactive parties, each operating a distinct one of a plurality of n computer processors 250-1, 250-2, . . . , 250-n, where n is an integer, for example an integer greater than or equal to two. Each individual processor 250-i may be one or multiple processors or processor cores. Server 210 and party devices may each be a computing device 100A as described in
For simplicity, each ith party is described to operate an ith processor 250-i, but in general, any combination of the plurality of n computer processors may be part of, located on, and/or operably connected to server 210 or a party device. For example, server 210 may include m computer processors 250-1, . . . , 250-m, a first party device may include p computer processors 250-m+1, . . . , 250-m+p, a second party device may include q computer processors 250-m+p+1, . . . , 250-m+p+q, and so on (e.g., m, p, and q are any integers). For simplicity, the number of parties, processors and keys are numbered n, but their numbers may be different.
In some embodiments, server 210 is configured to split a shared secret key, s=Σisi into two or more separate secret key shares, si, for the two or more respective parties and/or processors. The number of shares may be chosen based on the number of parties, the size of the ciphertext, the size of the decryption key, a number of available computer processors, a number of available computer processors trusted to perform operations with the shares securely, a desired processing time, a desired level of encryption noise, or any combination thereof. In some embodiments, the number of secret key shares may be determined by a number of parties, for example, how many banks are collaborating in situations where they are sharing data to build models of financial crime.
As shown in
s=s1+s2+ . . . +sn (mod q) EQN. 5
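EQN. 5 amounts to coefficient-wise additive secret sharing modulo q; a minimal sketch, with hypothetical helper names, might look like:

```python
import random

def split_key(s_coeffs, n, q):
    """Additively share each coefficient of s into n shares modulo q (EQN. 5):
    the first n-1 shares are uniformly random, the last is s minus their sum."""
    shares = [[random.randrange(q) for _ in s_coeffs] for _ in range(n - 1)]
    last = [(c - sum(col)) % q for c, col in zip(s_coeffs, zip(*shares))]
    return shares + [last]

q = 65537
s = [5, (-3) % q, 11]                    # toy "key" coefficients
key_shares = split_key(s, 4, q)          # one share per party
recombined = [sum(col) % q for col in zip(*key_shares)]
assert recombined == s                   # s = s1 + s2 + ... + sn (mod q)
```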
Server 210 may be configured to transmit to each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n a ciphertext 230 denoted (a, b). In some embodiments, ciphertext 230 includes a pair of ring elements from a cyclotomic ring. In some embodiments, ciphertext 230 is encrypted using homomorphic encryption, for example, ciphertext 230 may be a BGV, BFV, or CKKS ciphertext encrypted using the BGV, BFV, or CKKS approximate homomorphic encryption scheme.
Server 210 may be configured to transmit to each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n a unique share of the plurality of n shares of the decryption key. For example, server 210 may transmit share 225-1 of decryption key 220 to first party or computer processor 250-1, transmit share 225-2 of decryption key 220 to second party or computer processor 250-2 etc. and transmit share 225-n of decryption key 220 to nth party or computer processor 250-n. In some embodiments, the server transmits a unique share of the plurality of n shares of the decryption key to more than one party or computer processor. For example, server 210 may transmit the same share 225-1 to both computer processor 250-1 and computer processor 250-2 or first and second party. This may allow for a redundancy in embodiments of the invention and/or allow for comparison of received results among parties or computer processors receiving the same input information, for example in order to ensure the parties or processors are not deviating from the operations required by embodiments of the invention. In some embodiments, different parties or processors may not share unencrypted data, secret data or keys, e.g., for increased security in an untrusted or semi-trusted environment.
Server 210 may be configured to transmit an indication of a publicly available encryption key 240 denoted P to the n parties and/or n computer processors. The indication of the encryption key 240 may include the encryption key P itself, and/or an indication of where encryption key 240 may be accessed by the n parties and/or n computer processors, such as a file location and/or IP address. In some embodiments, an encryption key may have previously been distributed using any communication method known in the art, for example delivery by post of a USB drive containing the encryption key, which may then be inserted into a computing device containing one or more computer processors so that the encryption key is accessible by the one or more computer processors for use in accordance with embodiments of the invention. In some embodiments, encryption key 240 (e.g., P) corresponds to s the decryption key 220. For example, s and P may be respective decryption and encryption keys for the same encryption, with s used in undoing (e.g., decrypting) the encryption of P. In other embodiments, encryption key 240 (e.g., P) does not correspond to s the decryption key 220.
Server 210 may be configured to transmit each of (i) the ciphertext 230; (ii) a unique share si 225-i of the plurality of n shares of the shared secret (decryption) key s=Σisi 220; and/or (iii) the indication of a publicly available encryption key 240, to each of the plurality of n parties and/or n computer processors substantially in parallel, for example, approximately simultaneously, concurrently, or within a bounded time period of one another such as 5 seconds or less. Each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n, may receive and store in one or more associated memories the data transmitted thereto.
In some embodiments, each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n, are configured to partially decrypt ciphertext 230 using the party's unique secret key share 225-1, 225-2, . . . , 225-n to generate the party's unique data share, sharei, 260-1, 260-2, . . . , 260-n. For example: first party or computer processor 250-1 may calculate a partial decryption 260-1 of ciphertext 230 using share 225-1 of decryption key 220 (for example evaluating c1=Decs1(a, b)); similarly, each ith party or computer processor 250-i may calculate a partial decryption 260-i of ciphertext 230 using its share 225-i of decryption key 220 (for example evaluating ci=Decsi(a, b)).
In some embodiments, each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n, are configured to linearize the unique data share (e.g., from a non-linear data share) to enable the party's unique data share to be combined with other parties' unique data shares at compact size. Linearization may include rounding the decryption of the ciphertext if an absolute value of the decryption of the ciphertext is greater than a predefined value. For example, computer processor 250-1 may check if an absolute value of decryption value 260-1 is greater than a predefined value, for example if |c1|>q/4. If so, first party and/or computer processor 250-1 may round decryption value 260-1 by performing an operation as shown below in EQN. 6:
c1←c1+(q/2) (mod q) EQN. 6
The operation may be performed coordinate-wise (e.g., in parallel using a SIMD program), on each coefficient of c1 independently. Similarly, second party and/or computer processor 250-2 may check if |c2|>q/4 and, if so, may perform an operation as shown below in EQN. 7:
c2←c2+(q/2) (mod q) EQN. 7
Such checks and rounding may be performed for each party and/or computer processor such that computer processor 250-n may check if |cn|>q/4 and, if so, may perform an operation as shown below in EQN. 8:
cn←cn+(q/2) (mod q) EQN. 8
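The effect of the rounding checks in EQNs. 6-8 can be illustrated with a two-party toy example: when both centered shares lie near ±q/2, their integer sum wraps by q, and adding q/2 to each large share removes the wrap (all parameters and helper names are illustrative only):

```python
def centered(x, q):
    """Representative of x in the centered range (-q/2, q/2]."""
    r = x % q
    return r - q if r > q // 2 else r

def round_share(ci, q):
    """EQNs. 6-8: if |ci| > q/4, replace ci with ci + q/2 (mod q)."""
    if abs(centered(ci, q)) > q // 4:
        ci = (ci + q // 2) % q
    return ci

q = 2 ** 16
c = 37                          # small combined value, |c| << q
c1 = 32769                      # chosen so both centered shares sit near -q/2
c2 = (c - c1) % q               # c1 + c2 = c (mod q)

# Without rounding, the centered shares wrap: their integer sum is c - q.
assert centered(c1, q) + centered(c2, q) == c - q

r1, r2 = round_share(c1, q), round_share(c2, q)
# After the EQN. 6/7 rounding, the shares sum to c over the integers,
# so later per-share reductions (e.g., modulo p) no longer see a wrap.
assert centered(r1, q) + centered(r2, q) == c
```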
In some embodiments, each of the plurality of n parties and/or n computer processors are configured to take a reduced modulus of its linear (rounded) share ci. In some embodiments, taking a reduced modulus may reduce the size of the data shares to a compact size to increase processing efficiency. In some embodiments, taking a reduced modulus may eliminate noise in the linear share to generate a noiseless unencrypted share ci of compact size. The modulus of the linear shares may be reduced (e.g., from modulo q to modulo p, where p is the plaintext modulus of each of the one or more unencrypted values and q is the modulus of the FHE scheme, q>>p). The modulus may be the same or different for each of the plurality of n computer processors, for example first computer processor 250-1 may use a modulus q1, second computer processor 250-2 may use a modulus q2, etc. and nth computer processor 250-n may use a modulus qn. The moduli q1, q2, . . . , qn may be used by computer processors 250-1, 250-2, . . . , 250-n during a check of the decrypted values 260-1, 260-2, . . . , 260-n and any required rounding operations as a result of the check (e.g., described in EQNs. 6-8).
After each of the plurality of n parties and/or n computer processors generates a linear unique garbled data share at reduced size, they are prepared to collaborate efficiently under interactive multi-party FHE schemes according to various embodiments (e.g., described in reference to
According to some embodiments of the invention (e.g., described in reference to
In some embodiments, the garbler party or processor 250-1 may generate, and send to the evaluator party or processor 250-2, a garbled circuit defining an operation (e.g., a non-linear operation such as a comparison) on one or more values (e.g., to each other or to a fixed value) encrypted in the ciphertext 230, a garbling of the garbler's linear unique data share, and garbled potential wires by which the evaluator party or processor 250-2 garbles its own linear unique data share by oblivious transfer. The evaluator party or processor 250-2 may evaluate the garbled circuit to execute a SIMD program to combine, in parallel, multiple indices of the linear garbler and evaluator unique data shares to generate an encrypted result of the garbled circuit operation on the one or more encrypted values (e.g., according to EQN. 1). The evaluator party or processor 250-2 may transmit the encrypted result to the server 210 or another third party that stores the shared secret key s=Σisi to decrypt the result (though decryption is not required).
According to some embodiments of the invention (e.g., described in reference to FIG.
In some embodiments, each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n, are configured to re-encrypt the noiseless share ci using the shared public key P to add noise EncP(ci). For example: first party or computer processor 250-1 may re-encrypt decryption 260-1 using publicly available encryption key 240 to arrive at encrypted value 270-1 (for example evaluating EncP(c1)); second party or computer processor 250-2 may re-encrypt decryption 260-2 using publicly available encryption key 240 to arrive at encrypted value 270-2 (for example evaluating EncP(c2)), etc.; and nth party or computer processor 250-n may re-encrypt decryption 260-n using publicly available encryption key 240 to arrive at encrypted value 270-n (for example evaluating EncP(cn)).
In some embodiments, each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n, are configured to transmit its re-encrypted share to the server 210 or another third party. In some embodiments, each of all but one of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n, are configured to transmit its re-encrypted share to one party and/or computer processor 250-i.
Server 210, third party or party and/or computer processor 250-i may be configured to receive, from each of the plurality of n parties and/or n computer processors 250-1, 250-2, . . . , 250-n, n encrypted values 270-1, 270-2, . . . , 270-n denoted EncP(c1), EncP(c2), . . . , EncP(cn). The encrypted values may be received substantially in parallel, for example, approximately simultaneously, concurrently, or within a bounded time period of one another such as 5 seconds or less.
Server 210, third party or party and/or computer processor 250-i may be configured to bootstrap by linearly combining the re-encrypted shares of the n parties and/or n computer processors 250-1, 250-2, . . . , 250-n to generate an updated ciphertext c′ of the encoded message. Server 210, third party or party and/or computer processor 250-i may compute a homomorphic sum 280 c′=ΣiEncP(ci) of the n encrypted values 270-1, 270-2, . . . , 270-n to obtain an encryption of the sum of n decrypted values 260-1, 260-2, . . . , 260-n such that a bootstrapping of the encryption is distributed (e.g., among the plurality of n computer processors). Computing the homomorphic sum may be performed as shown below in EQN. 9:
EncP(c1)+EncP(c2)+ . . . +EncP(cn)=EncP(c1+c2+ . . . +cn) EQN. 9
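EQN. 9 relies only on the additive homomorphism of EncP; the following deliberately simplified toy encoding (not actual BGV, BFV or CKKS, and offering no security) illustrates how summed encryptions decrypt to the summed shares while the error terms merely accumulate:

```python
import random

p, Q = 17, 2 ** 30
DELTA = Q // p                       # scaling factor for fixed-point encoding

def toy_enc(m, noise=8):
    """Toy additively homomorphic encoding: scale m and add a small error."""
    return ((m % p) * DELTA + random.randrange(-noise, noise + 1)) % Q

def toy_dec(ct):
    """Round away the accumulated error and recover the message mod p."""
    return round((ct % Q) / DELTA) % p

shares = [5, 11, 9, 3]               # the parties' noiseless shares c_i
cts = [toy_enc(ci) for ci in shares]
combined = sum(cts) % Q              # EQN. 9: sum of the encryptions
assert toy_dec(combined) == sum(shares) % p
```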
In some embodiments, each of the n encryption values 270-1, 270-2, . . . , 270-n are independent of decryption key 220. For example, P the publicly available encryption key 240 may correspond to a different secret key than s the decryption key 220. This may allow system 200 to perform key switching.
Operating system 115A may be or may include code to perform tasks involving coordination, scheduling, arbitration, or managing operation of computing device 100A, for example, scheduling execution of programs. Memory 120A may be or may include, for example, a random access memory (RAM), a read only memory (ROM), a Flash memory, a volatile or non-volatile memory, or other suitable memory units or storage units. At least a portion of Memory 120A may include data storage housed online on the cloud. Memory 120A may be or may include a plurality of different memory units. Memory 120A may store for example, instructions (e.g., code 125A) to carry out a method as disclosed herein. Memory 120A may use a datastore, such as a database.
Executable code 125A may be any application, program, process, task, or script. Executable code 125A may be executed by controller 105A possibly under control of operating system 115A. For example, executable code 125A may be, or may execute, one or more applications performing methods as disclosed herein, such as splitting a decryption key into a plurality of n shares. In some embodiments, more than one computing device 100A or components of device 100A may be used. One or more processor(s) 105A may be configured to carry out embodiments of the present invention by for example executing software or code.
Storage 130A may be or may include, for example, a hard disk drive, a floppy disk drive, a compact disk (CD) drive, a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. Data described herein may be stored in a storage 130A and may be loaded from storage 130A into a memory 120A where it may be processed by controller 105A. Storage 130A may include cloud storage. Storage 130A may include storing data in a database.
Input devices 135A may be or may include a mouse, a keyboard, a touch screen or pad or any suitable input device or combination of devices. Output devices 140A may include one or more displays, speakers and/or any other suitable output devices or combination of output devices. Any applicable input/output (I/O) devices may be connected to computing device 100A, for example, a wired or wireless network interface card (NIC), a modem, printer, a universal serial bus (USB) device or external hard drive may be included in input devices 135A and/or output devices 140A.
Embodiments of the invention may include one or more article(s) (e.g., memory 120A or storage 130A) such as a computer or processor non-transitory readable medium, or a computer or processor non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory encoding, including, or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller, carry out methods disclosed herein.
In one implementation, host device(s) 110 may include one or more servers, database(s) 115 may include one or more storage devices comprising memory/memories 113, and party device(s) 140 and 150 may include one or more computers or mobile devices, such as, smart tablets or cellular telephones. Party device(s) 140 and 150 may include respective memories 148 and 158 for storing data owner information. Party device(s) 140 and 150 may include one or more input devices 142 and 152, respectively, for receiving input from a user, such as, two encrypted numbers. Party device(s) 140 and 150 may include one or more output devices 144 and 154 (e.g., a monitor or screen) for displaying data to the data owner provided by or for host device(s) 110. Server 210 of
Database(s) 115 may be a storage device comprising one or more memories 113 to store encrypted data 117, such as, two encrypted numbers. In alternate embodiments, database(s) 115 may be omitted and encrypted data 117 may be stored in an alternate location, e.g., exclusively in memory unit(s) 148 and 158 of the respective entity devices, or in host device memory 118.
Any or all of system 100 devices may be connected via one or more network(s) 120. Network 120 may be any public or private network such as the Internet. Access to network 120 may be through wire line, terrestrial wireless, satellite, or other systems well known in the art.
Each system device 110, 115, 140, and 150 may include one or more controller(s) or processor(s) 116, 111, 146, and 156, respectively, for executing operations according to embodiments of the invention and one or more memory unit(s) 118, 113, 148, and 158, respectively, for storing data (e.g., client information, server shares, private keys, public keys, etc.) and/or instructions (e.g., software for applying computations or calculations to encrypt data, to decrypt data, and other operations according to embodiments of the invention) executable by the processor(s).
Processor(s) 116, 111, 146, and/or 156 may include, for example, a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multi-purpose or specific processor or controller. Memory unit(s) 118, 113, 148, and/or 158 may include, for example, a random access memory (RAM), a dynamic RAM (DRAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
Reference is made to
In operation 300, each of the garbler and evaluator (first party or processor 250-1 and second party or processor 250-2), independently without communication therebetween, may receive from server 210 and store the party's unique secret key share si of a shared secret key s=Σisi, a ciphertext of one or more values encrypted in a SIMD FHE protocol, and a shared public key encrypting the ciphertext.
In operation 302, each of the garbler and evaluator, independently without communication therebetween, may partially decrypt the ciphertext using the respective party's unique secret key share si to generate the party's unique data share, sharei.
In operation 304, each of the garbler and evaluator, independently without communication therebetween, may linearize (e.g., by distributed rounding) the party's respective unique data share (e.g., from a non-linear data share) and reduce size of the party's unique data share (e.g., by modulo p), for example, to enable both data shares to be combined at compact size. In some embodiments, maximal share size reduction (e.g., modulo p, where p is the plaintext modulus of each of the one or more unencrypted values) may eliminate noise in the linear share to generate a noiseless (unencrypted) share. The garbler may thus generate and share a mask with the evaluator to combine both parties' shares under the mask.
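The mask-based combination mentioned in operation 304 may be sketched, under assumed details, as additive masking: the garbler hides its linear share behind a fresh random mask so the shares can be combined without either value being revealed in the clear (all names are hypothetical):

```python
import secrets

q = 2 ** 16
share_g, share_e = 12345, 54321          # hypothetical linear data shares

mask = secrets.randbelow(q)              # garbler's fresh random mask
masked_g = (share_g + mask) % q          # garbler's share, hidden by the mask

masked_sum = (masked_g + share_e) % q    # shares combined under the mask
result = (masked_sum - mask) % q         # output unmasked afterwards
assert result == (share_g + share_e) % q
```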
In operation 306 the garbler may send, and in operation 308 the evaluator may receive, a garbled circuit defining an operation on the one or more values (e.g., a non-linear comparison of the one or more values to a fixed value), a garbling of the garbler's linear unique data share, a garbled mask, and garbled potential wires. The evaluator may garble its linear unique evaluator data share by the garbled potential wires of the garbled circuit using oblivious transfer.
In operation 310, the evaluator may evaluate the garbled circuit using a SIMD execution to combine, in parallel, multiple indices of the linear garbler and evaluator unique data shares to generate an encrypted result of the garbled circuit operation on the one or more encrypted values. In some embodiments, the garbled circuit may combine the two parties' unique data shares under the mask and the evaluator may unmask the output to generate the encrypted result of operation 310.
After operation 310, the evaluator may transmit the encrypted result to server 210 or another third party that stores the shared secret key to decrypt the result (though decryption is not required).
In a system with more than two parties interacting to execute the homomorphic SIMD operations, operations of
Other operations or orders of operations may be used and operations may be omitted.
Reference is made to
A server may split a decryption key into a plurality of n secret key shares (e.g., 225-1, 225-2, . . . , 225-n described in
The plurality of parties may operate respective processors 250-1, . . . , 250-n of
The ciphertext may be a ciphertext such as ciphertext 230 described in
In operation 400, each of a plurality of parties (e.g., operating respective processors 250-1, . . . , 250-n), independently without communication therebetween, may receive from the server and store in its memory: (i) the initial ciphertext c of the encoded message, (ii) the party's unique secret key share si of a shared secret key s=Σisi, and (iii) an indication of a publicly available encryption key encrypting the initial ciphertext.
In operation 402, each party, independently without communication therebetween, may partially decrypt the initial ciphertext using the respective party's unique secret key share si to generate the party's unique data share, sharei.
In operation 404, each party, independently without communication therebetween, may linearize (e.g., by distributed rounding) the party's respective unique data share (e.g., from a non-linear data share).
In operation 406, each party, independently without communication therebetween, may eliminate noise in the linear share to generate a noiseless unencrypted share ci of compact size (e.g., reducing the size of the party's unique data share by maximum modulo p, where p is the plaintext modulus of unencrypted encoded message), for example, to enable both data shares to be combined at compact size.
In operation 408, each party, independently without communication therebetween, may re-encrypt the noiseless share ci using the shared public key to add noise EncP(ci).
In operation 410, each party, independently without communication therebetween, may send the re-encrypted share to a different party (e.g., server 210, a third party or another interactive party or processors 250-i), or receive re-encrypted shares from others of the two or more parties, to bootstrap by linearly combining the re-encrypted shares of the two or more parties that generates an updated ciphertext c′=ΣiEncP(ci) of the encoded message. The party that receives all parties' re-encrypted shares and generates the updated ciphertext may or may not possess the shared secret key to decrypt the updated ciphertext.
Each of the n re-encrypted shares may be re-encryptions of a decryption of the ciphertext, the re-encryption performed by each of the plurality of n computer processors using the publicly available encryption key, and the decryption of the ciphertext performed by each of the plurality of n computer processors using the unique share of the plurality of n shares of the decryption key transmitted to each of the plurality of n computer processors. The encrypted values may be encrypted values such as encrypted values 270-1, 270-2, . . . , 270-n shown in
The party that receives all parties' re-encrypted shares may linearly combine the re-encrypted shares as a homomorphic sum of the n encrypted values to obtain an encryption of the sum of n decrypted values, such that a bootstrapping of the encryption is distributed. The homomorphic sum may be a homomorphic sum such as homomorphic sum 280 shown in
Each party may perform operations 402-408 using a SIMD program, such that, in each operation, a single instruction may execute, in parallel, on multiple indices of the data (e.g., data shares, linear shares, noiseless shares, re-encrypted shares, linearly combined shares, initial ciphertexts, and/or updated ciphertexts).
Bootstrapping may be used to turn approximate homomorphic encryption schemes into fully homomorphic encryption schemes, reduce the noise of the encryption, increase the plaintext modulus of a ciphertext, switch between different encryption keys, and/or switch between different FHE schemes. In one embodiment, the initial FHE ciphertext may be an exhausted FHE ciphertext of the encoded message with insufficient computational depth to perform FHE operations thereon, the added noise enhances computational depth, and the updated FHE ciphertext is a refreshed FHE ciphertext of the encoded message with sufficient computational depth to perform FHE operations thereon.
The shared secret key may correspond to the shared public key (decrypting a ciphertext encrypted by the public key) or the shared secret key may correspond to a different public key than the shared public key (decrypting a ciphertext encrypted by a different public key, not decrypting a ciphertext encrypted by the shared public key).
In some embodiments, the initial ciphertext may be decrypted by each of the plurality of n computer processors using the unique share of the plurality of n shares of the decryption key transmitted to the plurality of n parties or computer processors, and a modulus. The modulus may be different for each of the plurality of n computer processors, for example a first modulus q1, a second modulus q2, etc. and an nth modulus qn.
In some embodiments, each of the n encryption values may be independent of the decryption key. For example, the publicly available encryption key may correspond to a different secret key than the decryption key, which may allow the bootstrapping in
In some embodiments, rounding of the decryption of the ciphertext is executed if an absolute value of the decryption of the ciphertext is greater than a predefined value. The rounding may be as described above with reference to EQNs. 2 and/or 6-8.
In some embodiments, the server may transmit, to each of the plurality of n parties or computer processors, a hash function. The decryption of the ciphertext (e.g., by the plurality of n computer processors) may include using the hash function. The hash function may be a hash function H: Rq2→Rq (e.g., modelled as a random oracle). In some embodiments, the server may send a different hash function to each of the plurality of n computer processors, e.g., n hash functions H1, H2, . . . , Hn.
Other operations or orders of operations may be used and operations may be omitted.
According to one or more embodiments of the invention, there is provided a computer program product containing instructions which, when executed by at least one processor (such as a processor in a server) cause the at least one processor to carry out methods described herein (e.g., in reference to
Non-linear operations may include power or exponential operations or one or more comparison operations such as inequalities (e.g., m>α and/or m<α or equivalently α>m) and/or equalities (e.g., m=α and/or m≠α). In some embodiments, equalities may be determined by running two comparisons on inequalities (e.g., if neither m>α nor m<α holds, then m=α). A comparison m>α is equivalent to the sign of the difference m−α. A program may include one or more operations. A SIMD program may include one or more iterations of a single operation executed, in parallel or simultaneously, over multiple data points to generate multiple respective independent operation results.
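The reduction of a comparison to the sign of a difference, and of an equality to two inequality tests, may be sketched slot-wise as follows (a plaintext illustration of the SIMD pattern only, not an HE implementation):

```python
def simd_gt(ms, alpha):
    """Slot-wise m > alpha as sign(m - alpha) > 0, one result per slot."""
    return [int((m - alpha) > 0) for m in ms]

def simd_eq(ms, alpha):
    """Equality from two inequality tests: m = alpha when neither
    m > alpha nor alpha > m holds in that slot."""
    gt = simd_gt(ms, alpha)
    lt = [int((alpha - m) > 0) for m in ms]
    return [1 - g - l for g, l in zip(gt, lt)]

print(simd_gt([3, 7, 5], 5))   # [0, 1, 0]
print(simd_eq([3, 7, 5], 5))   # [0, 0, 1]
```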
According to one or more embodiments of the invention, there is provided:
Unless specifically stated otherwise, as apparent from the foregoing discussion, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
Embodiments of the invention may include an article such as a computer or processor readable non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory encoding, including, or storing instructions, e.g., computer-executable instructions, which when executed by a processor or controller, cause the processor or controller to carry out methods disclosed herein.
It should be recognized that embodiments of the invention may solve one or more of the objectives and/or challenges described in the background, and that embodiments of the invention need not meet every one of the above objectives and/or challenges to come within the scope of the present invention. While certain features of the invention have been particularly illustrated and described herein, many modifications, substitutions, changes, and equivalents may occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes in form and details as fall within the true spirit of the invention.
In the above description, an embodiment is an example or implementation of the inventions. The various appearances of “one embodiment,” “an embodiment” or “some embodiments” do not necessarily all refer to the same embodiments.
Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
Reference in the specification to “some embodiments”, “an embodiment”, “one embodiment” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions.
It is to be understood that the phraseology and terminology employed herein are not to be construed as limiting and are for descriptive purposes only.
The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures, and examples.
It is to be understood that the details set forth herein do not constitute a limitation on an application of the invention.
Furthermore, it is to be understood that the invention may be carried out or practiced in various ways and that the invention may be implemented in embodiments other than the ones outlined in the description above.
It is to be understood that the terms “including”, “comprising”, “consisting” and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps, or integers.
If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional elements.
It is to be understood that where the claims or specification refer to “a” or “an” element, such reference is not to be construed that there is only one of that element.
It is to be understood that where the specification states that a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included.
Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined. The present invention may be tested or practiced with methods and materials equivalent or similar to those described herein.
While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.
This application is a continuation-in-part of U.S. patent application Ser. No. 17/964,335 filed Oct. 12, 2022, which claims the benefit of and priority to U.S. Provisional Patent Application No. 63/255,062 filed Oct. 13, 2021, and this application also claims the benefit of and priority to U.S. Provisional Patent Application No. 63/390,007 filed Jul. 18, 2022, all of which are incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
63255062 | Oct 2021 | US
63390007 | Jul 2022 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 17964335 | Oct 2022 | US
Child | 18353430 | | US