Machine learning is often performed by training a deep neural network (DNN) on data (called “training data”). Training the neural network involves changing the weights w of the DNN until the DNN gives accurate inference. For example, a DNN could be trained on medical images in order to predict the presence or absence of certain diseases, alerting a doctor for further evaluation. Later, the trained DNN can be used to actually formulate a prediction on new data (called herein “evaluation data”) not used during training.
Whether training the DNN or evaluating with the trained DNN, it is often important to preserve privacy. Recently, there have been many works that have made advances towards realizing secure inference [4, 6, 14, 18, 20, 23, 30, 42, 47, 48, 50, 54, 56]. In the nomenclature used within this application, when numbers are included within square brackets, this application is referencing correspondingly numbered documents in the bibliography included in the provisional application that has been incorporated herein, and reproduced further below in Section 11. Emerging applications for secure inference are in healthcare, where prior work [4, 44, 54] has explored secure inference services for privacy-preserving medical diagnosis of chest diseases, diabetic retinopathy, malaria, and so on.
Consider a server that holds the weights w of a publicly known deep neural network (DNN), F, that has been trained on private data (e.g., actual medical images of patients). A client holds a private input x (a new patient's medical image); in a standard machine learning (ML) inference task, the goal is for the client to learn the prediction F(x,w) (e.g., a possible diagnosis) of the server's model on the input x. In secure inference, the inference is performed with the guarantee that the server learns nothing about x and the client learns nothing about the server's model w beyond what can be deduced from F(x,w) and x.
One work that considered the secure computation of machine learning inference algorithms was that of [15] who considered algorithms such as Naïve Bayes and decision trees. SecureML [50] considered secure neural network inference and training. Apart from the works mentioned earlier, other works in this area include works that considered malicious adversaries [21, 35, 63] (for simpler ML models like linear models, regression, and polynomials) as well as specialized DNNs with 1 or 2 bit weights [4, 54, 56].
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
Embodiments disclosed herein relate to secure inference over Deep Neural Networks (DNNs) using secure two-party computation to perform privacy-preserving machine learning. This privacy means that the provider of the deep neural network does not learn anything about inputs to the deep neural network, and the provider of inputs to the deep neural network does not learn anything about weights of the deep neural network beyond that which can be inferred from the output of the deep neural network and the inputs to the deep neural network.
The secure inference uses a particular type of comparison that can be used as a building block for various layers in the DNN including, for example, ReLU activations and divisions. The comparison securely computes a Boolean share of a bit representing whether input value x is less than input value y, where x is held by a user of the DNN, and where y is held by a provider of the DNN.
A computing system of one party to the comparison parses x into q leaf strings xq−1 . . . x0, where each of the q leaf strings is more than one bit, and where x is equal to the concatenation xq−1∥ . . . ∥x0. Meanwhile, the computing system of the second party parses y into q leaf strings yq−1 . . . y0, where each of the q leaf strings is more than one bit, and where y is equal to the concatenation yq−1∥ . . . ∥y0. Note that each leaf string constitutes multiple bits. This is much more efficient than if the leaf strings were individual bits. Accordingly, the secure inference described herein is more readily adapted for use in complex DNNs.
Each party computing system then computes shares of the inequality 1{xn<yn} for each of at least some n from q−1 down to 1, in each case by using oblivious transfer. In addition, the systems compute their respective shares of the equality 1{xn=yn} for each of at least some n from q−1 down to 1, also in each case by using oblivious transfer. The systems recursively calculate their respective shares of the inequality of internal nodes according to the following equation: 1{xC<yC}=1{xB<yB}⊕(1{xB=yB}∧1{xA<yA}) (where xC=xB∥xA, and yC=yB∥yA), and their respective shares of the equality of internal nodes, until their respective Boolean shares of 1{x<y} are determined.
This comparison can be performed at many layers in the DNN to thereby traverse the garbled binary circuit that represents the DNN. Furthermore, each party computing system has access to only its respective share of the information at each internal node in the garbled circuit. Accordingly, the computing systems mutually perform the DNN layers in a manner in which their respective data inputs to the process (e.g., the training data or the evaluation data for the first party computing system, and the weights for the second party computing system) are kept from being disclosed to the opposite party. Thus, privacy is preserved.
Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and details through the use of the accompanying drawings in which:
A solution for secure inference such as the one described herein that scales to practical machine learning (ML) tasks would open a plethora of applications based on MLaaS (ML as a Service). Users can obtain value from ML services without worrying about the loss of their private data, while model owners can effectively offer their services with no fear of breaches of client data (they never observe private client data in the clear).
Secure inference is an instance of secure 2-party computation (2PC) and cryptographically secure general protocols for 2PC have been known for decades [31, 62]. However, secure inference for practical ML tasks, e.g., ImageNet scale prediction [25], is challenging for two reasons: a) realistic DNNs use ReLU activations (ReLU(x) is defined as max(x, 0)) that are expensive to compute securely; and b) preserving inference accuracy requires a faithful implementation of secure fixed-point arithmetic. Conventional implementations [6, 30, 42, 47, 48, 50] of ReLUs can include replacing the activation with approximations that are more tractable for 2PC [23, 30, 48], but this approach results in significant accuracy losses that can degrade user experience. The only approaches known to the inventors to evaluate ReLUs efficiently require sacrificing security by making the untenable assumption that a non-colluding third party takes part in the protocol [7, 44, 49, 55, 60], or by leaking activations [13]. Moreover, some prior works [44, 48-50, 60] even sacrifice correctness of their fixed-point implementations, and the result of their secure execution can sometimes diverge from the expected result, i.e., the cleartext execution, in random and unpredictable ways. Thus, correct and efficient 2PC protocols for secure inference over realistic DNNs remain elusive.
1.1 Our Contribution
In this work, we address the above two challenges and build new semi-honest secure 2-party cryptographic protocols for secure computation of DNN inference. Our new efficient protocols enable the first secure implementations of ImageNet scale inference that complete in under a minute! We make three main contributions:
First, we give a new comparison protocol that enables us to securely and efficiently evaluate the non-linear layers of DNNs such as ReLU, Maxpool and Argmax.
Second, we provide new protocols for division. Together with new theorems that we prove on fixed-point arithmetic over shares, we show how to evaluate linear layers, such as convolutions, average pool and fully connected layers, faithfully.
Finally, by providing protocols that can work on a variety of input domains, we build a system PIE that supports two different types of secure inference protocols where linear layers can be evaluated using either homomorphic encryption (PIEHE) or through oblivious transfer (PIEOT).
We now provide more details of our main contributions.
New millionaires' protocol. Our first main technical contribution is a novel protocol for the well-known millionaires' problem [62], where parties P0 and P1 hold ℓ-bit integers x and y, respectively, and want to securely compute x<y (or, secret shares of x<y). The theoretical communication complexity of our protocol is ≈3× better than the most communication-efficient prior millionaires' protocol [22, 28, 31, 61, 62]. In terms of round complexity, our protocol executes in log ℓ rounds (e.g., 5 rounds for ℓ=32 bits); this is much better than prior works except for those based on Yao's garbled circuits, which require an optimal 2 rounds but have prohibitively high communication complexity (see Table 1 for a detailed comparison).
[Table 1 (comparison of millionaires' protocols for ℓ=32) omitted.]
For GMW [31], Table 1 states the communication numbers for a depth-optimized circuit. The circuit that would give the best communication would still have a complexity of >2λℓ and would additionally pay an inordinate cost in terms of rounds, namely ℓ. Further, Couteau [22] presented multiple protocols; we compare against the one that has the best communication complexity.
Using our protocol for the millionaires' problem, we build new and efficient protocols for computing non-linear activations such as ReLU and Maxpool, over both ℓ-bit integers (i.e., ZL, L=2^ℓ) and general rings Zn. Providing support for ℓ-bit integers ZL, as well as arbitrary rings Zn, allows us to securely evaluate the linear layers (such as matrix multiplication and convolutions) using the approaches of Oblivious Transfer (OT) [8, 50] as well as Homomorphic Encryption (HE) [29, 42, 48], respectively. This gives our protocols great flexibility when executing over different network configurations. Since all prior work [42, 47, 48, 50] known to the inventors for securely computing these activations relies on Yao's garbled circuits [62], our protocols are much more efficient in both settings. Asymptotically, our ReLU protocols over ZL and Zn communicate 8× and 12× fewer bits than prior works [42, 47, 48, 50, 61, 62] (see Table 2 for a detailed comparison). Experimentally, our protocols are at least an order of magnitude more performant than prior protocols when computing ReLU activations at the scale of ML applications.
[Table 2 (communication of ReLU protocols over ZL with ℓ=32 and over Zn with η=32) omitted.]
Fixed-point arithmetic. The ML models used by all prior works known to the inventors on secure inference are expressed using fixed-point arithmetic; such models can be obtained from [38, 41, 44, 51]. A faithful implementation of fixed-point arithmetic is essential to ensure that the secure computation is correct, i.e., that it is equivalent to the cleartext computation for all possible inputs. Given a secure inference task F(x,w), some prior works [44, 48-50, 60] give up on correctness when implementing division operations and instead compute an approximation F′(x,w). In fixed-point arithmetic, each multiplication requires a division by a power-of-2, and multiplications are used pervasively in the linear layers of DNNs. Moreover, layers like average-pool require division for computing means. Loss in correctness is worrisome as the errors can accumulate and F′(x,w) can be arbitrarily far from F(x,w). Recent work [48] has shown that even in practice the approximations can lead to significant losses in classification accuracy.
As our next contribution, we provide novel protocols to compute division by a power-of-2 as well as division by arbitrary integers that are both correct and efficient. The inputs to these protocols can be encoded over both ℓ-bit integers ZL as well as Zn, for arbitrary n. The only known approach to compute division correctly is via garbled circuits, which we compare with in Table 3. While garbled-circuits-based protocols require communication which is quadratic in ℓ or log n, our protocols are asymptotically better and incur only linear communication. Concretely, for average pool with 7×7 filters and 32-bit integers, our protocols have 54× less communication.
[Table 3 (communication of division protocols over ZL with ℓ=32 and over Zn with η=32) omitted.]
Scaling to practical DNNs. These efficient protocols help us securely evaluate practical DNNs like SqueezeNet on ImageNet scale classification tasks in under a minute. In sharp contrast, all prior works on secure 2-party inference ([4, 6, 14, 18, 20, 23, 30, 42, 47, 48, 50, 54, 56]) have been limited to small DNNs on tiny datasets like MNIST and CIFAR. While MNIST deals with the task of classifying black and white handwritten digits given as 28×28 images into the classes 0 to 9, ImageNet tasks are much more complex: typically 224×224 colored images need to be classified into a thousand classes (e.g., agaric, gyromitra, ptarmigan, etc.) that even humans can find challenging. Additionally, our work is the first to securely evaluate practical convolutional neural networks (CNNs) like ResNet50 and DenseNet121; these DNNs are at least an order of magnitude larger than the DNNs considered in prior work, provide over 90% Top-5 accuracy on ImageNet, and have also been shown to predict lung diseases from chest X-ray images [44, 64]. Thus, our work provides the first implementations of practical ML inference tasks running securely. Even on the smaller MNIST/CIFAR scale DNNs, our protocols require an order of magnitude less communication and significantly outperform the state-of-the-art [42, 48] in both LAN and WAN settings (see Table 5 in Section 7.2).
OT vs HE. Through our evaluation, we also resolve the OT vs HE conundrum: although the initial works on secure inference [47, 50] used OT-based protocols for evaluating convolutions, the state-of-the-art protocols [42, 48], which currently provide the best published inference latency, use HE-based convolutions. HE-based secure inference has much less communication than OT but HE's computation increases with the sizes of convolutions. Since practical DNNs have large Gigabyte-sized convolutions, at the onset of this work, it was not clear to us whether HE-based convolutions would provide us the best latency in practice.
To resolve this empirical question, we implement a cryptographic library PIE that provides two classes of protocols, PIEOT and PIEHE. In PIEOT, inputs are in ZL (L=2^ℓ, for a suitable choice of ℓ). Linear layers such as matrix multiplication and convolution are performed using OT-based techniques [8, 50], while the activations such as ReLU, Maxpool and Avgpool are implemented using our new protocols over ZL. In PIEHE, inputs are encoded in an appropriate prime field Zn. Here, we compute linear layers using homomorphic encryption and the activations using our protocols over Zn. In both PIEOT and PIEHE, faithful divisions after linear layers are performed using our new protocols over the corresponding rings. Next, we evaluate ImageNet-scale inference tasks with both PIEOT and PIEHE. We observe that in a WAN setting, where communication is a bottleneck, HE-based inference is always faster, while in a LAN setting OT and HE are incomparable.
1.2 Our Techniques
Millionaires'. Our protocol for securely computing the millionaires' problem (the bit x<y) uses the following observation (previously made in [28]). Let x=x1∥x0 and y=y1∥y0 (where ∥ denotes concatenation and x1, y1 are strings of the same length). Then, x<y is the same as checking whether x1<y1, or x1=y1 and x0<y0. Now, the original problem is reduced to computing two millionaires' instances over smaller-length strings (x1<y1 and x0<y0) and one equality test (x1=y1). By continuing recursively, one could build a tree all the way down to leaves that are individual bits, at which point one could use 1-out-of-2 OT-based protocols to perform the comparison/equality. However, the communication complexity of this protocol is still quite large. A minimal plaintext sketch of this recursion follows.
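The following Python sketch (illustration only, no cryptography) shows the recursion in this observation on cleartext bit-strings; the actual protocol never sees x or y in the clear and instead operates on secret shares via oblivious transfer. The halving strategy here is one possible splitting; the protocol below generalizes it to m-bit leaves.

def less_than(x: str, y: str) -> bool:
    # Recursive comparison of equal-length bit-strings:
    # 1{x<y} = 1{x1<y1} XOR (1{x1=y1} AND 1{x0<y0}).
    assert len(x) == len(y)
    if len(x) == 1:
        return x == "0" and y == "1"   # base case: single-bit comparison
    h = len(x) // 2
    x1, x0 = x[:h], x[h:]              # x = x1 || x0, x1 holds the high bits
    y1, y0 = y[:h], y[h:]
    return less_than(x1, y1) ^ ((x1 == y1) and less_than(x0, y0))

# Values taken from the worked example later in this application:
assert less_than("10000001", "10001000") is True    # x10 < y10
assert less_than("10010101", "10010101") is False   # x32 = y32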
We make several important modifications to this approach. First, we modify the tree so that the recursion is done log(ℓ/m) times to obtain leaves with strings of size m, for a parameter m. We then use 1-out-of-2^m OT to compute the comparison/equality at the leaves. Second, we observe that by carefully setting up the receiver's and sender's messages in the OT protocols for leaf comparisons and equality, multiple 1-out-of-2^m OT instances can be combined to reduce communication. Next, recursing up from the leaves to the root requires securely computing the AND functionality using Beaver bit triples [8]; this functionality takes as input shares of bits x, y from the two parties and outputs shares of x AND y to both parties. To the best of our knowledge, prior work required a cost of 2λ bits per triple [5, 24] (where λ is the security parameter, typically 128). Now, since the same secret-shared value is used in 2 AND instances, we construct correlated pairs of bit triples using 1-out-of-8 OT protocols [43] to reduce this cost to λ+8 bits (amortized) per triple. Finally, by picking m appropriately, we obtain a protocol for millionaires' whose concrete communication (in bits) is nearly 5 times better than prior work.
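To illustrate how a Beaver bit triple is consumed when evaluating AND on Boolean shares, here is a minimal Python sketch; a trusted dealer stands in for the OT-based triple generation described above, and all function names are illustrative.

import secrets

def share_bit(v):
    # Split bit v into two XOR shares.
    r = secrets.randbelow(2)
    return [r, v ^ r]

def beaver_and(x_sh, y_sh, trip):
    # trip[i] = (a_i, b_i, c_i): XOR shares of a triple with c = a AND b.
    e = (x_sh[0] ^ trip[0][0]) ^ (x_sh[1] ^ trip[1][0])  # opened e = x XOR a
    f = (y_sh[0] ^ trip[0][1]) ^ (y_sh[1] ^ trip[1][1])  # opened f = y XOR b
    z_sh = []
    for i in (0, 1):
        a_i, b_i, c_i = trip[i]
        z_i = c_i ^ (e & b_i) ^ (f & a_i)
        if i == 0:
            z_i ^= e & f          # the public term is added by one party only
        z_sh.append(z_i)
    return z_sh                    # XOR of the shares equals x AND y

# Exhaustive check over all inputs, with a fresh random triple each time.
for x in (0, 1):
    for y in (0, 1):
        a, b = secrets.randbelow(2), secrets.randbelow(2)
        a_sh, b_sh, c_sh = share_bit(a), share_bit(b), share_bit(a & b)
        trip = [(a_sh[i], b_sh[i], c_sh[i]) for i in (0, 1)]
        z_sh = beaver_and(share_bit(x), share_bit(y), trip)
        assert z_sh[0] ^ z_sh[1] == (x & y)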
ReLU activation. The function ReLU(a) is defined as a·ReLU′(a), where ReLU′(a)=1 if a>0 and 0 otherwise. Hence, computing ReLU reduces to computing ReLU′(a). Let a be additively secret shared as a0, a1 over the appropriate ring. Note that a>0 is defined differently for ℓ-bit integers (i.e., ZL) and general rings Zn. Over ZL, ReLU′(a)=1⊕MSB(a), where MSB(a) is the most significant bit of a. Moreover, MSB(a)=MSB(a0)⊕MSB(a1)⊕carry. Here, carry=1 if a0′+a1′≥2^(ℓ−1), where a0′, a1′ denote the integers represented by the lower ℓ−1 bits of a0, a1. We compute this carry bit using a call to our millionaires' protocol. Over Zn, ReLU′(a)=1 if a∈[0, ⌈n/2⌉). Given the secret shares a0, a1, this is equivalent to (a0+a1)∈[0, ⌈n/2⌉)∪[n, ⌈3n/2⌉) over the integers. While this can be naïvely computed by making 3 calls to the millionaires' protocol, we show that by carefully selecting the inputs to the millionaires' protocol, one can do this with only 2 calls.
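The Zn reduction rests on the interval characterization just stated; the following exhaustive Python check (plaintext, small illustrative modulus) confirms it.

from math import ceil

n = 13                                   # illustrative small modulus
half, three_half = ceil(n / 2), ceil(3 * n / 2)
for a0 in range(n):
    for a1 in range(n):
        a = (a0 + a1) % n
        drelu = int(a < half)            # ReLU'(a) over Z_n
        s = a0 + a1                      # the share sum over the integers
        assert drelu == int(s < half or n <= s < three_half)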
Division and Truncation. As a technical result, we provide a correct decomposition of division of a secret ring element in ZL or Zn by a public integer into division of secret shares by the same public integer and correction terms (Theorem 4.1). These correction terms consist of multiple inequalities on secret values. As a corollary, we also get a much simpler expression for the special case of truncation, i.e., dividing ℓ-bit integers by a power-of-2 (Corollary 4.2). We believe that the general theorem as well as the corollary can be of independent interest. Next, we give efficient protocols for both general division (used for Avgpool, Table 3) as well as division by a power-of-2 (used for multiplication in fixed-point arithmetic). The inequalities in the correction term are computed using our new protocol for millionaires', and the division of shares can be done locally by the respective parties. Our technical theorem is the key to obtaining secure implementations of DNN inference tasks that are bitwise equivalent to cleartext fixed-point execution.
1.3 Organization
We begin with the details on security and cryptographic primitives used in Section 2 on preliminaries. In Section 3 we provide our protocols for millionaires' (Section 3.1) and ReLU′ (Section 3.2, 3.3), over both ZL, and general ring Zn. In Section 4, we present our protocols for general division, as well as the special case of division by power-of-2. We describe the various components that go into a neural network inference algorithm in Section 5 and show how to construct secure protocols for all these components given our protocols from Sections 3 and 4. We present our implementation details in Section 6 and our experiments in Section 7. We conclude discussion of these general principles in Section 8. Section 9 describes a computing system that may employ the principles described herein. Section 10 is an appendix. Section 11 is a bibliography.
Notation. Let λ be the computational security parameter and negl(λ) denote a negligible function in λ. For a set W, w ← W denotes sampling an element w uniformly at random from W. [ℓ] denotes the set of integers {1, . . . , ℓ}. Let 1{b} denote the indicator function that is 1 when b is true and 0 when b is false.
2.1 Threat Model and Security
We provide security in the simulation paradigm [19, 31, 46] against a static semi-honest probabilistic polynomial time (PPT) adversary A. That is, a computationally bounded adversary A corrupts either P0 or P1 at the beginning of the protocol and follows the protocol specification honestly. Security is modeled by defining two interactions: a real interaction where P0 and P1 execute the protocol in the presence of A and the environment Z, and an ideal interaction where the parties send their inputs to a trusted functionality F that performs the computation faithfully. Security requires that for every adversary A in the real interaction, there is an adversary S (called the simulator) in the ideal interaction, such that no environment can distinguish between the real and ideal interactions. Many of our protocols invoke multiple sub-protocols, and we describe these using the hybrid model. This is similar to a real interaction, except that sub-protocols are replaced by invocations of instances of corresponding functionalities. A protocol invoking a functionality F is said to be in the "F-hybrid model."
2.2 Cryptographic Primitives
2.2.1 Secret Sharing Schemes. Throughout this work, we use 2-out-of-2 additive secret sharing schemes over different rings [12, 58]. The 3 specific rings that we consider are the field Z2, the ring ZL, where L=2^ℓ (ℓ=32, typically), and the ring Zn, for a positive integer n (this last ring includes the special case of prime fields used in the works of [42, 48]). We let ShareL(x) denote the algorithm that takes as input an element x in ZL, and outputs shares over ZL, denoted by x0L and x1L. Shares are generated by sampling random ring elements x0L and x1L, with the only constraint that x0L+x1L=x (where + denotes addition in ZL). Additive secret sharing schemes are perfectly hiding, i.e., given a share x0L or x1L, the value x is completely hidden. The reconstruction algorithm ReconstL(x0L, x1L) takes as input the two shares and outputs x=x0L+x1L. Shares (along with their corresponding Share( ) and Reconst( ) algorithms) are defined in a similar manner for Z2 and Zn, with superscripts B and n, respectively. We sometimes refer to shares over ZL and Zn as arithmetic shares and shares over Z2 as boolean shares.
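As an illustration of the Share/Reconst interface (a minimal Python sketch, not our implementation):

import secrets

ELL = 32
L = 2 ** ELL

def share_L(x):
    # Split x into random shares x0, x1 with x0 + x1 = x (mod L).
    x0 = secrets.randbelow(L)
    return x0, (x - x0) % L

def reconst_L(x0, x1):
    return (x0 + x1) % L

x = 123456789
x0, x1 = share_L(x)
assert reconst_L(x0, x1) == x   # each share alone is uniform over Z_L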
2.2.2 Oblivious Transfer. Let 1-out-of-k OTℓ denote the 1-out-of-k Oblivious Transfer (OT) functionality [17] (which generalizes 1-out-of-2 OT [26, 53]). The sender's inputs to the functionality are the k strings m1, . . . , mk, each of length ℓ, and the receiver's input is a value i∈[k]. The receiver obtains mi from the functionality and the sender receives no output. We use the protocols from [43], which are an optimized and generalized version of the OT extension framework proposed in [9, 40]. This framework allows the sender and receiver to "reduce" λ^c oblivious transfers to λ "base" OTs in the random oracle model [11] (for any constant c>1). We also use the notion of correlated 1-out-of-2 OT [5], denoted by 1-out-of-2 COTℓ. In our context, this is a functionality where the sender's input is a ring element x and the receiver's input is a choice bit b. The sender receives a random ring element r as output, and the receiver obtains either r or x+r as output, depending on b. The protocols for 1-out-of-k OTℓ and 1-out-of-2 COTℓ execute in 2 rounds and have total communication of 2λ+kℓ and λ+ℓ bits, respectively. Moreover, the simpler 1-out-of-2 OTℓ has a communication of λ+2ℓ bits [5, 40]. (The protocol of [43] for 1-out-of-k OTℓ incurs a communication cost of λ+kℓ; however, to achieve the same level of security, its security parameter needs to be twice that of 1-out-of-2 OTℓ. In concrete terms, therefore, we write the cost as 2λ+kℓ.)
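The following Python fragment sketches the ideal correlated-OT interface just described, with a trusted dealer standing in for the 2-round protocol (illustration only):

import secrets

L = 2 ** 32                              # illustrative ring Z_L

def ideal_cot(x, b):
    # Sender inputs ring element x; the functionality returns a random r to
    # the sender and r + b*x mod L to the receiver holding choice bit b.
    r = secrets.randbelow(L)
    return r, (r + b * x) % L

r, out = ideal_cot(42, 1)
assert (out - r) % L == 42               # with b = 1 the receiver gets x + r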
2.2.3 Multiplexer and B2A conversion. The functionality MUXn takes as input arithmetic shares of a over Zn and boolean shares of a choice bit c from P0, P1, and returns shares of a if c=1, else returns shares of 0 over the same ring. A protocol for MUXn can easily be implemented by 2 simultaneous calls to 1-out-of-2 OTη, and its communication complexity is 2(λ+2η), where η=⌈log n⌉.
The functionality B2An (for boolean to arithmetic conversion) takes boolean (i.e., over Z2) shares as input and gives out arithmetic (i.e., over Zn) shares of the same value as output. It can be realized via one call to 1-out-of-2 COTη, and hence, its communication is λ+η bits. For completeness, we provide the protocols realizing MUXn as well as B2An formally in Appendix A.3 and Appendix A.4, respectively.
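The B2A conversion rests on the identity b0⊕b1 = b0+b1−2·b0·b1 (mod n); arithmetic shares of the cross term b0·b1 are exactly what one correlated OT can produce, as described above. A quick plaintext check of the identity:

n = 2 ** 32                              # any ring modulus works here
for b0 in (0, 1):
    for b1 in (0, 1):
        assert (b0 + b1 - 2 * b0 * b1) % n == (b0 ^ b1)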
2.2.4 Homomorphic Encryption. A homomorphic encryption of x allows computing an encryption of f(x) without the knowledge of the decryption key. In this work, we require an additively homomorphic encryption scheme that supports addition and scalar multiplication, i.e., multiplication of a ciphertext with a plaintext. We use the additively homomorphic scheme of BFV [16, 27] (the scheme used in the recent works of Gazelle [42] and Delphi [48]) and use the optimized algorithms of Gazelle for homomorphic matrix-vector products and homomorphic convolutions. The BFV scheme uses the batching optimization [45, 59] that enables operation on plaintext vectors over the field Zn, where n is a prime plaintext modulus of the form 2KN+1, K is some positive integer, and N is a scheme parameter that is a power-of-2.
In this section, we provide our protocols for the millionaires' problem and ReLU′(a) (defined to be 1 if a>0 and 0 otherwise) when the inputs are ℓ-bit signed integers as well as elements in general rings of the form Zn (including prime fields). Our protocol for the millionaires' problem invokes instances of AND that take as input Boolean shares of values x, y∈{0, 1} and return Boolean shares of x∧y. We discuss efficient protocols for AND in Appendix A.1 and A.2.
3.1 Protocol for Millionaires'
In Yao's millionaires' problem, party P0 holds x and party P1 holds y, and they wish to learn boolean shares of 1{x<y}.
As represented by arrow 121, the first party computing system 101 provides its input x to the two-party computation module 110. Also, as represented by arrow 122, the second party computing system 102 provides its input y to the two-party computation module 110. As represented by arrows 131 and 132, the two-party computation module 110 outputs a first share 111 of the value 1{x<y} to the first party computing system 101, and outputs a second share 112 of the value 1{x<y} to the second party computing system 102. At this point, the first and second party computing systems 101 and 102 could not independently reconstruct the value 140 (1{x<y}) unless they acquired the share they do not have from the other party. Thus, unless the two computing systems 101 and 102 were to share their shares (as represented by arrows 141 and 142), the result of the computation remains secure.
Here, x and y are ℓ-bit unsigned integers. We denote this functionality by MILLℓ. Our protocol for MILLℓ builds on the following observation (Equation 1) that was also used in [28].
1{x<y}=1{x1<y1}⊕(1{x1=y1}∧1{x0<y0}), (1)
where x=x1∥x0 and y=y1∥y0.
Let m be a parameter and M=2^m. First, for ease of exposition, we consider the special case when m divides ℓ and q=ℓ/m is a power of 2.
Now, we compute the shares of the inequalities and equalities of the strings at the leaf level using 1-out-of-M OT (steps 9 and 10, respectively). Next, we compute the shares of the inequalities (steps 14 & 15) and equalities (step 16) at each internal node, moving upwards from the leaves using Equation 1. The value of the inequality at the root gives the final output.
[Algorithm 1 (protocol for millionaires') listing omitted; its numbered steps are referenced below.]
Let us take a concrete example to further clarify. Suppose ℓ=16 and m=4, so that q=4, and suppose that input x=1001010110000001 and input y=1001010110001000 (so x is less than y). In line 1 of Algorithm 1, the first party computing system parses x into the leaf strings x3=1001, x2=0101, x1=1000, and x0=0001, and the second party computing system likewise parses y into y3=1001, y2=0101, y1=1000, and y0=1000.
In line 9, using 1-out-of-M OT, each party learns their Boolean share of 1{x3<y3}, 1{x2<y2}, 1{x1<y1}, and 1{x0<y0}, or in other words, their respective Boolean shares of 0, 0, 0, and 1, since only 1{x0<y0} is equal to one.
In line 10, each party uses 1-out-of-M OT to learn their Boolean share of 1{x3=y3}, 1{x2=y2}, 1{x1=y1} and 1{x0=y0}, or in other words their Boolean shares of 1, 1, 1, 0, since only x0=y0 is false, the leaf strings x3, x2 and x1 each being equal to the respective leaf strings y3, y2 and y1.
In the first recursion 230 (when i is equal to 1), there is an inequality and equality to be learned for x32 and y32 (when j is equal to 0, where x32=x3∥x2 and y32=y3∥y2), and an inequality and equality to be learned for x10 and y10 (when j is equal to 1, where x10=x1∥x0 and y10=y1∥y0).
The first iteration will now be described with respect to the example. In this example, x32 is 10010101, and y32 is also 10010101. Thus, we expect 1{x32<y32} to be 0. Applying Equation 1 to inputs x3, x2, y3, and y2, each party learns their respective Boolean shares 231A and 231B of 1{x32<y32}, which is 0⊕(1∧0), or 0⊕0, or 0. Applying AND to these same inputs, each party learns their respective shares 233A and 233B of 1{x32=y32}, which is 1. Also in this example, x10 is 10000001, and y10 is 10001000. Thus, we expect 1{x10<y10} to be 1. Applying Equation 1 to inputs x1, x0, y1, and y0, each party learns their respective Boolean shares 232A and 232B of 1{x10<y10}, which is 0⊕(1∧1), or 0⊕1, or 1. Applying AND to these same inputs, each party learns their respective shares 234A and 234B of 1{x10=y10}, which is 0. In the second iteration, the value 1{x<y} should be 1 since x is less than y. Applying Equation 1 to inputs x32, x10, y32, and y10, each party learns their respective Boolean shares 241A and 241B of 1{x<y}, which is 0⊕(1∧1), or 0⊕1, or 1.
Correctness and security. Correctness is shown by induction on the depth of the tree starting at the leaves. First, by correctness of 1-out-of-M OT in step 9, lt0,j1B=lt0,j0B⊕1{xj<yj}. Similarly, eq0,j1B=eq0,j0B⊕1{xj=yj}. This proves the base case. Let qi=q/2^i. Also, for level i of the tree, parse x=x(i)=x(i)qi−1∥ . . . ∥x(i)0 into qi strings, and similarly for y. The inductive step follows by applying Equation 1 (and the analogous relation for equalities) at each internal node, so that the shares at the root reconstruct to 1{x<y}. Security follows in the (1-out-of-M OT, AND)-hybrid model.
General case. When m does not divide ℓ and q=⌈ℓ/m⌉ is not a power of 2, we make the following modifications to the protocol. Since m does not divide ℓ, xq−1∈{0, 1}^r, where r=ℓ mod m (note that r=m when m | ℓ). When doing the compute for xq−1 and yq−1, we perform a small optimization and use 1-out-of-R OT in steps 9 and 10, where R=2^r. Second, since q is not a power of 2, we do not have a perfect binary tree of recursion and we need to slightly change our recursion/tree traversal. In the general case, we construct maximal possible perfect binary trees and connect their roots using the relation in Equation 1. Let α be such that 2^α<q≤2^(α+1). Now, our tree has a perfect binary sub-tree with 2^α leaves and we have remaining q′=q−2^α leaves. We recurse on q′. In the last step, we obtain our tree with q leaves by combining the roots of the perfect binary tree with 2^α leaves and the tree with q′ leaves using Equation 1. Note that the value at the root is computed using ⌈log q⌉ sequential steps starting from the leaves.
Again, let us take a concrete example to further clarify. Suppose that the length ℓ of the input is 11, and that the length m of each leaf string is 4. As an example, suppose that input x=10110000001 and input y=10110001000. Here, m does not divide ℓ, so xq−1∈{0, 1}^r, where r=ℓ mod m=3. Accordingly, x2 is equal to 101, and y2 is equal to 101. q is equal to 3, and thus there is no x3 or y3. x1, x0, y1 and y0 are the same as in the previous example. Here, when doing the compute for x2 and y2, the Boolean shares of the inequality and equality are each learned using 1-out-of-R OT (R=2^3=8) instead of 1-out-of-M OT. In the first recursion, the Boolean shares of the inequality and equality for x2 and y2 are not combined with anything; however, the inequality and equality for x10 and y10 are calculated. Then, in the second recursion, the Boolean shares of the inequality for x and y are calculated using inputs x2 and x10, and y2 and y10.
3.1.1 Optimizations. We reduce the concrete communication complexity of our protocol using the following optimizations that are applicable to both the special and the general case.
Combining two OT calls into one: Since the input of P1 (the OT receiver) to 1-out-of-M OT in steps 9 and 10 is the same, i.e., yj, we can collapse these steps into a single call to 1-out-of-M OT on 2-bit messages, where P0 and P1 input {(sj,k∥tj,k)}k and yj, respectively. P1 sets its output as (lt0,j1B∥eq0,j1B). This reduces the cost from 2(2λ+M) to (2λ+2M).
Realizing AND efficiently: It is known that AND can be realized using Beaver bit triples [8]. In prior works [5, 24], generating a bit triple costs 2λ bits. For our protocol, we observe that the 2 calls to AND in steps 14 and 16 have a common input, eqi−1,2j+1bB. Hence, we optimize the communication of these steps by generating correlated bit triples (dbB, ebB, fbB) and (d′bB, ebB, f′bB) for b∈{0, 1}, such that d∧e=f and d′∧e=f′. Next, we use 1-out-of-8 OT on 2-bit messages to generate one such correlated pair of bit triples (Appendix A.2) with communication 2λ+16 bits, giving an amortized cost of λ+8 bits per triple. Given correlated bit triples, we need 6 additional bits to compute both AND calls.
Removing unnecessary equality computations: As observed in [28], the equalities computed on the lowest significant bits are never used. Concretely, we can skip computing the values eqi,0 for i∈{0, . . . , log q}. Once we do this optimization, we only need a single call to AND instead of 2 correlated calls for the leftmost branch of the tree. We use the 1-out-of-16 OT on 2-bit messages reduction to generate 2 regular bit triples (Appendix A.1) with communication of 2λ+32 bits. This gives us an amortized communication of λ+16 bits per triple, a ≈2× improvement over the 2λ bits required in prior works [5, 24]. Given a bit triple, we need 4 bits to realize AND. This reduces the total communication by M (for the leaf) plus (λ+2)·⌈log q⌉ (for the leftmost branch) bits.
3.1.2 Communication Complexity. In our protocol, we communicate in the protocols for OT (steps 9 & 10) and AND (steps 14 & 16). With the above optimizations, we need 1 call to 1-out-of-M OT on 1-bit messages, (q−2) calls to 1-out-of-M OT on 2-bit messages, and 1 call to 1-out-of-R OT on 2-bit messages, which cost (2λ+M), (q−2)·(2λ+2M), and (2λ+2R) bits, respectively. In addition, we have ⌈log q⌉ invocations of AND and (q−1−⌈log q⌉) invocations of correlated AND. These require communication of (λ+20)·⌈log q⌉ and (2λ+22)·(q−1−⌈log q⌉) bits. This gives us a total communication of λ(4q−⌈log q⌉−2)+M(2q−3)+2R+22(q−1)−2⌈log q⌉ bits. Using this expression for ℓ=32, we get the least communication for m=7 (Table 1). We note that there is a trade-off between the communication and computational costs of the OTs used, and we discuss our choice of m for our experiments in Section 6.
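The expression above can be evaluated directly. The following Python sketch (λ=128, ℓ=32) reproduces the communication-minimizing choice m=7 stated above.

from math import ceil, log2

ELL, LAM = 32, 128

def mill_comm_bits(m):
    # Total communication (bits) of the millionaires' protocol, per the
    # closed-form expression above.
    q = ceil(ELL / m)
    r = ELL % m or m                     # r = ell mod m, r = m when m | ell
    M, R = 2 ** m, 2 ** r
    lq = ceil(log2(q))
    return (LAM * (4 * q - lq - 2) + M * (2 * q - 3)
            + 2 * R + 22 * (q - 1) - 2 * lq)

costs = {m: mill_comm_bits(m) for m in range(2, 9)}
assert min(costs, key=costs.get) == 7    # m = 7 minimizes communication
print(costs)                             # e.g. m=4 -> 3844, m=7 -> 2930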
3.2 Protocol for ReLU′ for ℓ-Bit Integers
Here, we describe our protocol DReLUint,ℓ (Algorithm 2) that takes as input arithmetic shares of a and returns boolean shares of ReLU′(a) (DReLU stands for the derivative of ReLU, i.e., ReLU′). Note that ReLU′(a)=(1⊕MSB(a)), where MSB(a) is the most significant bit of a. Let the arithmetic shares of a∈ZL be a0L=msb0∥x0 and a1L=msb1∥x1 such that msb0, msb1∈{0, 1}. We compute the boolean shares of MSB(a) as follows: Let carry=1{(x0+x1)>2^(ℓ−1)−1}. Then, MSB(a)=msb0⊕msb1⊕carry. We compute boolean shares of carry by invoking an instance of MILLℓ−1.
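The share-level MSB identity underlying this protocol can be checked exhaustively for a small ℓ (plaintext Python sketch, illustration only):

ELL = 6                                  # small ell so the check is exhaustive
L, HALF = 2 ** ELL, 2 ** (ELL - 1)

def msb(v):
    return v >> (ELL - 1)

for a0 in range(L):
    for a1 in range(L):
        a = (a0 + a1) % L
        x0, x1 = a0 % HALF, a1 % HALF    # lower ell-1 bits of each share
        carry = int(x0 + x1 > HALF - 1)  # the bit computed via MILL
        assert msb(a) == msb(a0) ^ msb(a1) ^ carry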
Correctness and security. By correctness of MILLℓ−1, ReconstB(carry0B, carry1B)=1{(2^(ℓ−1)−1−x0)<x1}=1{(x0+x1)>2^(ℓ−1)−1}. Also, ReconstB(ReLU′0B, ReLU′1B)=msb0⊕msb1⊕carry⊕1=MSB(a)⊕1. Security follows trivially in the MILLℓ−1-hybrid model.
Communication complexity. In Algorithm 2, we communicate the same as in MILLℓ−1; that is, <(λ+14)(ℓ−1) bits by using m=4.
3.3 Protocol for ReLU′ for General Rings Zn
We describe a protocol DReLUring,n that takes arithmetic shares of a over Zn as input and returns boolean shares of ReLU′(a). For integer rings Zn, ReLU′(a)=1 if a<⌈n/2⌉ and 0 otherwise. Note that this includes the case of prime fields considered in the works of [42, 48]. We first describe a (simplified) protocol for ReLU′ in Zn in Algorithm 3, with protocol logic as follows: Let the arithmetic shares of a∈Zn be a0n and a1n. Define wrap=1{a0n+a1n>n−1}, lt=1{a0n+a1n>(n−1)/2} and rt=1{a0n+a1n>n+(n−1)/2}. Then, ReLU′(a) is (1⊕lt) if wrap=0, else it is (1⊕rt). In Algorithm 3, steps 1, 2, 3 compute these three comparisons using MILL. The final output can be computed using an invocation of MUX2.
Optimizations. We describe an optimized protocol for DReLUring,n in Algorithm 4 that reduces the number of calls to MILL to 2. First, we observe that if the input of P1 is identical in all three invocations, then the invocations of OT in Algorithm 1 (steps 9 & 10) can be done together for the three comparisons. This reduces the communication for each leaf OT invocation in steps 9 & 10 by an additive factor of 4λ. To enable this, P0 and P1 add (n−1)/2 to their inputs to MILLη+1 in steps 1 and 3 (η=⌈log n⌉). Hence, P1's input to MILLη+1 is (n−1)/2+a1n in all invocations, and P0's inputs are (3(n−1)/2−a0n), (n−1−a0n), and (2n−1−a0n) in steps 1, 2, 3, respectively.
Next, we observe that one of the comparisons in step 2 or step 3 is redundant. For instance, if a0n>(n−1)/2, then the result of the comparison lt=1{a0n+a1n>(n−1)/2} done in step 2 is always 1. Similarly, if a0n≤(n−1)/2, then the result of the comparison rt=1{a0n+a1n>n+(n−1)/2} done in step 3 is always 0. Moreover, P0 knows, based on her input a0n, which of the two comparisons is redundant. Hence, in the optimized protocol, P0 and P1 always run the comparison to compute shares of wrap and one of the other two comparisons. Note that the choice of which comparison is omitted by P0 need not be communicated to P1, since P1's input is the same in all invocations of MILL. Moreover, this omission does not reveal any additional information to P1 by security of MILL. Finally, P0 and P1 can run a 1-out-of-4 OT on 1-bit messages to learn the shares of ReLU′(a). Here, P1 is the receiver and her choice bits are the shares learnt in the two comparisons. P0 is the sender who sets the 4 OT messages based on her input share and the two shares learnt from the comparison protocols. We elaborate on this in the correctness proof below.
Correctness and Security. First, by correctness of MILLη+1 (step 1), wrap=ReconstB(wrap0B, wrap1B)=1{a0n+a1n>n−1}. Let j*=xt1B∥wrap1B. Then tj*=1⊕xt. We will show that s′j*=ReLU′(a), and hence, by correctness of 1-out-of-4 OT, z=ReconstB(z0B, z1B)=ReLU′(a). We have the following two cases.
When a0n>(n−1)/2, lt=1, and ReLU′(a)=wrap∧(1⊕rt). Here, by correctness of MILLη+1 (step 3), xt=ReconstB(xt0B, xt1B)=rt. Hence, s′j*=tj*∧(wrap0B⊕j*1)=(1⊕rt)∧wrap.
When a0n≤(n−1)/2, rt=0, and ReLU′(a) is 1⊕lt if wrap=0, else 1. It can be written as (1⊕lt)⊕(lt∧wrap). In this case, by correctness of MILLη+1 (step 2), xt=ReconstB(xt0B, xt1B)=lt. Hence, s′j*=tj*⊕((1⊕tj*)∧(wrap0B⊕j*1))=(1⊕lt)⊕(lt∧wrap). Since z0B is uniform, security follows in the (MILLη+1, 1-out-of-4 OT)-hybrid model.
Communication complexity. With the above optimizations, the overall communication complexity of our protocol for ReLU′ in Zn is equivalent to 2 calls to ΠMILLη+1 where P1 has the same input, plus 2λ+4 bits (for the protocol for 1-out-of-4 OT on 1-bit messages). Two calls to ΠMILLη+1 in this case (using m=4) cost <(3/2)λ(η+1)+28(η+1) bits. Hence, the total communication is <(3/2)λ(η+1)+28(η+1)+2λ+4 bits. We note that the communication complexity of the simplified protocol in Algorithm 3 is approximately that of 3 independent calls to ΠMILLη, which cost 3(λη+14η) bits, plus 2λ+4 bits for MUX2. Thus, our optimization gives an almost 2× improvement.
We present our results on secure implementations of division of a ring element by a positive integer, and of truncation (division by a power-of-2), that are bitwise equivalent to the corresponding cleartext computation. We begin with closed-form expressions for each of these, followed by secure protocols that use them.
4.1 Expressing General Division and Truncation Using Arithmetic Over Secret Shares
Let idiv: Z×Z→Z denote signed integer division, where the quotient is rounded towards −∞ and the sign of the remainder is the same as that of the divisor. We denote division of a ring element by a positive integer using rdiv: Zn×Z→Zn, defined as

rdiv(a,d) = idiv(au−1{au≥⌈n/2⌉}·n, d) mod n,

where the integer au∈{0, 1, . . . , n−1} is the unsigned representation of a∈Zn lifted to the integers, and 0<d<n. For brevity, we use x =n y to denote x mod n = y mod n.
Theorem 4.1. [Full statement, which defines the correction terms corr, B, and C and the constants n0 and n1, appears in Appendix C.] Then, we have:

rdiv(a0n, d)+rdiv(a1n, d)+(corr·n1+1−C−B) mod n =n rdiv(a, d).
The proof of the above theorem is presented in Appendix C.
4.1.1 Special Case of Truncation for ℓ-bit Integers. The expression above can be simplified for the special case of division of ℓ-bit integers by 2^s, i.e., arithmetic right shift by s (>>s), as follows:
Corollary 4.2. Let a0, a1∈ZL and a=a0+a1 mod L. For b∈{0, 1}, let ab0=ab mod 2^s, and define corr=1 if MSB(a0)=MSB(a1)=1 and MSB(a)=0; corr=−1 if MSB(a0)=MSB(a1)=0 and MSB(a)=1; and corr=0 otherwise. Then,

(a0>>s)+(a1>>s)+corr·2^(ℓ−s)+1{a00+a10≥2^s} =L (a>>s),

where >> denotes arithmetic right shift of ℓ-bit integers. The corollary follows from Theorem 4.1.
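The corollary can be checked exhaustively for small parameters. In the Python sketch below (illustration only), the corr case analysis follows the correctness argument given in Section 4.2.1.

ELL, S = 6, 2                            # small parameters, exhaustive check
L, HALF = 2 ** ELL, 2 ** (ELL - 1)

def ashift(v, s):
    # Arithmetic (sign-extending) right shift of an ell-bit value, mapped
    # back into Z_L.
    signed = v - L if v >= HALF else v
    return (signed >> s) % L

for a0 in range(L):
    for a1 in range(L):
        a = (a0 + a1) % L
        m0, m1, m = a0 >= HALF, a1 >= HALF, a >= HALF
        corr = 1 if (m0 and m1 and not m) else (-1 if (not m0 and not m1 and m) else 0)
        c = int(a0 % 2 ** S + a1 % 2 ** S >= 2 ** S)
        lhs = (ashift(a0, S) + ashift(a1, S) + corr * 2 ** (ELL - S) + c) % L
        assert lhs == ashift(a, S)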
4.2 Protocols for Division
In this section, we describe our protocols for division in different settings. We first describe a protocol for the simplest case of truncation for f-bit integers followed by a protocol for general division in n by a positive integer (Section 4.2.2). Finally, we discuss another simpler case of truncation, which allows us to do better than general division for rings with a special structure (Section 4.2.3).
[Algorithm 5 (faithful truncation) listing omitted; its numbered steps are referenced in Section 4.2.1.]
4.2.1 Protocol for truncation of ℓ-bit integers. Let the truncation functionality take arithmetic shares of a as input and return arithmetic shares of a>>s as output. In this work, we give a protocol (Algorithm 5) that realizes this functionality correctly, building on Corollary 4.2.
Parties P0 & P1 first invoke an instance of DReLUint,ℓ (where one party locally flips its share of ReLU′(a)) to get boolean shares mbB of MSB(a). Using these shares, they use a 1-out-of-4 OT on ℓ-bit messages for calculating corrbL, i.e., arithmetic shares of the corr term in Corollary 4.2. Next, they use an instance of MILLs to compute boolean shares of c=1{a00+a10≥2^s}. Finally, they compute arithmetic shares of c using a call to B2AL (Algorithm 7).
Correctness and Security. For any z∈ZL, MSB(z)=1{zu≥2^(ℓ−1)}, where zu is the unsigned representation of z lifted to the integers. First, note that ReconstB(m0B, m1B)=1⊕ReLU′(a)=MSB(a) by correctness of DReLUint,ℓ. Next, we show that ReconstL(corr0L, corr1L)=corr, as defined in Corollary 4.2. Let xb=MSB(abL) for b∈{0, 1}, and let j*=(m1B∥x1). Then, tj*=(m0B⊕m1B⊕x0)∧(m0B⊕m1B⊕x1)=(MSB(a)⊕x0)∧(MSB(a)⊕x1). Now, tj*=1 implies that we are in one of the first two cases of the expression for corr; which case we are in can be checked using x0 (steps 7 & 9). Now we can see that sj*=corr−corr0L=corr1L. Next, by correctness of MILLs, c=ReconstB(c0B, c1B)=1{a00+a10≥2^s}. That is, c=c0B⊕c1B. Given boolean shares of c, step 17 creates arithmetic shares of the same using an instance of B2AL. Since corr0L is uniformly random, security of our protocol is easy to see in the (DReLUint,ℓ, 1-out-of-4 OT, MILLs, B2AL)-hybrid model.
Communication complexity. Our truncation protocol involves a single call each to DReLUint,ℓ, 1-out-of-4 OT on ℓ-bit messages, MILLs, and B2AL. Hence, the communication required is <λℓ+2λ+19ℓ plus the communication for MILLs, which depends on parameters. For ℓ=32 and s=12, our concrete communication is 4310 bits (using m=7 for ΠMILL12 as well as ΠMILL31 inside ΠDReLUint,32), as opposed to 24064 bits for garbled circuits.
4.2.2 Protocol for division in ring. Let Divring,n,d be the functionality for division that takes arithmetic shares of a as input and returns arithmetic shares of rdiv(a, d) as output. Our protocol builds on our closed-form expression from Theorem 4.1. We note that ℓ-bit integers are a special case of Zn, and we use the same protocol for division of an element in ZL by a positive integer.
This protocol is similar to the previous protocol for truncation and uses the same logic to compute shares of the corr term. The most non-trivial term to compute is C, which involves three signed comparisons over Z. We emulate these comparisons using calls to DReLUint,δ, where δ is large enough to ensure that there are no overflows or underflows. We can see that −2d+2≤A≤2d−2 and hence, −3d+2≤A−d, A, A+d≤3d−2. Hence, we set δ=⌈log 6d⌉. Now, with this value of δ, the term C can be re-written as (ReLU′(A−d)⊕1)+(ReLU′(A)⊕1)+(ReLU′(A+d)⊕1), which can be computed using three calls each to DReLUint,δ (Step 19) and B2An (Step 20). Finally, note that to compute C we need arithmetic shares of A over the ring ZΔ, Δ=2^δ, and this requires shares of corr over the same ring. Hence, we compute shares of corr over both Zn and ZΔ (Step 15). Due to space constraints, we describe the protocol formally in Appendix D. Table 3 provides theoretical and concrete communication numbers for division in both ZL and Zn, as well as a comparison with garbled circuits.
4.2.3 Truncation in rings with special structure. Truncation by s in general rings can be done by performing a division by d=2^s. However, we can omit a call to DReLUint,δ and B2An when the underlying ring and d satisfy a certain relation. Specifically, if we have 2·n0≤d=2^s, then A is always greater than or equal to −d, where n0, A∈Z are as defined in Theorem 4.1. Thus, the third comparison (A≤−d) in the expression of C from Theorem 4.1 can be omitted. Moreover, this reduces the value of δ needed, and δ=⌈log 4d⌉ suffices since −2d≤A−d, A≤2d−2.
Our homomorphic encryption scheme requires n to be a prime of the form 2KN+1 (Section 2.2.4), where K is a positive integer and N≥8192 is a power-of-2. Thus, we have n0=n mod 2^s=1 for 1≤s≤14. For all our benchmarks, s≤12, and we use this optimization for truncation in PIEHE.
We give an overview of all the layers that are computed securely to realize the task of secure neural network inference. Layers can be broken into two categories—linear and non-linear. An inference algorithm simply consists of a sequence of layers of appropriate dimension connected to each other. Examples of linear layers include matrix multiplication, convolutions, Avgpool and batch normalization, while non-linear layers include ReLU, Maxpool, and Argmax.
We are in the setting of secure inference where the model owner, say P0, holds the weights. When securely realizing each of these layers, we maintain the following invariant: Parties P0 and P1 begin with arithmetic shares of the input to the layer and, after the protocol, end with arithmetic shares (over the same ring) of the output of the layer. This allows us to stitch protocols for arbitrary layers sequentially to obtain a secure computation protocol for any neural network comprising these layers. For protocols in PIEOT, this arithmetic secret sharing is over ZL; in PIEHE, the sharing is over Zn, for prime n.
5.1 Linear Layers
5.1.1 Fully connected layers and convolutions. A fully connected layer in a neural network is simply a product of two matrices of appropriate dimension: the matrix of weights and the matrix of activations of that layer. At a very high level, a convolutional layer applies a filter (usually of dimension f×f for a small integer f) to the input matrix by sliding across it and computing the sum of elementwise products of the filter with the input. Various parameters are associated with convolutions, e.g., stride (a stride of 1 denotes that the filter slides across the larger input matrix beginning at every row and every column) and zero-padding (which indicates whether the matrix is padded with 0s to increase its dimension before applying the filter). When performing matrix multiplication or convolutions over fixed-point values, the values of the final matrix are scaled down appropriately so that it has the same scale as the inputs to the computation. We note that our values are in fixed-point representation with an associated scale s and have been encoded into rings of appropriate size, ZL or Zn, as follows: a real number r is encoded as ⌊r·2^s⌉ mod k, where k=L or n. Hence, to do faithful fixed-point arithmetic, we first compute the matrix multiplication or convolution over the ring (ZL or Zn), followed by truncation, i.e., division-by-2^s of all the values. In PIEOT, multiplications and convolutions over the ring ZL are done using oblivious transfer techniques, and in PIEHE these are done over Zn using homomorphic encryption techniques, which we describe next, followed by our truncation method.
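The following Python sketch (cleartext view, illustrative parameters ℓ=32, s=12) shows the encoding and the truncate-after-multiply convention just described; it assumes intermediate products do not overflow the signed ℓ-bit range.

ELL, S = 32, 12
L, HALF = 2 ** ELL, 2 ** (ELL - 1)

def encode(r):
    return round(r * 2 ** S) % L         # real r -> fixed-point ring element

def decode(v):
    signed = v - L if v >= HALF else v
    return signed / 2 ** S

def fxp_mul(a, b):
    prod = (a * b) % L                   # ring product carries scale 2s
    signed = prod - L if prod >= HALF else prod
    return (signed >> S) % L             # truncation restores scale s

x, y = encode(1.5), encode(-2.25)
assert abs(decode(fxp_mul(x, y)) - (1.5 * -2.25)) < 2 ** -S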
OT based computation. We note that OT-based techniques for multiplication are known [8, 24, 50] and we describe them briefly for completeness. First consider the simple case of secure multiplication of 2 elements a and b in ZL, where P0 knows a, and P0 and P1 hold arithmetic shares of b. This can be done with ℓ instances of correlated 1-out-of-2 OT, where P1's choice bits are the bits of its share of b. Using this, multiplying two matrices A∈ZL^(M×N) and B∈ZL^(N×K), such that P0 knows A and B is arithmetically secret shared, requires MNK instances of this multiplication sub-protocol. This can be optimized by exploiting the structured multiplications inside a matrix multiplication: by combining all the COT sender messages when multiplying with the same element, the complexity reduces to NK instances (with correspondingly longer messages).
Finally, we reduce the task of secure convolutions to secure matrix multiplication similar to [44, 49, 60].
HE based computation. PIEHE uses techniques from Gazelle [42] and Delphi [48] to compute matrix multiplications and convolutions over a field Zn (prime n) of appropriate size. At a high level, first, P1 sends an encryption of its arithmetic share to P0. Then, P0 homomorphically computes on this ciphertext using the weights of the model (known to P0) to obtain an encryption of the arithmetic share of the result, and sends this back to P1. Hence, the communication only depends on the input and output size of the linear layer and is independent of the number of multiplications being performed. Homomorphic operations can have significantly high computational cost; to mitigate this, we build upon the output rotations method from [42] for performing convolutions and reduce its number of homomorphic rotations. At a very high level, after performing convolutions homomorphically, ciphertexts are grouped, rotated in order to be correctly aligned, and then packed using addition. In our work, we divide the groups further into subgroups that are misaligned by the same offset. Hence, the ciphertexts within a subgroup can first be added, and the resulting ciphertext can then be aligned using a single rotation, as opposed to ≈ci/cn rotations in [42] (where ci denotes the number of input channels and cn is the number of channels that fit in a single ciphertext). We refer the reader to Appendix E for details.
Faithful truncation. To correctly emulate fixed-point arithmetic, the values encoded in the shares obtained from the above methods are divided by 2^s, where s is the scale used. For this, we invoke our truncation protocol (Section 4.2.1) in PIEOT, and Divring,n,2^s (with the special-structure optimization of Section 4.2.3) in PIEHE.
5.1.2 Avgpoold. The function Avgpoold (a1, . . . , ad) over a pool of d elements a1, . . . , ad is defined to be the arithmetic mean of these d values. The protocol to compute this function works as follows: P0 and P1 begin with arithmetic shares (e.g., over ZL in PIEOT) of ai, for all i∈[d]. They perform local addition to obtain shares of w=Σi=1d ai (i.e., Pb computes wbL=Σi=1d aibL). Then, the parties invoke Divring,L,d on inputs wbL to obtain the desired output. Correctness and security follow in the Divring,L,d-hybrid model. Here too, unlike prior works, our secure execution is bitwise equal to the cleartext version.
5.1.3 Batch Normalization. This layer takes as input vectors c, x, d of the same length, and outputs c⊙x+d, where c⊙x refers to the element-wise product of the vectors c and x. Moreover, c and d are a function of the mean and the variance of the training data set, and some parameters learnt during training. Hence, c and d are known to model owner, i.e., P0. This layer can be computed using techniques of secure multiplication.
5.2 Nonlinear Layers
5.2.1 ReLU. Note that ReLU(a)=a if a≥0, and 0 otherwise. Equivalently, ReLU(a)=ReLU′(a)·a. Once we compute the boolean shares of ReLU′(a) using a call to DReLU, we compute shares of ReLU(a) using a call to the multiplexer functionality MUXL (Section 2.2.3). We describe the protocol for ReLU(a) over ZL formally in Algorithm 8, Appendix B (the case of Zn follows in a similar manner). For communication complexity, refer to Table 2 for a comparison with garbled circuits and Appendix B for a detailed discussion.
5.2.2 Maxpoold and Argmaxd. The function Maxpoold (a1, . . . , ad) over d elements a1, . . . , ad is defined in the following way. Define gt(x,y)=z, where w=x−y and z=x, if w>0 and z=y, if w≤0. Define z1=a1 and zi=gt(ai, zi−1), recursively for all 2≤i≤d. Now, Maxpoold (a1, . . . , ad)=zd.
We now describe a protocol such that the parties begin with arithmetic shares (over ZL) of ai, for all i∈[d], and end the protocol with arithmetic shares (over ZL) of Maxpoold (a1, . . . , ad). For simplicity, we describe how P0 and P1 can compute shares of z=gt(x,y) (beginning with the shares of x and y); it is easy to see then how they can compute Maxpoold. First, the parties locally compute shares of w=x−y (i.e., Pb computes wbL=xbL−ybL, for b∈{0, 1}). Next, they invoke DReLU with input wbL to learn output vbB. Now, they invoke MUXL with inputs wbL and vbB to learn output tbL. Finally, the parties output zbL=ybL+tbL. The correctness and security of the protocol follow in a straightforward manner. Computing Maxpoold is done using d−1 invocations of the above sub-protocol in d−1 sequential steps.
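The following Python sketch mirrors, in cleartext, the gt-chain just described (w=x−y feeds ReLU′, and the multiplexer selects between y and y+w); it illustrates the control flow only and is not a secure protocol.

from functools import reduce

ELL = 32
L, HALF = 2 ** ELL, 2 ** (ELL - 1)

def gt(x, y):
    w = (x - y) % L                       # w = x - y; on shares this is local
    w_pos = w != 0 and w < HALF           # ReLU'(w): w > 0 as a signed value
    return (y + (w if w_pos else 0)) % L  # y + MUX(w_pos, w)

def maxpool(vals):
    # z1 = a1; z_i = gt(a_i, z_{i-1}); the result is z_d.
    return reduce(lambda z, a: gt(a, z), vals)

assert maxpool([5, 17, 3, 9]) == 17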
Argmaxd (a1, . . . , ad) is defined similarly to Maxpoold (a1, . . . , ad), except that its output is an index i* such that ai*=Maxpoold (a1, . . . , ad). Argmaxd can be computed securely in a manner similar to Maxpoold (a1, . . . , ad).
We implement our cryptographic protocols in a library PIE and integrate them into the CrypTFlow framework [1, 44] as a new cryptographic backend. CrypTFlow compiles high-level TensorFlow [3] inference code to secure computation protocols that are then executed by its cryptographic backends. We modify the truncation behavior of CrypTFlow's float-to-fixed compiler, Athos, to support faithful fixed-point arithmetic. We start by describing the implementation of our cryptographic library, followed by the modifications that we made to Athos.
6.1 Cryptographic Backend
To implement our protocols, we build upon the 1-out-of-2 OT implementation from EMP [61] and extend it to 1-out-of-K OT using the protocol from [43]. Our linear-layer implementation in PIEHE is based on SEAL/Delphi [2, 57], and PIEOT is based on EMP. All our protocol implementations are multi-threaded.
Oblivious Transfer. Our OT implementation uses AES256IC as a hash function in the random oracle model to mask the sender's messages in the OT extension protocol of [43]. (There are two types of AES in MPC applications: fixed key (FK) and ideal cipher (IC) [10, 34]. While the former runs the key schedule only once and is more efficient, the latter generates a new key schedule for every invocation and is required in this application. It is parameterized by the key size, which is 256 in this case.) We incorporated the optimizations from [32, 33] for AES key expansion and for pipelining these AES256IC calls. This leads to a roughly 6× improvement in the performance of AES256IC calls, considerably improving the overall execution time of our protocols (e.g., 2.7× over LAN).
Millionaires' protocol. Recall that m is a parameter in our protocol. While we discussed the dependence of the communication complexity on m in Section 3.1.2, here we discuss its influence on the computational cost. Our protocol makes calls to 1-out-of-M OT (after merging steps 9 & 10), where M=2^m. Using OT extension techniques, generating an instance of 1-out-of-M OT requires 6 AES256IC and (M+1) AES256IC evaluations. Thus, the computational cost grows super-polynomially with m. We note that for ℓ=32, even though communication is minimized for m=7, empirically we observe that m=4 gives us the best performance under both LAN and WAN settings (communication in this case is about 30% more than when m=7, but computation is 3× lower).
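For intuition about the computational growth only, the deliberately crude model below (our own simplification; it ignores the tree layers above the OT leaves and says nothing about total communication) counts AES256IC evaluations for the ⌈ℓ/m⌉ OT leaves:

    import math

    def aes_calls(ell, m):
        # ceil(ell/m) leaves, each a 1-out-of-2^m OT costing
        # 6 + (M + 1) AES256IC evaluations with M = 2^m.
        q = math.ceil(ell / m)
        M = 2 ** m
        return q * (6 + (M + 1))

    for m in (4, 5, 6, 7):
        print(f"m={m}: ~{aes_calls(32, m)} AES256IC evaluations")
    # m=4 -> 184, m=7 -> 675: roughly the 3x computation gap noted above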
Implementing linear layers in PIEHE. To implement the linear layers in PIEHE, we build upon the Delphi implementation [2, 48], which is in turn based on the SEAL library [57]. We implement the fully connected layers as in [48]. For convolution layers, we parallelize the code, employ modulus switching [57] to reduce the ciphertext modulus (and hence the ciphertext size), and implement the strided convolutions proposed in Gazelle [42]. These optimizations resulted in significant performance improvements for convolution layers. For example, for the first convolution layer of ResNet50, the runtime decreased from 306 s to 18 s in the LAN setting and communication decreased from 204 MiB to 76 MiB (layer parameters: image size 230×230, filter size 7×7, 3 input channels, 64 output channels, and stride 2×2).
6.2 CrypTFlow Integration
We integrate our protocols into the CrypTFlow framework [1, 44] as a new cryptographic backend. CrypTFlow's float-to-fixed compiler, Athos, outputs fixed-point DNNs that use 64-bit integers and sets an optimal scale using a validation set. CrypTFlow required 64 bits to ensure that the probability of local truncation errors in its protocols is small (Section 5.1.1). Since our protocols are correct and have no such errors, we extend Athos to set both the bitwidth and the scale optimally using the validation set. The bitwidth and scale leak some information about the weights; this leakage is similar to that in prior works on secure inference [42, 44, 47-50, 60].
Implementing faithful truncations using our division protocols requires the parties to communicate. We implement the following peephole optimizations in Athos to reduce the cost of these truncation calls. Consider a DNN having a convolution layer followed by a ReLU layer. While truncation can be done immediately after the convolution, moving the truncation call to after the ReLU layer can reduce the cost of our truncation protocol: since the values after ReLU are guaranteed to be non-negative, the DReLU call within it (step 2 in Algorithm 5) becomes redundant and can be omitted. Our optimization further accounts for operations that may occur between the convolution and the ReLU, say a matrix addition. Moving the truncation call from immediately after the convolution to after the ReLU means the activations flowing into the addition now carry a scale of 2s instead of the usual s. For the addition to then work correctly, we scale its other argument up to 2s as well. These optimizations are fully automatic and need no manual intervention.
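The scale bookkeeping behind this rewrite can be checked directly. In the short Python check below (values arbitrary; >> is Python's floor shift, standing in for truncation by 2^s), truncating after the ReLU with the addend pre-scaled is bitwise identical to truncating immediately after the convolution:

    s = 12                                    # fixed-point scale
    x, w, bias = 3.25, 1.5, -0.75
    conv = round(x * 2**s) * round(w * 2**s)  # product of scale-s values: scale 2s
    b = round(bias * 2**s)                    # bias at scale s

    early = max((conv >> s) + b, 0)           # truncate right after the convolution
    late = max(conv + (b << s), 0) >> s       # pre-scale the bias, truncate after ReLU
    assert early == late                      # bitwise-equal results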
We empirically validate our claims. We start with a description of our experimental setup and benchmarks, followed by the results.
Experimental Setup. We ran our benchmarks in two network settings: a LAN setting with both machines situated in West Europe, and a transatlantic WAN setting with one of the machines in East US. The bandwidth between the machines is 377 MBps in the LAN setting and 40 MBps in the WAN setting, and the echo latency is 0.3 ms and 80 ms respectively. Each machine has commodity-class hardware: a 3.7 GHz Intel Xeon processor with 4 cores and 16 GB of RAM.
Our Benchmarks. We evaluate on the ImageNet-scale benchmarks considered by [44]: SqueezeNet [39], ResNet50 [36], and DenseNet121 [37]. To match the reported accuracies, we need 37-bit fixed-point numbers for ResNet50, whereas 32 bits suffice for DenseNet121 and SqueezeNet. Recall that our division protocols lead to correct secure executions and there is no accuracy loss in going from cleartext inference to secure inference. A brief summary of the complexity of these benchmarks is given in Appendix F.
7.1 Comparison with Garbled Circuits
We compare with EMP-toolkit [61], the state-of-the-art library for Garbled Circuits (GC).
On the x-axis, which is in log scale, the number of ReLUs ranges from 2^0 to 2^20. The histogram shows, using the right y-axis, the cumulative number of layers in our benchmarks (SqueezeNet, ResNet50, DenseNet121) that require the number of ReLU activations given on the x-axis. We observe that these DNNs have layers that compute between 2^13 and 2^20 ReLUs. For such layers, we observe (on the left y-axis) that our protocols are 2×-25× faster than GC; the larger the layer, the higher the speedup, and the gains are larger in the WAN setting. Specifically, for WAN and >2^17 ReLUs, the speedups are much higher than in the LAN setting. Here, the cost of rounds is amortized over large layers and the communication cost is a large fraction of the total runtime. Note that our implementations perform load-balancing to leverage full-duplex TCP.
Next, we compare the time taken by GC and our protocols in computing the ReLU activations of our benchmarks in Table 4.
Our protocol over L is up to 8× and 18× faster than GC in the LAN and WAN settings respectively, while it is ≈7× more communication efficient. As expected, our protocol over n has even better gains over GC. Specifically, it is up to 9× and 21× faster in the LAN and WAN settings respectively, and has 9× less communication.
We also performed a similar comparison of our protocols with GC for the Avgpool layers of our benchmarks, and saw up to 51× reduction in runtime and 41× reduction in communication. We report the concrete performance numbers and discuss the results in more detail in Appendix G.
7.2 Comparison with State-of-the-Art
In this section, we compare with Gazelle [42] and Delphi [48], which are the current state-of-the-art for 2-party secure DNN inference that outperform [13, 14, 18, 20, 23, 30, 47, 55]. They use garbled circuits for implementing their non-linear layers, and we show that with our protocols, the time taken to evaluate the non-linear layers of their benchmarks can be decreased significantly.
For a fair evaluation, we demonstrate these improvements on the benchmarks of Delphi [48], i.e., the MiniONN (CIFAR-10) [47] and ResNet32 (CIFAR-100) DNNs (as opposed to the ImageNet-scale benchmarks, for which their systems have not been optimized). For these benchmarks, Gazelle and Delphi have the same total time and communication; we refer to them jointly as GD. Since Gazelle's choice of parameters was insecure, which was later fixed in Delphi, we use Delphi's implementation for the comparison.
In Table 5, we report the performance of GD for evaluating the linear and non linear components of MiniONN and ResNet32 separately, along with the performance of our protocols for the same non-linear computation (Our non-linear time includes the cost of truncation).
The table shows that the time to evaluate non-linear layers is the bulk of the total time and our protocols are 4×-30× faster in evaluating the non-linear layers. Also note that we reduce the communication by 11× on MiniONN, and require around 9× less communication on ResNet32.
7.3 Evaluation on Practical DNNs
With all our protocols and implementation optimizations in place, we demonstrate the scalability of PIE by efficiently running ImageNet-scale secure inference. Table 6 shows that both our backends, PIEOT and PIEHE, are efficient enough to evaluate SqueezeNet in under a minute and scale to ResNet50 and DenseNet121.
In the LAN setting, for both SqueezeNet and DenseNet121, PIEOT performs better than PIEHE by at least 20%, owing to the higher compute in the latter. However, the quadratic growth of communication with bitlength in the linear layers of PIEOT can easily drown out this difference at higher bitlengths. Because ResNet50 requires 37 bits (compared to 32 for SqueezeNet and DenseNet121) to preserve accuracy, PIEHE outperforms PIEOT in both the LAN and WAN settings. In general, in WAN settings where communication becomes the major performance bottleneck, PIEHE performs better than PIEOT: 2× for SqueezeNet and DenseNet121 and 4× for ResNet50. Overall, with PIE, we could evaluate all 3 benchmarks within 10 minutes on LAN and 20 minutes on WAN. Since PIE supports both PIEOT and PIEHE, one can choose a specific backend depending on the network statistics [18, 52] to get the best secure inference latency. To the best of our knowledge, no prior system provides this support for both OT- and HE-based secure DNN inference.
We have presented secure, efficient, and correct implementations of practical 2-party DNN inference that outperform prior work in both latency and scale. Like all prior work on 2PC for secure DNN inference, PIE only considers semi-honest adversaries.
Because the principles described herein are performed in the context of a computing system, some introductory discussion of a computing system will be described with respect to
As illustrated in
The computing system 400 also has thereon multiple structures often referred to as an “executable component”. For instance, the memory 404 of the computing system 400 is illustrated as including executable component 406. The term “executable component” is the name for a structure that is well understood to one of ordinary skill in the art in the field of computing as being a structure that can be software, hardware, or a combination thereof. For instance, when implemented in software, one of ordinary skill in the art would understand that the structure of an executable component may include software objects, routines, methods (and so forth) that may be executed on the computing system. Such an executable component exists in the heap of a computing system, in computer-readable storage media, or a combination.
One of ordinary skill in the art will recognize that the structure of the executable component exists on a computer-readable medium such that, when interpreted by one or more processors of a computing system (e.g., by a processor thread), the computing system is caused to perform a function. Such structure may be computer readable directly by the processors (as is the case if the executable component were binary). Alternatively, the structure may be structured to be interpretable and/or compiled (whether in a single stage or in multiple stages) so as to generate such binary that is directly interpretable by the processors. Such an understanding of example structures of an executable component is well within the understanding of one of ordinary skill in the art of computing when using the term “executable component”.
The term “executable component” is also well understood by one of ordinary skill as including structures, such as hard coded or hard wired logic gates, that are implemented exclusively or near-exclusively in hardware, such as within a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or any other specialized circuit. Accordingly, the term “executable component” is a term for a structure that is well understood by those of ordinary skill in the art of computing, whether implemented in software, hardware, or a combination. In this description, the terms “component”, “agent”, “manager”, “service”, “engine”, “module”, “virtual machine” or the like may also be used. As used in this description and in the claims, these terms (whether expressed with or without a modifying clause) are also intended to be synonymous with the term “executable component”, and thus also have a structure that is well understood by those of ordinary skill in the art of computing.
In the description that follows, embodiments are described with reference to acts that are performed by one or more computing systems. If such acts are implemented in software, one or more processors (of the associated computing system that performs the act) direct the operation of the computing system in response to having executed computer-executable instructions that constitute an executable component. For example, such computer-executable instructions may be embodied on one or more computer-readable media that form a computer program product. An example of such an operation involves the manipulation of data. If such acts are implemented exclusively or near-exclusively in hardware, such as within a FPGA or an ASIC, the computer-executable instructions may be hard-coded or hard-wired logic gates. The computer-executable instructions (and the manipulated data) may be stored in the memory 404 of the computing system 400. Computing system 400 may also contain communication channels 408 that allow the computing system 400 to communicate with other computing systems over, for example, network 410.
While not all computing systems require a user interface, in some embodiments, the computing system 400 includes a user interface system 412 for use in interfacing with a user. The user interface system 412 may include output mechanisms 412A as well as input mechanisms 412B. The principles described herein are not limited to the precise output mechanisms 412A or input mechanisms 412B as such will depend on the nature of the device. However, output mechanisms 412A might include, for instance, speakers, displays, tactile output, virtual or augmented reality, holograms and so forth. Examples of input mechanisms 412B might include, for instance, microphones, touchscreens, virtual or augmented reality, holograms, cameras, keyboards, mouse or other pointer input, sensors of any type, and so forth.
Embodiments described herein may comprise or utilize a special-purpose or general-purpose computing system including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments described herein also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computing system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: storage media and transmission media.
Computer-readable storage media includes RAM, ROM, EEPROM, CD-ROM, or other optical disk storage, magnetic disk storage, or other magnetic storage devices, or any other physical and tangible storage medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general-purpose or special-purpose computing system.
A “network” is defined as one or more data links that enable the transport of electronic data between computing systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computing system, the computing system properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general-purpose or special-purpose computing system. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computing system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then be eventually transferred to computing system RAM and/or to less volatile storage media at a computing system. Thus, it should be understood that storage media can be included in computing system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general-purpose computing system, special-purpose computing system, or special-purpose processing device to perform a certain function or group of functions. Alternatively, or in addition, the computer-executable instructions may configure the computing system to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries or even instructions that undergo some translation (such as compilation) before direct execution by the processors, such as intermediate format instructions such as assembly language, or even source code.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computing system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, datacenters, wearables (such as glasses) and the like. The invention may also be practiced in distributed system environments where local and remote computing systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Those skilled in the art will also appreciate that the invention may be practiced in a cloud computing environment. Cloud computing environments may be distributed, although this is not required. When distributed, cloud computing environments may be distributed internationally within an organization and/or have components possessed across multiple organizations. In this description and the following claims, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). The definition of “cloud computing” is not limited to any of the other numerous advantages that can be obtained from such a model when properly deployed.
For the processes and methods disclosed herein, the operations performed in the processes and methods may be implemented in differing order. Furthermore, the outlined operations are only provided as examples, and some of the operations may be optional, combined into fewer steps and operations, supplemented with further operations, or expanded into additional operations without detracting from the essence of the disclosed embodiments.
Here, we describe supporting protocols that our main protocols rely on.
A.1 Protocol for Regular AND
Regular AND can be realized using bit-triples [8], which are of the form (dbB, ebB, fbB), where b∈{0, 1} and d∧e=f. Using a single instance of 1-out-of-16 OT on 2-bit messages, the parties can generate two bit-triples. We describe this protocol for generating the first triple; from there, it is easy to see how to also generate the second one. The parties start by sampling random shares dbB and ebB. P1 sets the first two bits of its OT choice input as d1B∥e1B, while the other two bits are used for the second triple. P0 samples a random bit r and sets its input messages as follows: for the i-th message, where i∈{0, 1}4, P0 uses the first two bits i1∥i2 of i to compute r⊕((i1⊕d0B)∧(i2⊕e0B)), and sets it as the first bit of the message, while reserving the second bit for the other triple. Finally, P0 sets f0B=r, and P1 sets the first bit of its OT output as f1B.
Correctness can be seen by noting that f1B=f0B⊕(d∧e), and since f0B is uniformly random, security follows directly in the OT-hybrid model.
The communication of this protocol is the same as that of 1-out-of-16 OT on 2-bit messages, which is 2λ+16·2 bits. Since we generate two bit-triples using this protocol, the amortized cost per triple is λ+16 bits, which is 144 bits for λ=128.
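For concreteness, the sketch below walks through this triple generation with an idealized ot16 helper standing in for the 1-out-of-16 OT instance (the real protocol uses OT extension); only the first bit of each message is populated, as in the description above.

    import random

    def ot16(messages, choice):                 # idealized 1-out-of-16 OT
        return messages[choice]

    d0, e0 = random.randrange(2), random.randrange(2)   # P0's shares
    d1, e1 = random.randrange(2), random.randrange(2)   # P1's shares
    r = random.randrange(2)                             # P0's output mask

    msgs = []
    for i in range(16):
        i1, i2 = (i >> 3) & 1, (i >> 2) & 1             # first two bits of i
        msgs.append(r ^ ((i1 ^ d0) & (i2 ^ e0)))        # first bit of message i
    f0 = r
    f1 = ot16(msgs, (d1 << 3) | (e1 << 2))              # P1 chooses with d1||e1

    assert f0 ^ f1 == (d0 ^ d1) & (e0 ^ e1)             # a valid bit-triple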
A.2 Protocol for Correlated AND
Correlated triples are two bit-triples (dbB, ebB, fbB) and (d′bB, e′bB, f′bB), for b∈{0, 1}, such that e=e′, d∧e=f, and d′∧e′=f′. The protocol from Appendix A.1 used a 1-out-of-16 OT invocation to generate two regular triples, where the 4 bits of P1's input were its shares of d, e, d′, and e′. However, when generating correlated triples, we can instead use an instance of 1-out-of-8 OT because e=e′, and thus 3 bits suffice to represent P1's input. Correctness and security follow in a similar way as in the case of regular AND (see Appendix A.1).
The communication of this protocol is equal to that of 1-out-of-8 OT on 2-bit messages, which costs 2λ+8·2 bits. Thus, we get an amortized communication of λ+8 bits per correlated triple.
A.3 Protocol for Multiplexer
We describe our protocol for realizing MUXn in Algorithm 6. First we argue correctness. Let c=ReconstB(c0B, c1B)=c0B⊕c1B. By correctness of the two 1-out-of-2 OTs, x1=−r0+c·a0n and, similarly, x0=−r1+c·a1n, where rb is a random mask sampled by Pb. Each party Pb outputs zbn=xbn+rb. Hence, Reconstn(z0n, z1n)=z0+z1=c·a. Security trivially follows in the OT-hybrid model. The communication complexity is 2(λ+2η)=2λ+4η bits.
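A compact cleartext emulation of this flow (ot2 is an idealized 1-out-of-2 OT; the variable names are ours) checks the reconstruction identity z0+z1=c·a:

    import random

    n = 2 ** 32

    def ot2(m0, m1, choice):                     # idealized 1-out-of-2 OT
        return m1 if choice else m0

    a = 1234
    c0, c1 = random.randrange(2), random.randrange(2)   # boolean shares of c
    a0 = random.randrange(n); a1 = (a - a0) % n         # arithmetic shares of a
    r0, r1 = random.randrange(n), random.randrange(n)   # each party's mask

    # OT 1: P0 sends -r0 + (c0 xor j) * a0 for j in {0, 1}; P1 chooses with c1.
    x1 = ot2((-r0 + (c0 ^ 0) * a0) % n, (-r0 + (c0 ^ 1) * a0) % n, c1)
    # OT 2: symmetric, with the roles of the parties swapped.
    x0 = ot2((-r1 + (c1 ^ 0) * a1) % n, (-r1 + (c1 ^ 1) * a1) % n, c0)

    z0, z1 = (x0 + r0) % n, (x1 + r1) % n
    assert (z0 + z1) % n == ((c0 ^ c1) * a) % n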
A.4 Protocol for B2A
We describe our protocol for realizing B2An formally in Algorithm 7. For correctness, we need to show that d=Reconstn(d0n, d1n)=c0B+c1B−2c0Bc1B. By correctness of the correlated OT, P1 learns t=x+c0B·c1B, where x is a random element of Zn sampled by P0. Using this, d0n=c0B+2x and d1n=c1B−2t=c1B−2x−2c0Bc1B. Security follows from the security of the correlated OT, and the communication required is λ+η bits.
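The identity is easy to sanity-check; below, the correlated OT is idealized as directly handing P1 the value t=x+c0·c1 (an assumption made purely for illustration):

    import random

    n = 2 ** 32

    c0, c1 = random.randrange(2), random.randrange(2)   # boolean shares of c
    x = random.randrange(n)                             # P0's random mask
    t = (x + c1 * c0) % n                               # what the COT gives P1

    d0 = (c0 + 2 * x) % n                               # P0's arithmetic share
    d1 = (c1 - 2 * t) % n                               # P1's arithmetic share
    assert (d0 + d1) % n == (c0 ^ c1)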
We describe our ReLU protocol for the case where the input and output shares are over L in Algorithm 8, and note that the case of n follows similarly. It is easy to see that the correctness and security of the protocol follow in the (DReLU, MUXL)-hybrid model.
Communication complexity. We first look at the complexity of ΠReLUint,ℓ, which involves a call to DReLUint,ℓ and a call to MUXL. DReLUint,ℓ has the same communication as ΠMILLℓ−1, which requires λ(ℓ−1)+13½(ℓ−1)−2λ−22 bits if we assume m=4 and m|(ℓ−1), and exclude optimization (3.1.1) from the general expression in Section 3.1.2. MUXL incurs a cost of 2λ+4ℓ bits, bringing the total cost to λℓ+17½ℓ−λ−35½ bits, which can be rewritten as <λℓ+18ℓ. We get our best communication for ℓ=32 (with all the optimizations) by taking m=7 for the millionaires' invocation inside ΠDReLUint,32, which gives us a total communication of 3298 bits.
Now, we look at the complexity of ΠReLUring,n, which makes calls to DReLUring,n and MUXn. The cost of DReLUring,n is 2λ+4 bits for a single 1-out-of-4 OT on 1-bit messages, plus (3/2)λ(η+1)+27(η+1)−4λ−44 bits for 2 invocations of ΠMILLη+1, where P1's input is the same in both invocations and the same assumptions are made as for the expression above. The cost of MUXn is 2λ+4η bits, and thus, the total cost is (3/2)λ(η+1)+31η−13 bits, which can be rewritten as <(3/2)λ(η+1)+31η. Concretely, we get the best communication for η=32 by taking m=7 for the millionaires' invocations, getting a total communication of 5288 bits.
Here, we prove Theorem 4.1.
From Equation 2, we can write rdiv(ain, d) as:
for i∈{0, 1}. The value a can be expressed as a=a0+a1−w·n, where the wrap-bit w=1{a0+a1≥n}. We can rewrite this as:
for some integer k such that 0≤a00+a10−w·n0−k·d<d. Similar to Equation 3 and from Equation 4, we can write rdiv(a, d) as:
From Equations 3 and 5, we have the following correction term:
Let A′i=idiv(a00+a10−i·n0, d). Then the values of the correction terms c1 and c0 are as summarized in the following table:
From the table, we have c1=corr, and we can rewrite the correction term as c≡n corr·n1+c0−B. Thus, adding corr·n1−B mod n to rdiv(a0n, d)+rdiv(a1n, d) accounts for all the correction terms except c0 mod n.
Now all that remains to be proven is that c0=1−C. Let C0=1{A<d}, C1=1{A<0}, and C2=1{A<−d}. Then, we have C=C0+C1+C2. Note from the theorem statement that A=a00+a10 and A=a00+a10−2·n0 for the cases corresponding to rows 1 and 8 of the table, respectively, while A=a00+a10−n0 for the rest of the cases. Thus, it can be seen that c0=idiv(A, d). Also note that −2·d+2≤A≤2·d−2, implying that the range of c0 is {−2, −1, 0, 1}. Now we look at each value assumed by c0 separately.
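The case analysis can also be verified exhaustively; the following check (our own sanity test, not part of the protocol) confirms c0=1−C over the stated range of A for several divisors:

    def check(d):
        for A in range(-2 * d + 2, 2 * d - 1):          # -2d+2 <= A <= 2d-2
            C = int(A < d) + int(A < 0) + int(A < -d)   # C = C0 + C1 + C2
            c0 = A // d                                 # idiv is floor division
            assert c0 == 1 - C, (A, d)

    for d in (1, 2, 3, 49, 1000):
        check(d)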
We describe our protocol for general division formally in Algorithm 9. As discussed in Section 4.2.2, our protocol builds on Theorem 4.1 and we compute the various sub-terms securely using our new protocols. Let δ=⌈log 6d⌉. We compute the shares of corr over both n and Δ (Step 15). We write the term C as (ReLU′(A−d)⊕1)+(ReLU′(A)⊕1)+(ReLU′(A+d)⊕1), which can be computed using three calls each to DReLUint,δ (Step 19) and B2An (Step 20).
Correctness and Security. First, m=ReconstB(m0B, m1B)=ReconstB(a0B, a1B)=1{a≥n′}. Next, similar to Algorithm 5, ReconstL(corr0L, corr1L)=corr=ReconstΔ(corr0Δ, corr1Δ), where corr is as defined in Theorem 4.1. Given the bounds on the value of A (as discussed above), we can see that Steps 16 & 17 compute arithmetic shares of A and of A0=(A−d), A1=A, A2=(A+d), respectively. Now, an invocation of DReLUint,δ on shares of Aj (Step 19) returns boolean shares of γ=1⊕MSB(Aj) over δ-bit integers, which is the same as 1⊕1{Aj<0}. Hence, C′j=ReconstB((C′j)0B, (C′j)1B)=1{Aj<0}. By correctness of B2An, Step 22 computes arithmetic shares of C as defined in Theorem 4.1. In Step 23, B0+B1≡n B as defined. Hence, correctness holds and zbn are shares of rdiv(a, d).
Given that corr0n and corr0Δ are uniformly random, the security of the protocol follows in the hybrid model.
Communication complexity. ΠDIVring,n,d involves a single call to DReLUring,n, one step computing the shares of corr over both rings (Step 15), and three calls each to DReLUint,δ and B2An. From Appendix B, we have the cost of DReLUring,n as (3/2)λη+27η−λ/2−13 bits. The corr step and 3×B2An cost 2λ+4·(η+δ) and 3λ+3η bits respectively. Since the cost of DReLUint,δ is λδ+13½δ−3λ−35½ bits (see Appendix B), 3×DReLUint,δ requires 3λδ+40½δ−9λ−106½ bits of communication. Thus, the overall communication of ΠDIVring,n,d is (3/2)λη+34η+3λδ+44½δ−4½λ−119½ bits, which can be rewritten as <((3/2)λ+34)·(η+2δ). Concretely, we get the best communication for ΠDIVring,n,49 (η=32) by setting m=7 in all our millionaires' invocations, which results in a total communication of 7796 bits.
Note that for the case of ℓ-bit integers, our division protocol would use a call to DReLUint,ℓ, the corresponding corr step, and three calls each to DReLUint,δ and B2AL. The costs of DReLUint,ℓ and 3×DReLUint,δ are as mentioned in the previous paragraph, and the costs of the corr step and 3×B2AL are 2λ+4·(ℓ+δ) and 3λ+3ℓ bits respectively. Thus, the overall communication is λℓ+3λδ+20½ℓ+44½δ−7λ−142 bits, which can be rewritten as <(λ+21)·(ℓ+3δ). By setting m=8 in all our millionaires' invocations, we get the best communication of 5570 bits for ΠDIVring,32,49.
Gazelle [42] proposed two methods for computing convolutions, namely the input rotations and the output rotations method. The only difference between the two methods is the number of (homomorphic) rotations required (the number of homomorphic additions also differs, but additions are relatively very cheap). In this section, we describe an optimization to reduce the number of rotations required by the output rotations method.
Let ci and co denote the number of input and output channels respectively, and cn denote the number of channels that can fit in a single ciphertext. At a high level, the output rotations method works as follows: after performing all the convolutions homomorphically, we have ci·co/cn intermediate ciphertexts that are to be accumulated to form tightly packed output ciphertexts. Since most of these ciphertexts are misaligned after the convolution, they must be rotated in order to align and pack them. The intermediate ciphertexts can be grouped into co/cn groups of ci ciphertexts each, such that the ciphertexts within each group are added (after alignment) to form a single ciphertext. In [42], the ciphertexts within each group are rotated (aligned) individually, resulting in ≈ci·co/cn rotations. We observe that these groups can be further divided into cn subgroups of ci/cn ciphertexts each, such that ciphertexts within a subgroup are misaligned by the same offset. Doing this has the advantage that the ci/cn ciphertexts within each subgroup can first be added and then the resulting ciphertext can be aligned using a single rotation. This brings down the number of rotations by a factor of ci/cn to cn·co/cn.
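The saving is easy to quantify; the helper below (our own illustration, with example channel counts) compares the ≈ci·co/cn rotations of the baseline output-rotations method against the cn·co/cn rotations after the subgroup optimization:

    def rotations(ci, co, cn):
        baseline = (ci * co) // cn      # [42]: rotate every intermediate ciphertext
        ours = cn * (co // cn)          # one rotation per subgroup
        return baseline, ours

    base, opt = rotations(64, 64, 16)   # e.g., 64 in/out channels, 16 per ciphertext
    print(base, opt)                    # 256 vs 64: a factor ci/cn = 4 fewer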
With our optimization, the output rotations method is better than the input rotations method when f2·ci>co, where f2 is the filter size, which is usually the case.
The complexity of the benchmarks we use in Section 7 is summarized as follows:
In this section, we compare our protocols with garbled circuits for evaluating the Avgpool layers of our benchmarks, and the corresponding performance numbers are given in Table 7.
On DenseNet121, where a total of 176,640 divisions are performed, we have improvements over GC of more than 32× and 45× in the LAN and WAN settings, respectively, for both our protocols. However, on SqueezeNet and ResNet50, the improvements are smaller (2× to 7×) because these DNNs only require 1000 and 2048 divisions, respectively, which is not enough for the costs in our protocols to amortize well. On the other hand, the communication difference between our protocols and GC is huge for all three DNNs. Specifically, we have an improvement of more than 19×, 27×, and 31× on SqueezeNet, ResNet50, and DenseNet121 respectively, for both our protocols.
The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application claims the benefit of U.S. Provisional Application No. 63/051,754 filed Jul. 14, 2020, which is hereby incorporated herein by reference in its entirety.