The subject matter described herein relates to an interpretability framework for calculating confidence levels and expected membership advantages of an adversary in identifying members of a training dataset used with training machine learning models.
Machine learning models can leak sensitive information about training data. To address such situations, noise can be added during the training process via differential privacy (DP) to mitigate privacy risk. To apply differential privacy, data scientists choose DP parameters (ϵ, δ). However, interpreting and choosing DP privacy parameters (ϵ, δ), and communicating the factual guarantees with regard to re-identification risk and plausible deniability, is still a cumbersome task for non-experts. Different approaches for justification and interpretation of DP privacy parameters have been introduced which stray from the original DP definition by offering an upper bound on privacy in the face of an adversary with arbitrary auxiliary knowledge.
In a first aspect, data is received that specifies a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. Privacy parameters ε, δ that govern a differential privacy (DP) algorithm to be applied to a function over a dataset are then calculated based on the received data. The calculation is based on a ratio of probability distributions of different observations, which are bound by the posterior belief ρc as applied to the dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset.
The probability distributions can be generated using a Gaussian mechanism with an (ε, δ) guarantee that perturbs the result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.
The probability distributions can be generated using a Laplacian mechanism with an ε guarantee that perturbs the result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.
The resulting dataset (i.e., the dataset after application of the DP algorithm to the function over the dataset) can be used for various applications including training a machine learning model. Such a trained machine learning model can be deployed and can then classify data input therein.
Privacy parameter ε can equal log(ρc/(1−ρc)) for a series of (ε, δ) or ε anonymized function evaluations with multidimensional data.
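By way of non-limiting illustration, the relation between a tolerated posterior belief bound and ε can be expressed as in the following Python sketch; the function name and the input check are illustrative assumptions and not part of the described subject matter.

```python
import math

def epsilon_from_confidence(rho_c: float) -> float:
    """Privacy parameter epsilon implied by a posterior belief bound rho_c,
    i.e., epsilon = log(rho_c / (1 - rho_c))."""
    if not 0.5 < rho_c < 1.0:
        raise ValueError("rho_c must lie in (0.5, 1.0) for a finite epsilon")
    return math.log(rho_c / (1.0 - rho_c))

# Example: tolerating an adversarial confidence of 0.99 corresponds to epsilon of roughly 4.6.
print(epsilon_from_confidence(0.99))
```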
A resulting total posterior belief ρc can be calculated using a sequential composition or Rényi differential privacy (RDP) composition. The at least one machine learning model can be updated using the calculated resulting total posterior belief ρc.
In an interrelated aspect, data is received that specifies privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. The received data is then used to calculate an expected membership advantage ρα that corresponds to a probability of an adversary successfully identifying a member in the dataset. Such calculating can be based on an overlap of two probability distributions. The calculated expected membership advantage ρα can be used when applying the DP algorithm to a function over the dataset.
The probability distributions can be generated using a Gaussian mechanism with an (ε, δ) guarantee that perturbs the result of the function evaluated over the dataset, ensuring that membership advantage is ρa on the dataset.
The probability distributions can be generated using a Laplacian mechanism with an ε guarantee that perturbs the result of the function evaluated over the dataset, ensuring that membership advantage is ρa on the dataset.
The resulting dataset (i.e., the dataset after application of the DP algorithm to the function over the dataset) can be used to train at least one machine learning model. Such a trained machine learning model can be deployed so as to classify further data input therein.
The calculated expected membership advantage ρα for a series of (ε, δ) anonymized function evaluations with multidimensional data is equal to:
wherein CDF is the cumulative distribution function of the standard normal distribution.
A resulting expected membership advantage ρα can be calculated using sequential composition or Rényi differential privacy (RDP) composition. The calculated resulting expected membership advantage ρα can be used to update the at least one machine learning model.
In a further interrelated aspect, data is received that specifies privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. Thereafter, the received data is used to calculate an adversarial posterior belief bound ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. Such calculating can be based on an overlap of two probability distributions. The DP algorithm can then be applied, using the calculated adversarial posterior belief bound ρc, to a function over the dataset to result in an anonymized function output (e.g., machine learning model, etc.).
Posterior belief bound ρc can equal 1/(1+e^(−ε)) for a series of (ε, δ) or ε anonymized function evaluations with multidimensional data.
Data can be received that specifies an expected adversarial posterior belief bound expected ρc such that ρc=expected ρc+δ*(1−expected ρc).
The probability distributions can be generated using a differential privacy mechanism either with an (ε, δ) guarantee or with an ε guarantee that perturbs the result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.
At least one machine learning model can be anonymously trained using the resulting dataset (i.e., the dataset after application of the DP algorithm to the function over the dataset). A resulting total posterior belief ρc can be calculated using a sequential composition or Rényi differential privacy (RDP) composition. The at least one machine learning model can be updated using the calculated resulting total posterior belief ρc.
In a still further interrelated aspect, a dataset is received. Thereafter, at least one first user-generated privacy parameter is received which governs a differential privacy (DP) algorithm to be applied to a function evaluated over the received dataset. Using the received at least one first user-generated privacy parameter, at least one second privacy parameter is calculated based on a ratio or overlap of probability distributions of different observations. Thereafter, the DP algorithm is applied, using the at least one second privacy parameter, to the function over the received dataset to result in an anonymized function output (e.g., machine learning model, etc.). At least one machine learning model can be anonymously trained using the dataset which, when deployed, is configured to classify input data.
The machine learning model(s) can be deployed once trained to classify input data when received.
The at least one first user-generated privacy parameter can include a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. With such an arrangement, the calculated at least one second privacy parameter can include privacy parameters ε, δ and the calculating can be based on a ratio of probability distributions of different observations which are bound by the posterior belief ρc as applied to the dataset.
In another variation, the at least one first user-generated privacy parameter includes privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an expected membership advantage ρα that corresponds to a probability of an adversary successfully identifying a member in the dataset and the calculating can be based on an overlap of two probability distributions.
In still another variation, the at least one first user-generated privacy parameter can include privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an adversarial posterior belief bound ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output, and the calculating can be based on an overlap of two probability distributions.
Non-transitory computer program products (i.e., physically embodied computer program products) are also described that store instructions, which when executed by one or more data processors of one or more computing systems, cause at least one data processor to perform operations herein. Similarly, computer systems are also described that may include one or more data processors and memory coupled to the one or more data processors. The memory may temporarily or permanently store instructions that cause at least one processor to perform one or more of the operations described herein. In addition, methods can be implemented by one or more data processors either within a single computing system or distributed among two or more computing systems. Such computing systems can be connected and can exchange data and/or commands or other instructions or the like via one or more connections, including but not limited to a connection over a network (e.g., the Internet, a wireless wide area network, a local area network, a wide area network, a wired network, or the like), via a direct connection between one or more of the multiple computing systems, etc.
The subject matter described herein provides many technical advantages. For example, the current framework provides enhanced techniques for selecting a privacy parameter ϵ based on the re-identification confidence ρc and expected membership advantage ρα. These advantages were demonstrated on synthetic data, reference data and real-world data in a machine learning and data analytics use case which show that the current framework is suited for multidimensional queries under composition. The current framework furthermore allows the optimization of the utility of differentially private queries at the same (ρc, ρα) by considering the sensitive range S(ƒ) instead of global sensitivity Δƒ. The framework allows data owners and data scientists to map their expectations of utility and privacy, and derive the consequent privacy parameters ϵ.
The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
Provided herein is an interpretability framework for calculating the confidence ρc and expected membership advantage ρα of an adversary in identifying members of training data used in connection with one or more machine learning models. These metrics are derived a priori for multidimensional, iterative computations, as found in machine learning. The framework is compatible with composition theorems and alternative differential privacy definitions like Rényi Differential Privacy, offering a tight upper bound on privacy. For illustration purposes, the framework and resulting utility is evaluated on synthetic data, in a deep learning reference task, and in a real-world electric load forecasting benchmark.
The current subject matter provides a generally applicable framework for interpretation of the DP guarantee in terms of an adversary's confidence and expected membership advantage for identifying the dataset on which a differentially private result was computed. The framework adapts to various DP mechanisms (e.g., Laplace, Gaussian, Exponential) for scalar and multidimensional outputs and is well-defined even under composition. The framework allows users to empirically analyze a worst-case adversary under DP, but also gives analytical bounds with regard to maximum confidence and expected membership advantage.
The current subject matter, in particular, can be used to generate anonymized function output within specific privacy parameter bounds which govern the difficulty of getting insight into the underlying input data. Such anonymous function evaluations can be used for various purposes including training of machine learning models which, when deployed, can classify future data input into such models.
Also provided herein are illustrations of how different privacy regimes can be determined by the framework independent of a specific use case.
Still further, with the current subject matter, privacy parameters for abstract composition theorems such as Rényi Differential Privacy in deep learning can be inferred from the desired confidence and membership advantage in our framework.
Differential Privacy. Generally, data analysis can be defined as the evaluation of a function ƒ: DOM→R on some dataset D∈DOM yielding a result r∈R. Differential privacy is a mathematical definition for anonymized analysis of datasets. In contrast to previous anonymization methods based on generalization (e.g., k-anonymity), DP perturbs the result of a function ƒ(·) over a dataset D={d1, . . . , dn} s.t. it is no longer possible to confidently determine whether ƒ(·) was evaluated on D or on some neighboring dataset D′ differing in one individual. The neighboring dataset D′ can be created either by removing one data point from D (unbounded DP) or by replacing one data point in D with another from DOM (bounded DP). Thus, privacy is provided to participants in the dataset since the impact of their presence (or absence) on the query result becomes negligible. To inject differentially private noise into the result of some arbitrary function ƒ(·), mechanisms M fulfilling Definition 1 are utilized.
Definition 1 ((ϵ, δ)-Differential Privacy). A mechanism M gives (ϵ, δ)-Differential Privacy if for all D, D′⊆DOM differing in at most one element, and all outputs S⊆R,
Pr(M(D)∈S)≤e^ϵ·Pr(M(D′)∈S)+δ.
ϵ-DP is defined as (ϵ, δ=0)-DP, and the application of a mechanism M to a function ƒ(·) is referred to as output perturbation. DP holds if mechanisms are calibrated to the global sensitivity, i.e., the largest influence a member of the dataset can have on the outcome of any ƒ(·). Let D and D′ be neighboring datasets; the global l1-sensitivity of a function ƒ is defined as Δƒ=max_{D,D′}∥ƒ(D)−ƒ(D′)∥1. Similarly, Δƒ2=max_{D,D′}∥ƒ(D)−ƒ(D′)∥2 can be referred to as the global l2-sensitivity.
A popular mechanism for perturbing the outcome of numerical query functions ƒ is the Laplace mechanism. Following Definition 1 the Laplace mechanism adds noise calibrated to Δƒ by drawing noise from the Laplace distribution with mean μ=0.
Theorem 1 (Laplace Mechanism). Given a numerical query function ƒ: DOM→R^k, the Laplace mechanism
Lap(D, ƒ, ϵ):=ƒ(D)+(z1, . . . , zk)
is an ϵ-differentially private mechanism when all zi with 1≤i≤k are independently drawn from the Laplace distribution with mean μ=0 and scale Δƒ/ϵ.
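For illustration, such a Laplace output perturbation can be sketched in Python as follows; the function name and the use of NumPy's random generator are illustrative choices rather than requirements of the mechanism.

```python
import numpy as np

def laplace_mechanism(f_value, delta_f: float, epsilon: float, rng=None):
    """epsilon-DP Laplace mechanism: perturb f(D) with i.i.d. Laplace noise
    of mean 0 and scale delta_f / epsilon in every output dimension."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=delta_f / epsilon, size=np.shape(f_value))
    return np.asarray(f_value, dtype=float) + noise

# Example: perturb a two-dimensional sum query with sensitivity 1 at epsilon = 0.5.
print(laplace_mechanism([10.0, 12.0], delta_f=1.0, epsilon=0.5))
```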
A second DP mechanism used for output perturbation within this work is the Gaussian mechanism of Theorem 2. The Gaussian mechanism uses l2-sensitivity.
Theorem 2 (Gaussian Mechanism). Given a numerical query function ƒ: DOM→R^k, there exists σ s.t. the Gaussian mechanism
Gau(D, ƒ, ϵ, δ):=ƒ(D)+(z1, . . . , zk)
is an (ϵ, δ)-differentially private mechanism for a given pair of ϵ, δ∈(0, 1) when all zi with 1≤i≤k are independently drawn from N(0, σ²).
Prior work has analyzed the tails of the normal distribution and found that setting σ>Δƒ2·√(2 ln(1.25/δ))/ϵ fulfills Theorem 2. However, these bounds have been shown to be loose and result in overly pessimistic noise addition.
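The classical (loose) calibration can be sketched in Python as follows; the parameter names are illustrative assumptions, and the tighter RDP-based calibration discussed below would replace the σ computation.

```python
import math
import numpy as np

def gaussian_mechanism(f_value, delta_f2: float, epsilon: float, delta: float, rng=None):
    """(epsilon, delta)-DP Gaussian mechanism using the classical calibration
    sigma = delta_f2 * sqrt(2 * ln(1.25 / delta)) / epsilon (valid for epsilon < 1)."""
    sigma = delta_f2 * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    rng = rng or np.random.default_rng()
    return np.asarray(f_value, dtype=float) + rng.normal(0.0, sigma, size=np.shape(f_value))
```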
Definition 2 ((α, ϵ_RDP)-Differential Privacy). A mechanism M gives (α, ϵ_RDP)-RDP if for any adjacent D, D′⊆DOM and α>1, the Rényi divergence of order α satisfies D_α(M(D)∥M(D′))≤ϵ_RDP.
Calibrating the Gaussian mechanism in terms of Rényi differential privacy (RDP) is straightforward due to the relation ϵ_RDP=α·Δƒ2²/(2σ²). One option is to split σ=Δƒ2·η, where η is called the noise multiplier and is the actual term dependent on ϵ_RDP, as Δƒ2 is fixed. An (α, ϵ_RDP)-RDP guarantee converts to
which is not trivially invertible, as multiple (α, ϵ_RDP) pairs yield the same (ϵ, δ)-DP guarantee. A natural choice is to search for an (α, ϵ_RDP) causing σ to be as low as possible. Hence, the expression can be expanded as follows:
and minimize
which provides a tight bound on η and thus on σ for given (∈, δ).
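A possible implementation of this search, assuming the standard RDP-to-DP conversion ϵ = ϵ_RDP + log(1/δ)/(α−1) and ϵ_RDP = α/(2η²) for the Gaussian mechanism, is sketched below; the grid of α values and the function name are illustrative assumptions.

```python
import math

def min_noise_multiplier(epsilon: float, delta: float):
    """Search over RDP orders alpha for the smallest noise multiplier eta such that
    (alpha, alpha / (2 * eta**2))-RDP converts to the target (epsilon, delta)-DP guarantee."""
    best_eta, best_alpha = math.inf, None
    for alpha in (1.0 + i / 10.0 for i in range(1, 2000)):
        eps_rdp = epsilon - math.log(1.0 / delta) / (alpha - 1.0)
        if eps_rdp <= 0.0:
            continue  # this order cannot reach the target epsilon
        eta = math.sqrt(alpha / (2.0 * eps_rdp))
        if eta < best_eta:
            best_eta, best_alpha = eta, alpha
    return best_eta, best_alpha

# Example: minimal eta and the corresponding alpha for (epsilon=2, delta=1e-5).
print(min_noise_multiplier(2.0, 1e-5))
```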
The standard approach to analyze the privacy decay over a series of ∈-DP mechanisms is the sequential composition theorem.
Theorem 3 (Sequential Composition). Let M_i provide (ϵi, δi)-Differential Privacy. The sequence M1(D), . . . , Mk(D) provides (Σi ϵi, Σi δi)-DP.
Sequential composition is, again, loose for (ϵ, δ)-DP, which has resulted in various advanced composition theorems. Yet, tight composition bounds are also studied in the RDP domain, which has the nice property that the ϵ_RDP,i are summed up as well. So, for a sequence of k mechanism executions, each providing (α, ϵ_RDP,i)-RDP, the total guarantee composes to (α, Σi ϵ_RDP,i)-RDP. Using the equations above, a tight per-step η can be derived from this.
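The two composition schemes can be contrasted in a short sketch; the RDP conversion step again assumes the standard ϵ = k·ϵ_RDP,i + log(1/δ)/(α−1) relation, and the function names are illustrative.

```python
import math

def sequential_composition(eps_list, delta_list):
    """Theorem 3: the sequence of mechanisms provides (sum eps_i, sum delta_i)-DP."""
    return sum(eps_list), sum(delta_list)

def rdp_composition(alpha: float, eps_rdp_step: float, k: int, delta: float):
    """Compose k (alpha, eps_rdp_step)-RDP steps and convert the result to (eps, delta)-DP."""
    eps = k * eps_rdp_step + math.log(1.0 / delta) / (alpha - 1.0)
    return eps, delta
```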
These aspects of DP build the foundations of private deep learning. In private deep learning, the tradeoff between privacy and utility becomes important because practical neural networks need to offer a certain accuracy. Although increasing privacy through the (∈, δ) guarantee always decreases utility, other factors also affect the accuracy of models, such as the quantity of training data and the value chosen for the clipping norm C.
Various properties of C affect its optimal value. Unfortunately, Δƒ2 cannot be determined in advance for the size of gradients, so it has been proposed to clip each per-example gradient to C, bounding the influence of one example on an update. This parameter can be set to maximize model accuracy, and one proposed rule is to set C to “the median of the norms of the unclipped gradients over the course of training.” The following effects can be taken into account: the clipped gradient may point in a different direction from the original gradient if C is too small, but if C is too large, the large magnitude of noise added decreases utility. Since gradients change over the course of training, the optimal value of C at the beginning of training may no longer be optimal toward the end of training. Adaptively setting the clipping norm may further improve utility by changing C as training progresses or setting C differently for each layer. To improve utility for a set privacy guarantee, the value of C can be tuned and adapted.
Table 1 defines the notation used for the adversary A_prob; in particular, D̃⊂D denotes the subset of records of D that A_prob knows.
Strong Probabilistic Adversary. For interpretation of (ϵ, δ), the privacy guarantee ϵ is considered with regard to a desired bound on the Bayesian belief of a probabilistic adversary A_prob. The knowledge of A_prob is modeled as the tuple (D̃, U, n, M, ƒ, r) which is defined in Table 1. A_prob seeks to identify D\D̃ by evaluating possible combinations of missing individuals drawn from U, which can be formally denoted as possible worlds:
Ψ={D̃∪{d1, . . . , dn}|d1, . . . , dn∈U\D̃}
A_prob assigns a probability to each world ω∈Ψ, reflecting the confidence that ω was used as input to M. This confidence can be referred to as belief β(ω). The posterior belief of A_prob on world ωi is defined as the conditional probability:
Using the fact that M represents a continuous random variable and the choice of worlds is discrete, Bayes' theorem allows inserting M's probability density function (PDF) in step 2. The firmest guess of A_prob is represented by the world ω having the highest corresponding belief. However, it is not guaranteed that ω represents the true world. From this point, the terms confidence and posterior belief are used interchangeably.
The initial distribution over Ψ reflects the prior belief of A_prob on each world. It is assumed that this is a discrete uniform distribution among worlds, thus
By bounding the belief β(ω̃) for the true world ω̃ by a chosen constant ρ, a desired level of privacy can be guaranteed. It is noted that bounding the belief for the true world implicitly also bounds the belief for any other world.
The noise added to hide an individual's contribution can be calibrated to the sensitive range S(ƒ), which is defined below.
Definition 3 (Sensitive Range S(ƒ)). The sensitive range of a query function ƒ is the range of ƒ over the possible worlds:
S(ƒ)=max_{ω∈Ψ} ƒ(ω)−min_{ω∈Ψ} ƒ(ω).
This approach resulted in the introduction of differential identifiability which is defined below in Definition 4.
Definition 4 (Differential Identifiability). Given a dataset D, a randomized mechanism M satisfies ρ-Differential Identifiability if, among all possible datasets D1, D2, . . . , Dm differing in one individual w.r.t. D, the posterior belief β after getting the response r∈R is bounded by ρ:
β(Di|M(D)=r)≤ρ. (3)
The notation of possible worlds ω∈Ψ is replaced by possible datasets, which is semantically the same. ρ-Differential Identifiability implies that after receiving a mechanism's output r, the true dataset D can be identified by A_prob with confidence β(D)≤ρ.
DP and Differential Identifiability have been shown to be equal when |Ψ|=2, since DP considers two neighboring datasets D, D′ by definition. Specifically, Differential Identifiability is equal to bounded DP in this case, since the possible worlds each have the same number of records. Under this assumption, the sensitive range S(ƒ) represents a special case of local sensitivity in which both D and D′ are fixed. It can be assumed that Δƒ is equal to S(ƒ). If this condition is met, the relation ρ↔ϵ for Lap is:
Framework For Interpreting DP. Based on the original strong probabilistic adversary A_prob provided above, an interpretability framework is formulated that allows one to translate formal (ϵ, δ) guarantees into concrete re-identification probabilities. First, the original confidence upper bound of Equation (3) can be extended to work with arbitrary DP mechanisms, and a discussion is provided with regard to how δ is integrated into the confidence bound. Second, A_prob is extended to behave adaptively with regard to a sequence of mechanisms. It is shown below that the resulting adaptive adversary A_adapt behaves as assumed by composition theorems. Third, the expected membership advantage ρα is defined and suggested as a privacy measure complementing ρ, which is referred to as ρc in the following.
General Adversarial Confidence Bound. According to Equation (4), the probabilistic adversary with unbiased priors (i.e., 0.5) regarding neighboring datasets D, D′ has a maximum posterior belief of 1/(1+e^(−ϵ)) when the ϵ-differentially private Laplace mechanism (cf. Definition 1) is applied to ƒ having a scalar output. In the following, it is shown that this upper bound also holds for arbitrary ϵ-differentially private mechanisms with multidimensional output. Therefore, the general belief calculation of Equation (1) can be bounded by the inequality of Definition 1.
For δ=0, the last equation simplifies to 1/(1+e^(−ϵ)), so it can be concluded:
Corollary 1. For any ϵ-differentially private mechanism, the strong probabilistic adversary's confidence on either dataset D, D′ is bounded by ρ(ϵ)=1/(1+e^(−ϵ)).
For δ>0, however, it is observed that where Pr(M(D′)=r) becomes very small, β(D) grows towards 1:
Hence, if the Gaussian mechanism Gau samples a value at the tails of the distribution in the direction away from ƒ(D′), the posterior beliefs for D and D′ head to 1 and 0, respectively. If a value is sampled from the tails in the direction of ƒ(D′), the posterior beliefs for D and D′ go to 0 and 1, respectively. The difference in behavior between the Laplace and Gaussian mechanisms when large values of noise are sampled is demonstrated in diagram 100b of the accompanying drawings, with fixed ƒ(D)=0, ƒ(D′)=1 and Δƒ=Δƒ2=1.
β is now extended to k-dimensional (ϵ, δ)-differentially private mechanisms where ƒ(D)∈R^k.
Theorem 4. The general confidence bound of Corollary 1 holds for multidimensional (ϵ, δ)-differentially private mechanisms with probability 1−δ.
Proof. Properties of RDP can be used to prove the confidence bound for multidimensional (ϵ, δ)-differentially private mechanisms.
In the step from Equation (6) to (7), probability preservation properties are used to prove that RDP guarantees can be converted to (ϵ, δ) guarantees. In the context of this proof, it is implied that ϵ-DP holds when e^(−ϵ_RDP)·Pr(M(D′)=r)>δ^(α/(α−1)), since otherwise Pr(M(D)=r)<δ. It can therefore be assumed that e^(−ϵ_RDP)·Pr(M(D′)=r)>δ^(α/(α−1)), which occurs with probability at least 1−δ, and the derivation continues from Equation (8):
In the step from Equation (9) to (10), it is noted that the exponent perfectly matches the conversion from ϵ to ϵ_RDP.
Consequently, Corollary 1 holds with probability 1−δ for Gau. Hence, the general confidence upper bound for (∈, δ)-differentially private mechanisms can be defined as follows:
Definition 5 (Expected Adversarial Confidence Bound). For any (ϵ, δ)-differentially private mechanism, the expected bound on the strong probabilistic adversary's confidence on either dataset D, D′ is
ρc(ϵ,δ)=E[ρ(ϵ)]=(1−δ)ρ(ϵ)+δ=ρ(ϵ)+δ(1−ρ(ϵ)).
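Definition 5 translates directly into a small helper; the example values reproduce the ρc of roughly 0.99 used later in the synthetic-data evaluation, and the function name is an illustrative assumption.

```python
import math

def confidence_bound(epsilon: float, delta: float = 0.0) -> float:
    """Expected adversarial confidence bound of Definition 5:
    rho(eps) = 1 / (1 + exp(-eps)) and rho_c = rho + delta * (1 - rho)."""
    rho = 1.0 / (1.0 + math.exp(-epsilon))
    return rho + delta * (1.0 - rho)

# Example: epsilon = 5 and delta = 0.01 yield a confidence bound of roughly 0.99.
print(round(confidence_bound(5.0, 0.01), 2))
```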
Adaptive Posterior Belief Adversary. A_prob computes posterior beliefs β(·) for datasets D and D′ and makes the guess arg max_{D*∈{D,D′}} β(D*). Therefore, the strong adversary A_prob represents a naive Bayes classifier choosing an option w.r.t. the highest posterior probability. The input features are the results r observed by A_prob, which are independently sampled and thus fulfill the i.i.d. assumption. Also, the noise distributions are known to A_prob, thus making the naive Bayes classifier the strongest probabilistic adversary in this scenario.
A universal adversary against DP observes multiple subsequent function results and adapts once a new result r is obtained. To extend A_prob to an adaptive adversary A_adapt, adaptive beliefs can be defined as provided below.
Definition 6 (Adaptive Posterior Belief). Let D, D′ be neighboring datasets and M1, M2 be ϵ1- and ϵ2-differentially private mechanisms. If M1(D) is executed first with posterior belief β1(D), the adaptive belief for D after executing M2(D) is:
Given k iterative independent function evaluations, βk(D) is written to mean βk(D, βk−1(D, . . . )). To compute βk(D), the adaptive adversary A_adapt computes adaptive posterior beliefs as specified by Algorithm 1.
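Since Algorithm 1 itself is not reproduced here, the following sketch only illustrates the iterative Bayesian update of Definition 6 for a sequence of Gaussian-mechanism outputs; the signature and the use of SciPy densities are assumptions for illustration rather than the algorithm as stated in the specification.

```python
from scipy.stats import norm

def adaptive_beliefs(observations, f_d: float, f_d_prime: float, sigma: float, prior: float = 0.5):
    """Iteratively update the adversary's posterior belief that the observed, noisy
    results were computed on D rather than D' (in the spirit of Definition 6)."""
    belief_d = prior
    for r in observations:
        like_d = norm.pdf(r, loc=f_d, scale=sigma)         # Pr(M(D) = r)
        like_dp = norm.pdf(r, loc=f_d_prime, scale=sigma)  # Pr(M(D') = r)
        belief_d = belief_d * like_d / (belief_d * like_d + (1.0 - belief_d) * like_dp)
    return belief_d, 1.0 - belief_d
```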
The calculation of βk(D) and βk(D′) as presented in Algorithm 1 can also be expressed as a closed-form calculation, which is used later to further analyze the attacker.
Aspects of the associated proof are provided below, in which it is assumed that the attacker starts with uniform priors. Thus, β1(D) is calculated to be:
In the second step, β1(D) is used as the prior, hence β2(D) is calculated as:
This scheme continues for all k iterations by induction.
Even though the closed form provides an efficient calculation scheme for βk(D), numerical issues can be experienced, so Algorithm 1 can be used for practical simulation of A_adapt. However, by applying the closed form, it can be shown that A_adapt operates as assumed by the sequential composition theorem (cf. Theorem 3), which substantiates the strength of A_adapt. It is also noted that β1(D) has the same form as βk(D), since the multiplication of two Gaussian distributions results in another Gaussian distribution. Therefore, the composition of several Gaussian mechanisms can be regarded as a single execution of a multidimensional mechanism with an adjusted privacy guarantee.
Theorem 5 (Composition of Adaptive Beliefs). Let D, D′ be neighboring datasets and M1, . . . , Mk be an arbitrary sequence of mechanisms providing ϵ1-, . . . , ϵk-Differential Privacy; then
By using Definition 1 and δ=0, the following can be bound:
This demonstrates that in the worst case A_adapt takes full advantage of the composition of ϵ. But what about the case where δ>0? The same σi can be used in all dimensions if it is assumed that the privacy budget (ϵ, δ) is split equally s.t. ϵi=ϵj and δi=δj, which, given the previous assumptions, leads to σi=σj for all i, j∈{1, . . . , k}. The following can be transformed:
In the step from Equation (12) to (13), simplifications from Equations (6) to (10) in Theorem 4 are used. This short proof demonstrates that A_adapt behaves as expected by sequential composition theorems also for the (ϵ, δ)-differentially private Gaussian mechanism.
To take advantage of RDP composition, simplifications from Equation (6) to (9) can be used. The following transformations can be utilized:
Equation (16) implies that an RDP-composed bound can be achieved with a composed δ value of δ^k. It is known that sequential composition results in a composed δ value of kδ. Since δ^k<kδ, RDP offers a stronger (ϵ, δ) guarantee for the same ρc. This behavior can also be interpreted as follows: holding the composed (ϵ, δ) guarantee constant, the value of ρc is greater when sequential composition is used compared to RDP. Therefore, RDP offers a tighter bound for ρc under composition.
Expected Membership Advantage. The adaptive posterior belief adversary makes it possible to transform the DP guarantee (ϵ, δ) into a scalar measure ρc indicating whether A_adapt can confidently re-identify an individual's record in a dataset. From an individual's point of view, of interest is deniability, i.e., if A_adapt has low confidence, an individual can plausibly deny that the hypothesis of A_adapt is correct. A resulting question concerns how often a guess by A_adapt about the presence of an individual is actually correct, or what the advantage of A_adapt is. As described above, it can be assumed that A_adapt operates as a naive Bayes classifier with known probability distributions. Looking at the decision boundary of the classifier (i.e., when to choose D or D′) for Gau with different (ϵ, δ) guarantees, it is found that the decision boundary does not change as long as the PDFs are symmetric. For example, consider a scenario with given datasets D, D′ and query ƒ: DOM→R that yields ƒ(D)=0 and ƒ(D′)=1. Furthermore, assume w.l.o.g. that Δƒ2=1.
If a (6, 10^(−6))-DP Gau is applied to perturb the results of ƒ, A_adapt has to choose between the two PDFs shown with solid lines in the accompanying drawings.
However, the information “How likely is an adversary to guess the dataset in which I have participated?” is expected to be a major point of interest when interpreting DP guarantees in iterative evaluations of ƒ, like those found in data science use cases such as machine learning. Expected membership advantage ρα can be defined as the difference between the probability of A_adapt correctly identifying D (true positive rate) and the probability of A_adapt misclassifying a member of D′ as belonging to D (false negative rate). The worst-case advantage ρα=1 occurs in the case in which M always samples on that side of the decision boundary that belongs to the true dataset D. In contrast to the analysis of ρc, ρα will not give a worst-case bound, but an average-case estimation. Since A_adapt is a naive Bayes classifier, the properties of normal distributions can be used. Denote the multidimensional region where A_adapt chooses D as Dc and the region where A_adapt chooses D′ as Di; then:
ρα=Pr(Success:=Pr(A_adapt=D|D))−Pr(Error:=Pr(A_adapt=D|D′))=∫_{Dc} Pr(M(D)=r)·Pr(D) dr−∫_{Di} Pr(M(D)=r)·Pr(D) dr.
The corresponding regions of error for the previous example are visualized in diagrams 300a and 300b of the accompanying drawings.
where φ is the cumulative distribution function (CDF) of the standard normal distribution and Δ=√((μ1−μ2)^T Σ^(−1)(μ1−μ2)) is the Mahalanobis distance. Adding independent noise in all dimensions, Σ=σ²I, the Mahalanobis distance simplifies to Δ=∥μ1−μ2∥2/σ.
Definition 7 (Bound on the Expected Adversarial Membership Advantage). For the (ϵ, δ)-differentially private Gaussian mechanism, the expected membership advantage of the strong probabilistic adversary on either dataset D, D′ is
Again, the current framework can express (ϵ, δ) guarantees with δ>0 via a scalar value ρα. However, a specific membership advantage can be computed individually for different kinds of mechanisms M.
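For the Gaussian mechanism, the expected membership advantage of the naive Bayes adversary follows from the standard normal CDF evaluated at half the Mahalanobis distance, as discussed above. The following sketch assumes independent noise of scale σ in each of k composed evaluations with per-step distance Δƒ2; the function name and the √k composition assumption are illustrative.

```python
from math import sqrt
from scipy.stats import norm

def membership_advantage(delta_f2: float, sigma: float, k: int = 1) -> float:
    """Expected advantage of the Bayes-optimal adversary deciding between two
    isotropic Gaussians whose means differ by delta_f2 in each of k evaluations."""
    mahalanobis = sqrt(k) * delta_f2 / sigma
    return 2.0 * norm.cdf(mahalanobis / 2.0) - 1.0
```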
Above, it was evaluated how the confidence of A_adapt changes under composition. A similar analysis of the membership advantage under composition is required. Again, the elucidations can be restricted to the Gaussian mechanism. As shown above, the k-fold composition of Gau_i, each step guaranteeing (α, ϵ_RDP,i)-RDP, can be represented by a single execution of Gau with k-dimensional output guaranteeing (α, ϵ_RDP=k·ϵ_RDP,i)-RDP. For this proof, it can be assumed that each of the composed mechanism executions has the same sensitivity ∥μ1,i−μ2,i∥=Δƒ2. A single execution of Gau can be analyzed with the tools described above. Definition 7 yields
The result shows that the strategy of A_adapt fully takes advantage of the RDP composition properties of ϵ_RDP,i and α. As expected, ρα takes on the same value regardless of whether k composition steps with ϵ_RDP,i or a single composition step with ϵ_RDP is carried out.
Privacy Regimes. With confidence ρc and expected membership advantage ρα, two measures were defined that, taken together, form the current framework for interpreting DP guarantees (ϵ, δ). While ρα indicates the likelihood with which A_adapt discovers any participant's data correctly, ρc complements this information with the plausibility with which any participant in the data can argue that the guess of A_adapt is incorrect. Here it is demonstrated how the current framework can be applied to measure the level of protection independent of any particular dataset D. Furthermore, several allegedly secure (ϵ, δ) pairs suggested in the literature are revisited and their protection is interpreted. Finally, general guidance is provided to realize high, mid, or low to no privacy regimes.
The interpretability framework can be applied in two steps. First, the participants in the dataset receive a predefined (ϵ, δ) guarantee. This (ϵ, δ) guarantee is based on the maximum tolerable decrease in utility (e.g., accuracy) of a function ƒ evaluated by a data analyst. The participants interpret the resulting tuple (ρc, ρα) w.r.t. their protection. Each participant can either reject or accept the use of their data by the data analyst. Second, participants are free to suggest an (ϵ, δ) based on the corresponding adversarial confidence and membership advantage (ρc, ρα), which is in turn evaluated by the data analyst w.r.t. the expected utility of ƒ. To enable participants to perform this matching, general curves of ρc and ρα are provided for different (ϵ, δ) as shown in diagrams 400a and 400b of the accompanying drawings.
Validation Over Synthetic Data. The following demonstrates how the confidence and membership advantage of A_adapt develop in an empirical example. The evaluation characterizes how well ρc and ρα actually model the expected membership advantage risk for data members and how effectively A_adapt behaves on synthetic data. As A_adapt is assumed to know all data members except for one, the size of D does not influence her. For this reason, the following tiny data universe U, true dataset D and alternative D′ presented in Tables 2, 3 and 4 were used. Let U represent a set of employees that were offered to participate in a survey about their hourly wage. Alice, Bob and Carol participated. Dan did not. Thus, the survey dataset D consists of 3 entries. The survey owner allows data analysts to pose queries to D until a DP budget of (ϵ=5, δ=0.01) is consumed. A_adapt is the data analyst that queries D. Aside from learning statistics about the wage, A_adapt is also interested in knowing who participated. So far, she knows that Alice and Carol participated for sure and there are three people in total. Thus, she has to decide between D and D′, i.e., whether Bob or Dan is the missing entry. As side information, she knows that the employer pays at least $1 and a maximum of $10. As a consequence, when A_adapt is allowed to ask only the sum query function, S(ƒ)=Δƒ2=9. Further, the Gaussian mechanism is known to be used for anonymization.
Given this prior information, A_adapt iteratively updates her belief on D and D′ after each query. She makes a final guess after the whole (ϵ, δ)-budget has been used. By using the current framework (ρc, ρα), data members (especially Bob in this case) can compute their protection guarantee: What is the advantage of A_adapt in disclosing a person's participation (i.e., ρα)? How plausibly can that person deny a revelation (i.e., ρc) by A_adapt? Referring to Definition 7 and Definition 5, ρα(ϵ=5, δ=0.01)=0.5 is computed under composition and ρc(ϵ=5, δ=0.01)=0.99, illustrating that the risk of re-identification is quite high and the deniability extremely low. However, to show whether A_adapt actually reaches those values, her behavior can be empirically analyzed by iteratively querying D and applying Algorithm 1 after each query. k=100 queries can be used and the experiment can be repeated 10,000 times to estimate the membership advantage and show the distribution of confidence at the end of each run. As it is known that the adversary will compose k times, the RDP composition scheme can be used to determine what noise scale can be applied for each individual query. The results are illustrated in diagram 500a of the accompanying drawings.
To illustrate this phenomenon, a histogram over the beliefs at the end of each run is provided for various choices of ρc in diagram 600 of the accompanying drawings.
A final note concerns δ, which describes the probability of exceeding ρc. When looking closely at the histograms, one can see that there are some (small) columns for a range of values that are larger than the worst-case bound. Their proportion among all runs can be calculated, e.g., 0.0008 for ϵ=5, which is less than the expected δ.
Application to Deep Learning. A natural question arising is how the introduced adversary A_adapt behaves on real data and high-dimensional, iterative differentially private function evaluations. Such characteristics are typically found in deep learning classification tasks. Here, a neural network (NN) is provided a training dataset D to learn a prediction function ŷ=ƒnn(x) given (x, y)∈D. Learning is achieved by means of an optimizer. Afterwards, the accuracy of the learned prediction function ƒnn(·) is tested on a dataset D_test.
A variety of differentially private optimizers for deep learning can be utilized. These optimizers represent a differentially private training mechanism M_nn(ƒθ(·)) that updates the weights θt per training step t∈T with θt←θt−1−α·g̃, where α>0 is the learning rate and g̃ denotes the Gaussian-perturbed gradient (cf. Definition 2). After T update steps, where each update step is itself an application of Gau(ƒθ(·)), the algorithm outputs a differentially private weight matrix θ which is then used in the prediction function ƒnn(·). Considering the evaluation of ƒnn(·) given (x, y) as post-processing of the trained weights θ, it is found that the prediction ŷ=ƒnn(x) is (ϵ, δ)-differentially private too.
It is assumed that A_adapt desires to correctly identify the dataset D that was utilized for training when having the choice between D and D′. There are two variations of DP: bounded and unbounded. In bounded DP, it holds that |D|=|D′|. However, differentially private deep learning optimizers such as the one utilized herein consider unbounded DP as the standard case, in which |D|−|D′|=1. Furthermore, A_adapt can be assumed to possess the following information: the initial weights θ0, the perturbed gradients g̃ after every epoch, the values of privacy parameters (ϵ, δ), and sensitivity Δƒ2=C equal to the clipping norm. Here, Δƒ2 refers to the sensitivity with which noise added by a mechanism is scaled, not necessarily global sensitivity. In some experiments, for example, Δƒ2=S(ƒ), which expresses the true difference between ƒ(D) and ƒ(D′), as in Definition 3. The assumptions are analogous to those of white-box membership inference attacks. The attack itself is based on calculating clipped gradients ĝ(D, θt) and ĝ(D′, θt) for each training step t∈T, finding β(D) for that training step, and calculating θt+1 by applying g̃.
Above, sensitivity was set to Δƒ2=S(ƒ)=∥ƒ(D)−ƒ(D′)∥2, the true difference between the sums of wages resulting from Bob's and Dan's participation in a survey. In order to create a comparable experiment for differentially private deep learning, the difference between gradients that can be obscured by noise for each epoch is S(ƒθt)=∥n·ĝ(D, θt)−(n−1)·ĝ(D′, θt)∥2. C bounds the influence of a single training example on training by clipping each per-example gradient to the chosen value of C; although this value bounds the influence of a single example on the gradient, this bound is loose. If S(ƒθt)«C, the adversary confidence β(D) would be very small in every case when Δƒ2=C, as is the case in most implementations of differentially private neural networks. This behavior is due to the fact that an assumption for Equation (4) does not hold, since Δƒ2≠S(ƒθt). To address this challenge in differentially private deep learning, Δƒ2=S(ƒθt) can be adaptively set. Choosing Δƒ2 this way is analogous to using local sensitivity in differential privacy.
Based on the previously introduced assumptions and notations, Algorithm 1 can be adapted to bounded and unbounded, as well as global and S(ƒθ)-based, settings. The adapted strong adaptive adversary for differentially private deep learning is stated in Algorithm 2, which specifies A_adapt in an unbounded environment with Δƒ2=S(ƒθ). For bounded differential privacy with Δƒ2=S(ƒθ), Algorithm 2 can be adjusted s.t. D′ is defined to contain n records and Δƒ=S(ƒθt)=n·∥ĝt(D′)−ĝt(D)∥2. To implement global unbounded differential privacy, Δƒ2=C and D′ contains n−1 records. To implement global bounded differential privacy, D′ contains n−1 records and Δƒ2=2C, since the maximum influence of one example on the sum of per-example gradients is C. If one record is replaced with another, the lengths of the clipped gradients of these two records could each be C and point in opposite directions, which results in ∥n·ĝt(D)−n·ĝt(D′)∥2=2C. It is also noted that the same value of Δƒ2 used by A_adapt can also be used by M to add noise.
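The four sensitivity settings described above can be summarized in a short helper; the setting names, the signature, and the use of mean clipped per-example gradients as inputs are illustrative assumptions rather than the notation of Algorithm 2.

```python
import numpy as np

def noise_sensitivity(setting: str, g_d, g_d_prime, n: int, clipping_norm: float) -> float:
    """Per-step sensitivity used to scale the Gaussian noise, where g_d and g_d_prime
    denote the mean clipped per-example gradients computed on D and D'."""
    g_d, g_d_prime = np.asarray(g_d), np.asarray(g_d_prime)
    if setting == "unbounded_sf":        # S(f_theta)-based, D' has one record fewer
        return float(np.linalg.norm(n * g_d - (n - 1) * g_d_prime))
    if setting == "bounded_sf":          # S(f_theta)-based, D' has n records
        return float(n * np.linalg.norm(g_d_prime - g_d))
    if setting == "unbounded_global":    # global sensitivity equals the clipping norm C
        return clipping_norm
    if setting == "bounded_global":      # replacement can flip a gradient of length C
        return 2.0 * clipping_norm
    raise ValueError(f"unknown setting: {setting}")
```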
For practical evaluation, a feed-forward NN for the MNIST dataset was built. For MNIST, the utilized NN architecture consists of two repetitions of a convolutional layer with kernel size (3, 3), batch normalization and max pooling with pool size (2, 2), before being flattened for the output layer. ReLU and softmax activation functions were used for the convolutional layers and the output layer, respectively.
One epoch represents the evaluation of all records in D. Thus, it is important to highlight that the number of update steps T varies in practice depending on the number of records from D used for calculating the DP gradient update ĝ. In mini-batch gradient descent, a number of b records from D is used for calculating an update and one epoch results in t=|D|/b update steps. In contrast, in batch gradient descent all records in D are used for calculating the update and each epoch consists of a single update step. While the approaches vary in their speed of convergence due to the gradient update behavior (i.e., many small updates vs. few large updates), none of the approaches has hard limitations w.r.t. convergence of accuracy and loss. With the current subject matter, batch gradient descent was utilized and, given differentially private gradient updates g̃ after any update step t, the previously introduced adversary A_adapt shall decide whether the update was calculated on D or D′. It was assumed that A_adapt has equal prior beliefs of 0.5 on D and D′. The prior belief of A_adapt adapts at every step t according to (1).
In the experiments, relevant parameters were set as follows: training data |D|=100, epochs k=30, clipping norm C=3.0, learning rate α=0.005, δ=0.01, and ρc=0.9. These values correspond to ρα=25.62%.
The empirically calculated values (i.e., after the training) for ρα and δ are presented in Table 5. The belief distributions for the described experiments can be found in diagrams 700 and 800 of the accompanying drawings.
Note that δ indeed bounds the percentage of experiments for which βT(D)>ρc. For all experiments with Δƒ2=S(ƒθ) and for global, unbounded DP, the empirical values of ρα match the analytical values. However, in global, bounded differential privacy the difference between correct guesses and incorrect guesses by A_adapt falls below ρα. In this experiment, the percentage of experiments for which βT(D)>ρc is also far lower. This behavior confirms the hypothesis that C is loose, so global sensitivity results in a lower value of βT(D), as is again confirmed by the accompanying drawings.
The following investigates the reason for the similarities between unbounded differential privacy with Δƒ2=S(ƒθ) and Δƒ2=C, and also for the differences observed in the bounded settings.
Diagram 900(b) of the accompanying drawings illustrates this behavior. Similarly, it is observed in the accompanying drawings that Δƒ2 is greater for global, bounded DP.
Relation to Membership Inference Threat Model. The membership inference threat model and the analysis of A_adapt herein exhibit clear commonalities. Namely, they share the same overarching goal: to intuitively quantify the privacy offered by DP in a deep learning scenario. Both approaches aim to clarify the privacy risks associated with deep learning models.
Considering that A_adapt desires to identify the dataset used for training an NN, A_adapt is analogous to a membership inference adversary who desires to identify individual records in the training data. Furthermore, parallel to membership inference, A_adapt operates in the white-box model, observing the development of g̃t over all training steps t of an NN. In one approach, the adversary uses the training loss to infer membership of an example.
Although the general ideas overlap, A_adapt is far stronger than a membership inference adversary. Membership advantage quantifies the effectiveness of the practical membership inference attack and therefore provides a lower bound on information leakage, which adversaries with auxiliary information quickly surpass. A_adapt has access to arbitrary auxiliary information, including all data points in D and D′, staying closer to the original DP guarantees. Using A_adapt, what the best possible adversary is able to infer can be calculated, and it can be seen that this adversary reaches the upper bound.
An adversarial game can be defined in which the adversary receives both datasets D and D′ instead of only receiving one value z, the size n of the dataset, and the distribution from which the data points are drawn.
Experiment 1. Let 𝒜 be an adversary, A be a learning algorithm, and D and D′ be neighboring datasets. The identifiability experiment proceeds as follows:
Here, the expected value of the membership advantage is calculated to quantify the accuracy of A_adapt.
In the evaluation of A_adapt in a deep learning setting, it was realized that A_adapt did not reach the upper confidence bound until the sensitivity was adjusted. In differentially private deep learning, gradients decrease over the course of training until convergence and can fall below the sensitivity or clipping norm. This means that more noise is added than would have been necessary to obscure the difference made by a member of the dataset. Overall, the difference between the lower bound on privacy offered by membership advantage in a membership inference setting and the upper bound offered by maximum adversary confidence includes the auxiliary knowledge of A_adapt and the inherent privacy offered in deep learning scenarios through decreasing gradients.
Application to Analytics. The observed utility gains can also be realized on real-world data in an energy forecasting task that is relevant for German energy providers. In this energy forecasting problem, the energy transmission network is structured into disjoint virtual balancing groups under the responsibility of individual energy providers. Each balancing group consists of multiple zones and each zone consists of individual households. Energy providers have an incentive to forecast the demand in their balancing group with low error to schedule energy production accordingly. Currently, energy providers can utilize the overall aggregated energy consumption per balancing group to calculate a demand forecast, since they have to report these numbers to the transmission system operator. However, with the rollout of smart meters, additional communication channels can be set up and the demand per zone could be computed. Note that, counterintuitively, forecasting on grouped household loads instead of individual households is beneficial for forecasting performance due to reduced stochasticity. Nonetheless, the demand per zone reflects the sum of individual household energy consumption, and computing it is thus a sensitive task. Here, the use of differential privacy allows one to compute the anonymized energy consumption per zone and mitigate privacy concerns. The energy provider will only have an incentive to apply differential privacy if the forecasting error based on the differentially private energy consumption per zone is lower than the forecasting error based on the aggregated zonal loads. Vice versa, the energy provider has to quantify and communicate the achieved privacy guarantee to the balancing group households to gather consent for processing the data.
This forecasting task was based on the dataset and benchmarking model of the 2012 Global Energy Forecasting Competition (GEFCom). The GEFCom dataset consists of hourly energy consumption data of 4.5 years from 20 zones for training and one week of data from the same zones for testing. The GEFCom winning model computes the forecast F for a zone z at time t by computing a linear regression over a p-dimensional feature vector x (representing 11 weather stations):
The sensitive target attribute of the linear regression is the zonal load consumption Lz,t, i.e., the sum of n household loads lz,t,1, . . . , lz,t,n. Differential privacy can be added to the sum computation by applying the Gaussian mechanism (cf. Theorem 2), yielding:
The energy provider will only have a benefit if the differentially private load forecast has a smaller error than the aggregated forecast. A suitable error metric for energy forecasting is the Mean Absolute Error (MAE), i.e.:
Diagram 1000a of the accompanying drawings illustrates the resulting forecasting errors.
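A sketch of the anonymized zonal aggregation and of the error metric is given below; the function names, the parameterization of the noise via a precomputed σ, and the array-based interface are illustrative assumptions.

```python
import numpy as np

def dp_zonal_load(household_loads, sigma: float, rng=None) -> float:
    """Differentially private zonal load: Gaussian-perturbed sum of the household
    loads of one zone at one time step (cf. the Gaussian mechanism above)."""
    rng = rng or np.random.default_rng()
    return float(np.sum(household_loads) + rng.normal(0.0, sigma))

def mean_absolute_error(forecast, actual) -> float:
    """Mean Absolute Error between the forecast and the actual zonal loads."""
    forecast, actual = np.asarray(forecast, dtype=float), np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(forecast - actual)))
```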
In contrast to the energy provider, it is assumed that households have an interest in ρc«1. This leads to the question of whether the energy provider and the households have an intersection of their preferred ϵ.
The machine learning model(s) can be deployed once trained to classify input data when received.
The at least one first user-generated privacy parameter can include a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. With such an arrangement, the calculated at least one second privacy parameter can include privacy parameters ε, δ and the calculating can be based on a ratio of probability distributions of different observations which are bound by the posterior belief ρc as applied to the dataset.
In another variation, the at least one first user-generated privacy parameter includes privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an expected membership advantage ρα that corresponds to a likelihood of an adversary successfully identifying a member in the dataset and the calculating can be based on an overlap of two probability distributions.
In still another variation, the at least one first user-generated privacy parameter can include privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an adversarial posterior belief bound ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output and the calculating can be based on a conditional probability of different possible datasets.
In one example, a disk controller 1548 can interface with one or more optional disk drives to the system bus 1504. These disk drives can be external or internal floppy disk drives such as 1560, external or internal CD-ROM, CD-R, CD-RW, or DVD, or solid state drives such as 1552, or external or internal hard drives 1556. As indicated previously, these various disk drives 1552, 1556, 1560 and disk controllers are optional devices. The system bus 1504 can also include at least one communication port 1520 to allow for communication with external devices either physically connected to the computing system or available externally through a wired or wireless network. In some cases, the at least one communication port 1520 includes or otherwise comprises a network interface.
To provide for interaction with a user, the subject matter described herein can be implemented on a computing device having a display device 1540 (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information obtained from the bus 1504 via a display interface 1514 to the user and an input device 1532 such as keyboard and/or a pointing device (e.g., a mouse or a trackball) and/or a touchscreen by which the user can provide input to the computer. Other kinds of input devices 1532 can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback by way of a microphone 1536, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The input device 1532 and the microphone 1536 can be coupled to and convey information via the bus 1504 by way of an input device interface 1528. Other computing devices, such as dedicated servers, can omit one or more of the display 1540 and display interface 1514, the input device 1532, the microphone 1536, and input device interface 1528.
One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores.
In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible.
The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.