Interpretability framework for differentially private deep learning

Information

  • Patent Grant
  • 12001588
  • Patent Number
    12,001,588
  • Date Filed
    Friday, October 30, 2020
    4 years ago
  • Date Issued
    Tuesday, June 4, 2024
    5 months ago
Abstract
Data is received that specifies a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. Privacy parameters ε, δ are then calculated based on the received data that govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. The calculating is based on a ratio of probability distributions of different observations, which are bound by the posterior belief ρc as applied to a dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset. Related apparatus, systems, techniques and articles are also described.
Description
TECHNICAL FIELD

The subject matter described herein relates to an interpretability framework for calculating confidence levels and expected membership advantages of an adversary in identifying members of a training dataset used in training machine learning models.


BACKGROUND

Machine learning models can leak sensitive information about training data. To address such situations, noise can be added during the training process via differential privacy (DP) to mitigate privacy risk. To apply differential privacy, data scientists choose DP parameters (ϵ, δ). However, interpreting and choosing DP privacy parameters (ϵ, δ), and communicating the factual guarantees with regard to re-identification risk and plausible deniability, is still a cumbersome task for non-experts. Different approaches for justification and interpretation of DP privacy parameters have been introduced which stray from the original DP definition by offering an upper bound on privacy in the face of an adversary with arbitrary auxiliary knowledge.


SUMMARY

In a first aspect, data is received that specifies a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. Privacy parameters ε, δ are then calculated based on the received data that govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. The calculating is based on a ratio of probability distributions of different observations, which are bound by the posterior belief ρc as applied to a dataset. The calculated privacy parameters are then used to apply the DP algorithm to the function over the dataset.


The probability distributions can be generated using a Gaussian mechanism with an (ε, δ) guarantee that perturbs the result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.


The probability distributions can be generated using a Laplacian mechanism with an ε guarantee that perturbs the result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.


The resulting dataset (i.e., the dataset after application of the DP algorithm to the function over the dataset) can be used for various applications including training a machine learning model. Such a trained machine learning model can be deployed and then classify data input therein.


Privacy parameter ε can equal log(ρc/(1−ρc)) for a series of (ε, δ) or ε anonymized function evaluations with multidimensional data.


A resulting total posterior belief ρc can be calculated using a sequential composition or Rényi differential privacy (RDP) composition. The at least one machine learning model can be updated using the calculated resulting total posterior belief ρc.


In an interrelated aspect, data is received that specifies privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. The received data is then used to calculate an expected membership advantage ρα that corresponds to a probability of an adversary successfully identifying a member in the dataset. The calculated expected membership advantage ρα can be used when applying the DP algorithm to a function over the dataset.


The probability distributions can be generated using a Gaussian mechanism with an (ε, δ) guarantee that perturbs the result of the function evaluated over the dataset, ensuring that membership advantage is ρa on the dataset.


The probability distributions can be generated using a Laplacian mechanism with an ε guarantee that perturbs the result of the function evaluated over the dataset, ensuring that membership advantage is ρa on the dataset.


The resulting dataset (i.e., the dataset after application of the DP algorithm to the function over the dataset) can be used to train at least one machine learning model. Such a trained machine learning model can be deployed so as to classify further data input therein.


The calculated expected membership advantage ρα for a series of (ε, δ) anonymized function evaluations with multidimensional data is equal to:







ρα = CDF(1/(2·√(2 ln(1.25/δ))/ε)) − CDF(−1/(2·√(2 ln(1.25/δ))/ε))






wherein CDF is the cumulative distribution function of the standard normal distribution.
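
By way of non-limiting example, this quantity can be evaluated numerically. The following illustrative Python sketch (assuming SciPy is available; the function name is illustrative only and not part of the claimed subject matter) computes the expected membership advantage from a given (ε, δ) pair:

import math
from scipy.stats import norm

def expected_membership_advantage(epsilon, delta):
    # Half the Mahalanobis distance for the Gaussian mechanism with
    # sigma = Delta_f2 * sqrt(2 ln(1.25/delta)) / epsilon and ||mu1 - mu2||_2 = Delta_f2.
    half_distance = 1.0 / (2.0 * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon)
    # rho_a = CDF(half_distance) - CDF(-half_distance)
    return norm.cdf(half_distance) - norm.cdf(-half_distance)

# Example: (epsilon, delta) = (2, 1e-6)
print(expected_membership_advantage(2.0, 1e-6))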


A resulting expected membership advantage ρα can be calculated using sequential composition or Rényi differential privacy (RDP) composition. The calculated resulting expected membership advantage ρα can be used to update the at least one machine learning model.


In a further interrelated aspect, data is received that specifies privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. Thereafter, the received data is used to calculate an adversarial posterior belief bound ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. Such calculating can be based on an overlap of two probability distributions. The DP algorithm can then be applied, using the calculated adversarial posterior belief bound ρc, to a function over the dataset to result in an anonymized function output (e.g., machine learning model, etc.).


Posterior belief bound ρc can equal 1/(1+e^−ε) for a series of (ε, δ) or ε anonymized function evaluations with multidimensional data.


Data can be received that specifies an expected adversarial posterior belief bound expected ρc such that ρc=expected ρc+δ*(1−expected ρc).
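
By way of non-limiting example, the relationship between ε, δ and the posterior belief bound can be sketched as follows in Python (illustrative only; function names are not part of the claimed subject matter):

import math

def rho_c(epsilon, delta=0.0):
    # Worst-case posterior belief for an epsilon-DP mechanism: rho(eps) = 1 / (1 + exp(-eps)).
    rho = 1.0 / (1.0 + math.exp(-epsilon))
    # Expected bound under (epsilon, delta)-DP: rho_c = rho + delta * (1 - rho).
    return rho + delta * (1.0 - rho)

def epsilon_from_expected_rho(expected_rho):
    # Inverse relation epsilon = log(rho / (1 - rho)) for the delta = 0 case.
    return math.log(expected_rho / (1.0 - expected_rho))

print(rho_c(5.0, 0.01))                 # approximately 0.993
print(epsilon_from_expected_rho(0.99))  # approximately 4.6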


The probability distributions can be generated using a differential privacy mechanism either with an (ε, δ) guarantee or with an ε guarantee that perturbs the result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.


At least one machine learning model can be anonymously trained using the resulting dataset (i.e., the dataset after application of the DP algorithm to the function over the dataset). A resulting total posterior belief ρc can be calculated using a sequential composition or Rényi differential privacy (RDP) composition. The at least one machine learning model can be updated using the calculated resulting total posterior belief ρc.


In a still further interrelated aspect, a dataset is received. Thereafter, at least one first user-generated privacy parameter is received which governs a differential privacy (DP) algorithm to be applied to a function evaluated over the received dataset. Using the received at least one first user-generated privacy parameter, at least one second privacy parameter is calculated based on a ratio or overlap of probability distributions of different observations. Thereafter, the DP algorithm is applied, using the at least one second privacy parameter, to the function over the received dataset to result in an anonymized function output (e.g., machine learning model, etc.). At least one machine learning model can be anonymously trained using the dataset which, when deployed, is configured to classify input data.


The machine learning model(s) can be deployed once trained to classify input data when received.


The at least one first user-generated privacy parameter can include a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. With such an arrangement, the calculated at least one second privacy parameter can include privacy parameters ε, δ and the calculating can be based on a ratio of probability distributions of different observations which are bound by the posterior belief ρc as applied to the dataset.


In another variation, the at least one first user-generated privacy parameter includes privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an expected membership advantage ρα that corresponds to a probability of an adversary successfully identifying a member in the dataset and the calculating can be based on an overlap of two probability distributions.


In still another variation, the at least one first user-generated privacy parameter can include privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an adversarial posterior belief bound ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output and the calculating can be based on an overlap of two probability distributions.


Non-transitory computer program products (i.e., physically embodied computer program products) are also described that store instructions, which when executed by one or more data processors of one or more computing systems, cause at least one data processor to perform operations herein. Similarly, computer systems are also described that may include one or more data processors and memory coupled to the one or more data processors. The memory may temporarily or permanently store instructions that cause at least one processor to perform one or more of the operations described herein. In addition, methods can be implemented by one or more data processors either within a single computing system or distributed among two or more computing systems. Such computing systems can be connected and can exchange data and/or commands or other instructions or the like via one or more connections, including but not limited to a connection over a network (e.g., the Internet, a wireless wide area network, a local area network, a wide area network, a wired network, or the like), via a direct connection between one or more of the multiple computing systems, etc.


The subject matter described herein provides many technical advantages. For example, the current framework provides enhanced techniques for selecting a privacy parameter ϵ based on the re-identification confidence ρc and expected membership advantage ρα. These advantages were demonstrated on synthetic data, reference data and real-world data in machine learning and data analytics use cases which show that the current framework is suited for multidimensional queries under composition. The current framework furthermore allows the optimization of the utility of differentially private queries at the same (ρc, ρα) by considering the sensitive range S(ƒ) instead of the global sensitivity Δƒ. The framework allows data owners and data scientists to map their expectations of utility and privacy, and derive the consequent privacy parameter ϵ.


The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims.





DESCRIPTION OF DRAWINGS


FIGS. 1A and 1B are diagrams respectively illustrating posterior beliefs over an output space of ℳGau and ℳLap;



FIGS. 2A and 2B are diagrams respectively illustrating decision boundaries based on PDFs and confidence;



FIGS. 3A and 3B are diagrams illustrating 𝒜adapt error regions for varying ε, ℳGau, f(𝒟)=0, f(𝒟′)=1;



FIGS. 4A and 4B are diagrams respectively illustrating expected adversarial worst-case confidence bound ρc and the adversarial membership advantage ρα for various (ϵ, δ) when using ℳGau for perturbation;



FIGS. 5A and 5B are diagrams illustrating a sample run of 𝒜adapt on a sequence of k=100 evaluations of ℳGau,i that shows the mechanism outputs on the left y-axis and the development of confidences on 𝒟 and 𝒟′ on the right-hand y-axis. At the end of the sequence, 𝒜adapt decides for the dataset with the highest confidence. In 5(b), the number of decisions over all runs is shown.



FIGS. 6A-6D are diagrams illustrating a confidence distribution of 𝒜adapt at the end of 10,000 runs, i.e., after composition over different ε and fixed δ=0.001.



FIGS. 7A-7D are diagrams illustrating a confidence distribution of 𝒜adapt at the end of 30 epochs, i.e., after composition with δ=0.001; these diagrams show the distribution for global sensitivity, which yields strong privacy, and for Δf2=S(f), which yields a distribution identical to its counterpart using synthetic data.



FIG. 8 is a diagram illustrating confidence distribution after 30 epochs with privacy parameters ρc=0.9, δ=0.01;



FIGS. 9A-9B are diagrams illustrating sensitivity and test accuracy over 30 epochs;



FIGS. 10A-10C are diagrams illustrating utility and privacy metrics for the GEFcom challenge;



FIG. 11 is a first process flow diagram illustrating an interpretability framework for differentially private deep learning;



FIG. 12 is a second process flow diagram illustrating an interpretability framework for differentially private deep learning;



FIG. 13 is a third process flow diagram illustrating an interpretability framework for differentially private deep learning;



FIG. 14 is a fourth process flow diagram illustrating an interpretability framework for differentially private deep learning; and



FIG. 15 is a diagram of a computing device for implementing aspects of an interpretability framework for differentially private deep learning.





Like reference symbols in the various drawings indicate like elements.


DETAILED DESCRIPTION

Provided herein is an interpretability framework for calculating the confidence ρc and expected membership advantage ρα of an adversary in identifying members of training data used in connection with one or more machine learning models. These metrics are derived a priori for multidimensional, iterative computations, as found in machine learning. The framework is compatible with composition theorems and alternative differential privacy definitions like Rényi Differential Privacy, offering a tight upper bound on privacy. For illustration purposes, the framework and resulting utility are evaluated on synthetic data, in a deep learning reference task, and in a real-world electric load forecasting benchmark.


The current subject matter provides a generally applicable framework for interpretation of the DP guarantee in terms of an adversary's confidence and expected membership advantage for identifying the dataset on which a differentially private result was computed. The framework adapts to various DP mechanisms (e.g., Laplace, Gaussian, Exponential) for scalar and multidimensional outputs and is well-defined even under composition. The framework allows users to empirically analyze a worst-case adversary under DP, but also gives analytical bounds with regard to maximum confidence and expected membership advantage.


The current subject matter, in particular, can be used to generate anonymized function output within specific privacy parameter bounds which govern the difficulty of getting insight into the underlying input data. Such anonymous function evaluations can be used for various purposes including training of machine learning models which, when deployed, can classify future data input into such models.


Also provided herein are illustrations of how different privacy regimes can be determined by the framework independent of a specific use case.


Still further, with the current subject matter, privacy parameters for abstract composition theorems such as Rényi Differential Privacy in deep learning can be inferred from the desired confidence and membership advantage in our framework.


Differential Privacy. Generally, data analysis can be defined as the evaluation of a function ƒ: DOM→R on some dataset 𝒟∈DOM yielding a result r∈R. Differential privacy is a mathematical definition for anonymized analysis of datasets. In contrast to previous anonymization methods based on generalization (e.g., k-anonymity), DP perturbs the result of a function ƒ(·) over a dataset 𝒟={d1, . . . , dn} s.t. it is no longer possible to confidently determine whether ƒ(·) was evaluated on 𝒟 or some neighboring dataset 𝒟′ differing in one individual. The neighboring dataset 𝒟′ can be created by either removing one data point from 𝒟 (unbounded DP) or by replacing one data point in 𝒟 with another from DOM (bounded DP). Thus, privacy is provided to participants in the dataset since the impact of their presence (or absence) on the query result becomes negligible. To inject differentially private noise into the result of some arbitrary function ƒ(·), mechanisms ℳ fulfilling Definition 1 are utilized.


Definition 1 ((ϵ, δ)-Differential Privacy). A mechanism ℳ gives (ϵ, δ)-Differential Privacy if for all 𝒟, 𝒟′⊆DOM differing in at most one element, and all outputs S⊆R

Pr(ℳ(𝒟)∈S) ≤ e^ϵ·Pr(ℳ(𝒟′)∈S) + δ


ϵ-DP is defined as (ϵ, δ=0)-DP, and the application of a mechanism ℳ to a function ƒ(·) is referred to as output perturbation. DP holds if mechanisms are calibrated to the global sensitivity, i.e., the largest influence a member of the dataset can cause to the outcome of any ƒ(·). Let 𝒟 and 𝒟′ be neighboring datasets; the global l1-sensitivity of a function ƒ is defined as Δƒ = max𝒟,𝒟′ ‖ƒ(𝒟)−ƒ(𝒟′)‖1. Similarly, Δƒ2 = max𝒟,𝒟′ ‖ƒ(𝒟)−ƒ(𝒟′)‖2 can be referred to as global l2-sensitivity.


A popular mechanism for perturbing the outcome of numerical query functions ƒ is the Laplace mechanism. Following Definition 1 the Laplace mechanism adds noise calibrated to Δƒ by drawing noise from the Laplace distribution with mean μ=0.


Theorem 1 (Laplace Mechanism). Given a numerical query function ƒ: DOM→R^k, the Laplace mechanism

ℳLap(𝒟, ƒ, ϵ) := ƒ(𝒟) + (z1, . . . , zk)


is an ∈-differentially private mechanism when all zi with 1≤i≤k are independently drawn from









zi ∼ Lap(z, λ = Δƒ/ϵ), μ = 0.
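
By way of non-limiting example, and purely as an illustrative sketch (assuming NumPy is available; names are not part of the claimed subject matter), the Laplace mechanism for a numerical query result can be implemented as follows:

import numpy as np

def laplace_mechanism(true_value, l1_sensitivity, epsilon, rng=None):
    # Perturbs a (possibly k-dimensional) query result with Laplace noise of
    # scale lambda = Delta_f / epsilon, drawn independently per dimension.
    rng = np.random.default_rng() if rng is None else rng
    scale = l1_sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=np.shape(true_value))
    return np.asarray(true_value, dtype=float) + noise

# Example: an epsilon = 1 sum query with l1-sensitivity 9
print(laplace_mechanism(17.0, l1_sensitivity=9.0, epsilon=1.0))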





A second DP mechanism used for output perturbation within this work is the Gaussian mechanism of Definition 2. The Gaussian mechanism uses l2-sensitivity.


Theorem 2 (Gaussian Mechanism). Given a numerical query function ƒ: DOM→R^k, there exists σ s.t. the Gaussian mechanism

ℳGau(𝒟, ƒ, ϵ, δ) := ƒ(𝒟) + (z1, . . . , zk)


is an (ϵ, δ)-differentially private mechanism for a given pair of ϵ, δ∈(0, 1) when all zi with 1≤i≤k are independently drawn from 𝒩(0, σ²).


Prior work has analyzed the tails of the normal distribution and found that bounding σ > Δƒ2·√(2 ln(1.25/δ))/ϵ fulfills Theorem 2. However, these bounds have been shown to be loose and result in overly pessimistic noise addition.
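
By way of non-limiting example, the classic calibration can be sketched as follows (illustrative Python, assuming NumPy; this is not the claimed implementation):

import math
import numpy as np

def gaussian_mechanism(true_value, l2_sensitivity, epsilon, delta, rng=None):
    # Classic (loose) calibration: sigma = Delta_f2 * sqrt(2 ln(1.25/delta)) / epsilon.
    rng = np.random.default_rng() if rng is None else rng
    sigma = l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    noise = rng.normal(loc=0.0, scale=sigma, size=np.shape(true_value))
    return np.asarray(true_value, dtype=float) + noise, sigma

perturbed, sigma = gaussian_mechanism([17.0], l2_sensitivity=9.0, epsilon=1.0, delta=1e-5)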


Definition 2 ((α, ϵRDP)-Differential Privacy). A mechanism ℳ gives (α, ϵRDP)-RDP if for any adjacent 𝒟, 𝒟′⊆DOM and α>1










Dα(ℳ(𝒟) ‖ ℳ(𝒟′)) = (1/(α−1))·ln 𝔼x∼ℳ(𝒟′)[(Pr(ℳ(𝒟)=x)/Pr(ℳ(𝒟′)=x))^α] ≤ ϵRDP






Calibrating the Gaussian mechanism in terms of Rényi differential privacy (RDP) is straightforward due to the relation ϵRDP = α·Δƒ2²/(2σ²). One option is to split σ = Δƒ2·η, where η is called the noise multiplier, which is the actual term dependent on ϵRDP as Δƒ2 is fixed. A (α, ϵRDP)-RDP guarantee converts to







(ϵRDP − (ln δ)/(α−1), δ)-DP





which is not trivially invertible as multiple (α, ∈RDP) yield the same (∈, δ)-DP guarantee. A natural choice is to search for a (α, ∈RDP) causing σ to be as low as possible. Hence, it can be expanded as follows:






ϵ = ϵRDP − (ln δ)/(α−1) = α·Δƒ2²/(2σ²) − (ln δ)/(α−1) = α/(2η²) − (ln δ)/(α−1)










and minimize






η = min_α √(α / (2·(ϵ + (ln δ)/(α−1))))









which provides a tight bound on η and thus on σ for given (∈, δ).
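
By way of non-limiting example, this minimization can be carried out with a simple scan over Rényi orders α, as in the following illustrative Python sketch (the function name and the grid of α values are illustrative assumptions, not part of the claimed subject matter):

import math

def tight_noise_multiplier(epsilon, delta, alphas=None):
    # Scans alpha > 1 and returns the smallest eta satisfying
    # eta = sqrt(alpha / (2 * (epsilon + ln(delta) / (alpha - 1)))).
    alphas = alphas if alphas is not None else [1.0 + i / 10.0 for i in range(1, 2000)]
    best = None
    for alpha in alphas:
        denom = epsilon + math.log(delta) / (alpha - 1.0)
        if denom <= 0.0:
            continue  # alpha too small for the requested (epsilon, delta)
        eta = math.sqrt(alpha / (2.0 * denom))
        best = eta if best is None else min(best, eta)
    return best

print(tight_noise_multiplier(1.0, 1e-5))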


The standard approach to analyze the privacy decay over a series of ∈-DP mechanisms is the sequential composition theorem.


Theorem 3 (Sequential Composition). Let ℳi provide (ϵi, δi)-Differential Privacy. The sequence of ℳ1, . . . , ℳk(𝒟) provides (Σi ϵi, Σi δi)-DP.


Sequential composition is, again, loose for (ϵ, δ)-DP, which has resulted in various advanced theorems for composition. Yet, tight composition bounds are also studied in the RDP domain, which has the nice property of ϵRDP,i being summed up as well. So, for a sequence of k mechanism executions each providing (α, ϵRDP,i)-RDP, the total guarantee composes to (α, Σi ϵRDP,i)-RDP. Using the equations above, a tight per-step η can be derived from this.


These aspects of DP build the foundations of private deep learning. In private deep learning, the tradeoff between privacy and utility becomes important because practical neural networks need to offer a certain accuracy. Although increasing privacy through the (∈, δ) guarantee always decreases utility, other factors also affect the accuracy of models, such as the quantity of training data and the value chosen for the clipping norm C.


Various properties of C affect its optimal value. Unfortunately, Δƒ2 cannot be determined in advance for the size of gradients, so it has been proposed to clip each per-example gradient to C, bounding the influence of one example on an update. This parameter can be set to maximize model accuracy, with one proposed rule being to set C to "the median of the norms of the unclipped gradients over the course of training." The following effects can be taken into account: the clipped gradient may point in a different direction from the original gradient if C is too small, but if C is too large, the large magnitude of noise added decreases utility. Since gradients change over the course of training, the optimal value of C at the beginning of training may no longer be optimal toward the end of training. Adaptively setting the clipping norm may further improve utility by changing C as training progresses or setting C differently for each layer. To improve utility for a set privacy guarantee, the value of C can be tuned and adapted.
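
By way of non-limiting example, per-example clipping to a norm C can be sketched as follows (illustrative NumPy-based Python; not the claimed implementation):

import numpy as np

def clip_per_example_gradients(per_example_grads, clip_norm_c):
    # Scales every per-example gradient so that its l2-norm is at most C,
    # bounding the influence of a single training example on the update.
    clipped = []
    for grad in per_example_grads:
        grad = np.asarray(grad, dtype=float)
        factor = min(1.0, clip_norm_c / (np.linalg.norm(grad) + 1e-12))
        clipped.append(grad * factor)
    return clipped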









TABLE 1
𝒜prob notations

Symbol    Description
𝒰         Data universe 𝒰, i.e., a set of individuals that are possibly present in the original dataset 𝒟.
𝒦         𝒦 ⊂ 𝒟 denotes the subset of records of whom 𝒜prob knows that they have participated in 𝒟.
n         |𝒟|.
ℳ         Mechanism ℳ and parameters, e.g., (ϵ, δ, f) for ℳLap.
f         Data analysis function f(·).
r         Differentially private output r yielded by ℳ.
pω(·)     Probability density function of ℳ given world ω.









Strong Probabilistic Adversary. For interpretation of the privacy guarantee (ϵ, δ), a desired bound on the Bayesian belief of a probabilistic adversary 𝒜prob is considered. 𝒜prob's knowledge is modeled as the tuple (𝒰, 𝒦, n, ℳ, ƒ, r) which is defined in Table 1. 𝒜prob seeks to identify 𝒟\𝒦 by evaluating possible combinations of missing individuals drawn from 𝒰, which can be formally denoted as possible worlds:

Ψ = {𝒦 ∪ {d1, . . . , dn} | d1, . . . , dn ∈ 𝒰\𝒦}



𝒜prob assigns a probability to each world ω∈Ψ, reflecting the confidence that ω was used as input to ℳ. This confidence can be referred to as belief β(ω). The posterior belief of 𝒜prob on world ωi is defined as the conditional probability:













β(ωi) = Pr(ωi | ℳ(·) = r)
      = Pr(ℳ(·) = r | ωi)·Pr(ωi) / Pr(ℳ(·) = r)
      = Pr(ℳ(·) = r | ωi)·Pr(ωi) / ∑_{j} Pr(ℳ(·) = r | ωj)·Pr(ωj)
      = Pr(ℳ(ωi) = r)·Pr(ωi) / ∑_{j} Pr(ℳ(ωj) = r)·Pr(ωj)        (1)
      = pωi(r)·Pr(ωi) / ∑_{j} pωj(r)·Pr(ωj)        (2)







Using the fact that ℳ represents a continuous random variable and the choice of worlds is discrete, Bayes' theorem allows inserting ℳ's probability density function (PDF) in step (2). The firmest guess of 𝒜prob is represented by the world ω̃ having the highest corresponding belief. However, it is not guaranteed that ω̃ represents the true world 𝒟. From this point, the terms confidence and posterior belief are used interchangeably.


The initial distribution over Ψ reflects the prior belief of 𝒜prob on each world. It is assumed that this is a discrete uniform distribution among worlds, thus







Pr(ω) = 1/|Ψ|, ∀ω ∈ Ψ.









By bounding the belief β(ω̃) for the true world ω̃ by a chosen constant ρ, a desired level of privacy can be guaranteed. It is noted that bounding the belief for the true world implicitly also bounds the belief for any other world.


The noise added to hide ω̃ can be scaled to the sensitivity of the result to a change in ω̃. Instead of basing this value on global sensitivity, the largest possible contribution of any individual can be quantified as the sensitive range S(ƒ).


Definition 3 (Sensitive Range S(ƒ)). The sensitive range of a query function ƒ is the range of ƒ:

S(ƒ) = maxω1,ω2∈Ψ ‖ƒ(ω1) − ƒ(ω2)‖
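
By way of non-limiting example, the sensitive range of a query over a small set of possible worlds can be computed by brute force, as in the following illustrative Python sketch (assuming NumPy; the sum-query example mirrors the wage survey discussed later in this description):

import itertools
import numpy as np

def sensitive_range(query, worlds):
    # S(f) = max over pairs of possible worlds of || f(w1) - f(w2) ||.
    return max(
        np.linalg.norm(np.asarray(query(w1)) - np.asarray(query(w2)))
        for w1, w2 in itertools.combinations(worlds, 2)
    )

# Two possible worlds of a wage sum query (cf. Tables 2-4 below): S(f) = 9.
print(sensitive_range(lambda w: [sum(w)], [[5, 10, 2], [5, 1, 2]]))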


This approach resulted in the introduction of differential identifiability which is defined below in Definition 4.


Definition 4 (Differential Identifiability). Given a dataset 𝒟, a randomized mechanism ℳ satisfies ρ-Differential Identifiability if among all possible datasets 𝒟1, 𝒟2, . . . , 𝒟m differing in one individual w.r.t. 𝒟, the posterior belief β, after getting the response r∈R, is bounded by ρ:

β(𝒟i | ℳ(𝒟) = r) ≤ ρ.        (3)


The notation of possible worlds ω∈Ψ is replaced by possible datasets, which is semantically the same. ρ-Differential Identifiability implies that after receiving a mechanism's output r the true dataset 𝒟 can be identified by 𝒜prob with confidence β(𝒟)≤ρ.


DP and Differential Identifiability have been shown to be equal when |Ψ|=2, since DP considers two neighboring datasets 𝒟, 𝒟′ by definition. Specifically, Differential Identifiability is equal to bounded DP in this case, since the possible worlds each have the same number of records. Under this assumption, the sensitive range S(ƒ) represents a special case of local sensitivity in which both 𝒟 and 𝒟′ are fixed. It can be assumed that Δƒ is equal to S(ƒ). If this condition is met, the relation ρ↔ϵ for ℳLap is:











S(ƒ)/λ = ϵ = ln(ρ/(1−ρ)),    ρ = 1/(1 + e^(−S(ƒ)/λ)) = 1/(1 + e^(−ϵ)) > 1/2.        (4)







Framework For Interpreting DP. Based on the original strong probabilistic adversary 𝒜prob provided above, an interpretability framework is formulated that allows formal (ϵ, δ) guarantees to be translated into concrete re-identification probabilities. First, the original confidence upper bound of Equation (3) is extended to work with arbitrary DP mechanisms, and a discussion is provided with regard to how δ is integrated into the confidence bound. Second, 𝒜prob is extended to behave adaptively with regard to a sequence of mechanisms. It is shown below that the resulting adaptive adversary 𝒜adapt behaves as assumed by composition theorems. Third, expected membership advantage ρα is defined and suggested as a privacy measure complementing ρ, which is referred to as ρc in the following.


General Adversarial Confidence Bound. According to Equation (4), the probabilistic adversary 𝒜prob with unbiased priors (i.e., 0.5) regarding neighboring datasets 𝒟, 𝒟′ has a maximum posterior belief of 1/(1+e^−ϵ) when the ϵ-differentially private Laplace mechanism (cf. Definition 1) is applied to ƒ having a scalar output. In the following, it is shown that this upper bound also holds for arbitrary ϵ-differentially private mechanisms with multidimensional output. Therefore, the general belief calculation of Equation (1) can be bound by the inequality of Definition 1.










β(𝒟) = Pr(ℳ(𝒟)=r) / (Pr(ℳ(𝒟)=r) + Pr(ℳ(𝒟′)=r))
     ≤ (Pr(ℳ(𝒟′)=r)·e^ϵ + δ) / (Pr(ℳ(𝒟′)=r)·e^ϵ + δ + Pr(ℳ(𝒟′)=r))
     = 1 / (1 + Pr(ℳ(𝒟′)=r) / (Pr(ℳ(𝒟′)=r)·e^ϵ + δ))











For δ=0, the last equation simplifies to 1/(1+e^−ϵ), so it can be concluded:


Corollary 1. For any ϵ-differentially private mechanism, the strong probabilistic adversary's confidence on either dataset 𝒟, 𝒟′ is bounded by







ρ(ϵ) = 1/(1 + e^(−ϵ))








For δ>0, however, it is observed that where Pr(ℳ(𝒟′)=r) becomes very small, β(𝒟) grows towards 1:











lim_{Pr(ℳ(𝒟′)=r)→0} 1/(1 + Pr(ℳ(𝒟′)=r)/(Pr(ℳ(𝒟′)=r)·e^ϵ + δ)) = 1        (5)







Hence, if the Gaussian mechanism ℳGau samples a value at the tails of the distribution in the direction away from ƒ(𝒟′), the posterior beliefs for 𝒟 and 𝒟′ head to 1 and 0, respectively. If a value is sampled from the tails in the direction of ƒ(𝒟′), the posterior beliefs for 𝒟 and 𝒟′ go to 0 and 1, respectively. The difference in behavior between the Laplace and Gaussian mechanisms when large values of noise are sampled can be demonstrated by fixing ƒ(𝒟)=0, ƒ(𝒟′)=1 and Δƒ=Δƒ2=1. Diagram 100b of FIG. 1(b) shows the effect of the output of ℳLap on the posterior beliefs for 𝒟 and 𝒟′ when ϵ=1, δ=0, whereas ℳGau results in an upper bound of 1, as is visualized in diagram 100a of FIG. 1(a). Therefore, β(𝒟) can only be bounded with probability 1−δ when ℳGau is used, since ℳGau provides ϵ-DP with probability 1−δ.


β is now extended to k-dimensional (ϵ, δ)-differentially private mechanisms where ƒ: DOM→R^k.


Theorem 4. The general confidence bound of Corollary 1 holds for multidimensional (ϵ, δ)-differentially private mechanisms with probability 1−δ.


Proof. Properties of RDP can be used to prove the confidence bound for multidimensional (ϵ, δ)-differentially private mechanisms.













β(𝒟) = Pr(ℳ(𝒟)=r) / (Pr(ℳ(𝒟)=r) + Pr(ℳ(𝒟′)=r))
     = 1 / (1 + Pr(ℳ(𝒟′)=r)/Pr(ℳ(𝒟)=r))        (6)
     ≤ 1 / (1 + Pr(ℳ(𝒟′)=r)/(e^(ϵRDP)·Pr(ℳ(𝒟′)=r))^(1−1/α))        (7)
     = 1 / (1 + e^(−ϵRDP(1−1/α))·Pr(ℳ(𝒟′)=r)^(1/α))        (8)







In the step from Equation (6) to (7), probability preservation properties are used which prove that RDP guarantees can be converted to (ϵ, δ) guarantees. In the context of this proof, it is implied that ϵ-DP holds when Pr(ℳ(𝒟′)=r) > δ^(α/(α−1))·e^(−ϵRDP), since otherwise Pr(ℳ(𝒟)=r) < δ. It can therefore be assumed that Pr(ℳ(𝒟′)=r) > δ^(α/(α−1))·e^(−ϵRDP), which occurs with probability at least 1−δ, and continue from Equation (8):















≤ 1 / (1 + e^(−ϵRDP(1−1/α))·(δ^(α/(α−1))·e^(−ϵRDP))^(1/α))
= 1 / (1 + e^(−ϵRDP)·δ^(1/(α−1)))
= 1 / (1 + e^(−ϵRDP)·e^(−(α−1)^(−1)·ln(1/δ)))
= 1 / (1 + e^(−(ϵRDP + (α−1)^(−1)·ln(1/δ))))        (9)
= 1 / (1 + e^(−ϵ))        (10)







In the step from Equation (9) to (10), it is noted that the exponent perfectly matches the conversion from ϵRDP to ϵ.


Consequently, Corollary 1 holds with probability 1−δ for ℳGau. Hence, the general confidence upper bound for (ϵ, δ)-differentially private mechanisms can be defined as follows:


Definition 5 (Expected Adversarial Confidence Bound). For any (ϵ, δ)-differentially private mechanism, the expected bound on the strong probabilistic adversary's confidence on either dataset 𝒟, 𝒟′ is

ρc(ϵ,δ)=E[ρ(ϵ)]=(1−δ)ρ(ϵ)+δ=ρ(ϵ)+δ(1−ρ(ϵ)).


Adaptive Posterior Belief Adversary. 𝒜prob computes posterior beliefs β(·) for datasets 𝒟 and 𝒟′ and makes a guess arg max𝒟*∈{𝒟,𝒟′} β(𝒟*). Therefore, the strong 𝒜prob represents a naive Bayes classifier choosing an option w.r.t. the highest posterior probability. The input features are the results r observed by 𝒜prob, which are independently sampled and thus fulfill the i.i.d. assumption. Also, the noise distributions are known to 𝒜prob, thus making the naive Bayes classifier the strongest probabilistic adversary in our scenario.


A universal adversary against DP observes multiple subsequent function results and adapts once a new result r is obtained. To extend 𝒜prob to an adaptive adversary 𝒜adapt, adaptive beliefs can be defined as provided below.


Definition 6 (Adaptive Posterior Belief). Let 𝒟, 𝒟′ be neighboring datasets and ℳ1, ℳ2 be ϵ1, ϵ2-differentially private mechanisms. If ℳ1(𝒟) is executed first with posterior belief β1(𝒟), the adaptive belief for 𝒟 after executing ℳ2(𝒟) is:








β2(𝒟, β1(𝒟)) = Pr(ℳ2(𝒟)=r)·β1(𝒟) / (Pr(ℳ2(𝒟)=r)·β1(𝒟) + Pr(ℳ2(𝒟′)=r)·(1 − β1(𝒟)))








Given k iterative independent function evaluations, βk(𝒟) is written to mean βk(𝒟, βk−1(𝒟, . . . )). To compute βk(𝒟), the adaptive adversary 𝒜adapt computes adaptive posterior beliefs as specified by Algorithm 1.












Algorithm 1 Strong Adaptive Adversary
Input: datasets 𝒟, 𝒟′, mechanism outputs R = r1, . . . , rk, mechanisms ℳ1, . . . , ℳk
Output: βk(𝒟), βk(𝒟′)
1: β0(𝒟), β0(𝒟′) ← 0.5
2: for i ∈ {1, . . . , k} do
3:   p𝒟 ← Pr(ℳi(𝒟) = ri)
4:   p𝒟′ ← Pr(ℳi(𝒟′) = ri)
5:   βi(𝒟) ← βi−1(𝒟)·p𝒟 / (p𝒟·βi−1(𝒟) + p𝒟′·βi−1(𝒟′))
6:   βi(𝒟′) ← βi−1(𝒟′)·p𝒟′ / (p𝒟·βi−1(𝒟) + p𝒟′·βi−1(𝒟′))
7: end for
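
By way of non-limiting example, Algorithm 1 can be realized for scalar Gaussian-mechanism outputs as in the following illustrative Python sketch (assuming SciPy; names are illustrative only and not part of the claimed subject matter):

from scipy.stats import norm

def strong_adaptive_adversary(outputs, f_d, f_d_prime, sigma):
    # Iteratively updates the beliefs on D and D' after every observed output r_i,
    # as in lines 1-7 of Algorithm 1, assuming Gaussian output perturbation.
    belief_d, belief_d_prime = 0.5, 0.5
    for r in outputs:
        p_d = norm.pdf(r, loc=f_d, scale=sigma)
        p_d_prime = norm.pdf(r, loc=f_d_prime, scale=sigma)
        denom = p_d * belief_d + p_d_prime * belief_d_prime
        belief_d, belief_d_prime = (p_d * belief_d / denom,
                                    p_d_prime * belief_d_prime / denom)
    return belief_d, belief_d_prime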









The calculation of βk(𝒟) and βk(𝒟′) as presented in Algorithm 1 can also be expressed as a closed form calculation which can be used later to further analyze the attacker.











βk(𝒟) = ∏_{i=1}^{k} Pr(ℳi(𝒟)=ri) / (∏_{i=1}^{k} Pr(ℳi(𝒟)=ri) + ∏_{i=1}^{k} Pr(ℳi(𝒟′)=ri))
      = 1 / (1 + ∏_{i=1}^{k} Pr(ℳi(𝒟′)=ri) / ∏_{i=1}^{k} Pr(ℳi(𝒟)=ri))












Aspects of the associated proof are provided below in which it is assumed that the attacker starts with uniform priors. Thus, β1(𝒟) is calculated to be:











β1(𝒟) = Pr(ℳ1(𝒟)=r1) / (Pr(ℳ1(𝒟)=r1) + Pr(ℳ1(𝒟′)=r1))
      = 1 / (1 + Pr(ℳ1(𝒟′)=r1)/Pr(ℳ1(𝒟)=r1))











In the second step β1(𝒟) is used as the prior, hence β2(𝒟) is calculated as:











β2(𝒟) = Pr(ℳ2(𝒟)=r2)·β1(𝒟) / (Pr(ℳ2(𝒟)=r2)·β1(𝒟) + Pr(ℳ2(𝒟′)=r2)·(1 − β1(𝒟)))
      = 1 / (1 + (Pr(ℳ2(𝒟′)=r2) − Pr(ℳ2(𝒟′)=r2)·β1(𝒟)) / (Pr(ℳ2(𝒟)=r2)·β1(𝒟)))
      = 1 / (1 + (Pr(ℳ2(𝒟′)=r2)·Pr(ℳ1(𝒟′)=r1)) / (Pr(ℳ2(𝒟)=r2)·Pr(ℳ1(𝒟)=r1)))












This scheme continues for all k iterations by induction.


Even though the closed form provides an efficient calculation scheme for βk(𝒟), numerical issues can be experienced, so Algorithm 1 can be used for practical simulation of 𝒜adapt. However, by applying the closed form, it can be shown that 𝒜adapt operates as assumed by the sequential composition theorem (cf. Theorem 3), which substantiates the strength of 𝒜adapt. It is also noted that β1(𝒟) has the same form as βk(𝒟), since the multiplication of two Gaussian distributions results in another Gaussian distribution. Therefore, the composition of several Gaussian mechanisms can be regarded as a single execution of a multidimensional mechanism with an adjusted privacy guarantee.


Theorem 5 (Composition of Adaptive Beliefs). Let 𝒟, 𝒟′ be neighboring datasets and ℳ1, . . . , ℳk be an arbitrary sequence of mechanisms providing ϵ1, . . . , ϵk-Differential Privacy, then











βk(𝒟) ≤ ρ(∑_{i=1}^{k} ϵi)        (11)







By using Definition 1 and δ=0, the following can be bound:











βk(𝒟) = 1 / (1 + ∏_{i=1}^{k} Pr(ℳi(𝒟′)=ri) / ∏_{i=1}^{k} Pr(ℳi(𝒟)=ri))
      ≤ 1 / (1 + ∏_{i=1}^{k} Pr(ℳi(𝒟′)=ri) / ∏_{i=1}^{k} (Pr(ℳi(𝒟′)=ri)·e^(ϵi)))
      = 1 / (1 + ∏_{i=1}^{k} e^(−ϵi))
      = 1 / (1 + e^(−∑_{i=1}^{k} ϵi))
      = ρ(∑_{i=1}^{k} ϵi)










This demonstrates that, in the worst case, 𝒜adapt takes full advantage of the composition of ϵ. But what about the case where δ>0? The same σi can be had in all dimensions if it is assumed that the privacy budget (ϵ, δ) is split equally s.t. ϵi=ϵj and δi=δj which, given previous assumptions, leads to σi=σj, ∀i, j ∈ {1, . . . , k}. The following can be transformed:











βk(𝒟) = 1 / (1 + ∏_{i=1}^{k} Pr(ℳi(𝒟′)=ri)/Pr(ℳi(𝒟)=ri))        (12)
      ≤ 1 / (1 + ∏_{i=1}^{k} e^(−ϵi)) = 1 / (1 + e^(−∑_{i=1}^{k} ϵi)) = ρ(∑_{i=1}^{k} ϵi)        (13)







In the step from Equation (12) to (13), simplifications from Equations (6) to (10) in Theorem 4 are used. This short proof demonstrates that custom characteradapt behaves as expected by sequential composition theorems also for the (ϵ, δ)-differentially private Gaussian mechanism.


To take advantage of RDP composition, simplifications from Equation (6) to (9) can be used. The following transformations can be utilized:














βk(𝒟) = 1 / (1 + ∏_{i=1}^{k} Pr(ℳi(𝒟′)=ri) / ∏_{i=1}^{k} Pr(ℳi(𝒟)=ri))
      = 1 / (1 + ∏_{i=1}^{k} [Pr(ℳi(𝒟′)=ri)/Pr(ℳi(𝒟)=ri)])        (14)
      ≤ 1 / (1 + ∏_{i=1}^{k} e^(−(ϵRDP,i + (α−1)^(−1)·ln(1/δ))))
      = 1 / (1 + e^(−(k·(α−1)^(−1)·ln(1/δ) + ∑_{i=1}^{k} ϵRDP,i)))
      = 1 / (1 + e^(−((α−1)^(−1)·ln(1/δ^k) + ∑_{i=1}^{k} ϵRDP,i)))        (15)
      = ρ(∑_{i=1}^{k} ϵRDP,i + (α−1)^(−1)·ln(1/δ^k))        (16)







Equation (16) implies that an RDP-composed bound can be achieved with a composed δ value of δk. It is known that sequential composition results in a composed δ value of kδ. Since δk<kδ, RDP offers a stronger (ϵ, δ) guarantee for the same ρc. This behavior can also be interpreted as the fact that holding the composed (ϵ, δ) guarantee constant, the value of ρc is greater when sequential composition is used compared to RDP. Therefore, RDP offers a tighter bound for ρc under composition.


Expected Membership Advantage. The adaptive posterior belief adversary allows the DP guarantee (ϵ, δ) to be transformed into a scalar measure ρc indicating whether 𝒜adapt can confidently re-identify an individual's record in a dataset. From an individual's point of view, of interest is deniability, i.e., if 𝒜adapt has low confidence, an individual can plausibly deny that the hypothesis of 𝒜adapt is correct. A resulting question concerns how often a guess by 𝒜adapt about the presence of an individual is actually correct, or what 𝒜adapt's advantage is. As described above, it can be assumed that 𝒜adapt operates as a naive Bayes classifier with known probability distributions. Looking at the decision boundary of the classifier (i.e., when to choose 𝒟 or 𝒟′) for ℳGau with different (ϵ, δ) guarantees, it is found that the decision boundary does not change as long as the PDFs are symmetric. For example, consider a scenario with given datasets 𝒟, 𝒟′ and query ƒ: DOM→R that yields ƒ(𝒟)=0 and ƒ(𝒟′)=1. Furthermore, assume w.l.o.g. that Δƒ2=1.



FIGS. 2(a) and 2(b) are diagrams 200a, 200b, respectively illustrating decision boundaries based on PDFs and confidence. It is shown that the decision boundary of 𝒜adapt does not change when increasing the privacy guarantee since (ϵ, δ) causes the PDFs of 𝒟 and 𝒟′ to become squeezed. Thus, 𝒜adapt will exclusively choose 𝒟 if a value is sampled from the left, red region, and vice versa for 𝒟′ in the right, blue region. Still, confidence towards either decision declines.


If a (6, 10−6)-DP ℳGau is applied to perturb the results of ƒ, 𝒜adapt has to choose between the two PDFs with solid lines in FIG. 2a based on the output ℳGau(·)=r. FIG. 2b visualizes the resulting posterior beliefs for 𝒟, 𝒟′ (solid lines), the highest of which 𝒜adapt chooses given r. The regions where 𝒜adapt chooses 𝒟 are shaded red in both figures, and regions that result in the choice 𝒟′ are shaded blue. Increasing the privacy guarantee to (3, 10−6)-DP (dashed lines in the figures) squeezes the PDFs and confidence curves. However, the decision boundary of the regions at which 𝒜adapt chooses a certain dataset stays the same. Thus, it is important to note that holding r constant and reducing (ϵ, δ) solely affects the posterior beliefs of 𝒜adapt, not the choice (i.e., the order from most to least confident is maintained even while maximum posterior belief is lowered).


However, the information "How likely is an adversary to guess the dataset in which I have participated?" is expected to be a major point of interest when interpreting DP guarantees in iterative evaluations of ƒ, like those found in data science use cases such as machine learning. Expected membership advantage ρα can be defined as the difference between the probabilities of 𝒜adapt correctly identifying 𝒟 (true positive rate) and of 𝒜adapt misclassifying a member of 𝒟′ as belonging to 𝒟 (false negative rate), as in 40. The worst-case advantage ρα=1 occurs in the case in which ℳ always samples on that side of the decision boundary that belongs to the true dataset 𝒟. In contrast to the analysis of ρc, ρα will not give a worst-case bound, but an average-case estimation. Since 𝒜adapt is a naive Bayes classifier, the properties of normal distributions can be used. Denoting the multidimensional region where 𝒜adapt chooses 𝒟 as Dc and the region where 𝒜adapt chooses 𝒟′ as Di:

ρα = Pr(Success := Pr(𝒜adapt=𝒟 | 𝒟)) − Pr(Error := Pr(𝒜adapt=𝒟 | 𝒟′)) = ∫_{Dc} Pr(ℳ(𝒟)=r)·Pr(𝒟) dr − ∫_{Di} Pr(ℳ(𝒟)=r)·Pr(𝒟) dr.


The corresponding regions of error for the previous example are visualized in diagrams 300a, 300b of FIGS. 3(a) and 3(b). If ℳGau is applied to achieve (ϵ, δ)-DP, the exact membership advantage of 𝒜adapt can be determined analytically. Two multidimensional Gaussian PDFs (i.e., ℳGau(𝒟), ℳGau(𝒟′)) with known covariance matrix Σ and known means μ1=ƒ(𝒟), μ2=ƒ(𝒟′) can be considered.













Pr(Success) = 1 − Pr(Error) = 1 − Φ(−Δ/2),        (17)







where Φ is the cumulative distribution function (CDF) of the standard normal distribution and Δ = √((μ1−μ2)ᵀ·Σ^(−1)·(μ1−μ2)) is the Mahalanobis distance. Adding independent noise in all dimensions, Σ = σ²I, the Mahalanobis distance simplifies to






Δ = ‖μ1 − μ2‖2 / σ.





Definition 7 (Bound on the Expected Adversarial Membership Advantage). For the (ϵ, δ)-differentially private Gaussian mechanism, the expected membership advantage of the strong probabilistic adversary on either dataset 𝒟, 𝒟′ is












ρα(ϵ, δ) = Φ(Δ/2) − Φ(−Δ/2)
         = Φ(‖μ1−μ2‖2/(2σi)) − Φ(−‖μ1−μ2‖2/(2σi))
         = Φ(‖μ1−μ2‖2/(2·Δƒ2·(√(2 ln(1.25/δ))/ϵ))) − Φ(−‖μ1−μ2‖2/(2·Δƒ2·(√(2 ln(1.25/δ))/ϵ)))
         = Φ(1/(2·(√(2 ln(1.25/δ))/ϵ))) − Φ(−1/(2·(√(2 ln(1.25/δ))/ϵ)))








Again, the current framework can express (ϵ, δ) guarantees with δ>0 via a scalar value ρα. However, a specific membership advantage can be computed individually for different kinds of mechanisms ℳ.


Above it was evaluated how the confidence of 𝒜adapt changes under composition. A similar analysis of the membership advantage under composition is required. Again, the elucidations can be restricted to the Gaussian mechanism. As shown above, the k-fold composition of ℳGau,i, each step guaranteeing (α, ϵRDP,i)-RDP, can be represented by a single execution of ℳGau with k-dimensional output guaranteeing (α, ϵRDP=k·ϵRDP,i)-RDP. For this proof, it can be assumed that each of the composed mechanism executions has the same sensitivity ‖μ1,i−μ2,i‖=Δƒ2. A single execution of ℳGau can be analyzed with the tools described above. Definition 7 yields











ρα = Φ(Δ/2) − Φ(−Δ/2)
   = Φ(‖μ1−μ2‖2/(2σi)) − Φ(−‖μ1−μ2‖2/(2σi))
   = Φ(√k·‖μ1,i−μ2,i‖2/(2·Δƒ2·√(α/(2ϵRDP,i)))) − Φ(−√k·‖μ1,i−μ2,i‖2/(2·Δƒ2·√(α/(2ϵRDP,i))))
   = Φ(√k/(2·√(α/(2ϵRDP,i)))) − Φ(−√k/(2·√(α/(2ϵRDP,i))))
   = Φ(√(k·ϵRDP,i/(2α))) − Φ(−√(k·ϵRDP,i/(2α)))
   = Φ(√(ϵRDP/(2α))) − Φ(−√(ϵRDP/(2α)))









The result shows that the strategy of 𝒜adapt fully takes advantage of the RDP composition properties of ϵRDP,i and α. As expected, ρα takes on the same value regardless of whether k composition steps with ϵRDP,i or a single composition step with ϵRDP is carried out.
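
By way of non-limiting example, this closed form can be evaluated as in the following illustrative Python sketch (assuming SciPy; names are illustrative only):

import math
from scipy.stats import norm

def membership_advantage_rdp(eps_rdp, alpha):
    # rho_a = Phi(sqrt(eps_RDP / (2*alpha))) - Phi(-sqrt(eps_RDP / (2*alpha)))
    half = math.sqrt(eps_rdp / (2.0 * alpha))
    return norm.cdf(half) - norm.cdf(-half)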


Privacy Regimes. With confidence ρc and expected membership advantage ρα, two measures have been defined that taken together form the current framework for interpreting DP guarantees (ϵ, δ). While ρα indicates the likelihood with which 𝒜adapt will discover any participant's data correctly, ρc complements this information with the plausibility with which any participant in the data can argue that 𝒜adapt's guess is incorrect. Here it is demonstrated how the current framework can be applied to measure the level of protection independent of any particular dataset 𝒟. Furthermore, several allegedly secure (ϵ, δ) pairs suggested in the literature are revisited and their protection is interpreted. Finally, general guidance is provided to realize high, mid, or low to no privacy regimes.


The interpretability framework can be applied in two steps. First, the participants in the dataset receive a predefined (ϵ, δ) guarantee. This (ϵ, δ) guarantee is based on the maximum tolerable decrease in utility (e.g., accuracy) of a function ƒ evaluated by a data analyst. The participants interpret the resulting tuple (ρc, ρα) w.r.t. their protection. Each participant can either reject or accept the use of their data by the data analyst. Second, participants are free to suggest an (ϵ, δ) based on the corresponding adversarial confidence and membership advantage (ρc, ρα), which is in turn evaluated by the data analyst w.r.t. the expected utility of ƒ. To enable participants to perform this matching, general curves of ρc and ρα are provided for different (ϵ, δ) as shown in diagrams 400a, 400b of FIGS. 4(a), 4(b). For ρα, the curves are specific to ℳGau. In contrast, ρc is independent of ℳ. To compute both measures, Definition 5 and Definition 7 can be used. It can also be assumed w.l.o.g. that ƒ(𝒟)=(01, 02, . . . , 0k) and ƒ(𝒟′)=(11, 12, . . . , 1k) for all dimensions k. Thus, ƒ(𝒟) and ƒ(𝒟′) are maximally distinguishable, resulting in Δƒ2=√k. FIG. 4a illustrates that there is no significant difference between the expected worst-case confidence of 𝒜adapt for ϵ-DP and (ϵ, δ)-DP for 0<δ<0.1. In contrast, ρα strongly depends on the choice of δ as depicted in FIG. 4b. For example, ρα is low for (2, 10−6)-DP, indicating that the probability of 𝒜adapt choosing 𝒟 is similar to choosing 𝒟′. Yet, the corresponding ρc is high, which provides support that 𝒜adapt's guess is correct. With these implications in mind, data owners and data analysts are empowered to discuss acceptable privacy guarantees.


Validation Over Synthetic Data. The following demonstrates how the confidence and membership advantage of 𝒜adapt develop in an empirical example. The evaluation characterizes how well ρc and ρα actually model the expected membership advantage risk for data members and how effectively 𝒜adapt behaves on synthetic data. As 𝒜adapt is assumed to know all data members except for one, the size of 𝒟 does not influence her. For this reason, the following tiny data universe 𝒰, true dataset 𝒟 and alternative 𝒟′ presented in Tables 2, 3 and 4 were used. Let 𝒰 represent a set of employees that were offered to participate in a survey about their hourly wage. Alice, Bob and Carol participated. Dan did not. Thus, the survey data 𝒟 consists of 3 entries. The survey owner allows data analysts to pose queries to 𝒟 until a DP budget of (ϵ=5, δ=0.01) is consumed. 𝒜adapt is the data analyst that queries 𝒟. Aside from learning statistics about the wage, 𝒜adapt is also interested in knowing who participated. So far, she knows that Alice and Carol participated for sure and that there are three people in total. Thus, she has to decide between 𝒟 and 𝒟′, i.e., whether Bob or Dan is the missing entry. As side information, she knows that the employer pays at least $1 and a maximum of $10. As a consequence, when 𝒜adapt is allowed to ask only the sum query function, S(ƒ)=Δƒ2=9. Further, the Gaussian mechanism is known to be used for anonymization.









TABLE 2
𝒰

Name     Wage
Alice    $5
Bob      $10
Carol    $2
Dan      $1


TABLE 3
𝒟

Name     Wage
Alice    $5
Bob      $10
Carol    $2


TABLE 4
𝒟′

Name     Wage
Alice    $5
Dan      $1
Carol    $2









Given this prior information, 𝒜adapt iteratively updates her belief on 𝒟, 𝒟′ after each query. She makes a final guess after the whole (ϵ, δ)-budget has been used. By using the current framework (ρc, ρα), data members (especially Bob in this case) can compute their protection guarantee: What is the advantage of 𝒜adapt in disclosing a person's participation (i.e., ρα)? How plausibly can that person deny a revelation (i.e., ρc) by 𝒜adapt? Referring to Definition 7 and Definition 5, ρα(ϵ=5, δ=0.01)=0.5 is computed under composition and ρc(ϵ=5, δ=0.01)=0.99, illustrating that the risk of re-identification is quite high and the deniability extremely low. However, to show whether 𝒜adapt actually reaches those values, her behavior can be empirically analyzed by iteratively querying 𝒟 and applying Algorithm 1 after each query. k=100 queries can be used and the experiment can be repeated 10,000 times to estimate membership advantage and show the distribution of confidence at the end of each run. As it is known that the adversary will compose k times, the RDP composition scheme can be used to determine what noise scale can be applied for each individual query. In diagram 500a of FIG. 5a the adaptive posterior beliefs for 𝒟 and 𝒟′ are depicted. After some fluctuation, the belief for 𝒟 starts growing up to 0.90. Consequently, the final guess of 𝒜adapt is 𝒟, which is correct. The guesses over all runs are summarized in diagram 500b of FIG. 5(b). Here, it is seen that about 75% of the guesses of 𝒜adapt are correct, which corresponds exactly to the expected membership advantage of our threat model. However, the predicted upper bound of 0.99 was not reached in the sample run. In contrast to ρα, ρc is a worst-case bound, not an expectation. Thus, the final belief of 𝒜adapt approaches ρc very rarely.
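
By way of non-limiting example, such an empirical analysis can be sketched in Python as follows (illustrative only, assuming NumPy/SciPy; the per-query noise scale is an input that would be calibrated via the RDP composition scheme described above). Because the adversary starts with uniform priors, comparing the final beliefs is equivalent to thresholding the summed log-likelihood ratio, which also avoids the numerical issues noted earlier:

import numpy as np
from scipy.stats import norm

def simulate_adaptive_adversary(sigma_per_query, k=100, runs=10_000, seed=0):
    # Empirical success rate of the adaptive adversary over `runs` repetitions of
    # k Gaussian-perturbed sum queries on the true dataset D of the wage example.
    rng = np.random.default_rng(seed)
    f_d, f_d_prime = 17.0, 8.0                   # sums of wages in D and D'
    outputs = rng.normal(loc=f_d, scale=sigma_per_query, size=(runs, k))
    # Log-likelihood ratio of D versus D'; the adversary guesses D when it is positive.
    llr = (norm.logpdf(outputs, loc=f_d, scale=sigma_per_query)
           - norm.logpdf(outputs, loc=f_d_prime, scale=sigma_per_query))
    return float(np.mean(llr.sum(axis=1) > 0.0))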


To illustrate this phenomenon, a histogram over the beliefs at the end of each run is shown for various choices of ρc in diagram 600 of FIG. 6. FIG. 6 illustrates that the predicted worst-case bound was reached in only a small proportion of runs. This effect becomes more visible when ϵ is lower.


A final note on δ, which describes the probability of exceeding ρc: when looking closely at the histograms, one can see that there are some (small) columns for a range of values that are larger than the worst-case bound. Their proportion of all runs can be calculated, e.g., 0.0008 for ϵ=5, which is less than the expected δ.
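As a small illustration (assuming the final beliefs of all 10,000 runs have been collected into an array), the empirical counterpart of δ is simply the fraction of runs whose final belief exceeds ρc:

import numpy as np

# final_beliefs: belief in D at the end of each run; hypothetical values shown for illustration
final_beliefs = np.array([0.95, 0.70, 0.996, 0.85])
rho_c = 0.99
empirical_delta = float(np.mean(final_beliefs > rho_c))   # e.g., 0.0008 for eps = 5 in the reported experiment
print(empirical_delta)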


Application to Deep Learning. A natural question is how the introduced adversary 𝒜adapt behaves on real data and on high-dimensional, iterative differentially private function evaluations. Such characteristics are typically found in deep learning classification tasks. Here, a neural network (NN) is provided a training dataset 𝒟 to learn a prediction function ŷ=ƒnn(x) given (x,y)∈𝒟. Learning is achieved by means of an optimizer. Afterwards, the accuracy of the learned prediction function ƒnn(·) is tested on a dataset 𝒟test.


A variety of differentially private optimizers for deep learning can be utilized. These optimizers represent a differentially private training mechanism ℳnn(ƒθ(·)) that updates the weights θt per training step t∈T with θt←θt−1−α·g̃, where α>0 is the learning rate and g̃ denotes the Gaussian-perturbed gradient (cf. Definition 2). After T update steps, where each update step is itself an application of ℳGau(ƒθ(·)), the algorithm outputs a differentially private weight matrix θ which is then used in the prediction function ƒnn(·). Considering the evaluation of ƒnn(·) given (x, y)∈𝒟 as post-processing of the trained weights θ, the prediction ŷ=ƒnn(x) is (ϵ, δ)-differentially private too.
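The following NumPy sketch illustrates a single such update step, assuming per-example gradients are already available as rows of a matrix; it only shows the clip-average-perturb-descend pattern described above and is not tied to any particular library's DP optimizer. The noise multiplier z and the clipping norm C are assumed parameters.

import numpy as np

def dp_sgd_step(theta, per_example_grads, C, z, alpha, rng):
    # Clip each per-example gradient to an L2 norm of at most C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)
    # Average the clipped gradients and perturb the average with Gaussian noise (std z*C/n on the mean).
    n = per_example_grads.shape[0]
    g_hat = clipped.mean(axis=0)
    g_tilde = g_hat + rng.normal(0.0, z * C / n, size=g_hat.shape)
    # Gradient descent update: theta_{t+1} = theta_t - alpha * g_tilde.
    return theta - alpha * g_tilde

rng = np.random.default_rng(0)
theta = np.zeros(10)
per_example_grads = rng.normal(size=(100, 10))   # hypothetical per-example gradients for one step
theta = dp_sgd_step(theta, per_example_grads, C=3.0, z=1.1, alpha=0.005, rng=rng)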


It is assumed that 𝒜adapt desires to correctly identify the dataset that was utilized for training when having the choice between 𝒟 and 𝒟′. There are two variations of DP: bounded and unbounded. In bounded DP, it holds that |𝒟|=|𝒟′|. However, differentially private deep learning optimizers such as the one utilized herein consider unbounded DP as the standard case, in which |𝒟|−|𝒟′|=1. Furthermore, 𝒜adapt is assumed to possess the following information: the initial weights θ0, the perturbed gradients g̃ after every epoch, the values of the privacy parameters (ϵ, δ), and the sensitivity Δƒ2=C equal to the clipping norm. Here, Δƒ2 refers to the sensitivity with which the noise added by a mechanism is scaled, not necessarily the global sensitivity. In some experiments, for example, Δƒ2=S(ƒ), which expresses the true difference between ƒ(𝒟) and ƒ(𝒟′), as in Definition 3. These assumptions are analogous to those of white-box membership inference attacks. The attack itself is based on calculating the clipped batch gradients ĝ(𝒟, θt) and ĝ(𝒟′, θt) for each training step t∈T, finding β(𝒟) for that training step given g̃t, and calculating θt+1 by applying g̃t.


Above, the sensitivity was set to Δƒ2=S(ƒ)=∥ƒ(𝒟)−ƒ(𝒟′)∥2, the true difference between the sums of wages resulting from Bob's and Dan's participation in the survey. In order to create a comparable experiment for differentially private deep learning, the difference between gradients that can be obscured by noise for each epoch is S(ƒθt)=∥n·ĝ(𝒟, θt)−(n−1)·ĝ(𝒟′, θt)∥2. C bounds the influence of a single training example on training by clipping each per-example gradient to the chosen value of C; although this value bounds the influence of a single example on the gradient, the bound is loose. If S(ƒθt)≪C, the adversary confidence β(𝒟) would be very small in every case when Δƒ2=C, as is the case in most implementations of differentially private neural networks. This behavior is due to the fact that an assumption for Equation (4) does not hold, since Δƒ2≠S(ƒθt). To address this challenge in differentially private deep learning, Δƒ2=S(ƒθt) can be set adaptively. Choosing Δƒ2 this way is analogous to using local sensitivity in differential privacy.












Algorithm 2 Strong Adaptive Adversary in Deep Learning

Require: Datasets 𝒟 and 𝒟′ with n and n−1 records 𝒟i and 𝒟i′, respectively, training steps T, cost function J(θ), perturbed gradients g̃t for each training step t ≤ T, initial weights θ0, prior beliefs β0(𝒟) = β0(𝒟′) = 0.5, learning rate α, clipping threshold C, and mechanism ℳ
Ensure: Adversary confidence βT(𝒟)
 1: for t ∈ [T] do
 2:  Compute gradients: for each i ∈ 𝒟, 𝒟′, compute gt(𝒟i) ← ∇θt J(θt, 𝒟i) and gt(𝒟i′) ← ∇θt J(θt, 𝒟i′)
 3:  Clip gradients:
 4:  Clip each gt(𝒟i), gt(𝒟i′) for i ∈ 𝒟, 𝒟′ to have a maximum L2 norm of C using ḡt(𝒟i) ← gt(𝒟i)/max(1, ∥gt(𝒟i)∥2/C) and ḡt(𝒟i′) ← gt(𝒟i′)/max(1, ∥gt(𝒟i′)∥2/C)
 5:  Calculate batch gradients:
 6:  ĝt(𝒟) ← avg(ḡt(𝒟i))
 7:  ĝt(𝒟′) ← avg(ḡt(𝒟i′))
 8:  Calculate sensitivity:
 9:  Δft ← ∥(n−1)·ĝt(𝒟′) − n·ĝt(𝒟)∥2
10:  Calculate belief:
11:  βt+1(𝒟) ← βt(𝒟)·Pr[ℳ(ĝt(𝒟)) = g̃t] / (βt(𝒟)·Pr[ℳ(ĝt(𝒟)) = g̃t] + βt(𝒟′)·Pr[ℳ(ĝt(𝒟′)) = g̃t])
12:  Compute weights: θt+1 ← θt − α·g̃t
13: end for









Based on the previously introduced assumptions and notations, Algorithm 1 can be adapted to bounded and unbounded, as well as global and S(ƒθ)-based, settings. The adapted Strong Adaptive Adversary for differentially private deep learning is stated in Algorithm 2, which specifies 𝒜adapt in an unbounded environment with Δƒ2=S(ƒθ). For bounded differential privacy with Δƒ2=S(ƒθ), Algorithm 2 can be adjusted such that 𝒟′ is defined to contain n records and Δƒ2=S(ƒθt)=n·∥ĝt(𝒟′)−ĝt(𝒟)∥2. To implement global unbounded differential privacy, Δƒ2=C and 𝒟′ contains n−1 records. To implement global bounded differential privacy, 𝒟′ contains n records and Δƒ2=2C, since the maximum influence of one example on the sum of per-example gradients is C. If one record is replaced with another, the clipped gradients of these two records could each have length C and point in opposite directions, which results in n·∥ĝt(𝒟′)−ĝt(𝒟)∥2=2C. It is also noted that the same value of Δƒ2 used by 𝒜adapt is also used by ℳ to add noise.
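For concreteness, a minimal NumPy sketch of the per-step belief update used by Algorithm 2 follows. It assumes the two candidate batch gradients and the observed perturbed gradient are given and that the mechanism ℳ is the Gaussian mechanism with noise scale sigma; the function and variable names are illustrative rather than taken from a particular implementation.

import numpy as np

def belief_update(beta_D, g_hat_D, g_hat_Dp, g_tilde, sigma):
    # Gaussian log-likelihoods of observing g_tilde if the mechanism ran on D or on D'.
    ll_D = -np.sum((g_tilde - g_hat_D) ** 2) / (2 * sigma ** 2)
    ll_Dp = -np.sum((g_tilde - g_hat_Dp) ** 2) / (2 * sigma ** 2)
    p_D, p_Dp = np.exp(ll_D), np.exp(ll_Dp)
    beta_Dp = 1.0 - beta_D
    # Bayesian update corresponding to line 11 of Algorithm 2.
    return beta_D * p_D / (beta_D * p_D + beta_Dp * p_Dp)

rng = np.random.default_rng(0)
g_hat_D, g_hat_Dp = rng.normal(size=10), rng.normal(size=10)
sigma = 1.0
g_tilde = g_hat_D + rng.normal(0.0, sigma, size=10)   # the mechanism actually ran on D
beta = belief_update(0.5, g_hat_D, g_hat_Dp, g_tilde, sigma)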


For the practical evaluation, a feed-forward NN for the MNIST dataset was built. For MNIST, the utilized NN architecture consists of two repetitions of a convolutional layer with kernel size (3, 3), batch normalization and max pooling with pool size (2, 2) before being flattened for the output layer. ReLU and softmax activation functions were used for the convolutional layers and the output layer, respectively.
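One possible rendering of this architecture, assuming TensorFlow/Keras as the framework (no particular framework is prescribed here), is sketched below; the filter counts are illustrative assumptions, as only the kernel size, pooling size, normalization, and activations are stated above.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_mnist_model():
    # Two blocks of Conv(3, 3) + BatchNorm + MaxPool(2, 2), then flatten and a softmax output layer.
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, (3, 3), activation="relu"),   # filter count assumed
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),   # filter count assumed
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])

model = build_mnist_model()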


One epoch represents the evaluation of all records in 𝒟. Thus, it is important to highlight that the number of update steps T varies in practice depending on the number of records from 𝒟 used for calculating the DP gradient update ĝ. In mini-batch gradient descent, a number of b records from 𝒟 is used for calculating an update, and one epoch results in t=|𝒟|/b update steps. In contrast, in batch gradient descent all records in 𝒟 are used for calculating the update, and each epoch consists of a single update step. While the approaches vary in their speed of convergence due to the gradient update behavior (i.e., many small updates vs. few large updates), none of the approaches has hard limitations with respect to convergence of accuracy and loss. With the current subject matter, batch gradient descent was utilized and, given the differentially private gradient update g̃ after any update step t, the previously introduced adversary 𝒜adapt shall decide whether it was calculated on 𝒟 or 𝒟′. It was assumed that 𝒜adapt has equal prior beliefs of 0.5 on 𝒟 and 𝒟′. The belief of 𝒜adapt is updated at every step t according to (1).


In the experiments, the relevant parameters were set as follows: training data size |𝒟|=100, epochs k=30, clipping norm C=3.0, learning rate α=0.005, δ=0.01, and ρc=0.9. These values correspond to ρα=25.62%.









TABLE 5
Empirical (ρα, δ)

                Δf2 = S(fθ)       Global Δf2
Bounded DP      (0.240, 0.002)    (0.108, 0)
Unbounded DP    (0.250, 0.001)    (0.266, 0.001)









The empirically calculated values (i.e., after the training) for ρα and δ are presented in Table 5. The belief distributions for the described experiments can be found in diagrams 700, 800 of FIGS. 7-8.


Note that δ indeed bounds the percentage of experiments for which βT(𝒟)>ρc. For all experiments with Δƒ2=S(ƒθ) and for global, unbounded DP, the empirical values of ρα match the analytical values. However, in global, bounded differential privacy the difference between correct and incorrect guesses by 𝒜adapt falls below ρα. In this experiment, the percentage of experiments for which βT(𝒟)>ρc is also far lower. This behavior confirms the hypothesis that C is loose, so global sensitivity results in a lower value of βT(𝒟), as is again confirmed by FIGS. 7(b) and 9(a). It is also noted that the distributions in FIGS. 7(a) and 7(c) look identical to each other and to the distributions in FIG. 6 for the respective values of ρc and δ. This observation confirms that the strong adaptive adversary attack model is applicable to choosing the privacy parameter ϵ in deep learning.


The following investigates the reason for the similarities between unbounded differential privacy with Δƒ2=S(ƒθ) and Δƒ2=C, and also for the differences between FIGS. 7(a) and 7(b) concerning bounded differential privacy with Δƒ2=S(ƒθ) and Δƒ2=2C. In the unbounded case, the distributions seem identical in diagram 800 of FIG. 8, which occurs when Δƒ2=S(ƒθ)=∥(n−1)·ĝt(𝒟′)−n·ĝt(𝒟)∥2=C, so the clipped per-example gradient of the differentiating example in 𝒟 should have length 3, which is equal to C. This hypothesis is confirmed with a glance at the development of ∥(n−1)·ĝt(𝒟′)−n·ĝt(𝒟)∥2 in diagram 900a of FIG. 9. This behavior is not surprising, since all per-example gradients over the course of all epochs were greater than or close to C=3. In the bounded differential privacy experiments, Δƒ2=S(ƒθ)=n·∥ĝt(𝒟′)−ĝt(𝒟)∥2≠2C, since the corresponding distributions in FIGS. 7(a) and 7(b), as well as FIG. 8, do not look identical. This expectation is confirmed by the plot of n·∥ĝt(𝒟′)−ĝt(𝒟)∥2 in FIG. 9(a). This difference implies that the per-example gradients of the differentiating examples in 𝒟′ and 𝒟 are shorter than 2C and do not point in opposite directions. It is also noted that the length of gradients tends to decrease over the course of training, a trend that can be observed in diagram 900a of FIG. 9(a), so if training converges to a point in which gradients are shorter than the chosen value of C, globally differentially private deep learning inherently offers a stronger privacy guarantee than was originally chosen.


Diagram 900b of FIG. 9(b) confirms that the differentially privately trained models in these experiments do, indeed, yield some utility. It was also observed that test accuracy is directly affected by the value of the sensitivity Δƒ2 chosen for noise addition. Since gradients in all four scenarios are clipped to the same value C, the only difference between training the neural networks is Δƒ2. As visualized in FIG. 9(a), the sensitivities for unbounded DP with Δƒ2=S(ƒθ) and Δƒ2=C were identical, so the nearly identical corresponding distributions in FIG. 9(b) do not come as a surprise.


Similarly, it is observed that Δƒ2 is greater for global, bounded DP in FIG. 9(a), so utility is also lower for this case in FIG. 9(b). The unbounded DP case with Δƒ2=S(ƒθ) yields the highest utility, which can be explained by the low value of Δƒ2 that can be read from FIG. 9(a).


Relation to Membership Inference Threat Model. The membership inference threat model and the analysis of 𝒜adapt herein exhibit clear commonalities, namely the same overarching goal: to intuitively quantify the privacy offered by DP in a deep learning scenario. Both approaches aim to clarify the privacy risks associated with deep learning models.


Considering that 𝒜adapt desires to identify the dataset used for training a NN, 𝒜adapt is analogous to a membership inference adversary who desires to identify individual records in the training data. Furthermore, parallel to membership inference, 𝒜adapt operates in the white-box model, observing the development of g̃t over all training steps t of a NN. In one approach, the adversary uses the training loss to infer membership of an example.


Although the general ideas overlap, 𝒜adapt is far stronger than a membership inference adversary. Membership advantage quantifies the effectiveness of the practical membership inference attack and therefore provides a lower bound on information leakage, which adversaries with auxiliary information quickly surpass. 𝒜adapt has access to arbitrary auxiliary information, including all data points in 𝒟 and 𝒟′, staying closer to the original DP guarantees. Using 𝒜adapt, what the best possible adversary is able to infer can be calculated, and it can be seen that this adversary reaches the upper bound.


An adversarial game can be defined in which the adversary receives both datasets 𝒟 and 𝒟′ instead of only receiving one value z, the size n of the dataset, and the distribution from which the data points are drawn.


Experiment 1. Let 𝒜 be an adversary, A be a learning algorithm, and 𝒟 and 𝒟′ be neighboring datasets. The identifiability experiment proceeds as follows, as also sketched in code below:

    • 1. Choose b←{0,1} uniformly at random
    • 2. Let ℳb=A(𝒟) if b=0 and ℳb=A(𝒟′) if b=1
    • 3. Output 1 if 𝒜(𝒟, 𝒟′, ℳb)=b, 0 otherwise. 𝒜 outputs 0 or 1.
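A minimal Python sketch of this identifiability experiment follows; the learning algorithm and the adversary are passed in as callables, and both are placeholders for illustration rather than specific implementations.

import random

def identifiability_experiment(D, D_prime, learn, adversary, rng=random):
    # 1. Choose b uniformly at random.
    b = rng.randint(0, 1)
    # 2. Train on D if b = 0, on D' if b = 1.
    model = learn(D) if b == 0 else learn(D_prime)
    # 3. The adversary outputs 0 or 1; the experiment outputs 1 if the guess equals b.
    return int(adversary(D, D_prime, model) == b)

# Hypothetical usage: repeating the experiment estimates the adversary's accuracy,
# from which the expected membership advantage can be derived.
# accuracy = sum(identifiability_experiment(D, Dp, learn, adv) for _ in range(1000)) / 1000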


Here, the expected value of the membership advantage is calculated to quantify the accuracy of 𝒜adapt.


In the evaluation of 𝒜adapt in a deep learning setting, it was realized that 𝒜adapt did not reach the upper confidence bound until the sensitivity was adjusted. In differentially private deep learning, gradients decrease over the course of training until convergence and can fall below the sensitivity or clipping norm. This means that more noise is added than would have been necessary to obscure the difference made by a member of the dataset. Overall, the difference between the lower bound on privacy offered by membership advantage in a membership inference setting and the upper bound offered by maximum adversary confidence includes the auxiliary knowledge of 𝒜adapt and the inherent privacy offered in deep learning scenarios through decreasing gradients.


Application to Analytics. The observed utility gains can also be realized on real-world data in an energy forecasting task that is relevant for German energy providers. In this energy forecasting problem, the energy transmission network is structured into disjoint virtual balancing groups under the responsibility of individual energy providers. Each balancing group consists of multiple zones and each zone consists of individual households. Energy providers have an incentive to forecast the demand in their balancing group with low error in order to schedule energy production accordingly. Currently, energy providers can utilize the overall aggregated energy consumption per balancing group to calculate a demand forecast, since they have to report these numbers to the transmission system operator. However, with the rollout of smart meters, additional communication channels can be set up and the demand per zone could be computed. Note that, counterintuitively, forecasting on grouped household loads instead of individual households is beneficial for forecasting performance due to reduced stochasticity. Nonetheless, computing the demand per zone reflects the sum of individual household energy consumption and is thus a sensitive task. Here, the use of differential privacy allows one to compute the anonymized energy consumption per zone and mitigate privacy concerns. The energy provider will only have an incentive to apply differential privacy if the forecasting error based on differentially private energy consumption per zone is lower than the forecasting error based on the aggregated zonal loads. Vice versa, the energy provider has to quantify and communicate the achieved privacy guarantee to the balancing group households to gather consent for processing the data.


This forecasting task was based on the dataset and benchmarking model of the 2012 Global Energy Forecasting Competition (GEFCom). The GEFCom dataset consists of hourly energy consumption data of 4.5 years from 20 zones for training and one week of data from the same zones for testing. The GEFCom winning model computes the forecast F for a zone z at time t by computing a linear regression over a p-dimensional feature vector x (representing 11 weather stations):










Fz,t = β0 + Σj=1…p βj·xt,j + et.   (18)
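The benchmark forecast (18) is an ordinary linear regression; a minimal least-squares sketch in NumPy is shown below. The array names and shapes are illustrative assumptions.

import numpy as np

def fit_forecast(X, L):
    # Fit F_{z,t} = beta_0 + sum_j beta_j * x_{t,j} by least squares.
    # X: (T, p) feature matrix (e.g., weather-station features), L: (T,) zonal loads.
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend an intercept column for beta_0
    beta, *_ = np.linalg.lstsq(X1, L, rcond=None)
    return beta

def forecast(beta, X):
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])
    return X1 @ beta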







The sensitive target attribute of the linear regression is the zonal load consumption Lz,t, i.e., the sum of the n household loads lz,t,i for i=1, …, n. Differential privacy can be added to the sum computation by applying the Gaussian mechanism (cf. Definition 2), yielding:












L̃z,t = Σi=1…nz lz,i,t + 𝒩(0, σ2).   (19)







The energy provider will only have a benefit if the differentially private load forecast has a smaller error than the aggregated forecast. A suitable error metric for energy forecasting is the Mean Absolute Error (MAE), i.e.:









MAE = (1/T)·Σt=1…T |Ft − Lt|.   (20)
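The following NumPy sketch ties (19) and (20) together: it perturbs a zonal load sum with the Gaussian mechanism, using σ = z·Δƒ, and evaluates the MAE of a forecast against the true loads. The noise multiplier z and the variable names are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def dp_zonal_load(household_loads, sensitivity, z):
    # Equation (19): noisy sum of household loads with Gaussian noise of scale sigma = z * sensitivity.
    sigma = z * sensitivity
    return household_loads.sum() + rng.normal(0.0, sigma)

def mae(forecast_values, loads):
    # Equation (20): mean absolute error between forecast F_t and actual load L_t.
    return float(np.mean(np.abs(forecast_values - loads)))

# Hypothetical usage with the two sensitivities discussed below:
# dp_zonal_load(loads_t, sensitivity=48.0, z=1.1)    # global sensitivity of 48 kW
# dp_zonal_load(loads_t, sensitivity=15.36, z=1.1)   # S(f) estimate of 15.36 kW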







Diagram 1000a of FIG. 10(a) illustrates the forecasting error over 10 independent forecast trainings for increasing privacy parameter ϵ. Note that this illustration was limited to ϵ<0.4216, since the forecasting error already exceeds the aggregate forecast error for this ϵ and continues to increase thereafter. Thus, from a utility perspective, the energy provider will prefer ϵ≫0.4216. The privacy loss is again analyzed over composition (k=38,070) with RDP for an additive privacy loss δ=10⁻⁹ and a global sensitivity of Δƒ=48 kW, which is the maximum technical power demand fused in German residential homes. However, in practice households have been observed not to exceed power consumptions of 15.36 kW, which is thus used as an estimate for S(ƒ).



FIG. 10(b) illustrates the corresponding MAE when applying the Gaussian mechanism with Δƒ=S(ƒ)=15.36. Note that for both Gaussian mechanisms noise is sampled from a Gaussian distribution with σ=z·Δƒ, and that an equal noise multiplier z was used for both Gaussian mechanisms. A comparison of FIGS. 10(a) and 10(b) illustrates that the achievable utility is consistently higher when noise is scaled with S(ƒ).


In contrast to the energy provider, it is assumed that households have an interest in ρc≪1. This leads to the question whether the energy provider and the households have an intersection of their preferred ϵ. FIG. 10(c) maps ρc and ρα to the MAE over ϵ for δ=10⁻⁹ and either Δƒ=48 or Δƒ=S(ƒ)=15.36. It is observed for ρc≈0.65, which results in ϵ≈0.6 and ρα≈0.04, that the use of S(ƒ) instead of the global sensitivity allows the maximum MAE to be reduced by approximately 10 MW, which is significant when considering that the absolute difference between the aggregated forecast MAE and the unperturbed forecast MAE is only ≈12 MW.



FIG. 11 is a process flow diagram 1100 in which, at 1110, data is received that specifies a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. Privacy parameters ε, δ that govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset are then calculated, at 1120, based on the received data. The calculating is based on a ratio of probability distributions of different observations, which are bound by the posterior belief ρc as applied to a dataset. The calculated privacy parameters are then used, at 1130, to apply the DP algorithm to the function over the dataset to result in an anonymized function output (e.g., a machine learning model, etc.).



FIG. 12 is a process flow diagram 1200 in which, at 1210, data is received that specifies privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. The received data is then used, at 1220, to calculate an expected membership advantage ρα that corresponds to a likelihood of an adversary successfully identifying a member in the dataset. Such calculating can be based on an overlap of two probability distributions. The calculated expected membership advantage ρα can be used, at 1230, when applying the DP algorithm to a function over the dataset to result in an anonymized function output (e.g., a machine learning model, etc.).



FIG. 13 is a process flow diagram 1300 in which, at 1310, data is received that specifies privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset. Thereafter, at 1320, the received data is used to calculate an adversarial posterior belief bound ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output, the calculating being based on a conditional probability of different possible datasets. The calculated adversarial posterior belief bound ρc can then be used when applying the DP algorithm to a function over the dataset to result in an anonymized function output (e.g., machine learning model, etc.).



FIG. 14 is a process flow diagram in which, at 1410, a dataset is received. Thereafter, at 1420, at least one first user-generated privacy parameter is received which governs a differential privacy (DP) algorithm to be applied to a function evaluated over the received dataset. Using the received at least one first user-generated privacy parameter, at least one second privacy parameter is calculated, at 1430, based on a ratio or overlap of probability distributions of different observations. Subsequently, at 1440, the DP algorithm is applied, using the at least one second privacy parameter, to the function over the received dataset. At least one machine learning model can be trained, at 1450, using the dataset which, when deployed, is configured to classify input data.


The machine learning model(s) can be deployed once trained to classify input data when received.


The at least one first user-generated privacy parameter can include a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output. With such an arrangement, the calculated at least one second privacy parameter can include privacy parameters ε, δ, and the calculating can be based on a ratio of probability distributions of different observations, which are bound by the posterior belief ρc as applied to the dataset.


In another variation, the at least one first user-generated privacy parameter includes privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an expected membership advantage ρα that corresponds to a likelihood of an adversary successfully identifying a member in the dataset, and the calculating can be based on an overlap of two probability distributions.


In still another variation, the at least one first user-generated privacy parameter can include privacy parameters ε, δ. With such an implementation, the calculated at least one second privacy parameter can include an adversarial posterior belief bound ρc that corresponds to a likelihood to re-identify data points from the dataset based on a differentially private function output, and the calculating can be based on a conditional probability of different possible datasets.



FIG. 15 is a diagram 1500 illustrating a sample computing device architecture for implementing various aspects described herein. A bus 1504 can serve as the information highway interconnecting the other illustrated components of the hardware. A processing system 1508 labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers), can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM) 1512 and random access memory (RAM) 1516, can be in communication with the processing system 1508 and can include one or more programming instructions for the operations specified here. Optionally, program instructions can be stored on a non-transitory computer-readable storage medium such as a magnetic disk, optical disk, recordable memory device, flash memory, or other physical storage medium.


In one example, a disk controller 1548 can interface one or more optional disk drives to the system bus 1504. These disk drives can be external or internal floppy disk drives such as 1560, external or internal CD-ROM, CD-R, CD-RW, or DVD drives, or solid state drives such as 1552, or external or internal hard drives 1556. As indicated previously, these various disk drives 1552, 1556, 1560 and disk controllers are optional devices. The system bus 1504 can also include at least one communication port 1520 to allow for communication with external devices either physically connected to the computing system or available externally through a wired or wireless network. In some cases, the at least one communication port 1520 includes or otherwise comprises a network interface.


To provide for interaction with a user, the subject matter described herein can be implemented on a computing device having a display device 1540 (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information obtained from the bus 1504 via a display interface 1514 to the user and an input device 1532 such as a keyboard and/or a pointing device (e.g., a mouse or a trackball) and/or a touchscreen by which the user can provide input to the computer. Other kinds of input devices 1532 can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback by way of a microphone 1536, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The input device 1532 and the microphone 1536 can be coupled to and convey information via the bus 1504 by way of an input device interface 1528. Other computing devices, such as dedicated servers, can omit one or more of the display 1540 and display interface 1514, the input device 1532, the microphone 1536, and input device interface 1528.


One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.


These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores.


In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible.


The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.

Claims
  • 1. A computer-implemented method for anonymized analysis of datasets comprising: receiving data specifying a bound for an adversarial posterior belief ρc that corresponds to a likelihood to re-identify data points from a dataset based on a differentially private function output; calculating, based on the received data, privacy parameters ε, δ which govern a differential privacy (DP) algorithm to be applied to a function to be evaluated over a dataset, the calculating being based on a ratio of probability distributions of different observations which are bound by the posterior belief ρc as applied to the dataset; and applying, using the calculated privacy parameters ε, δ, the DP algorithm to the function over the dataset.
  • 2. The method of claim 1, wherein the probability distributions are generated using a Gaussian mechanism with an (ε, δ) guarantee that perturbs a result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.
  • 3. The method of claim 1, wherein the probability distributions are generated using a Laplacian mechanism with an ε guarantee that perturbs a result of the function evaluated over the dataset, preventing a posterior belief greater than ρc on the dataset.
  • 4. The method of claim 1 further comprising: anonymously training at least one machine learning model using the dataset after application of the DP algorithm to the function over the dataset.
  • 5. The method of claim 4 further comprising: deploying the trained at least one machine learning model to classify further data input into the at least one machine learning model.
  • 6. The method of claim 1, wherein ε=log(ρc/(1−ρc)) for a series of (ε, δ) or ε anonymized function evaluations with multidimensional data.
  • 7. The method of claim 4 further comprising: calculating a resulting total posterior belief ρc using a sequential composition or Rényi differential privacy (RDP) composition; andupdating the at least one machine learning model using the calculated resulting total posterior belief ρc.
Related Publications (1)
Number Date Country
20220138348 A1 May 2022 US