SYSTEMS AND METHODS FOR A PRIVACY PRESERVING TEXT REPRESENTATION LEARNING FRAMEWORK

Information

  • Patent Application
  • 20210342546
  • Publication Number
    20210342546
  • Date Filed
    April 30, 2021
  • Date Published
    November 04, 2021
Abstract
Various embodiments of a computer-implemented system which learns textual representations while filtering out potentially personally identifying data and retaining semantic meaning within the textual representations are disclosed herein.
Description
FIELD

The present disclosure generally relates to natural language processing; and in particular, to a computer-implemented system and method for learning textual representations of user-generated textual information which preserves semantic meaning while removing potential personal information.


BACKGROUND

Textual information is one of the most significant portions of data that users generate by participating in different online activities such as leaving online reviews and posting tweets. On one hand, textual data includes abundant information about users' behavior, preferences and needs, which is critical for understanding them. For example, textual data has historically been used by service providers to track users' responses to products and provide the user with personalized services. On the other hand, publishing intact user-generated textual data makes users vulnerable to privacy risks. The reason is that the textual data itself includes sufficient information to enable re-identification of users in the textual database and leakage of their private attribute information.


These privacy concerns require data publishers to protect users' privacy by anonymizing the data before sharing it. However, traditional privacy preserving techniques such as k-anonymity and differential privacy are ineffective for user-generated textual data because this data is highly unstructured, noisy and, unlike traditional document content, can include large amounts of short and informal posts. Moreover, these solutions may impose a significant utility loss when protecting textual data, as they may not explicitly incorporate utility into their design objectives. It is thus challenging to design effective anonymization techniques for user-generated textual data which preserve both privacy and utility.


It is with these observations in mind, among others, that various aspects of the present disclosure were conceived and developed.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram showing an architecture for a computer-implemented text representation learning system;



FIG. 2 is a block diagram showing an auto-encoder of the system of FIG. 1;



FIG. 3 is a block diagram showing a semantic discriminator of the system of FIG. 1;



FIG. 4 is a block diagram showing a private attribute discriminator of the system of FIG. 1;



FIG. 5 is a flowchart showing a process flow for optimizing the text representation learning system of FIG. 1;



FIG. 6 is a flowchart showing a process flow for iteratively training the system of FIG. 1 to learn an amount of noise to add to a text representation;



FIG. 7A is a graph showing private attribute prediction with respect to sentiment prediction (F1) for different contribution values of a private attribute discriminator of the text representation learning system of FIG. 1;



FIG. 7B is a graph showing sentiment prediction accuracy for different contribution values of a private attribute discriminator of the text representation learning system of FIG. 1;



FIG. 7C is a graphical representation showing private attribute prediction with respect to part-of-speech tagging for different contribution values of a private attribute discriminator of the text representation learning system of FIG. 1;



FIG. 7D is a graphical representation showing part-of-speech tagging accuracy for different contribution values of a private attribute discriminator of the text representation learning system of FIG. 1; and



FIG. 8 is a simplified diagram showing an example device for implementation of the framework of FIG. 1.





Corresponding reference characters indicate corresponding elements among the views of the drawings. The headings used in the figures do not limit the scope of the claims.


DETAILED DESCRIPTION

Various embodiments of a framework for learning text representations of a document while maximizing semantic meaning and minimizing private attributes within text representations are disclosed herein. In some embodiments, the framework includes an auto-encoder for learning a text representation of a document, a differential-privacy-based noise adder for adding noise to the text representation, and semantic and private attribute discriminators to optimize the differential-privacy-based noise adder to ensure that semantic meaning is retained by the text representation while obfuscating private attributes. Referring to the drawings, embodiments of the system are illustrated and generally indicated as 100 in FIGS. 1-8.


Referring to FIGS. 1-4, a double privacy preserving text representation learning framework 100, also referred to as DPText, is disclosed herein. The framework 100 learns a modified latent representation 124 for a document 10 that (1) is differentially private and thus protects users against leakage of their identity, (2) obscures users' private information (e.g., age, location, gender), and (3) retains high utility and semantic meaning for a given task. The present framework 100 includes four main components: 1) an auto-encoder 102 (FIG. 2), 2) a differential-privacy-based noise adder 104, 3) a semantic meaning discriminator 106 (FIG. 3), and 4) a private attribute discriminator 108 (FIG. 4). It is further theoretically shown herein that a resultant modified latent representation 124 is differentially private. The effectiveness of the present framework 100 is also shown on real-world datasets in two important natural language processing tasks, i.e., sentiment prediction and part-of-speech (POS) tagging. The theoretical and empirical results show the effectiveness of DPText in minimizing the chance of re-identification from the learned textual representation, obscuring private attribute information, and preserving the semantic meaning of the text.


Referring to FIG. 1, a document 10 can include textual information for analysis by the framework 100. The framework 100 includes an auto-encoder 102 configured to extract an initial latent representation z 122 for the document 10. The auto-encoder 102 extracts the initial latent representation z 122 and further minimizes a reconstruction error between the initial latent representation z 122 and the textual information within the document 10 itself. Once the initial latent representation z 122 is obtained, a differential privacy adder 104 is deployed along with the associated semantic meaning discriminator 106 and private attribute discriminator 108 to add random noise, i.e., Laplacian noise, to the initial latent representation 122 with respect to a given privacy budget, denoted herein as ϵ.


If one were to publish a text representation without proper anonymization, an adversary could learn the original text, infer whether a targeted user's latent textual representation is in the database, or determine which record is associated with it. Besides guaranteeing differential privacy, the act of adding noise minimizes the chance of text re-identification and original text recovery. However, simply adding noise to the initial latent representation z 122 not only may destroy the semantic meaning of the text, but also does not, on its own, necessarily prevent leakage of private attribute information from the textual information. Semantic meaning of the text data is task-dependent. For example, for sentiment analysis, sentiment is one of the semantic meanings of the given text and sentiment prediction is a classification task. Private-attribute information is another important aspect of user privacy and includes information that the user does not want to disclose, such as age, gender, and location.


It is therefore necessary to add an optimal amount of noise s to the original latent representation z 122. This challenge is approached by learning the amount of added noise s via the privacy budget ϵ. As shown, the semantic meaning discriminator DS 106 and the private attribute discriminator DP 108 are also utilized to infer the amount of noise s to be added to the original latent representation z 122 by differential privacy adder 104. The semantic meaning discriminator DS 106 ensures that the noise added by differential privacy adder 104 does not destroy the semantic meaning with respect to a given task. The private attribute discriminator DP 108 guides the amount of noise s added by differential privacy adder 104 by ensuring that a resultant modified latent representation 124 does not include users' private information.


To incorporate the two discriminators DS 106 and DP 108 into determining an optimal amount of noise, an objective function is modeled as a minmax game between the two introduced discriminators, DS 106 and DP 108. Assume that there are T private attributes in the document 10. Let θDPt and θDS respectively denote the parameters of the private-attribute discriminator DP 108 and the semantic meaning discriminator DS 106. Correct labels for the t-th sensitive attribute and the semantic classification task in the n-th document are represented by pn,t and yn, respectively. With N documents, an objective function is written as follows:












$$\min_{\theta_{D_S},\,\epsilon}\;\max_{\{\theta_{D_P^t}\}_{t=1}^{T}}\;\frac{1}{N}\sum_{n=1}^{N}\left[\mathcal{L}_{D_S}\!\left(\hat{y}_n,\,y_n\right)\;-\;\alpha\,\frac{1}{T}\sum_{t=1}^{T}\mathcal{L}_{D_P^t}\!\left(\hat{p}_{n,t},\,p_{n,t}\right)\right]\;+\;\lambda\,\Omega(\theta),\qquad \text{s.t.}\;\;\epsilon \le c_1 \qquad (1)$$







where c1 is a predefined privacy budget constraint, ℒDPt and ℒDS denote cross-entropy loss functions, p̂t is the predicted t-th private attribute, ŷ is the predicted semantic label, and Ω(θ) is a parameter regularizer. θ={θDS, ϵ, {θDPt}t=1T} is the set of all parameters to be learned, including the parameters of the semantic meaning discriminator model DS 106, the parameters of the private-attribute discriminator model DP 108, and the privacy budget ϵ. Note that the resultant modified latent representation z̃=z+s 124 satisfies ϵ̃-differential privacy, where ϵ̃≤c1 is the optimal learned budget.


Problem Statement

Let χ={x1, . . . , xN} denote a set of N documents and 𝒫={p1, . . . , pN} denote the corresponding private and sensitive attribute information, where each pi contains T private attributes. Each document xi 10 includes a sequence of words, i.e., xi={xi1, . . . , xim}. zi∈ℝd×1 denotes the context representation 122 of the original document xi 10. The framework 100 aims to preserve users' privacy by preventing a potential adversary from inferring whether a target text representation is in the dataset, determining which record is associated with it, or learning the target users' private attribute information.


PROBLEM 1. Given a set of documents χ, a set of sensitive attributes 𝒫, and a given task 𝒯, learn a function f that can generate and release the modified latent representation z̃i 124 for each document xi so that 1) the adversary cannot re-identify a targeted text representation or infer whether or not this latent representation is in the database, 2) the adversary cannot infer the targeted user's private attributes 𝒫 from the modified latent representation z̃i 124, and 3) the modified latent representation z̃i 124 is useful for the given task 𝒯, i.e., z̃i=f(xi, 𝒯).


Differential Privacy Overview

Differential privacy protects a user's privacy during statistical queries over a database by minimizing the chance of privacy leakage while maximizing the accuracy of the queries. Differential privacy provides a strong privacy guarantee. The intuition behind differential privacy is that the risk to a user's privacy should not increase as a result of participating in a database. Differential privacy guarantees that the existence of an instance in the database does not pose a threat to its privacy, as the statistical information of the data does not change significantly in comparison to the case where the instance is absent. This makes it challenging for an adversary to re-identify an instance, infer whether the instance is in the database, or decide which record is associated with it. An algorithm with this privacy property is denoted by 𝒜p; it is randomized so that re-identification of the data on the adversary's side is very difficult. Differential privacy can be formally defined as follows:


DEFINITION 1. ϵ-Differential Privacy. An algorithm 𝒜p is ϵ-differentially private if for any subset of outputs R and for all datasets 𝒟1 and 𝒟2 differing in at most one element:














$$P\!\left(\mathcal{A}_p(\mathcal{D}_1)\in R\right)\;\le\;e^{\epsilon}\,P\!\left(\mathcal{A}_p(\mathcal{D}_2)\in R\right) \qquad (2)$$







where 𝒜p(𝒟1) and 𝒜p(𝒟2) are the outputs of the algorithm for input datasets 𝒟1 and 𝒟2, respectively, and the probability is taken over the randomness of the noise in the algorithm.


Here ϵ is called the privacy budget, and it can also be shown that Eq. 2 is equivalent to









$$\log\!\left(\frac{P\!\left(\mathcal{A}_p(\mathcal{D}_1)=r\right)}{P\!\left(\mathcal{A}_p(\mathcal{D}_2)=r\right)}\right)\;\le\;\epsilon$$




for some point r in the output range. Note that larger values of ϵ (e.g., 10) result in larger privacy loss, while smaller values (e.g., ϵ≤0.1) indicate the opposite. For example, a small ϵ means that the output probabilities of 𝒟1 and 𝒟2 at r are very similar to each other, which demonstrates more privacy. Uncertainty should be introduced in the output of a function (i.e., algorithm) to be able to hide the participation of an individual in the database. This is quantified by sensitivity, which is the amount of change in the output of function 𝒜 made by a single data point in the worst case.


Definition 2. L1-sensitivity. The L1-sensitivity of a vector-valued function 𝒜 is the maximum change in the L1 norm of the value of the function 𝒜 when one input changes. More formally, the L1-sensitivity Δ(𝒜) of 𝒜 is defined as










$$\Delta(\mathcal{A})\;=\;\max_{\mathcal{X},\,\mathcal{X}':\,\lVert \mathcal{X}-\mathcal{X}'\rVert=1}\big\lVert \mathcal{A}(\mathcal{X})-\mathcal{A}(\mathcal{X}')\big\rVert_{1} \qquad (3)$$







where 𝒳 and 𝒳′ are two datasets that differ in one entry.
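By way of a simple, non-limiting illustration of Definitions 1 and 2 (separate from the disclosed framework), the classic Laplace mechanism perturbs a query answer with noise whose scale is the sensitivity divided by the privacy budget. In the Python sketch below, the counting query, dataset values, and function names are illustrative assumptions; it merely shows that a smaller ϵ makes the outputs for two databases differing in one record harder to distinguish:

import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    # Laplace mechanism: add noise with scale b = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
count_d1, count_d2 = 100, 101            # counting query on D1 and D2 (differ in one record)
for eps in (0.1, 1.0, 10.0):             # smaller epsilon -> stronger privacy, noisier answers
    a1 = laplace_mechanism(count_d1, sensitivity=1.0, epsilon=eps, rng=rng)
    a2 = laplace_mechanism(count_d2, sensitivity=1.0, epsilon=eps, rng=rng)
    print(f"epsilon={eps}: D1 -> {a1:.1f}, D2 -> {a2:.1f}")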


Framework Details and Construction

Referring again to FIGS. 1 and 2, the details of the double privacy preserving text representation learning framework 100 are discussed herein. This framework 100 includes four major components: 1) an auto-encoder 102 for text representation, 2) a differential-privacy-based noise adder 104, 3) a semantic meaning discriminator 106, and 4) a private attribute discriminator 108. The auto-encoder 102 aims to learn a content representation z 122 of a document 10 by minimizing a reconstruction error between the latent representation z 122 and the text of the document 10. Then, the differential-privacy-based noise adder 104 adds random noise, i.e., Laplacian noise, to the initial latent representation z 122 with respect to a privacy budget ϵ to further satisfy the differential privacy guarantee. Since adding noise neither guarantees semantic meaning preservation nor necessarily prevents leakage of private attributes, the semantic meaning and private attribute discriminators 106 and 108 are utilized to infer an optimal amount of added noise s. The semantic meaning discriminator DS 106 ensures that the added noise does not destroy the semantic meaning with respect to a given task. The private attribute discriminator DP 108 also guides the amount of added noise by ensuring that the manipulated representation does not include users' private information. Note that it is assumed that the framework 100 is trusted, and therefore everything to the left of the privacy barrier (the red dashed line in FIG. 1), including the original textual information and intermediate results, is kept private. The final modified latent representation z̃i 124, which is to the right of the privacy barrier, is released to the public. The final modified latent representation z̃i 124: 1) is differentially private, 2) obscures private attribute information, and 3) preserves semantic meaning.


Content Representation Extraction

Referring to FIGS. 1 and 2, let x={x1, . . . , xm} be a textual document 10 with m words, where each word is from a fixed vocabulary set V with size |V|=K. The auto-encoder A 102 is used to extract the content representation z 122 from document x 10. Let EA: χ→ℝd×1 be an encoder 141 that can infer the latent representation z 122 for a given document x 10, and DA: ℝd×1→χ be a decoder that reconstructs the document 10 from its initial latent representation z 122.


Recurrent neural networks (RNNs) are effective for summarizing and learning semantics of unstructured, noisy, short texts. In one embodiment, an encoder 141 is built from a first RNN to learn the initial latent representation z 122 of texts. The encoder 141 can learn a probability distribution over a sequence when trained to predict the next symbol in the sequence. The encoder 141 includes a hidden state s and an optional output, and it operates on a word sequence x={x1, . . . , xm}. At each time step t, the hidden state st of the encoder 141 is updated by:










$$s_t \;=\; f_{\mathrm{enc}}\!\left(s_{t-1},\,x_t\right) \qquad (4)$$







After reaching the end of the given document 10, the last hidden state of the encoder 141 is used as the latent representation z∈ℝd×1 122 of the document x 10. A gated recurrent unit (GRU) is used as the cell type to build the encoder 141, which is designed to have a more persistent memory. Let θe denote the parameters of the encoder EA 141. Then:






$$z \;=\; E_A(x,\,\theta_e) \qquad (5)$$


Decoder {circumflex over (x)}=DA(z, θd) 142 serves as a check for encoder 141 and takes the initial latent representation z 122 found by encoder 141 as input to start the generation process. θd denotes parameters for the decoder DA 142, which is built using a second RNN. The decoder DA 142 generates an output word sequence {circumflex over (x)}={{circumflex over (x)}1, . . . , {circumflex over (x)}m}. At each time step t, a hidden state of the decoder 142 is computed as:






$$s_t \;=\; f_{\mathrm{dec}}\!\left(s_{t-1},\,\hat{x}_t\right) \qquad (6)$$


where s0=z. The word at step t is predicted using a softmax classifier:











$$\hat{x}_t \;=\; \mathrm{softmax}\!\left(W^{(S)}\,s_t\right) \qquad (7)$$







where softmax(.) is a softmax activation function, W(S)∈ℝ|V|×(d+k) with d+k as the dimension of the hidden state in each layer, and x̂t∈ℝ|V| is a probability distribution over the vocabulary. Here V denotes a fixed vocabulary set with size |V|=K. x̂t,j is defined as the probability of choosing the j-th word vj∈V as:






$$\hat{x}_{t,j} \;=\; p\!\left(\hat{x}_t=v_j \,\middle|\, \hat{x}_{t-1},\,\hat{x}_{t-2},\,\ldots,\,\hat{x}_1\right) \qquad (8)$$


The probability of generating an output sequence x̂={x̂1, . . . , x̂m} given the input document x is:










$$p\!\left(\hat{x}\,\middle|\,x,\,\theta_d\right) \;=\; \prod_{t=1}^{m} p\!\left(\hat{x}_t \,\middle|\, \hat{x}_{t-1},\,\hat{x}_{t-2},\,\ldots,\,\hat{x}_1,\,z,\,\theta_d\right) \qquad (9)$$







The encoder 141 and decoder 142 of the auto-encoder 102 of the framework 100 are jointly trained to minimize the negative conditional log-likelihood for all documents. A loss function 143 is defined as:











$$\mathcal{L}_{\mathrm{auto}} \;=\; -\sum_{i=1}^{N} \log p\!\left(\hat{x}_i \,\middle|\, x_i,\,\theta_d,\,\theta_e\right) \qquad (10)$$







where θe and θd are the sets of model parameters for the encoder 141 and decoder 142, respectively. The trained auto-encoder 102 is used to obtain the content representation z∈ℝd×1 122 according to Eq. 5, where d is the size of the textual representation.
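By way of a non-limiting illustration, the auto-encoder 102 described by Eqs. 4-10 may be sketched in PyTorch as follows; the class name, embedding layer, teacher-forced decoding, and dimension choices are assumptions made for brevity rather than the exact disclosed implementation:

import torch
import torch.nn as nn

class TextAutoEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, d=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, d, batch_first=True)   # f_enc, Eq. 4
        self.decoder = nn.GRU(emb_dim, d, batch_first=True)   # f_dec, Eq. 6
        self.out = nn.Linear(d, vocab_size)                   # W^(S), Eq. 7

    def encode(self, x):
        # the last hidden state serves as the latent representation z (Eq. 5)
        _, h = self.encoder(self.embed(x))
        return h.squeeze(0)                                   # shape (batch, d)

    def forward(self, x):
        z = self.encode(x)
        # decoder conditioned on z; here it is teacher-forced on the input words
        dec_out, _ = self.decoder(self.embed(x), z.unsqueeze(0))
        return self.out(dec_out), z                           # word logits and z

def auto_encoder_loss(logits, x):
    # negative conditional log-likelihood of the reconstruction (Eq. 10)
    return nn.functional.cross_entropy(logits.transpose(1, 2), x)

In this sketch the decoder is teacher-forced on the same word sequence it reconstructs, which keeps the example short; Eq. 9 instead conditions each step on the previously generated words.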


Adding Noise

Textual information is rich in content, and publishing this data without proper anonymization leads to privacy breaches and revealing the identity of an individual. This can let the adversary infer if a targeted user's latent textual representation is in the database or which record is associated with it. Moreover, publishing a document's latent representation could result in leakage of the original text. In fact, recent advances in adversarial machine learning show that it is possible to recover the input textual information from its latent representation. In this case, if an adversary has preliminary knowledge of the training model, they can readily reverse engineer the input, for example, by a GAN attack algorithm. It is thus essential to protect the textual information before publishing it.


The goal is thus to add noise to the initial latent representation z 122 such that the differential privacy condition is satisfied. In one embodiment, the initial latent representation z 122 is perturbed at noise adder 104 by adding Laplacian noise as follows:












$$\tilde{z}^{(i)} \;=\; z^{(i)} + s^{(i)},\qquad s^{(i)} \sim \mathrm{Lap}(b),\quad b=\frac{\Delta}{\epsilon},\quad i=1,\ldots,d \qquad (11)$$







where ϵ is the privacy budget, Δ is the L1-sensitivity of the latent representation z, d is the dimension of z, s is the noise vector, and s(i) and z(i) are the i-th elements of vectors s and z, respectively, with Δ=2d. Note that each element of the noise vector is drawn from a Laplacian distribution. The optimal privacy budget ϵ is iteratively found using the semantic meaning discriminator DS 106 and the private attribute discriminator DP 108, and the process of adding noise s to the initial latent representation z 122 runs concurrently with finding the optimal privacy budget ϵ until an optimal modified latent representation z̃ 124 is reached.
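A minimal numerical sketch of the noise-adding step of Eq. 11 is given below, assuming a NumPy environment; the variable names and the stand-in latent vector are illustrative only:

import numpy as np

def perturb_representation(z, epsilon, rng):
    # each element of z receives independent Laplacian noise with scale b = Delta / epsilon
    d = z.shape[-1]
    delta = 2.0 * d                      # L1-sensitivity of the tanh-bounded representation
    b = delta / epsilon
    s = rng.laplace(loc=0.0, scale=b, size=z.shape)
    return z + s                         # modified latent representation z~

rng = np.random.default_rng(0)
z = np.tanh(rng.normal(size=(1, 64)))    # stand-in latent vector with d = 64
z_tilde = perturb_representation(z, epsilon=0.1, rng=rng)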


Preserving Semantic Meaning: Semantic Meaning Discriminator

Referring to FIGS. 1 and 3, perturbing the latent representation z 122 of the given text by adding noise to it (Eq. 11) prevents the adversary from reconstructing the text from its latent representation and guarantees differential privacy. However, this approach may destroy the semantic meaning of the text data. Semantic meaning is task-dependent; e.g., classification is one of the common tasks. In order to preserve the semantic meaning of the context representation 122, it is necessary to add an optimal amount of noise to the latent representation 122 which does not destroy the semantic meaning of the text data while ensuring data privacy. This challenge is approached using the semantic discriminator 106 by learning an optimal amount of added noise s, determined by the privacy budget ϵ 125, in terms of training a classifier 161:






$$\hat{y} \;=\; \mathrm{softmax}\!\left(\tilde{z};\,\theta_{D_S}\right) \qquad (12)$$


where θDs 166 are weights associated with the softmax function and ŷ represents an inferred label 164 for classification.


To preserve the semantic meaning of the text representation, a noisy latent representation is needed which retains high utility and accordingly includes enough information for a downstream task, e.g., classification. The classifier 161 of the semantic discriminator DS 106 aims to assign a correct class label to the modified latent representation z̃ 124, and its loss function 163 is minimized as follows:











$$\min_{\theta_{D_S},\,\epsilon}\;\mathcal{L}\!\left(\hat{y},\,y\right) \;=\; \min_{\theta_{D_S},\,\epsilon}\;\sum_{i=1}^{C} -\,y^{(i)}\log \hat{y}^{(i)} \qquad (13)$$







where C is the number of classes, and ℒ denotes the cross-entropy loss function. A one-hot encoding of the ground truth 162 for the classification task is denoted by y, and y(i) represents the i-th element of y, i.e., the ground truth label for the i-th class.


To learn the value of the privacy budget ϵ 125, a reparameterization process is employed. Instead of directly sampling noise s(i) from a Laplacian distribution (i.e., Eq. 11), this process first samples a value r from a uniform distribution, i.e., r˜Uniform(−0.5, 0.5), and then rewrites the amount of added noise s(i) as follows:











$$s^{(i)} \;=\; -\,\frac{\Delta}{\epsilon}\times \mathrm{sgn}(r)\,\ln\!\left(1-2\,\lvert r\rvert\right),\qquad i=1,2,\ldots,d \qquad (14)$$







This is equivalent to sampling the noise s from Lap(Δ/ϵ). The advantage of doing so is that the parameter ϵ is now explicitly involved in the expression of the added noise s, which makes it possible to use back-propagation to find the optimal value of ϵ. A large privacy budget ϵ could result in a large privacy loss. Hence, a constraint ϵ≤c1 is added, where c1 is a predefined constraint.
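A minimal PyTorch sketch of the reparameterization of Eq. 14 follows, assuming d=64 and Δ=2d; the tensor names are hypothetical. Because ϵ appears explicitly in the noise expression, a gradient of any downstream loss with respect to ϵ can be obtained by back-propagation:

import torch

def reparameterized_laplace_noise(epsilon, d, delta):
    # r ~ Uniform(-0.5, 0.5); s = -(Delta/epsilon) * sgn(r) * ln(1 - 2|r|)  (Eq. 14)
    r = torch.rand(d) - 0.5
    return -(delta / epsilon) * torch.sign(r) * torch.log(1.0 - 2.0 * r.abs())

d, delta, c1 = 64, 2.0 * 64, 0.1
epsilon = torch.tensor(0.05, requires_grad=True)   # learnable privacy budget
z = torch.tanh(torch.randn(d))                     # stand-in latent representation
z_tilde = z + reparameterized_laplace_noise(epsilon, d, delta)
# any loss computed on z_tilde now yields a gradient with respect to epsilon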


Another challenge here is that ŷ is inferred from z̃ after introducing noise to the initial latent representation z. The noise is sampled from the Laplacian distribution, which results in large variance in the training process. To solve this issue and make the model more robust, K copies of the noise are sampled for each given document. In other words, Eq. 13 can be re-written as follows:











$$\min_{\theta_{D_S},\,\epsilon}\;\mathcal{L}_{D_S}\!\left(\hat{y},\,y\right) \;=\; \min_{\theta_{D_S},\,\epsilon}\;\frac{1}{K}\sum_{k=1}^{K}\mathcal{L}\!\left(\hat{y}_k,\,y\right) \;=\; \min_{\theta_{D_S},\,\epsilon}\;\frac{1}{K}\sum_{k=1}^{K}\sum_{i=1}^{C} -\,y^{(i)}\log \hat{y}_k^{(i)},\qquad \text{s.t.}\;\;\epsilon \le c_1 \qquad (15)$$







where the goal is to minimize the loss function ℒDS w.r.t. the parameters {θDS, ϵ} and ŷk=softmax(z̃k; θDS). Note that z̃k=z+sk, in which sk is the k-th sample of the noise calculated with Eq. 14.


Following minimization and resultant determination of a privacy budget ∈ 125, an error 126 is computed between predicted label ŷ 161 and ground truth label y 162.
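For illustration only, the K-sample loss of Eq. 15 may be computed as in the following PyTorch sketch; the classifier, labels, and dimensions are assumptions, and the softmax of Eq. 12 is applied implicitly inside the cross-entropy call:

import torch
import torch.nn.functional as F

def semantic_loss(z, y, classifier, epsilon, delta, K=5):
    # average the classification loss over K noisy copies of z (Eq. 15)
    losses = []
    for _ in range(K):
        r = torch.rand_like(z) - 0.5                      # reparameterized Laplace noise, Eq. 14
        noise = -(delta / epsilon) * torch.sign(r) * torch.log(1 - 2 * r.abs())
        logits = classifier((z + noise).unsqueeze(0))     # softmax classifier D_S (Eq. 12)
        losses.append(F.cross_entropy(logits, y))
    return torch.stack(losses).mean()

z = torch.tanh(torch.randn(64))                           # latent representation, d = 64
y = torch.tensor([3])                                     # ground-truth class (e.g., a rating)
classifier = torch.nn.Linear(64, 5)                       # e.g., 5 sentiment classes
epsilon = torch.tensor(0.05, requires_grad=True)
loss_ds = semantic_loss(z, y, classifier, epsilon, delta=2.0 * 64)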


Private Attribute Discriminator and Privacy Preservation

Referring to FIGS. 1 and 4, the disclosure further addresses how adding noise s to the latent representation z 122 of the text can prevent adversaries from learning the input textual information and guarantee differential privacy. Another important aspect of learning privacy preserving text representation is to ensure that sensitive and private information of the users such as age, gender, and location is not captured in the latent representation.


The framework should anticipate the strongest private attribute inference attack, so that an adversary cannot design an attack better than what has already been anticipated. In this spirit, the idea of adversarial learning is leveraged. In particular, the private attribute discriminator DP 108 is trained to accurately identify the private information from the latent representation z 122, while the modified latent representation z̃ 124 is learned so that it can fool the discriminator and minimize leakage of private attributes, which results in a representation that does not contain sensitive information. Private attribute discriminator 108 uses a classifier 181, trained with a ground truth label 182, to attempt to predict a private attribute label 184. Ultimately, a goal of private attribute discriminator 108 is to find parameters that would prevent any classifier such as classifier 181 from accurately predicting private attribute labels. Assume that there are T private attributes (e.g., age, gender, location). Let pt represent the ground truth 182 (i.e., correct label) for the t-th sensitive attribute and θDPt denote the parameters 186 of the discriminator model DP 108 for the t-th sensitive attribute. The adversarial learning can be formally written as:












$$\min_{\{\theta_{D_P^t}\}_{t=1}^{T}}\;\max_{\epsilon}\;\mathcal{L}_{D_P} \;=\; \min_{\{\theta_{D_P^t}\}_{t=1}^{T}}\;\max_{\epsilon}\;\frac{1}{K\cdot T}\sum_{t=1}^{T}\sum_{k=1}^{K}\mathcal{L}_{D_P^t}\!\left(\hat{p}_t^{\,k},\,p_t\right),\qquad \text{s.t.}\;\;\epsilon \le c_1 \qquad (16)$$







where ℒDPt denotes a cross-entropy loss function and p̂tk=softmax(z̃k, θDPt) is the predicted t-th sensitive attribute label 184 using the k-th sample. The outer minimization 183 finds the strongest private attribute inference attack, and the inner maximization 185 seeks to fool the discriminator by obscuring private information. In other words, the outer minimization 183 seeks convergence of the discriminator parameters 186, while the inner maximization 185 seeks to find the privacy budget value ϵ 125 that maximizes a loss between a predicted label 184 of a private attribute and an actual ground truth label 182. The private attribute discriminator 108 finds parameters θDPt 186 and a privacy budget value ϵ 125 that cause the classifier 181 to fail to classify private attributes. Following maximization and resultant determination of a privacy budget ϵ 125, an error 187 is computed based on the predicted private attribute label p̂ 184 and the ground truth value 182 of the private attribute p.
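The opposing updates implied by Eq. 16 may be sketched as follows, purely for illustration and under several assumptions (a single private attribute, a single noise sample per step, and arbitrary optimizer choices): the discriminator step learns the strongest attribute-inference attack, while the ϵ step drives the noisy representation to fool it:

import torch
import torch.nn.functional as F

d = 64
age_discriminator = torch.nn.Linear(d, 3)               # D_P for one private attribute
opt_dp = torch.optim.Adam(age_discriminator.parameters(), lr=1e-3)
epsilon = torch.tensor(0.05, requires_grad=True)
opt_eps = torch.optim.SGD([epsilon], lr=1e-3)

def noisy(z, epsilon, delta=2.0 * d):
    r = torch.rand_like(z) - 0.5                        # reparameterized Laplace noise, Eq. 14
    return z + (-(delta / epsilon) * torch.sign(r) * torch.log(1 - 2 * r.abs()))

z, p_age = torch.tanh(torch.randn(d)), torch.tensor([1])
# discriminator step: learn the strongest attribute-inference attack (outer minimization)
loss_dp = F.cross_entropy(age_discriminator(noisy(z, epsilon).detach()).unsqueeze(0), p_age)
opt_dp.zero_grad(); loss_dp.backward(); opt_dp.step()
# epsilon step: maximize the attacker's loss (inner maximization, via gradient ascent)
loss_adv = -F.cross_entropy(age_discriminator(noisy(z, epsilon)).unsqueeze(0), p_age)
opt_eps.zero_grad(); loss_adv.backward(); opt_eps.step()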


Optimization Function

In the previous sections, it was discussed how to: (1) add noise to prevent the adversary from reconstructing the original text from the latent representation and minimize the chance of a privacy breach by satisfying differential privacy (Eq. 11), (2) control the amount of the added noise to preserve the semantic meaning of the textual information for a given task (Eq. 15), and (3) control the amount of the added noise so that the user's private information is masked (Eq. 16). Inspired by the idea of adversarial learning, all three are achieved at once by modeling the objective function as a minmax game between the semantic meaning discriminator DS 106 and the private attribute discriminator DP 108, as follows:













$$\min_{\theta_{D_S},\,\epsilon}\;\max_{\{\theta_{D_P^t}\}_{t=1}^{T}}\;\mathcal{L}_{D_S}-\alpha\,\mathcal{L}_{D_P} \;=\; \min_{\theta_{D_S},\,\epsilon}\;\max_{\{\theta_{D_P^t}\}_{t=1}^{T}}\;\frac{1}{K}\sum_{k=1}^{K}\left[\mathcal{L}\!\left(\hat{y}_k,\,y\right)-\alpha\,\frac{1}{T}\sum_{t=1}^{T}\mathcal{L}_{D_P^t}\!\left(\hat{p}_t^{\,k},\,p_t\right)\right],\qquad \text{s.t.}\;\;\epsilon \le c_1 \qquad (17)$$







where α controls the contribution of the private attribute discriminator in the learning process. This objective function seeks to minimize privacy leakage with respect to the attack, minimize loss in the semantic meaning of the textual representation, and protect private information. With N documents, Eq. 17 is written as follows:












$$\min_{\theta_{D_S},\,\epsilon}\;\max_{\{\theta_{D_P^t}\}_{t=1}^{T}}\;\frac{1}{N}\sum_{n=1}^{N}\left[\frac{1}{K}\sum_{k=1}^{K}\left[\mathcal{L}\!\left(\hat{y}_n^{\,k},\,y_n\right)-\alpha\,\frac{1}{T}\sum_{t=1}^{T}\mathcal{L}_{D_P^t}\!\left(\hat{p}_{n,t}^{\,k},\,p_{n,t}\right)\right]\right]+\lambda\,\Omega(\theta),\qquad \text{s.t.}\;\;\epsilon \le c_1 \qquad (18)$$







where θ={θDS, ϵ, {θDPt}t=1T} is the set of all parameters to be learned, Ω(θ) is the regularizer for the parameters, such as the Frobenius norm, and λ is a scalar to control the amount of contribution of the regularization Ω(θ).


The aim of this objective function is to perturb the original text representation by adding a proper amount of noise to it in order to prevent an adversary from inferring the existence of the target textual representation in the database, reconstructing the user's original text, or learning the user's sensitive information from the latent representation, while preserving the semantic meaning of the modified representation for a given specific task. It is stressed that the resultant text representation satisfies ϵ̃-differential privacy, where ϵ̃≤c1 is the optimal learned privacy budget. This is further discussed below.












Algorithm 1: The Learning Process of the DPTEXT model

Input: Training data χ, θDS, ϵ, {θDPt}t=1T, batch size b, c1 and α.
Output: The privacy preserving learned text representation z̃
1: Pre-train the document auto-encoder A to obtain the text representations according to Eq. 5 as z = EA(x, θe)
2: repeat
3:   Sample a mini-batch of b samples {xi}i=1b from χ
4:   Add noise s to the initial document representation zi and get the new document representation z̃i, i = 1, 2, ..., b via Eq. 14
5:   Train semantic discriminator DS by gradient descent (Eq. 15)
6:   Train private attribute discriminator DP via Eq. 16
7: until convergence









The optimization process is illustrated in Algorithm 1 and FIGS. 1-5. Block 210 of process flow 200 shows obtaining θDS, ϵ, {θDPt}t=1T and training data χ. First, the initial latent representation 122 of all documents 𝒵={z1, . . . , zN} is obtained in Line 1, as further illustrated in FIGS. 1 and 2 and as further shown in block 220 of FIG. 5. Then, as illustrated in Lines 2-7 and as shown in block 230 of FIG. 5 and elaborated on in FIG. 6, noise s is added to the initial latent text representation zi 122 to obtain a new modified latent representation z̃i 124. s is iteratively optimized to retain semantic meaning using semantic discriminator DS 106 while preventing recovery of private attributes using private attribute discriminator DP 108. In particular, as shown in block 232 of FIG. 6, a mini-batch of b samples from the training data is sampled and, in block 234, noise s is added to the initial text representation 122. Next, the semantic discriminator DS 106 is trained in Line 5 at block 236 and the private attribute discriminator DP 108 is trained in Line 6 at block 238. Recall that there is a constraint on the variable ϵ, i.e., ϵ≤c1. To satisfy this constraint, the idea of projected gradient descent is used, wherein the gradient descent is performed in one step, i.e., ϵ←ϵ−γ×g, where γ is the learning rate and g is the gradient of the objective with respect to ϵ. Then, the parameter ϵ is projected back onto the constraint. This means that if ϵ>c1, then ϵ is set to c1; otherwise, the value of ϵ is kept. The modified latent representation z̃i 124 can then be calculated for each given document 10 according to the value of the optimal learned privacy budget ϵ̃≤c1 using Eq. 11. Note that any model can be used for semantic discriminator DS 106 and private attribute discriminator DP 108.
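For orientation only, Algorithm 1 may be condensed into the following PyTorch-style sketch; the data loader, model objects, and single-noise-sample simplification (K=1) are assumptions, and the final clamp implements the projection of ϵ back onto the constraint ϵ≤c1:

import torch
import torch.nn.functional as F

def add_noise(z, epsilon, delta):
    # reparameterized Laplacian noise, Eq. 14 (one sample, i.e., K = 1)
    r = torch.rand_like(z) - 0.5
    return z + (-(delta / epsilon) * torch.sign(r) * torch.log(1 - 2 * r.abs()))

def train_dptext(loader, encode, ds, dps, d=64, c1=0.1, alpha=1.0, gamma=1e-3, epochs=10):
    delta = 2.0 * d                                      # L1-sensitivity of z
    epsilon = torch.tensor(c1, requires_grad=True)       # learned privacy budget
    opt_ds = torch.optim.Adam(list(ds.parameters()) + [epsilon], lr=gamma)
    opt_dp = torch.optim.Adam([p for dp in dps for p in dp.parameters()], lr=gamma)
    for _ in range(epochs):
        for x, y, attrs in loader:                       # Line 3: mini-batch of b samples
            z = encode(x).detach()                       # Line 1: pre-trained auto-encoder, Eq. 5
            z_tilde = add_noise(z, epsilon, delta)       # Line 4
            # Line 5: update D_S and epsilon with the minmax objective of Eq. 17
            loss_dp = sum(F.cross_entropy(dp(z_tilde), p) for dp, p in zip(dps, attrs)) / len(dps)
            loss = F.cross_entropy(ds(z_tilde), y) - alpha * loss_dp
            opt_ds.zero_grad(); loss.backward(); opt_ds.step()
            # Line 6: update the private attribute discriminators D_P, Eq. 16
            z_tilde = add_noise(z, epsilon, delta).detach()
            loss_dp = sum(F.cross_entropy(dp(z_tilde), p) for dp, p in zip(dps, attrs)) / len(dps)
            opt_dp.zero_grad(); loss_dp.backward(); opt_dp.step()
            with torch.no_grad():                        # projection step: keep 0 < epsilon <= c1
                epsilon.clamp_(min=1e-4, max=c1)
    return epsilon

Consistent with the note above that any model can be used for the two discriminators, arbitrary classifier architectures can stand in for ds and dps in this sketch.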


Theoretical Analysis

Here, it is shown that the learned text representation using DPText is ϵ̃-differentially private, where ϵ̃≤c1 is the learned optimal privacy budget. In particular, the privacy guarantee for the final noisy latent representation z̃i for each given document is proven. The theoretical findings confirm that DPText minimizes the chance of revealing the existence of textual representations in the database.


Theorem 1. Let ϵ̃≤c1 be the optimal value learned for the privacy budget variable ϵ w.r.t. the semantic meaning and private attribute discriminators. Let zi be the original latent representation for document xi, i=1, . . . , N, inferred using Eq. 5. Moreover, let Δ denote the L1-sensitivity of the textual latent representation extractor function discussed herein. If each element si(l), l=1, . . . , d, in noise vector si is selected randomly from








Lap(Δ/ϵ̃), where Δ=2d, then the final noisy latent representation z̃i=zi+si satisfies ϵ̃-differential privacy.


Proof. First, the change in z is bounded when one data point in the database changes. This gives the L1-sensitivity of the textual latent representation extractor function discussed above.


Recall the way z is calculated using Eq. 5. The tanh function is used in the GRU to build the RNN which is used above to find the latent representation of a given document. The output of the tanh function is within the range [−1,1]. This indicates that the value of each element z(l), l=1, . . . , d, in the latent representation vector z is within the range [−1,1]. If one data point changes (i.e., is removed from the database), the maximum change in the value of each element z(l) is 2. Since the dimension of z is d, the maximum change in the L1 norm of z happens when all of its elements z(l) have the maximum change. According to Definition 2, the L1-sensitivity of z is Δ=2×d.


Now, assume that ϵ̃≤c1 is the optimal value for the learned privacy budget. Then each element in s (i.e., s(l), l=1, 2, . . . , d) is distributed as






Lap(Δ/ϵ̃) based on Eq. 11, which is equivalent to randomly picking each s(l) from the Lap(Δ/ϵ̃) distribution, whose probability density function is










$$\Pr\!\left(s^{(l)}\right) \;=\; \frac{\tilde{\epsilon}}{2\Delta}\,e^{-\tilde{\epsilon}\,\lvert s^{(l)}\rvert/\Delta}.$$














Let 𝒟1 and 𝒟2 be any two datasets that differ only in the value of one record. Without loss of generality, it is assumed that the representation of the last document is changed from zn to zn′. Since the L1-sensitivity of z is Δ=2d, then ∥zn−zn′∥1≤Δ. Then:











$$\begin{aligned}
\frac{\Pr\!\left[z_n+s_n=r \,\middle|\, \mathcal{D}_1\right]}{\Pr\!\left[z_n'+s_n'=r \,\middle|\, \mathcal{D}_2\right]}
&=\frac{\prod_{l\in\{1,2,\ldots,d\}}\Pr\!\left(r^{(l)}-z_n^{(l)}\right)}{\prod_{l\in\{1,2,\ldots,d\}}\Pr\!\left(r^{(l)}-z_n'^{(l)}\right)}
=\frac{\prod_{l\in\{1,2,\ldots,d\}}\Pr\!\left(s_n^{(l)}\right)}{\prod_{l\in\{1,2,\ldots,d\}}\Pr\!\left(s_n'^{(l)}\right)}
=\frac{e^{-\tilde{\epsilon}\sum_{l}\lvert s_n^{(l)}\rvert/\Delta}}{e^{-\tilde{\epsilon}\sum_{l}\lvert s_n'^{(l)}\rvert/\Delta}}\\[4pt]
&=e^{\tilde{\epsilon}\sum_{l}\left(\lvert s_n'^{(l)}\rvert-\lvert s_n^{(l)}\rvert\right)/\Delta}
\;\le\;e^{\tilde{\epsilon}\sum_{l}\lvert s_n'^{(l)}-s_n^{(l)}\rvert/\Delta}
\;=\;e^{\tilde{\epsilon}\,\lVert s_n'-s_n\rVert_1/\Delta}
\end{aligned}\qquad(19)$$







where sn and sn′ are the corresponding noise vectors with respect to the learned ϵ̃ when the inputs are 𝒟1 and 𝒟2, respectively. The inequality follows from the triangle inequality, i.e., |a|−|b|≤|a−b|. The last equality follows from the definition of the L1-norm.


Since sn=r−zn and sn′=r−zn′ then:





$$\lVert s_n'-s_n\rVert_1 \;=\; \lVert (r-z_n')-(r-z_n)\rVert_1 \;=\; \lVert z_n'-z_n\rVert_1 \;\le\; \Delta \qquad (20)$$


This follows from the definition of L1-sensitivity. Eq. 19 is re-written:












$$\frac{\Pr\!\left[z_n+s_n=r \,\middle|\, \mathcal{D}_1\right]}{\Pr\!\left[z_n'+s_n'=r \,\middle|\, \mathcal{D}_2\right]} \;\le\; e^{\tilde{\epsilon}\,\lVert s_n'-s_n\rVert_1/\Delta} \;\le\; e^{\tilde{\epsilon}\,\Delta/\Delta} \;=\; e^{\tilde{\epsilon}} \qquad (21)$$







So, the theorem follows and the final noisy latent representation is {tilde over (ϵ)}-differentially private.


Experimental Results

In this section, experiments are conducted on real-world data to demonstrate the effectiveness of DPTEXT in terms of preserving both privacy of users and utility of the resultant representation for a given task. Specifically, this section aims to answer the following questions:


Q1—Utility: Does the learned text representation preserve the semantic meaning of the original text for a given task?


Q2—Privacy: Does the learned text representation obscure users' private information?


Q3—Utility-Privacy Relation: Does the improvement in privacy of learned text representation result in sacrificing the utility?


To answer the first question (Q1), experimental results for DPTEXT were reported with respect to two well-known text-related tasks, i.e., sentiment analysis and part-of-speech (POS) tagging. Sentiment analysis and POS tagging have many applications in Web and user-behavioral modeling. Recent research showed how linguistic features such as sentiment are highly correlated with users' demographic information. Another group of research shows the effectiveness of POS tags in predicting users' age and gender information. This makes users vulnerable to inference of their private information. Therefore, to answer the second question (Q2), different private information, i.e., age, location, and gender, is considered and results are reported for the private attribute prediction task. To answer the third question (Q3), the utility loss is investigated against the privacy improvement of the learned text representation.


Data. A dataset from TrustPilot is used. On TrustPilot, users can write reviews and leave a one to five star rating. Users can also provide some demographic information. In the collected dataset, each review is associated with three attributes: gender (male/female), age, and location (Denmark, Germany, France, United Kingdom, and United States). First, all non-English reviews are discarded based on LANGID.PY, and only reviews classified as English with a confidence greater than 0.9 are kept. The age attribute is categorized into three groups: over 45, under 35, and between 35 and 45. 10,000 reviews are subsampled for each location to balance the five locations. Each review's rating score is considered as the target sentiment class.


Model and Parameter Settings. For the document auto-encoder A, a single-layer RNN is used with a GRU cell and input/hidden dimension d=64. For the semantic and private attribute discriminators, feed-forward networks are used with a single hidden layer with the dimension of the hidden state set as 200, and a sigmoid output layer, which is determined through grid search. The parameters α and λ are determined through cross-validation and are set as α=1 and λ=0.01. The upper-bound constraint c1 for the value of parameter ϵ is also set as c1=0.1 to ensure ϵ-differential privacy with ϵ≤0.1 for the learned representation.
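For reference, these reported settings may be summarized as the following illustrative configuration; the dictionary structure and key names are hypothetical, while the values are those stated above:

config = {
    "auto_encoder": {"cell": "GRU", "num_layers": 1, "hidden_dim": 64},      # d = 64
    "discriminators": {"hidden_layers": 1, "hidden_dim": 200, "output": "sigmoid"},
    "alpha": 1.0,     # contribution of the private attribute discriminator
    "lambda": 0.01,   # regularization weight
    "c1": 0.1,        # upper bound on the learned privacy budget epsilon
}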


Part of Speech Tagging

Part-of-speech (POS) tagging is another language processing application which is framed as a sequence tagging problem.


Data. For this task, a manually POS-tagged version of the TrustPilot dataset in English is used. This data includes 600 sentences, each tagged with POS information based on the Google Universal POS tagset and also labeled with both the gender and age of the users. The gender attribute is categorized into male and female, and the age attribute is categorized into two groups, over-45 and under-35. The Web English Treebank (WebEng) is used to pre-train the tagging model because of the small quantity of text available for this task. WebEng is similar to the TrustPilot dataset with respect to domain, as both contain unedited user-generated textual data.


Model and Parameter Settings. Similar to the sentiment analysis task, a single-layer RNN is used with a GRU cell and input/hidden dimension d=64 for the document auto-encoder A 102. For the semantic discriminator 106 (i.e., the POS tag predictor), a bi-directional long short-term memory network is used:






$$h_i=\mathrm{LSTM}(x_i,\,h_{i-1};\,\theta_h),\qquad h_i'=\mathrm{LSTM}(x_i,\,h_{i+1}';\,\theta_{h'}),\qquad y_i=\mathrm{Categorical}\!\left(\phi([h_i;h_i']);\,\theta_0\right) \qquad (22)$$


where xi|i=1m is the input sequence with m words, hi is the i-th hidden state, h0 and hm+1′ are terminal hidden states set to zero, [.;.] denotes vector concatenation, and ϕ is a linear transformation. The dimension of the hidden layer is set as 200. A dropout rate of 0.5 is applied to all hidden layers during training.
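By way of a non-limiting illustration, the bi-directional tagger of Eq. 22 may be sketched in PyTorch as follows; the class and parameter names are assumptions:

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)        # produces h_i and h_i'
        self.dropout = nn.Dropout(0.5)
        self.phi = nn.Linear(2 * hidden, num_tags)        # linear transformation phi

    def forward(self, x):
        out, _ = self.bilstm(self.embed(x))               # [h_i; h_i'] for each word
        return self.phi(self.dropout(out))                # categorical tag scores y_i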


For the private attribute discriminator 108, feed-forward networks are used with a single hidden layer with the dimension of the hidden state set as 200, and a sigmoid output layer (determined via grid search). The input to this network is the final hidden representation [hm; h0′]. For the hyperparameters, the values of α and λ are set as α=1 and λ=0.01, which are determined through cross-validation. The upper-bound constraint for the value of ϵ is also set as c1=0.1.


Experimental Design

Ten-fold cross validation was performed for the POS tagging and sentiment analysis tasks. State-of-the-art research is followed and the accuracy score is reported to evaluate the utility of the generated data for the given POS tagging or sentiment analysis task. In particular, for the sentiment prediction task, accuracy was reported for correctly predicting the rating of reviews. Tagging accuracy for the POS tagging task was also reported. To examine the text representation in terms of obscuring private attributes, test performance was reported in terms of F1 score for predicting private attributes. Note that the private attributes for the sentiment task include age, gender and location, while the private attributes for the tagging task include gender and age.


DPText is compared in both tasks with the following baselines:


ORIGINAL: This is a variant of DPText and publishes the original representation z 122 without adding noise or utilizing DS discriminator 106 or DP discriminator 108.


DIFPRIV: This baseline adds Laplacian noise to the original representation z 122 according to Eq. 11






(i.e., Lap(Δ/ϵ) with ϵ=0.1 and Δ=2d) without utilizing the DS and DP discriminators 106 and 108. Note that this method makes the final representation ϵ-differentially private. The model was compared against this method to investigate the effectiveness of the semantic and private attribute discriminators 106 and 108.


ADV-ALL: This method utilizes the idea of adversarial learning and has two components, a generator and a discriminator. It generates a text representation that has high quality for the given task but poor quality for inference of private attributes. The model was compared against this method to see how well adding an optimal value of noise can preserve privacy in practice.


In both tasks, semantic discriminator DS 106 is trained on the training data and applied to the test data for predicting sentiment and POS tags. Similarly, private attribute discriminator DP 108 can be applied where it plays the role of an adversary trying to infer the private attributes of the user based on the textual representation. Private attribute discriminator DP 108 is also trained on the training data and applied to the test data for evaluation. A higher accuracy score for semantic discriminator DS 106 indicates that the representation has high utility for the given task, while a lower F1 score for private attribute discriminator DP 108 demonstrates that the textual representation has higher privacy for individuals due to obscuring their private information.


Experimental Results

Performance Comparison. For evaluating the quality of the learned text representation, questions Q1, Q2 and Q3 are answered for two different natural language processing tasks, i.e., sentiment prediction and POS tagging. The experimental results for different methods are demonstrated in Table 1.









TABLE 1
Accuracy for sentiment prediction and POS tagging and F1 for evaluating the private attribute prediction task.

(a) Sentiment Prediction Task

            Sentiment    Private Attribute (F1)
Model       (Acc)        Age       Loc       Gen
ORIGINAL    0.7493       0.3449    0.1539    0.5301
DIFPRIV     0.7397       0.3177    0.1411    0.5118
ADV-ALL     0.7165       0.3076    0.1080    0.4716
DPTEXT      0.7318       0.1994    0.0581    0.3911

(b) POS Tagging Task

            POS Tagging  Private Attribute (F1)
Model       (Acc)        Age       Gen
ORIGINAL    0.8913       0.4018    0.5627
DIFPRIV     0.8982       0.3911    0.5417
ADV-ALL     0.8901       0.3514    0.5008
DPTEXT      0.9257       0.2218    0.3865




Utility (Q1):

Sentiment Prediction Task. The results of sentiment prediction for DPTEXT are comparable to the ORIGINAL approach. This means that the representation by DPTEXT preserves the semantic meaning of the textual representation according to the given task (i.e., high utility). DIFPRIV performs slightly better than DPTEXT in preserving the semantic meaning of the text. The reason is that DPText applies noise at least as strong as DIFPRIV (or even stronger), and adding more noise results in a bigger utility loss. Despite adding more noise than DIFPRIV, the accuracy of DPTEXT is still comparable with DIFPRIV. This confirms the role of semantic meaning discriminator DS in preserving utility and semantic meaning, as it explicitly takes utility into consideration when adding noise. It is also observed that DPTEXT has better performance in terms of predicting sentiment in comparison to ADV-ALL. DPTEXT is different from ADV-ALL as it manipulates the original text representation by adding noise to it, while ADV-ALL generates a privacy preserving text representation from scratch. The benefit of DPTEXT over ADV-ALL is two-fold. First, the framework does not depend on the process which generates the original representation. In other words, this representation could be generated via any model such as doc2vec. Second, adding Laplacian noise to the text representation prevents an adversary from learning the original input text through reverse engineering by a GAN attack algorithm and also minimizes re-identification of users by guaranteeing ϵ-differential privacy.


POS Tagging Task. The accuracy of the POS tagging task is higher when DPText is utilized than when ORIGINAL is used. This is because POS tagging results are biased toward gender, age and location. In other words, this information affects the performance of the tagging task. Removing private information from the latent representation results in removing this type of bias for the tagging task. Therefore, the learned representation is more robust and results in more accurate tagging. DPText also has better performance than DIFPRIV due to the removal of private information and thus bias. Besides, results demonstrate that DPText outperforms ADV-ALL. These results indicate the effectiveness of DPText in preserving the semantic meaning of the learned text representation.


Privacy (Q2):

Sentiment Prediction Task. In the sentiment prediction task, DPTEXT has a significantly lower F1 score for inferring all three private attributes in comparison to ORIGINAL. This shows that DPTEXT outputs text representations that outperform ORIGINAL in terms of obscuring private information. Moreover, it was also observed that DPTEXT has significantly better performance in hiding private information than DIFPRIV. This indicates that solely adding noise and satisfying ϵ-differential privacy does not protect textual information against other types of attacks and leakage of users' private attributes. This further demonstrates the importance of private attribute discriminator DP in obscuring users' private information. It is also observed that the learned textual representation via DPTEXT hides more private information than ADV-ALL (lower F1 score). These results indicate that DPTEXT can successfully obscure private information.


POS Tagging Task. In the POS tagging task, the F1 scores of DPText for predicting the gender and age private attributes are significantly lower than the ORIGINAL approach. These results demonstrate the effectiveness of DPText in obscuring users' private attributes. Similarly, comparing the F1 scores of DPText and DIFPRIV shows that the final text representation output of DPText contains less private attribute information. This confirms the incapability of DIFPRIV in obscuring users' private information, and clearly shows the effectiveness of private attribute discriminator DP. This confirms that satisfying differential privacy does not necessarily protect against other types of attacks such as leakage of users' private attributes. Moreover, DPText outperforms the ADV-ALL method in terms of hiding users' age and gender information. This confirms that the learned textual latent representation by DPText preserves privacy by eliminating users' sensitive information with respect to the POS tagging task.


Utility-Privacy Relation (Q3):

Sentiment Prediction Task. For the sentiment prediction task, DPText has achieved the highest accuracy and thus reached the highest utility in comparison to other methods. It also has comparable utility results to ORIGINAL. However, ORIGINAL utility is preserved at the expense of significant privacy loss. In other words, ORIGINAL is not able to obscure users' private attribute information. Moreover, although DIFPRIV satisfies differential privacy and its performance is comparable with DPText for predicting sentiment, it performs poorly in obscuring private information. DIFPRIV may provide a weaker privacy guarantee compared with DPText, since the learned ϵ in DPText can be smaller than ϵ=0.1 in DIFPRIV. In contrast, DPText has significantly better (best) results in terms of privacy compared to the other approaches and also achieves the least utility loss in comparison to ADV-ALL. These results show that DPText not only protects users' privacy with respect to two different types of attacks, but also preserves the semantic meaning of the given text with respect to the task at hand.


POS Tagging Task. For the POS tagging task, the resultant representation from DPText achieves the highest utility in comparison to all other baselines. It also provides more accurate tagging than the ORIGINAL approach, as it removes the bias from the textual representation by obscuring age and gender attribute information. Moreover, DPText has the lowest F1 scores for predicting the age and gender attributes amongst all approaches, meaning that it performs the best in obscuring users' private attribute information. These results show the effectiveness of DPText in preserving semantic meaning and obscuring private information for more accurate tagging.


The results for the two natural language processing tasks indicate that DPText learns a textual representation that (1) does not contain private information, (2) guarantees differential privacy and thus protects users against leakage of their identity, and (3) preserves the semantic meaning of the representation for the given task.


Impact of Different Components. In this subsection, the impact of different private attribute discriminators on obscuring users' private information is investigated. To achieve this goal, three variants of the disclosed framework are explored, i.e., DPTEXTAGE, DPTEXTGEN, and DPTEXTLOC. In each of these variants, the model is trained with discriminator of just one of the private attributes. For example, DPTEXTAGE is trained solely with age discriminator and does not use any other private attribute discriminators during training phase. The performance comparisons for both sentiment prediction and POS tagging tasks are shown in Table 2.









TABLE 2
Impact of different private attribute discriminators on DPText for sentiment prediction and POS tagging tasks.

(a) Sentiment Prediction Task

            Sentiment    Private Attribute (F1)
Model       (Acc)        Age       Loc       Gen
DPTEXT      0.7318       0.1994    0.0581    0.3911
DPTEXTAGE   0.7573       0.2248    0.1012    0.3982
DPTEXTLOC   0.7360       0.2861    0.0731    0.4100
DPTEXTGEN   0.7347       0.2997    0.0623    0.4053

(b) POS Tagging Task

            POS Tagging  Private Attribute (F1)
Model       (Acc)        Age       Gen
DPTEXT      0.9257       0.2218    0.3865
DPTEXTAGE   0.9218       0.2111    0.4179
DPTEXTGEN   0.9361       0.2412    0.3916




Sentiment Prediction Task. In the sentiment prediction task, it is observed that using solely one of the private attribute discriminators can result in a representation which performs better in terms of sentiment prediction, in comparison to DPText in which all three private attribute discriminators are used (i.e., higher utility). This shows that obscuring all private attributes results in adding more noise and thus losing more of the quality of the resultant text representation. However, these variants perform poorly in terms of obscuring private attributes in comparison to the original DPText model. This shows that obscuring a specific private attribute can help with hiding information of other private attributes as well. This is because of the hidden relationship between different private attributes. In summary, these results indicate that although using one discriminator in the training process can help in preserving more semantic meaning, it can compromise the effectiveness of the learned representation in obscuring attributes.


POS Tagging Task. In the POS tagging task, results show that DPText achieves the best performance in the tagging task (i.e., higher utility) in comparison to other methods that solely use one of the private attribute discriminators. The reason is that the presence of age- and gender-related information in the text can negatively affect the tagging performance due to existing bias. DPTEXT is thus more effective in removing the information of all private attributes and the hidden existing bias in comparison to DPTEXTAGE and DPTEXTGEN. Removing bias leads to more accurate tagging. Similar to the sentiment prediction task, it is observed that DPTEXTGEN with only the gender attribute discriminator is less effective than DPTEXT in terms of hiding private attribute information. DPTEXTAGE, however, has the best results in terms of obscuring age attribute information while it is less effective in terms of hiding gender attribute information. This shows the hidden relationship between different private attributes.


Parameter Analysis. DPText has one important parameter α, which controls the contribution from private attribute discriminator DP. The effect of this parameter is investigated by varying it over [0.125, 0.25, 0.5, 1, 2, 4, 8, 16]. ORIGINAL-{AGE/GEN/LOC} shows the results for the corresponding task when the original text representation is utilized. Results are shown in FIGS. 7A and 7B, and FIGS. 7C and 7D, for sentiment prediction and POS tagging, respectively.


Parameter α controls the contribution of the private attribute discriminator. Surprisingly, in both the sentiment prediction and POS tagging tasks, as α increases the F1 scores for predicting the different private attributes first decrease, up to α=1, and then increase. In other words, the private attributes are obscured more effectively as α grows toward 1 and less effectively beyond that point. Moreover, as α increases, the accuracy of the sentiment prediction task decreases, showing that increasing the contribution of the private attribute discriminator reduces the utility of the resultant text representation. In the case of POS tagging, accuracy first increases and then decreases after α=1: removing age- and gender-related information removes bias from the learned text representation and improves tagging, but beyond α=1 the utility of the resultant representation decreases. These patterns are useful for selecting the value of parameter α in practice.


Moreover, in both tasks, even setting α=0.125 improves the amount of private information hidden in comparison to using the original representation. This observation supports the importance of the private attribute discriminator. Another observation is that, beyond α=1, continuing to increase α degrades the hiding of private attributes (i.e., the F1 scores increase) in both the sentiment prediction and POS tagging tasks. This is because the model can overfit as α increases, which leads to a learned text representation that is less effective at both obscuring private attributes and preserving the semantic meaning of the text.
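
By way of further illustration only, the following sketch shows one way the value of α could be selected in practice by sweeping the values listed above and keeping the setting that best hides the private attributes without unduly sacrificing utility. The helper train_and_evaluate, the 0.02 utility tolerance, and the numbers it returns are assumptions introduced here for exposition; the returned values are synthetic placeholders so that the example can execute and are not measured results.

import random

def train_and_evaluate(alpha: float, seed: int = 0):
    """Stub standing in for training DPText at the given alpha and measuring
    (utility accuracy, private-attribute F1) on held-out data. The values
    returned here are synthetic placeholders, not experimental results."""
    random.seed(seed + int(alpha * 1000))
    utility_acc = 0.74 - 0.01 * abs(alpha - 1.0) + random.uniform(-0.005, 0.005)
    attribute_f1 = 0.25 + 0.02 * abs(alpha - 1.0) + random.uniform(-0.005, 0.005)
    return utility_acc, attribute_f1

alphas = [0.125, 0.25, 0.5, 1, 2, 4, 8, 16]
results = {a: train_and_evaluate(a) for a in alphas}

# Among settings whose utility stays within 0.02 of the best observed utility,
# keep the alpha with the lowest private-attribute F1 (strongest obscuring).
best_utility = max(acc for acc, _ in results.values())
candidates = {a: f1 for a, (acc, f1) in results.items()
              if acc >= best_utility - 0.02}
chosen_alpha = min(candidates, key=candidates.get)
print("chosen alpha:", chosen_alpha)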



FIG. 8 is a schematic block diagram of an example device 300 that may be used with one or more embodiments described herein, e.g., as a component of framework 100.


Device 300 comprises one or more network interfaces 310 (e.g., wired, wireless, PLC, etc.), at least one processor 320, and a memory 340 interconnected by a system bus 350, as well as a power supply 360 (e.g., battery, plug-in, etc.).


Network interface(s) 310 include the mechanical, electrical, and signaling circuitry for communicating data over the communication links coupled to a communication network. Network interfaces 310 are configured to transmit and/or receive data using a variety of different communication protocols. As illustrated, the box representing network interfaces 310 is shown for simplicity, and it is appreciated that such interfaces may represent different types of network connections such as wireless and wired (physical) connections. Network interfaces 310 are shown separately from power supply 360; however, it is appreciated that the interfaces that support PLC protocols may communicate through power supply 360 and/or may be an integral component coupled to power supply 360.


Memory 340 comprises a plurality of storage locations that are addressable by processor 320 and network interfaces 310 for storing software programs and data structures associated with the embodiments described herein. In some embodiments, device 300 may have limited memory or no memory (e.g., no memory for storage other than for programs/processes operating on the device and associated caches).


Processor 320 comprises hardware elements or logic adapted to execute the software programs (e.g., instructions) and manipulate data structures 345. An operating system 342, portions of which are typically resident in memory 340 and executed by the processor, functionally organizes device 300 by, inter alia, invoking operations in support of software processes and/or services executing on the device. These software processes and/or services may comprise DPText process/services 344, described herein. Note that while DPText process/services 344 is illustrated in centralized memory 340, alternative embodiments provide for the process to be operated within the network interfaces 310, such as a component of a MAC layer, and/or as part of a distributed computing network environment.


It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules or engines configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). In this context, the terms module and engine may be interchangeable. In general, the term module or engine refers to a model or an organization of interrelated software components/functions. Further, while the DPText process 344 is shown as a standalone process, those skilled in the art will appreciate that this process may be executed as a routine or module within other processes.


It should be understood from the foregoing that, while particular embodiments have been illustrated and described, various modifications can be made thereto without departing from the spirit and scope of the invention as will be apparent to those skilled in the art. Such changes and modifications are within the scope and teachings of this invention as defined in the claims appended hereto.

Claims
  • 1. A method of generating a modified latent text representation for a document, comprising: utilizing a processor in communication with a tangible storage medium storing instructions that are executed by the processor to perform operations comprising: generating an initial latent representation representative of text in a document; inferring an amount of noise to be added to the initial latent representation by finding a privacy budget value that minimizes a loss between a predicted semantic label and a ground truth semantic label for the initial latent representation and maximizes a loss between a predicted private attribute label and a ground truth private attribute label for the initial latent representation; and adding the amount of noise to the initial latent text representation to generate a modified latent text representation.
  • 2. The method of claim 1, wherein the initial latent representation is generated using an auto-encoder trained to generate the initial latent representation from the text in the document.
  • 3. The method of claim 2, further comprising training the autoencoder by: generating an encoded latent representation representative of the text in the document by applying an encoder to the document; constructing a reconstructed document including reconstructed text representative of the text in the encoded latent representation by applying a decoder to the encoded latent representation; and identifying a plurality of autoencoder parameters that minimize a loss between the text in the document and the reconstructed text in the reconstructed document.
  • 4. The method of claim 1, further comprising: optimizing a set of semantic discriminator classifier parameters and a privacy budget value that minimize the loss between the predicted semantic label and the ground truth semantic label for the initial latent representation.
  • 5. The method of claim 4, further comprising: adding a first amount of noise to the initial latent representation to generate a modified latent representation.
  • 6. The method of claim 4, further comprising: generating the predicted semantic label by applying a classifier to the modified latent representation; minimizing a loss between the predicted semantic label and the ground truth semantic label; and identifying the set of semantic discriminator classifier parameters associated with the lowest loss value between the predicted semantic label and the ground truth semantic label.
  • 7. The method of claim 1, further comprising: selecting the privacy budget value that is associated with a lowest loss value between the predicted semantic label and the ground truth semantic label and that is associated with a highest loss value between the predicted private attribute label and a ground truth private attribute label.
  • 8. The method of claim 1, further comprising: optimizing a set of private attribute discriminator parameters and a privacy budget value that maximize the loss between the predicted private attribute label and the ground truth private attribute label for the initial latent representation.
  • 9. The method of claim 8, wherein the optimization of the set of private attribute discriminator parameters is modeled as a minmax game.
  • 10. The method of claim 8, further comprising: generating the predicted private attribute label by applying a classifier to the modified latent representation; maximizing a loss between the predicted private attribute label and the ground truth private attribute label; and selecting the set of private attribute discriminator classifier parameters associated with the lowest loss value between the predicted private attribute label and the ground truth private attribute label.
  • 11. The method of claim 8, further comprising: determining the amount of noise to add based on the privacy budget value by sampling a value r from a uniform distribution such that:
  • 12. The method of claim 1, wherein the step of finding the privacy budget value is iteratively repeated until convergence.
  • 13. The method of claim 12, wherein the process of adding the amount of noise to the initial latent text representation to generate a modified latent text representation runs concurrently with finding the privacy budget value.
  • 14. A computer system for generating a modified latent text representation for a document, comprising: at least one processor in communication with a memory and operable for execution of a plurality of modules, the plurality of modules including: an auto-encoder configured to generate an initial latent representation representative of text in a document; a noise adder module configured to receive the initial latent representation and add an amount of noise to the initial latent representation to generate a modified latent text representation based on a privacy budget value; a semantic meaning discriminator module configured to optimize a set of semantic discriminator classifier parameters and the privacy budget value such that a loss is minimized between the predicted semantic label and the ground truth semantic label for the initial latent representation; and a private attribute discriminator module configured to optimize a set of private attribute discriminator parameters and the privacy budget value such that a loss is maximized between the predicted private attribute label and the ground truth private attribute label for the initial latent representation.
  • 15. The computer system of claim 14, wherein the auto-encoder module is configured to: generate an encoded latent representation representative of the text in the document by applying an encoder to the document; construct a reconstructed document including reconstructed text representative of the text in the encoded latent representation by applying a decoder to the encoded latent representation; and identify a plurality of autoencoder parameters that minimize a loss between the text in the document and the reconstructed text in the reconstructed document.
  • 16. The computer system of claim 14, wherein the semantic meaning discriminator module is configured to: generate the predicted semantic label by applying a first classifier to the modified latent representation; minimize the loss between the predicted semantic label and the ground truth semantic label; and identify the set of semantic discriminator classifier parameters associated with the lowest loss value between the predicted semantic label and the ground truth semantic label.
  • 17. The computer system of claim 16, wherein the first classifier is implemented using a recurrent neural network that takes the set of semantic discriminator classifier parameters and the modified latent text representation as input.
  • 18. The computer system of claim 14, wherein the private attribute discriminator module is configured to: generate the predicted private attribute label by applying a second classifier to the modified latent representation; maximize a loss between the predicted private attribute label and the ground truth private attribute label; and select the set of private attribute discriminator classifier parameters associated with the lowest loss value between the predicted private attribute label and the ground truth private attribute label.
  • 19. The computer system of claim 18, wherein the second classifier is implemented using a recurrent neural network that takes the set of private attribute discriminator classifier parameters and the modified latent text representation as input.
  • 20. The computer system of claim 18, wherein the privacy budget value is associated with a lowest loss value between the predicted semantic label and the ground truth semantic label and a highest loss value between the predicted private attribute label and a ground truth private attribute label.
CROSS-REFERENCE TO RELATED APPLICATIONS

This is a non-provisional application that claims benefit to U.S. Provisional Patent Application Ser. No. 63/018,287 filed Apr. 30, 2020, which is herein incorporated by reference in its entirety.

GOVERNMENT SUPPORT

This invention was made with government support under W911NF-15-1-0328 awarded by the Army Research Office, under 1614576 awarded by the National Science Foundation and under N00014-17-1-2605 awarded by the Office of Naval Research. The government has certain rights in the invention.

Provisional Applications (1)
Number Date Country
63018287 Apr 2020 US