This disclosure is generally related to machine learning and data classification. More specifically, this disclosure is related to a system and method for differentially private pool-based active learning.
In the field of machine learning, an essential operation involves training a classifier using labeled data. Traditional non-interactive supervised learning is often label-hungry, i.e., a very large number of labeled samples are necessary to train an accurate classifier. In contrast, active learning approaches seek to train a classifier using fewer informative samples, rather than employing the very large number of labeled samples. This can be particularly useful when very little labeled data is available, or when labeling is expensive.
One specific field of active learning involves creating and analyzing privacy-aware variants, which can be relevant in many practical applications. One such practical application is federated learning, in which an accurate model is to be trained using data that is distributed over a large number of clients. One way to achieve this is for a centralized node to send an initial crude model to the clients, ask the clients to independently update the model based on their local data, and then aggregate the individual client models. In this approach, although the clients do not send any data to the aggregator, privacy cannot be guaranteed. An adversarial aggregator can observe the client models and make inferences about a client's local and potentially sensitive data using model inversion and membership inference attacks.
Some differentially private mechanisms have been proposed to provide strong statistical guarantees against the success of such attacks. However, these mechanisms target training machine learning models in the traditional non-interactive supervised learning setting (where an abundance of labeled data is available). There has been significantly less investigation into differentially private mechanisms for active learning.
One embodiment provides a system for facilitating data classification. During operation, the system determines a version space associated with a set of data comprising a pool of unlabeled samples and a first plurality of labeled samples, wherein the version space includes a first set of classifiers corresponding to the first plurality of labeled samples. The system selects, from the pool of unlabeled samples, a second plurality of unlabeled samples comprising informative samples and non-informative samples, wherein a respective informative sample corresponds to a first hyperplane which intersects the version space, and wherein a respective non-informative sample corresponds to a second hyperplane which does not intersect the version space. The system acquires labels corresponding to the second plurality of unlabeled samples to obtain a third plurality of labeled samples. The system updates the first set of classifiers based on the third plurality of labeled samples to obtain a second set of classifiers in the version space, thereby improving accuracy of the first set of classifiers.
In some embodiments, selecting the second plurality of unlabeled samples is determined using randomized trials with respect to a Bernoulli distribution.
In some embodiments, selecting the second plurality of unlabeled samples comprises selecting the informative samples from the second plurality of unlabeled samples by determining whether an informative sample should be selected in a randomized trial according to a first random probability distribution (e.g., a Bernoulli distribution). In response to determining that a first informative sample should be selected in a randomized trial with respect to the first random probability distribution, the system acquires a label corresponding to the first informative sample; and in response to determining that a second informative sample should not be selected in a randomized trial with respect to the first random probability distribution, the system returns the second informative sample to the pool of unlabeled samples.
In some embodiments, selecting the second plurality of unlabeled samples comprises selecting the non-informative samples from the second plurality of unlabeled samples by determining whether a non-informative sample should be selected in a randomized trial with respect to a second random probability distribution. In response to determining that a first non-informative sample should be selected in a randomized trial with respect to the second random probability distribution, the system acquires a label corresponding to the first non-informative sample; and in response to determining that a second non-informative sample should not be selected in the randomized trial with respect to the second random probability distribution, the system removes the second non-informative sample from the pool of unlabeled samples.
In some embodiments, the version space represents a volume comprising: the first set of classifiers indicated as points in an input space associated with the set of data; the pool of unlabeled samples indicated as a first set of hyperplanes in the input space; and labeled samples, including one or more of the first and the third plurality of labeled samples, indicated as a second set of hyperplanes in the input space.
In some embodiments, the system updates the first set of classifiers based on the third plurality of labeled samples and further based on the first plurality of labeled samples.
In some embodiments, the first plurality of labeled samples and the third plurality of labeled samples comprise currently labeled samples. The system trains a classifier for the set of training data based on all the currently labeled samples.
In some embodiments, the first plurality of labeled samples and the third plurality of labeled samples comprise currently labeled samples. The system trains a classifier for the set of training data based on a subset of the currently labeled samples, wherein the subset contains a plurality of recently labeled samples and excludes a plurality of older labeled samples.
In some embodiments, updating the first set of classifiers is based on one or more of: an output perturbation; an objective perturbation; and an exponential mechanism.
In some embodiments, a respective classifier is a Support Vector Machine (SVM) classifier. The system orders the unlabeled samples based on a closeness to an optimal classifier for the labeled samples to obtain an ordered list of unlabeled samples; and for each unlabeled sample in a first portion of the ordered list, in descending order, the system performs the following operations. In response to determining that a first unlabeled sample should be selected in a randomized trial with respect to a first random probability distribution, the system acquires a label corresponding to the first unlabeled sample; and in response to determining that a second unlabeled sample should not be selected in a randomized trial with respect to the first random probability distribution, the system returns the second unlabeled sample to the pool of unlabeled samples.
In some embodiments, determining the first portion of the ordered list is based on determining whether a respective sample falls in an informative band associated with the optimal classifier.
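One way to sketch the ordering and informative-band selection described in these embodiments is the following Python fragment. It is an illustrative approximation, not the disclosed implementation: the function names, the band width, the Bernoulli probability p, and the fixed random seed are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (assumption)

def order_by_closeness(w, pool):
    """Order unlabeled samples by |w . z| / ||w||, the distance of each
    sample to the current separating hyperplane; closer is more informative."""
    dists = np.abs(pool @ w) / np.linalg.norm(w)
    return np.argsort(dists), dists  # ascending: closest samples first

def select_with_band(w, pool, band_width, p):
    """Walk the ordered list; inside the informative band, query each sample
    with probability p (Bernoulli trial); unselected samples stay in the pool."""
    order, dists = order_by_closeness(w, pool)
    queried, returned = [], []
    for idx in order:
        if dists[idx] > band_width:
            break                      # remaining samples lie outside the band
        if rng.random() < p:
            queried.append(int(idx))   # acquire a label for this sample
        else:
            returned.append(int(idx))  # returned to the pool of unlabeled samples
    return queried, returned
```

With p close to 1, the selection approaches the non-private rule of querying every sample in the band; lowering p trades label efficiency for privacy.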
In the figures, like reference numerals refer to the same figure elements.
The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
I. Introduction and Overview
The embodiments described herein provide a system for efficiently training a machine learning model using the active learning modality while preserving the privacy of data points chosen for training. The system is based on the fact that active learning involves both selection of samples for labeling (“selection step” or “sample selection step”) and update of a classifier as new labeled samples become available (“update step” or “classifier update step”). The system can preserve differential privacy during the selection step by randomizing the sample selection procedure. The system can also preserve differential privacy during the update step by randomizing the classifier model that is released to the public.
As described herein, traditional non-interactive supervised learning is often label-hungry, i.e., a very large number of labeled samples are necessary to train an accurate classifier. In contrast, active learning approaches seek to train a classifier using fewer informative samples, rather than employing the very large number of labeled samples. This can be particularly useful when very little labeled data is available, or when labeling is expensive.
Creating and analyzing privacy-aware variants in active learning can be relevant in many practical applications, such as federated learning, in which an accurate model is to be trained using data that is distributed over a large number of clients. One way to achieve this is for a centralized node to send an initial crude model to the clients, ask the clients to independently update the model based on their local data, and then aggregate the individual client models. In this approach, although the clients do not send any data to the aggregator, privacy cannot be guaranteed. An adversarial aggregator can observe the client models and make inferences about a client's local and potentially sensitive data using model inversion and membership inference attacks.
Some differentially private mechanisms have been proposed to provide strong statistical guarantees against the success of such attacks. However, these mechanisms target training machine learning models in the traditional non-interactive supervised learning setting (where an abundance of labeled data is available). There has been significantly less investigation into differentially private mechanisms for active learning.
The embodiments described herein provide a system which facilitates data classification, specifically, which facilitates achieving differential privacy in a pool-based active learning setting. The system can select unlabeled samples for querying in a privacy-aware manner. Subsequently, the system can perform a classifier update in a privacy-aware manner. The privacy-aware selection step may be performed based on, e.g., a Bernoulli selection, a version space concept, and an informative band around the classifier. The privacy-aware update step of the model may be based on, e.g., an output perturbation, an objective perturbation, and an exponential mechanism. Furthermore, the model may be updated using a previous classifier and new labeled data. The model may also be updated without a previous classifier but using all data labeled so far (e.g., all currently labeled data), or on a subset of all currently labeled data.
The system analyzes pool-based active learning under a differential privacy guarantee. At every active learning iteration, the system selects some samples to be labeled by an oracle, which returns new labels. The system then uses these new labels to update the classifier. To preserve differential privacy during both the sample selection step and the classifier update step, the system uses the concept of a version space of possible hypotheses (i.e., classifiers). This concept helps establish a principled notion of the informativeness of a pool sample: when informative samples are labeled and used for training, the version space shrinks, yielding increasingly accurate classifiers. The version space concept and an analysis of using the version space in an active learning workflow without privacy considerations are described below in Section III.
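As a toy illustration of this iteration (not the disclosed system), consider one-dimensional separable data, where the "classifier" is a scalar threshold; the `oracle` and `retrain` helpers below are hypothetical stand-ins for the labeling oracle and the training procedure.

```python
def oracle(z):
    # Hypothetical labeling oracle for illustration: the true label is sign(z).
    return 1 if z >= 0 else -1

def retrain(labeled):
    # Toy update step: place the threshold midway between the two classes.
    neg = max(z for z, y in labeled if y == -1)
    pos = min(z for z, y in labeled if y == +1)
    return (neg + pos) / 2.0

def active_learning_round(threshold, pool, labeled, band=1.0):
    """One iteration: query samples near the decision boundary (the
    informative ones), then update the classifier with the new labels."""
    selected = [z for z in pool if abs(z - threshold) <= band]
    for z in selected:
        labeled.append((z, oracle(z)))  # acquire a label from the oracle
        pool.remove(z)                  # labeled samples leave the pool
    return retrain(labeled), pool, labeled
```

Here, informativeness is approximated by closeness to the current threshold; Section III makes this notion precise via the version space.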
To provide differential privacy, the system queries the oracle with both informative and non-informative samples using a simple randomized sampling scheme. This disclosure establishes the differential privacy guarantee and characterizes the increase in label complexity due to the privacy mechanism, as described below in Section IV. Applying this theoretical analysis in practice using an implementation of a Support Vector Machine (SVM)-based active learner is described below in Section V.
Users of the system can include an individual with a smartphone, a mobile device, or a computing terminal. Users of the system can also include any client in a federated learning setting, which is a machine learning setting where a goal is to train a high-quality centralized model with training data distributed over a large number of clients each with potentially unreliable and relatively slow network connections. Thus, the embodiments described herein can result in more efficiently training the machine learning model, which can also result in an improved model and a more efficient overall user experience. Furthermore, by efficiently training a machine learning model using the active learning modality while preserving the privacy of the data points chosen for training, the embodiments described herein provide an improvement in the functioning of a computer in terms of both efficiency and performance. The embodiments described herein also result in an improvement to several technologies and technical fields, including but not limited to: artificial intelligence; machine learning and analytics; database protection; data mining (including of a significant volume of data); data classification; data regressions; and anomaly detection.
Exemplary Computer System
During operation, device 108 can request from device 104 training data (not shown), and device 104 can send training data 120 to device 108. Training data can include a plurality of labeled data or labeled samples. Device 108 can receive training data 120 (as training data 122), and perform a train model 124 function based on training data 122. The model can include one or more classifiers based on training data 122 (e.g., the labeled samples). Device 108 can also have access to a set of data which includes a pool of unlabeled samples and a plurality of already labeled samples (i.e., training data previously received by device 108 and used by device 108 to train the model, also referred to as “previously labeled samples”).
Subsequently, user 112 via device 102 can send a request model 130 to device 108. Device 108 can receive request model 130 (as a request model 132), and can perform a determine version space 134 function. The version space is constructed using the labeled samples received to date by device 108, such as both training data 122 and previously labeled samples. The version space can also represent a volume which includes: a first set of classifiers indicated as points in an input space associated with an overall set of data; a pool of unlabeled samples indicated as a first set of hyperplanes in the input space; and the labeled samples received to date, including training data 122, previously labeled samples, and any subset of all currently labeled samples, where the labeled samples are indicated as a second set of hyperplanes in the input space. An exemplary version space is described below.
Device 108 can select, from the pool of unlabeled samples, a plurality of unlabeled samples (via a select unlabeled samples 136 function), which can include both informative and non-informative samples. An informative sample can correspond to a hyperplane which intersects the version space, while a non-informative sample can correspond to a hyperplane which does not intersect the version space, as described below.
Based on the received labels 146, device 108 can update the model by performing an update classifier(s) 148 function. As a result, device 108 can obtain a second set of classifiers which improve the accuracy of the first set of classifiers. Device 108 can return a model with updated classifier(s) 150 to device 102. Device 102 can receive model 150 (as a model with updated classifier(s) 152).
Upon receiving model 152, device 102 can perform an action (function 160). For example, device 102 can display, on its display screen 114, a visual indication of model 152. The visual indication can also include the determined version space 134 which indicates the unlabeled samples as a first type of hyperplane and the labeled samples as a second type of hyperplane. The visual indication can also depict the input space as the version space, specifically, as a volume bounded by the hyperplanes representing the labeled samples, and can also indicate the first set of classifiers (and/or the obtained set of classifiers) as points in the input space. An exemplary version space is described below.
Furthermore, upon viewing the visual indication of model 152 on display 114, user 112 can perform an action (function 162). The action can include sending another manual request to update the model. In some embodiments, user 112 can review the data, classifiers, and labeled samples, and, e.g., review the classified data in light of other historical data, remediate a physical issue related to the classified data, and perform any physical action relating to the visual indication of model 152 on display 114. For example, user 112 can perform an action which can affect and improve the operation and performance of a physical (e.g., manufacturing) system associated with the data set of the input space. The action can be a remedial or a corrective action to ensure consistent and smooth operation of the overall physical system. User 112 can also monitor, observe, and classify subsequent testing data to determine whether the actions of user 112 have the intended effect.
In some embodiments, device 108 trains the model without receiving any requests from another device (e.g., without receiving request model 130/132 from device 102). Device 108 can subsequently publish, or make available publicly, the trained model at each and every iteration, based on the parameters used as described below in relation to Section IV.D. Device 108 can therefore provide differential privacy in a pool-based setting for active learning, based on the methods described herein.
II. Related Work
Differential privacy has been extensively studied and applied. However, as noted earlier, there has been significantly less investigation into differentially private mechanisms for active learning. In one work, a differentially private anomaly detector is trained using active learning in a streaming (online) modality. See M. Ghassemi, A. Sarwate, and R. Wright, “Differentially private online active learning with applications to anomaly detection,” in Proc. Workshop on Artificial Intelligence and Security, pages 117-128, 2016 (hereinafter “Ghassemi”). In Ghassemi, informative samples from the data stream are selected for labeling by an oracle, and using the new labels, a classifier is updated until it reaches a desired accuracy. Another work describes a heuristic to identify and select informative samples (see S. Tong and D. Koller, “Support vector machine active learning with applications to text classification,” Journal of Machine Learning Research, 2(November):45-66, 2001 (hereinafter “Tong-Koller”)).
However, Tong-Koller is not adapted to the differential privacy of active learning in a pool-based setting, and Ghassemi involves selecting informative samples from a data stream or an online setting. That is, if a sample is not selected the first time it occurs, it is never examined again. In contrast, in the embodiments described herein, the system considers a pool-based setting, i.e., a data sample that is not initially chosen for labeling can be returned to the pool for a possible later labeling opportunity.
Furthermore, in the embodiments described herein, the system analyzes the differentially private mechanism using the version space concept. The version space concept has been used extensively in the analysis of active learning, in particular, to prove convergence of the classifier and to derive bounds on the label complexity. Analyzing the evolution of a differentially private active learner from the perspective of its shrinking version space can be useful for at least two reasons. First, it suggests a natural and principled approach to choosing samples for labeling while preserving privacy, and shows how this approach can be approximated for use with practical classifier models. Second, it indicates precisely when adding noise to a classifier to make it differentially private will also make it less accurate. This, in turn, reveals both good and bad ways to perform the classifier update step.
III. Active Learning Setting: No Privacy
This section provides a review of the version space concept and the disagreement coefficient associated with a classifier. In the embodiments described herein, only the two-class problem is considered, as it is widely encountered in concept learning, anomaly detection, and other related problems. For simplicity, the development is restricted to the case in which the two classes are linearly separable, although generalizing to the agnostic case is also possible. Thus, the classifier is a hyperplane that separates the data samples into classes with labels ±1.
Assume that the active learner has: n training samples denoted by the (sample, label) pairs 𝒮={(xi, yi): xi∈ℝd, i=1, 2, . . . , n}; and a pool of m unlabeled samples 𝒫={zj∈ℝd, j=1, 2, . . . , m}.
The xi and zj above belong to an input space χ. An initial classifier w0 has been trained on 𝒮. The system can query an oracle (e.g., device 104) for the labels of samples in 𝒫.
A. Version Space
For labeled data xi, i=1, 2, . . . , n separated by a hyperplane w, a hypothesis h(⋅) is defined as h(xi)=wTxi/∥w∥, where ∥⋅∥ is the ℓ2 norm and (⋅)T is the transpose operator. As a result, in the separable case, a label is assigned as yi=1 if h(xi)>0 and yi=−1 otherwise. Thus, yih(xi)>0.
Definition 1: The version space 𝒱 is the set of all possible hypotheses that separate the labeled data in the feature space χ. The version space is defined in terms of the hypotheses h as well as the hyperplanes w as:

𝒱={h: yih(xi)>0, i=1, 2, . . . , n}={w∈ℝd: yiwTxi>0, i=1, 2, . . . , n}
The version space concept can be used to describe the evolution of the active learner, both in the non-private case and the differentially private case. Consider a dual representation in which the points in the input space χ are hyperplanes in the hypothesis space ℋ, while candidate separators w are just points in ℋ. In this representation, it can be shown that the optimal classifier w* is the center of mass of 𝒱. An approximation of w* is the classifier that maximizes the margin with respect to each class, given by a Support Vector Machine (SVM) classifier.
Moreover, it can be seen from diagram 250 that unlabeled samples with hyperplanes which intersect version space 260 are informative, as compared to unlabeled samples with hyperplanes which do not intersect version space 260 (i.e., that pass outside version space 260). For example, hyperplane 258 (indicated by a green dashed line) does not intersect version space 260, and therefore corresponds to an unlabeled sample which is non-informative. On the other hand, hyperplane 256 (also indicated by a green dashed line) does intersect version space 260, and therefore corresponds to an unlabeled sample which is informative. Thus, the unlabeled samples whose hyperplanes intersect version space 260 may be considered informative and good candidates for querying, while the unlabeled samples whose hyperplanes do not intersect version space 260 may be considered non-informative and weak candidates for querying.
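The intuition can be checked in one dimension, where classifiers are thresholds and the version space reduces to the interval between the largest negatively labeled point and the smallest positively labeled point; the helper names below are illustrative, not from the disclosure.

```python
def version_space(labeled):
    """For 1-D threshold classifiers, the version space is the interval
    between the largest negatively labeled point and the smallest positively
    labeled point: every threshold inside it separates the labeled data."""
    lo = max(z for z, y in labeled if y == -1)
    hi = min(z for z, y in labeled if y == +1)
    return lo, hi

def is_informative(z, labeled):
    """A sample is informative iff its 'hyperplane' intersects the version
    space, i.e., iff classifiers in the version space disagree on its label."""
    lo, hi = version_space(labeled)
    return lo < z < hi
```

Labeling a point inside the interval necessarily shrinks the interval, which mirrors the statement that labeling informative samples reduces the version space.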
B. Active Learning in the Pool-Based Setting
For the non-private case, consider the popular Cohn Atlas Ladner (CAL) algorithm for active learning in the separable case. This approach is not necessarily constructive, and is meant to develop a theoretical understanding. To construct an actual active learner, certain modifications are made, as described below in Section V. The task of the active learner is to query an oracle for labels of points in 𝒫 and, using the received labels, keep updating both the version space and the classifier. Let t=1, 2, . . . , T denote the step number at which the classifier and version space are updated. Let 𝒬t be the set of samples that have been queried, labeled, and removed from the pool after the end of the tth step. Define 𝒬0=∅, the empty set. At the beginning of the tth step, assume that c unlabeled samples drawn from 𝒫\𝒬t−1 are to be queried, where \ is the set difference operator. After training using the newly available labels, the classifier wt can be released.
C. Informative Samples Reduce the Version Space
The CAL method can be described as choosing the c samples per step as follows. Denote the version space after the tth step by 𝒱t. Recall that points in the current pool, 𝒫\𝒬t, belong to the input space χ, and are thus hyperplanes in the hypothesis space ℋ. By definition, the unlabeled samples whose hyperplanes intersect 𝒱t are the informative ones.
D. Label Complexity
The label complexity of a classifier can be defined as the number of labels that must be obtained before the classifier can be trained to a desired accuracy. For traditional non-interactive supervised learning, the label complexity required to reach an error η∈[0,1] with respect to the optimal classifier is given by Ω(d/η). Here, d is the Vapnik and Chervonenkis (“VC”) dimension of the classifier. The VC dimension is the cardinality of the largest set of points that can be shattered (i.e., separated with zero error under every possible labeling) by the given family of classifiers. In practice, the accuracy can be computed over a labeled dataset—termed the “holdout” dataset—that is representative of the underlying data distribution but is not used in training.
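The holdout evaluation mentioned above can be written in a few lines; `classify` and the holdout pairs below are placeholders, not from the disclosure.

```python
def holdout_accuracy(classify, holdout):
    """Fraction of holdout samples (never used in training) that the
    trained classifier labels correctly."""
    correct = sum(1 for x, y in holdout if classify(x) == y)
    return correct / len(holdout)
```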
Because invoking the oracle is costly, the label complexity must be controlled. It is well known that applying active learning heuristics to choose only informative samples to be queried can incur significantly lower label complexity than non-interactive supervised learning which trains on all samples.
Lemma 1: The active learning workflow described in Section III-C can output a hypothesis with an error less than η with high probability, after O(log(1/η)) rounds.
The proof requires that a large enough number of informative samples (denoted by γ≤c) be labeled prior to learning wt, for every t. Here, γ depends on the VC dimension of the classifier and the “disagreement coefficient.” It is sufficient to note that choosing γ samples ensures that the version space 𝒱t shrinks fast enough with increasing t, resulting in more and more accurate classifiers. The label complexity of active learning is thus given by O(γ log(1/η)). In other words, the label complexity is proportional to log(1/η), compared to 1/η for non-interactive supervised learning.
IV. Active Learning Setting with Differential Privacy
In the embodiments described herein, the system facilitates differentially private pool-based active learning. An analysis of pool-based active learning under the differential privacy paradigm is described herein. The adversarial model is described, and then the privacy aware active learning workflow is given. The workflow can satisfy the differential privacy guarantees. Then, the system can quantify the price paid for privacy in the form of increased label complexity. The effects of differentially private mechanisms in the version space are examined. Careful consideration of the privacy/performance tradeoff is necessary while updating the active learner in the differentially private setting.
A. Adversarial Model Under Differential Privacy
The adversarial model is slightly different from the one usually encountered in the standard supervised learning scenario. Because the learner typically starts with a small training set (i.e., n is small), it is not a goal to protect the privacy of the training set. Instead, the system aims to protect the privacy of the pooled samples whose labels are queried and used to update the classifier. Hence, the assumption is made that the adversary knows the training set of n samples. Furthermore, the assumption is made that the adversary possesses an adjacent pool of m samples denoted by 𝒫′={z′j, j=1, 2, . . . , m}, where there is a particular i≤m such that z′i≠zi, while for all valid j≠i it holds that z′j=zj. Crucially, the adversary does not know the index i of the sample that differs.
Moreover, the adversary should not be able to identify zi by observing any model update wt. Furthermore, the adversary should not be able to discover whether zi was used to train wt. Let the vector of classifiers be WT=(w1, w2, . . . , wT). Let c unlabeled samples be drawn from the pool and examined at each step, as in the non-private case. Let QT=(S1, S2, . . . , ST) with Sk=(s(k−1)c+1, . . . , skc). Here, Sk, k∈{1, . . . , T}, is a binary vector of length c containing selection results, in which s(k−1)c+j=1 if the jth sample from Sk was chosen for labeling by the oracle, and s(k−1)c+j=0 if it was not chosen for labeling by the oracle.
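The flat indexing of the selection vector QT can be sketched as follows; the helper names are ours, not the disclosure's.

```python
def selection_index(k, j, c):
    """Flat 1-based index of the j-th examined sample in step k, matching
    the s_(k-1)c+j indexing of the selection vector."""
    return (k - 1) * c + j

def flatten_selections(steps):
    """Concatenate the per-step binary vectors S_1, ..., S_T into Q_T.
    An entry of 1 means the sample was sent to the oracle; 0 means not."""
    q = []
    for s_k in steps:
        q.extend(s_k)
    return q
```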
It is required that the probability of deriving a certain classifier change by at most a small multiplicative factor, whether or not the differing sample was used for training the learner. Concretely, using the definition of ϵ-differential privacy:
P(WT, QT|𝒫)≤exp(ϵ)P(WT, QT|𝒫′)
Note that the adversary's view can include only the adjacent pool 𝒫′, and the outputs observed by the adversary are the vector of classifiers WT and the binary vector QT that indicates which samples are chosen for labeling.
B. Differentially Private Active Learning Workflow
As before, an initial classifier w0 is trained on n training samples. Based on the assumptions, w0 is available to the adversary. One goal is to improve the accuracy of the classifier with the help of the pool 𝒫, without revealing to the adversary which samples in 𝒫 caused the model updates. The same publishing schedule can be retained, i.e., c samples are queried at each step, and the classifier models w1, w2, . . . , wT are published and available to the adversary. To achieve privacy, the following privacy-aware version of the workflow in Section III is described below.
Assume that the system is at the beginning of the tth step of the model update process, the version space is 𝒱t−1, and the corresponding model maintained by the learner is wt−1. The learner has access to the pool 𝒫t−1=𝒫\𝒬t−1. Now, consider the hyperplanes in the hypothesis space ℋ corresponding to the unlabeled samples in 𝒫t−1. Some of these hyperplanes intersect 𝒱t−1, while others pass outside it.
In the non-private case, a hyperplane that intersects 𝒱t−1 represents an informative sample whose label should be queried. However, in the differentially private version, a different approach must be adopted in order to choose samples for querying. Concretely, for each i=1, 2, . . . , |𝒫t−1|, if the hyperplane corresponding to zi∈𝒫t−1 passes through 𝒱t−1, query zi with probability p>½ (e.g., a “first random probability distribution”). Otherwise, if the hyperplane corresponding to zi passes outside 𝒱t−1, query zi with probability 1−p (e.g., a “second random probability distribution”). If zi was informative but not chosen for querying, it is returned to the pool for possible querying later. If zi was non-informative and not chosen for querying, it is discarded or removed from the pool. The procedure is repeated for zi+1 until c samples from 𝒫t−1 have been examined. Note that this is inefficient compared to the non-private version because, in order to achieve privacy, not all informative samples (i.e., those whose hyperplanes intersect 𝒱t−1) are chosen, and some non-informative samples (i.e., those whose hyperplanes do not intersect 𝒱t−1) are chosen. The inefficiency depends on p.
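A minimal sketch of this randomized selection step, assuming a predicate that tests whether a sample's hyperplane intersects the current version space (all names, and the fixed seed, are illustrative):

```python
import random

def private_select(pool, intersects_version_space, p, c, rng=random.Random(0)):
    """Examine up to c pool samples. Informative samples (hyperplane intersects
    the version space) are queried with probability p > 1/2; non-informative
    samples with probability 1 - p. Unqueried informative samples return to
    the pool; unqueried non-informative samples are discarded."""
    assert p > 0.5
    to_query, back_to_pool, discarded = [], [], []
    for z in pool[:c]:
        informative = intersects_version_space(z)
        query_prob = p if informative else 1.0 - p
        if rng.random() < query_prob:
            to_query.append(z)        # send to the oracle for a label
        elif informative:
            back_to_pool.append(z)    # may be queried at a later step
        else:
            discarded.append(z)       # removed from the pool
    return to_query, back_to_pool, discarded
```

The closer p is to ½, the stronger the privacy of the selection but the more labels are spent on non-informative samples.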
Let us denote the non-private classifier trained using the newly labeled points by w̃t. From this classifier, the system can devise and release an ϵm-differentially private classifier wt. At each update step, the adversary's view can include the previously released (differentially private) classifier wt−1, an adjacent pool P′, and the binary vector St, indicating which samples have been chosen for labeling just before updating the classifier (as defined in Section IV-A). Thus, by applying the definition of differential privacy, the system can obtain, for each t<T:
P(wt|wt−1, St, P) ≤ exp(ϵm) P(wt|wt−1, St, P′)   Equation (1)
The approach of the embodiments described herein is agnostic to the particular mechanism used to achieve ϵm-differential privacy in wt. Thus, the system can use output perturbation, objective perturbation, or the exponential mechanism. To reiterate: (a) the model updates (or classifier models) w1, w2, . . . , wT are derived using a differentially private mechanism; and (b) wt has the desired accuracy η. As a result, a few privacy claims may be stated.
Proposition 1: As described above, at each step t, let Bernoulli(p) sampling be used to query samples whose hyperplanes intersect Vt−1, and Bernoulli(1−p) sampling be used to query samples whose hyperplanes do not intersect Vt−1. For p≥½, this selection procedure is ϵp-differentially private with ϵp = ln(p/(1−p)).
Proof: Assume that the samples in P and P′ are ordered consistently (e.g., in exactly the same sequence). While this may be a conservative assumption, the situation can occur, for example, if the learner and the adversary use a known algorithm to rank unlabeled samples based on their informativeness. Assume that the adversary has observed wt and knows its version space V′t. Since P and P′ differ in one element, V′t may or may not be the same as Vt.
Let si denote the selection variable for a pool sample zi∈P or z′i∈P′. Note that si=1 indicates that zi (or equivalently z′i) is selected for querying, while si=0 indicates otherwise. Then, to construct the bound for si=1, we use the worst-case situation in which the hyperplane corresponding to zi intersects Vt but the hyperplane corresponding to z′i does not intersect V′t. Thus,
The situation si=0 is argued similarly, for the worst case where the hyperplane corresponding to zi does not intersect Vt but the hyperplane corresponding to z′i does intersect V′t. As p>½, we can drop the absolute value notation, and the result follows.
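The worst-case bound in the proof can be checked numerically. The sketch below assumes (as the proof indicates) that the selection privacy loss is the logarithm of the worst-case ratio p/(1−p) between the querying probabilities under adjacent pools:

```python
import math

def epsilon_p(p):
    """Selection privacy loss per Proposition 1 (reconstructed here
    from the proof): the querying-probability ratio between adjacent
    pools is at most p/(1-p), so eps_p = ln(p/(1-p))."""
    return math.log(p / (1 - p))

# Sanity checks: eps_p = 0 at p = 1/2 (the selection reveals nothing
# about informativeness) and grows without bound as p -> 1 (the
# selection becomes deterministic, as in the non-private learner).
```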
Theorem 1: Suppose the differentially private learner is trained over T steps with c samples labeled per step. Let |P|=|P′|>Tc. The released classifiers are ϵ-differentially private, where ϵ=ϵm+ϵp.
Proof: Again assume that, at every step t, the adjacent pools Pt and P′t are indexed consistently (that is, ordered in exactly the same sequence). The classifier is updated at each step using an ϵm-differentially private mechanism. This results in:
The assumption is that the sample that differs between P and P′ was chosen for querying at step i≤T. To obtain Equation (2), observe that wt depends only on samples from P and on the immediately previous classifier wt−1. Note that step i is not necessarily the first time the differing sample was encountered. For instance, the sample may have been informative at step t<i (i.e., its hyperplane intersecting the version space Vt−1) but not selected for querying in the Bernoulli sampling process, and thus returned to the pool Pt. This creates the possibility of the differing sample being chosen at a later step. The set τ⊆{1, 2, . . . , T} in the second term of Equation (3) is the set of all steps (or time instances) at which the differing sample could be encountered in the pool-based active learning scenario. Note that this is significantly different from a stream-based (online) setting in which the differing sample is seen only once, whether it is chosen for querying or not. To obtain Equation (4), the system can use the definition of Sk in Section IV-A.
For p≥½, the double product term in Equation (4) is maximized in two cases. Either of the following two situations may occur: (a) the differing sample, which belongs to P, intersects Vi−1 and is queried by the learner, whereas its counterpart in P′ does not intersect V′i−1 but is queried by the adversary; or (b) at any step t≤T, the differing sample does not intersect Vt−1 and is not queried by the learner, whereas its counterpart in P′ intersects V′t−1 but is not queried by the adversary. In either situation (a) or (b), the probability ratio is p/(1−p). Because it is assumed and stipulated (as above) that a non-informative sample (one whose hyperplane does not intersect Vt−1) that is not chosen for querying is removed from the pool, situation (b) occurs at most once in T steps. Subsequently, we can bound the ratio in Equation (1) using Proposition 1 as:
where the last inequality follows from Proposition 1 and the definition of ϵm in Equation (1).
C. Effect of Privacy on Label Complexity
Proposition 2: For ½≤p<1 and classification error probability η, consider the privacy-aware active learning workflow as described in Section IV-B. The label complexity of this approach is O((1/p) log(1/η)).
Proof: As noted earlier in Lemma 1, the active learning algorithm without privacy outputs a hypothesis with error probability less than η in O(log(1/η)) rounds, provided at least γ informative samples are labeled at the tth step. In contrast, in the differentially private case, samples whose hyperplanes intersect Vt are queried only with probability p. Other, non-informative samples are queried with probability 1−p to create uncertainty about which samples from the pool are labeled. The non-informative samples do not contribute to the shrinkage of Vt. Thus, to ensure that at least γ informative samples are labeled per privacy-preserving selection step, it is necessary to query O(γ/p) samples per step. With this larger number of per-step queries, the conditions of Lemma 1 are again met, and the system can obtain a hypothesis with error less than η, with high probability, after O(log(1/η)) rounds. The effective label complexity is thus O((γ/p) log(1/η)), and the result follows. As p≥½, this is only a moderate increase in label complexity.
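A rough numerical rendering of Proposition 2, under the simplifying (and assumed) model that each informative label roughly halves the version space, so about log2(1/η) effective steps are needed:

```python
import math

def rounds_and_queries(eta, p, gamma=1):
    """Illustration of Proposition 2 under an assumed halving model:
    each step labels about `gamma` informative samples, each of which
    is taken to halve the version space, so ~log2(1/eta) steps are
    needed; each step costs about gamma/p queries in expectation.
    Returns (steps, expected_total_queries)."""
    steps = math.ceil(math.log2(1 / eta))
    expected_queries = steps * gamma / p
    return steps, expected_queries
```

For example, reaching error η = 1/1024 with p = ½ roughly doubles the number of queries relative to the non-private learner, matching the O((γ/p) log(1/η)) bound.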
D. Version Space and Utility of DP Mechanisms
The differentially private classifier wt is, at best, a noisy approximation of the optimal classifier wt* corresponding to the version space Vt. For the linearly separable case, if the noise is small enough to keep wt inside Vt, its consistency with respect to Vt is preserved. However, if too much noise is added, then wt moves out of Vt, which means that it will classify some of the labeled samples incorrectly.
This has important implications for how we evolve the classifier wt. One way is to update the previous noisy classifier wt−1 using the new labels obtained in the tth step. The preceding argument, however, suggests that this might compromise the classifier's consistency with respect to the version space.
A better approach is to first preserve all the samples labeled by the oracle until step t (i.e., “all currently labeled samples to date”), and then train a differentially private wt from those samples, without using wt−1. In some embodiments, the system can train a differentially private wt based on a subset of all the currently labeled samples to date, without using wt−1.
V. An SVM-Based Active Learner
Consider an experimental learner designed to evaluate SVM-based active learning with and without privacy. For simplicity, only one sample is queried at each step t, and its label is added to the set of already known labels. Using the available labels, a new non-private classifier wtSVM is trained using a dual SVM solver. To choose the most informative sample in the non-private case, the following heuristic approach (sometimes referred to as “uncertainty sampling”) may be used:
Choose the sample closest to the hyperplane representing the SVM classifier. This gives a concrete approach to training the active learner without having to explicitly maintain the version space. To see why this is reasonable, recall that the optimal classifier wt* is the center of mass of the version space Vt. Choosing the pool sample z whose hyperplane halves Vt reduces the error of wt* exponentially with the number of queried samples. Then, if Vt has a regular shape, the hyperplane corresponding to z would pass very close to wt*. Moreover, it turns out that the non-private SVM classifier (denoted by wtSVM) is an approximation to wt*. Hence, the system can choose to query the sample z whose hyperplane is closest to wtSVM. This heuristic approach leverages the version space-based development from the previous section, without requiring us to explicitly keep track of Vt. Concretely, this way of choosing a sample z to be queried ensures that Vt keeps shrinking reasonably fast with increasing t. As a consequence, a sequence of increasingly accurate classifiers wtSVM is learned. In the non-private case, each released classifier is given by wt=wtSVM.
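The uncertainty-sampling heuristic can be sketched as follows. The weight vector `w`, bias `b`, and pool layout are illustrative assumptions, not the patent's exact data structures:

```python
import numpy as np

def most_informative(w, b, pool):
    """Uncertainty sampling: return the index of the pool sample
    closest to the hyperplane w.x + b = 0 of the current
    (non-private) SVM classifier.

    w: (d,) weight vector; b: scalar bias; pool: (n, d) array.
    """
    # Perpendicular distance of each pool point to the hyperplane.
    distances = np.abs(pool @ w + b) / np.linalg.norm(w)
    return int(np.argmin(distances))
```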
To perform differentially private sample selection for querying (again without having to explicitly maintain the version space), the system can maintain a ranked list of pool points, based on their distance to wtSVM. Then, the system can implement a Bernoulli-p trial, i.e., toss a coin with bias p and, if the coin lands heads, query the top-ranked pool point (i.e., the closest pool point). If the coin lands tails, repeat the Bernoulli-p trial with the second closest pool point, and so on, until a sample is queried. All samples not chosen for querying can be returned to the pool for ranking and possible re-use in subsequent learning steps. The system can then retrieve the label of the single queried sample, add it to the set of already known labels, and use the dual SVM solver to derive a new clean classifier wtSVM.
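The ranked Bernoulli-p walk can be sketched as follows, again with hypothetical inputs (the distance ranking is assumed to serve as the informativeness ranking, per the heuristic above):

```python
import random
import numpy as np

def private_query_index(w, b, pool, p=0.75, rng=random):
    """Differentially private selection sketch: rank pool points by
    distance to the SVM hyperplane, then walk down the ranking,
    querying the current point with probability p (a Bernoulli-p
    trial) until a sample is chosen. Returns the queried index."""
    distances = np.abs(pool @ w + b) / np.linalg.norm(w)
    order = np.argsort(distances)      # closest (most informative) first
    for idx in order:
        if rng.random() < p:           # heads: query this point
            return int(idx)
    # All coins landed tails: query the last-ranked point
    # (an assumed fallback so that exactly one sample is queried).
    return int(order[-1])
```

All pool points other than the returned index remain available for re-ranking in subsequent steps, matching the procedure described above.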
To guarantee differential privacy in the update step, the sensitivity-based output perturbation approach may be used, where scalar zero-mean Laplace-distributed noise is added to each component of wtSVM. Thus, in the differentially private case, each released classifier is given by wt=wtSVM+νt. To obtain νt, a conventional approach for non-interactive supervised SVM learning may be used. Specifically, the scale parameter λt of the Laplacian noise components νti, i=1, . . . , d is given by:
where L is the Lipschitz constant, C is a cost parameter in the SVM dual optimization problem, κ is the kernel upper bound, d is the feature dimension, and nt is the number of labeled samples used to train the active learner at step t (the initial training samples plus those queried so far). In this situation, with the default 2-norm kernel, L=1, κ=1, and C is input to the dual optimization problem. It is also possible to derive wtSVM and νt by solving the primal problem, but it has been experimentally discovered that the primal solution is more sensitive to noise. A detailed comparison of primal and dual solvers for SVMs in the privacy-aware active learning context is not included herein. The distribution of noise added to each component of wtSVM is then given by the following relation for i=1, . . . , d:
The inverse dependence of λt on nt indicates a second privacy-utility tradeoff, in addition to the increase in label complexity: Although active learning guarantees that nt≪|P|, the inverse dependence unfortunately means that a classifier trained on nt samples should be released with more noise than one trained on all |P| samples. The extra noise may shift wt out of Vt, the version space of the corresponding noiseless classifier, thereby reducing its accuracy.
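The output perturbation release step can be sketched as follows. Because the exact expression for λt depends on the sensitivity analysis (L, C, κ, d, nt, ϵm) and is not reproduced here, the scale is treated as a precomputed input:

```python
import numpy as np

def release_private_classifier(w_svm, lam, rng=None):
    """Output perturbation sketch: add i.i.d. zero-mean Laplace noise
    of scale `lam` (the lambda_t above, assumed precomputed from the
    mechanism's sensitivity analysis) to each component of the
    non-private SVM weight vector before releasing it."""
    rng = np.random.default_rng(rng)
    noise = rng.laplace(loc=0.0, scale=lam, size=w_svm.shape)
    return w_svm + noise
```

Because λt shrinks as nt grows, later steps (with more labeled samples) can be released with less noise, which is one motivation for retraining on all labeled samples rather than perturbing an already-noisy classifier.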
VI. Experimental Evaluation
A synthetic dataset of 120 2-dimensional points is generated in two linearly separable classes.
An objective is to examine, first, the effect of differential privacy in the selection step (ϵp) on the label complexity and, second, the effect of differential privacy in the update step (ϵm) on the accuracy of the final released classifier wT. For each privacy setting, i.e., (ϵp, ϵm), the system can run the differentially private active learning experiment 5000 times.
The label complexity depicted in the histograms of
The accuracy plots in
VII. Summary
Thus, in the embodiments described herein, the system analyzes differentially private active learning from the perspective of its steadily shrinking version space in a pool-based setting. The privacy guarantees are described, and the analysis also reveals tradeoffs that must be considered in the design of differentially private active learning schemes. First, privacy-aware sample selection causes only a moderate increase in the label complexity. Second, privacy-aware learner updates require adding noise to the classifier, which may reduce its accuracy. Notably, the amount of noise added can be significantly more than that observed in non-interactive supervised learning because fewer samples are used for training. Care should be taken to ensure that noise added in successive update steps does not have a cumulative detrimental effect on the accuracy of the classifier.
In summary, it is preferable to train the active learner anew at each querying step, using all available labeled samples, rather than updating an existing noisy learner.
VIII. Exemplary Methods for Facilitating Data Classification and Achieving Differential Privacy in a Pool-Based Setting for Active Learning
The system acquires labels corresponding to the second plurality of unlabeled samples to obtain a third plurality of labeled samples (operation 506). The system updates the first set of classifiers based on the third plurality of labeled samples to obtain a second set of classifiers in the version space, thereby improving accuracy of the first set of classifiers. In some embodiments, the system trains a classifier for the set of training data based on all the current labeled samples, which can include the first plurality of labeled samples and the third plurality of labeled samples (e.g., the most recently determined labels), as described above in Section IV-D. In other embodiments, the system trains a classifier for the set of training data based on a subset of the currently labeled samples, wherein the subset contains a plurality of recently labeled samples and excludes a plurality of older labeled samples. The system can determine “recently” labeled and “older” labeled samples based on a predetermined threshold or time period, which can be automatically configured by the system or set by a user or administrator of the system.
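The two retraining variants above (all currently labeled samples versus a recency-based subset) can be sketched as follows; the timestamp-based window is a hypothetical rendering of the "predetermined threshold or time period":

```python
def training_subset(labeled, now, window=None):
    """Select the samples used to retrain the classifier.

    labeled: list of (sample, label, timestamp) tuples.
    window: None to use all currently labeled samples to date, or a
        time span (an assumed, configurable threshold) to keep only
        recently labeled samples and exclude older ones.
    """
    if window is None:
        return labeled                      # retrain on everything labeled so far
    return [item for item in labeled if now - item[2] <= window]
```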
The system determines whether a non-informative sample should be selected in a randomized trial with respect to a second random probability distribution (operation 612). If the non-informative sample should be selected in a randomized trial with respect to the second random probability distribution (decision 614), the system acquires a label corresponding to the non-informative sample (operation 616). If the non-informative sample should not be selected in a randomized trial with respect to the second random probability distribution, the system removes the non-informative sample from the pool of unlabeled samples (operation 618).
Exemplary Computer and Communication System
Content-processing system 818 can include instructions, which when executed by computer system 802, can cause computer system 802 to perform methods and/or processes described in this disclosure. Specifically, content-processing system 818 may include instructions for sending and/or receiving data packets to/from other network nodes across a computer network (communication module 820). A data packet can include data, a request, labels, a model, a classifier, training data, labeled samples, and unlabeled samples.
Content-processing system 818 can further include instructions for determining a version space associated with a set of data comprising a pool of unlabeled samples and a first plurality of labeled samples, wherein the version space includes a first set of classifiers corresponding to the first plurality of labeled samples (version space-determining module 822). Content-processing system 818 can include instructions for selecting, from the pool of unlabeled samples, a second plurality of unlabeled samples comprising informative samples and non-informative samples, wherein a respective informative sample corresponds to a first hyperplane which intersects the version space, and wherein a respective non-informative sample corresponds to a second hyperplane which does not intersect the version space (unlabeled sample-selecting module 824). Content-processing system 818 can include instructions for acquiring labels corresponding to the second plurality of unlabeled samples to obtain a third plurality of labeled samples (label-acquiring module 826). Content-processing system 818 can include instructions for updating the first set of classifiers based on the third plurality of labeled samples to obtain a second set of classifiers in the version space, thereby improving accuracy of the first set of classifiers (classifier-updating module 828 and model-training module 830).
Data 832 can include any data that is required as input or that is generated as output by the methods and/or processes described in this disclosure. Specifically, data 832 can store at least: data; a set of data; an input space; a classifier; a set of classifiers; a version space; an unlabeled sample; a labeled sample; a pool of unlabeled samples; a plurality of labeled samples; a label; a hyperplane; an informative sample; a non-informative sample; an indicator of whether a sample is informative or non-informative; a random probability distribution; a Bernoulli distribution; an indicator of whether a sample should be selected in a randomized trial with respect to a random probability distribution; an indicator of whether a sample meets a random probability distribution; an indicator of whether to acquire a label for a sample, return the sample to a pool of unlabeled samples, or to discard or remove the sample from the pool of unlabeled samples; an updated or trained classifier; recently labeled samples; older labeled samples; a predetermined threshold or time period; an output perturbation; an objective perturbation; an exponential mechanism; an optimal classifier; a support vector; an SVM classifier; margins; a list; an ordered list; a portion of an ordered list; an informative band; and an informative band associated with an optimal classifier.
The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.
The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
Furthermore, the methods and processes described above can be included in hardware modules or apparatus. The hardware modules or apparatus can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), dedicated or shared processors that execute a particular software module or a piece of code at a particular time, and other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.
The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.
Published as US 20210174153 A1, June 2021, United States.