The present disclosure relates generally to training and use of machine learning systems and more specifically to private and interpretable machine learning systems.
The demand for intelligent applications calls for deploying powerful machine learning systems. Most existing model training processes focus on achieving better prediction results in a homogeneous environment, where all of the relevant participants belong to the same party. However, in the assembly line of a business-to-business (B2B) project, the participants usually come from multiple parties, e.g., data providers, model providers, model users, etc., which brings new trust concerns into consideration. For example, a new hospital may want to adopt machine learning techniques to help its doctors find better diagnostic and treatment solutions for incoming patients. Due to computation ability and data limitations, the new hospital may not be able to train its own machine learning models. A model provider may help the new hospital to train a diagnostic model based on knowledge from other hospitals. However, to protect patient privacy, the other hospitals do not permit their private data to be exposed to the new hospital.
Accordingly, it would be advantageous to have systems and methods for training and using machine learning systems that protect data privacy and honor the trust between cooperating parties.
In the figures, elements having the same designations have the same or similar functions.
In view of the need for a data privacy protected machine learning system, embodiments described herein provide a private and interpretable machine learning framework that distills and perturbs knowledge from multiple private data providers to enforce privacy protection and utilizes interpretable learning to provide understandable feedback to model users.
As used herein, the term “network” may include any hardware or software-based framework that includes any artificial intelligence network or system, neural network or system and/or any training or learning models implemented thereon or therewith.
As used herein, the term “module” may include any hardware or software-based framework that performs one or more functions. In some embodiments, the module may be implemented using one or more neural networks.
In some examples, each data provider 110 has trained a teacher module by using its own private data 105, and thus does not permit any knowledge of the teacher module to be shared with the model user 130, e.g., to train the student module for the model user. In this case, public data is used to train the student module for the model user with a performance comparable to those teacher modules. Specifically, queries from unlabeled public data are fed to the teacher modules to generate responses. Such responses, which carry knowledge of the teacher modules, are not shared with the student module directly. Instead, only perturbed knowledge 103 from the teacher modules is used to update the student module.
In some examples, when the data provider 110 does not trust the model provider 120, the data provider(s) 110 may perturb the outputs from the teacher modules and send the perturbed outputs (e.g., perturbed knowledge 103) to the model provider 120 for student module training. In some examples, if the model provider 120 is a trusted party to the data provider(s) 110, the model provider 120 may receive and perturb the knowledge from the teacher modules of the data providers 110. In either case, the private and interpretable machine learning training framework 125 trains the student module with perturbed knowledge from the teacher modules. A private and interpretable student module 108 may then be delivered to the model users 130.
Thus, the private and interpretable machine learning training framework 125 honors the trust concerns between multiple parties, e.g., by protecting the data privacy of the data providers 110 from the model users 130. To protect the sensitive private data 105, the private and interpretable machine learning training framework 125 deploys private knowledge transfer based on knowledge distillation with perturbation, e.g., see perturbed knowledge 103.
In addition, compared with the sample-by-sample query training in existing systems, the private and interpretable machine learning training framework 125 obtains perturbed knowledge 103 from the data providers 110 in a batch-by-batch query manner, which reduces the number of queries that are sent to the teacher modules. The batch loss generated by each teacher module based on a batch of sample queries, instead of a per-query loss, further reduces the chance of recovering private knowledge of the teacher modules at the student module, and thus the data privacy of the data providers 110 is further protected.
Memory 220 may be used to store software executed by computing device 200 and/or one or more data structures used during operation of computing device 200. Memory 220 may include one or more types of machine readable media. Some common forms of machine readable media may include floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
Processor 210 and/or memory 220 may be arranged in any suitable physical arrangement. In some embodiments, processor 210 and/or memory 220 may be implemented on a same board, in a same package (e.g., system-in-package), on a same chip (e.g., system-on-chip), and/or the like. In some embodiments, processor 210 and/or memory 220 may include distributed, virtualized, and/or containerized computing resources. Consistent with such embodiments, processor 210 and/or memory 220 may be located in one or more data centers and/or cloud computing facilities.
Computing device 200 further includes a communication interface 235 that is operable to receive and transmit data to one or more other computing devices, such as the data providers 110. In some examples, data may be sent to or received from the teacher modules 230 at the data providers 110 via the communication interface 235. In some examples, each of the one or more teacher modules 230 may be used to receive one or more queries and generate a corresponding result. In some examples, each of the one or more teacher modules 230 may also handle the iterative training and/or evaluation of its respective teacher module 230 as is described in further detail below. In some examples, each of the one or more teacher modules 230 may include a machine learning structure, such as one or more neural networks, deep convolutional networks, and/or the like.
Memory 220 includes a student module 240 that may be used to implement a machine learning system and model described further herein and/or to implement any of the methods described further herein. In some examples, student module 240 may be trained based in part on perturbed knowledge 103 provided by the one or more teacher modules 230. In some examples, student module 240 may also handle the iterative training and/or evaluation of student module 240 as is described in further detail below. In some examples, student module 240 may include a machine learning structure, such as one or more neural networks, deep convolutional networks, and/or the like.
In some examples, memory 220 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 210) may cause the one or more processors to perform the methods described in further detail herein. In some examples, each of the teacher modules 230 and/or student module 240 may be implemented using hardware, software, and/or a combination of hardware and software. As shown, computing device 200 receives an input batch of training data 250 (e.g., from a public dataset), and generates a private and interpretable model 108, e.g., for model user 130.
To utilize the teacher modules 230a-n to train the student module 240, a batch of public data samples x_p is sent from the public data source 302 to the teacher modules 230a-n, each of which generates a respective output. Let c_i^t denote the output of the last hidden layer of the i-th teacher module; the output c_i^t is sent to a respective classification module 310a-n. In some examples, each classification module 310a-n then generates a softmax probability according to Equation (1):
P_i^t = softmax(c_i^t / τ)   Eq. (1)
where P_i^t denotes a vector of probabilities corresponding to different classes, and τ is the temperature parameter (e.g., τ=1, etc.). When τ>1, the probabilities of the classes whose normal values are near zero may be increased. Thus, the relationship between various classes is embodied as knowledge in the classification output, e.g., the softened probability P_i^t. In some examples, the classification modules 310a-n are one example type of the teacher modules, and the teacher modules 230a-n may be any other models.
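As an illustration only, a minimal sketch of the temperature-softened softmax of Equation (1) is shown below; the function and variable names are hypothetical and not part of the disclosed framework:

```python
import numpy as np

def softened_softmax(logits, tau=1.0):
    """Temperature-softened softmax, e.g., as in Eq. (1).

    logits: last-hidden-layer output c_i^t of a teacher, shape (num_classes,)
    tau:    temperature; tau > 1 raises near-zero class probabilities,
            exposing inter-class relationships as softened knowledge.
    """
    z = logits / tau
    z = z - z.max()              # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Example: with tau > 1 the distribution is flatter (softer).
c_t = np.array([4.0, 1.0, 0.1])
print(softened_softmax(c_t, tau=1.0))  # sharp distribution
print(softened_softmax(c_t, tau=4.0))  # softened distribution
```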
In some examples, the model provider 120 may receive and aggregate the classification outputs P_i^t from all classification modules 310a-n into an aggregated classification output P^t, e.g., according to Equation (2):

P^t = (1/N) Σ_{i=1}^{N} P_i^t   Eq. (2)

where N is the number of teacher modules. To learn the aggregated knowledge, the student module 240 is trained to minimize the difference between its own classification output and the aggregated classification output P^t from the teacher modules, e.g., the knowledge distillation loss. At the knowledge distillation loss module 320, the knowledge distillation loss L_K is calculated according to Equation (3):
L_K(x_p, P^t; Θ_s) = C(P^s, P^t, Θ_s)   Eq. (3)
where Θ_s denotes the trainable parameters of the student module; P^s denotes the classification output from the student module 240 over the same public samples x_p; and C( ) denotes the cross-entropy loss, e.g., C(P^s, P^t, Θ_s) = −Σ_{y∈Y} P^s(y) log(P^t(y)), where y ∈ Y denotes a classification label. Specifically, the same public samples x_p from the public data source 302 are sent to the student module 240, and the output c^s of the last hidden layer of the student module 240 is sent to the classification module 315 to generate the classification output P^s = softmax(c^s). The classification output P^s is in turn fed to the knowledge distillation loss module 320.
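The sketch below illustrates, under the assumption that the aggregation in Equation (2) is a simple average, how the aggregated teacher output and the cross-entropy style distillation loss of Equation (3) might be computed; the helper names are illustrative only:

```python
import numpy as np

def aggregate_teacher_probs(teacher_probs):
    """Average the softened outputs P_i^t of N teachers into P^t (Eq. (2), assumed average)."""
    return np.mean(np.stack(teacher_probs, axis=0), axis=0)

def distillation_loss(p_student, p_teacher, eps=1e-12):
    """Cross-entropy style distillation loss C(P^s, P^t) as written in Eq. (3)."""
    return -np.sum(p_student * np.log(p_teacher + eps))

# Example with N = 3 teachers and 4 classes.
teacher_probs = [np.array([0.7, 0.1, 0.1, 0.1]),
                 np.array([0.6, 0.2, 0.1, 0.1]),
                 np.array([0.5, 0.3, 0.1, 0.1])]
p_t = aggregate_teacher_probs(teacher_probs)   # aggregated P^t
p_s = np.array([0.4, 0.3, 0.2, 0.1])           # student output P^s
print(distillation_loss(p_s, p_t))
```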
Because any information related to the teacher modules 230a-n, e.g., data, outputs, losses, or gradients, is sensitive and would raise the privacy concerns of the data providers 110, no such information is passed from the teacher side 300 to the student side 301 for training without data protection. Specifically, original sensitive teacher module information, such as the knowledge distillation loss L_K, is perturbed by adding random noise during the training process of the student module 240. In this case, the student learns the knowledge from its teachers with a privacy guarantee.
In some examples, instead of computing an aggregated distillation loss as shown in Equation (3), the knowledge distillation loss module 320 may compute a distillation loss for each teacher module (the i-th teacher module) according to Equation (4):
L_K^(i)(x_p, P_i^t; Θ_s) = C(P^s, P_i^t, Θ_s)   Eq. (4)
The bounding module 323 then bounds each batch loss L_K^(i) for the i-th teacher module by a threshold D. In this way, the sensitivity of the aggregated batch loss can be controlled. Specifically, the maximum value of ||L_K^(i)||_2 is controlled within a given bound D. If ||L_K^(i)||_2 > D, the value of L_K^(i) is scaled down according to Equation (5):

L_K^(i) ← L_K^(i) · D / ||L_K^(i)||_2   Eq. (5)

Otherwise, L_K^(i) maintains its original value. After bounding at the bounding module 323, the sensitivity of each bounded batch loss is limited by the threshold D.
When the threshold D is set too large, the batch loss will be perturbed by excessive noise. On the other hand, when the bounding threshold D is too small, the batch loss may be over-clipped and lose its accuracy. In order to resolve this dilemma, an auxiliary teacher module may be trained to generate an adaptive norm bound D. The auxiliary teacher module is trained by semi-supervised learning based on the public data from the public data source 302. The auxiliary batch loss between the student module 240 and the auxiliary teacher may be constantly monitored to set the norm bound D as the average value of the auxiliary batch loss over a period of time. In this manner, the norm bound D may be dynamically adjusted during training.
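A minimal sketch of one possible way to maintain such an adaptive norm bound is shown below, assuming the bound is tracked as a running average of the auxiliary batch loss over a recent window; the class and parameter names are illustrative, not prescribed by the disclosure:

```python
class AdaptiveNormBound:
    """Tracks the norm bound D as a running average of the auxiliary
    teacher/student batch loss observed over recent training steps."""

    def __init__(self, initial_bound=1.0, window=100):
        self.bound = initial_bound
        self.window = window
        self.history = []

    def update(self, auxiliary_batch_loss):
        # Record the latest auxiliary batch loss and refresh D as the
        # average over the most recent `window` observations.
        self.history.append(float(auxiliary_batch_loss))
        self.history = self.history[-self.window:]
        self.bound = sum(self.history) / len(self.history)
        return self.bound

# Usage: after each batch, feed the auxiliary loss and read the current D.
# d = adaptive_bound.update(aux_loss)
```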
The perturbation module 330 then receives the set of bounded batch losses from the bounding module 323 and aggregates them, e.g., by taking their average over the N teacher modules, into an aggregated batch loss L̂_K.
The perturbation module 330 adds Gaussian noise into the aggregated batch loss L̂_K to preserve privacy of the sensitive data from the teacher modules 230a-n according to Equation (6):
L̃_K = L̂_K + N(0, σ²D²I)   Eq. (6)
where L̃_K denotes the perturbed aggregated batch loss, and N(0, σ²D²I) denotes a Gaussian distribution with a mean of 0 and a variance of σ²D². σ is a parameter that is indicative of the privacy loss budget of the training process.
In some examples, the aggregated batch loss L_K may be perturbed with the Gaussian noise N(0, σ²D²I) directly, e.g., with or without being bounded by the threshold D according to Equation (5).
The perturbed aggregated batch loss L̃_K is then sent to the student module 240 via the backpropagation path 350 to update the parameters Θ_s of the student module 240.
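For illustration, a minimal sketch of the bound-aggregate-perturb procedure of Equations (4)-(6) is given below, assuming per-teacher scalar batch losses, simple averaging, and NumPy's Gaussian sampler; none of the names come from the disclosure itself:

```python
import numpy as np

def bound_loss(loss, D):
    """Scale the per-teacher batch loss so its L2 norm does not exceed D (Eq. (5))."""
    norm = np.linalg.norm(loss)
    if norm > D:
        return loss * (D / norm)
    return loss

def perturbed_aggregated_loss(per_teacher_losses, D, sigma, rng=None):
    """Bound each teacher's batch loss, average over teachers, then add
    Gaussian noise with variance sigma^2 * D^2 to obtain the perturbed loss (Eq. (6))."""
    rng = rng or np.random.default_rng()
    bounded = [bound_loss(np.atleast_1d(l), D) for l in per_teacher_losses]
    aggregated = np.mean(bounded, axis=0)               # aggregated batch loss
    noise = rng.normal(0.0, sigma * D, size=aggregated.shape)
    return aggregated + noise                           # perturbed batch loss

# Example: three teacher batch losses, bound D = 2.0, noise scale sigma = 1.0.
losses = [np.array([2.7]), np.array([1.4]), np.array([0.9])]
print(perturbed_aggregated_loss(losses, D=2.0, sigma=1.0))
```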
For many machine learning techniques, in addition to the distribution of prediction results from the output layers of the teacher modules, the intermediate representations learned by the teacher modules, referred to herein as hints, may also carry knowledge that is useful for training the student module 240.
Specifically, a batch of public data samples x_p is sent from the public data source 302 to the teacher modules 230a-n. The hint modules 312a-n in the teacher modules 230a-n each generate a hint output o^h, respectively. The hint output o^h from each of the hint modules 312a-n may represent intermediate information of the respective teacher module 230a-n. The same batch of public data samples x_p is sent from the public data source 302 to the student module 240. The guided layer 318 at the student module 240 generates a guided intermediate output g(x_p; Θ_g), where g(.; Θ_g) represents the student model up to the guided intermediate layer with parameters Θ_g. The student intermediate output is then sent to the adaptation layer 325, which is configured to learn information from representations of different formats, as the guided intermediate outputs g(x_p; Θ_g) of the student module 240 may not have the same dimensional representations as the hint intermediate outputs o^h of the teacher modules 230a-n. The adaptation layer 325 may be denoted by h(.; Θ_a) with parameters Θ_a. Thus, the hint loss module 355 receives the hint outputs from the teacher modules 230a-n and the adapted outputs from the adaptation layer 325, and computes a hint loss under the L2 norm according to Equation (7):
L_H(x_p, o^h; Θ_g, Θ_a) = ½ ||h(g(x_p; Θ_g); Θ_a) − o^h||²   Eq. (7)
In some examples, the hint loss module 355 computes a hint loss L_H^(i), i=1, . . . , N, for each teacher module 230a-n according to Eq. (7) and sends the computed hint losses to the bounding module 333 for bounding. In some examples, the hint loss module 355 computes an aggregated hint loss for all teacher modules by taking the average of the hint losses of all teacher modules 230a-n and outputs the aggregated hint loss to the perturbation module 360. The bounding module 333 may be optional.
In some examples, the bounding module 333 (which may be similar to the bounding module 323) bounds each hint loss L_H^(i) according to Equation (5). The threshold D that is used to bound the hint loss L_H^(i) may be different from, or the same as, the threshold that is used to bound the knowledge distillation loss L_K^(i) in Equation (5).
The bounded hint losses are then aggregated, e.g., by taking their average over the N teacher modules, into an aggregated hint loss L̂_H that is sent to the perturbation module 360.
The perturbation module 360 (which may be similar to the perturbation module 330) then adds Gaussian noise into the aggregated hint loss L̂_H to preserve privacy of the sensitive data from all teacher modules 230a-n according to an equation similar to Equation (6).
The perturbed aggregated hint batch loss L̃_H is then sent to the student module 240 via the backpropagation path 351 to update the parameters Θ_g of the student module g(.; Θ_g) up to the guided layer 318 and the parameters Θ_a of the adaptation layer 325 h(.; Θ_a).
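Below is a minimal sketch of how the hint loss of Equation (7) might be computed with a simple linear adaptation layer; the shapes, the linear form of h, and all names are assumptions made for illustration:

```python
import numpy as np

def hint_loss(student_guided_out, teacher_hint, W_a, b_a):
    """L2 hint loss of Eq. (7): 0.5 * || h(g(x_p)) - o^h ||^2.

    student_guided_out: g(x_p; theta_g), the student's guided-layer output.
    teacher_hint:       o^h, the teacher's hint output.
    W_a, b_a:           parameters of an assumed linear adaptation layer h
                        mapping the student representation to the teacher's
                        hint dimension.
    """
    adapted = W_a @ student_guided_out + b_a          # h(g(x_p))
    diff = adapted - teacher_hint
    return 0.5 * float(diff @ diff)

# Example: student guided output of size 8 adapted to a teacher hint of size 16.
rng = np.random.default_rng(0)
g_out = rng.normal(size=8)
o_h = rng.normal(size=16)
W_a, b_a = rng.normal(size=(16, 8)), np.zeros(16)
print(hint_loss(g_out, o_h, W_a, b_a))
```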
Because the training data of the teacher modules 230a-n are disjoint from each other, after bounding, the sensitivity of the aggregated bounded knowledge distillation loss L̂_K or the aggregated bounded hint loss L̂_H is D/N. Thus, each batch query answered through the knowledge distillation structure described herein contributes a bounded privacy cost, and the overall privacy loss of the training process may be tracked as follows.
Specifically, if T denotes the total number of batches of queries sent to the teacher modules 230a-n from the public data source 302, then for constants δ > 0 and ε < α₁T, where α₁ is a constant, the overall privacy loss is (ε, δ) using the moments accountant, when σ is set according to Equation (8):
where T = (T_h + T_d)|x_p|/S, with T_h representing the total number of learning epochs for hint learning, T_d representing the total number of learning epochs for knowledge distillation learning, and S representing the batch size. Further discussion of privacy tracking using the moments accountant can be found in "Deep Learning With Differential Privacy" by Abadi et al., pp. 308-318, Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, which is hereby expressly incorporated by reference herein in its entirety.
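As a simple illustration of the query count used in the privacy accounting, the snippet below computes T = (T_h + T_d)·|x_p|/S under assumed values; the actual noise scale σ would then be chosen per the moments accountant analysis of Abadi et al., which is not reproduced here:

```python
def total_batch_queries(num_public_samples, batch_size, hint_epochs, distill_epochs):
    """Total number of batch queries T = (T_h + T_d) * |x_p| / S sent to the teachers."""
    batches_per_epoch = num_public_samples // batch_size
    return (hint_epochs + distill_epochs) * batches_per_epoch

# Example with assumed values: |x_p| = 10,000 public samples, S = 128,
# T_h = 30 hint epochs, T_d = 72 distillation epochs.
print(total_batch_queries(10_000, 128, 30, 72))  # number of batch queries T
```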
In some examples, the structure described above may further include an interpretable learning module 365 that generates a human understandable interpretation L of the trained student module 240, e.g., according to Equation (8):
L = Interpretable_learning(Θ_s, x_p)   Eq. (8)
where L represents the human understandable interpretation, such as super-pixels or keywords, etc., and Interpretable_learning( ) may be a general model-agnostic approach to interpret the outputs of the student module 240. An example of Interpretable_learning( ) can be found in "Why Should I Trust You?: Explaining the Predictions of Any Classifier" by Ribeiro et al., pp. 1135-1144, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, which is hereby expressly incorporated by reference herein in its entirety.
In some examples, in order to interpret a sample x, the interpretable learning module 365 transforms x into a binary interpretable version denoted by x′. The interpretable learning module 365 then generates M samples close to x′ and obtains the model predictions of the M generated samples from the student module 240. A number L of features are selected from the model predictions over the M samples as the explanations for sample x. To interpret the student module 240 as a model instead of a single instance, the interpretable learning module 365 may adopt a sub-modular pick algorithm for selecting the most representative instances which can cover different features. Further details of the sub-modular pick algorithm can be found in Ribeiro et al.
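A minimal sketch of this kind of local, model-agnostic interpretation is shown below, assuming binary interpretable features, a weighted linear surrogate model, and a generic prediction function for the student module; it illustrates the general idea from Ribeiro et al. rather than the exact Interpretable_learning( ) procedure of the framework:

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(x_binary, predict_fn, num_samples=500, num_features=5, rng=None):
    """Explain one sample by fitting a locally weighted linear surrogate.

    x_binary:   binary interpretable version x' of the sample (1 = feature present).
    predict_fn: returns the student module's score for a masked binary sample.
    Returns the indices of the top-L interpretable features.
    """
    rng = rng or np.random.default_rng()
    d = len(x_binary)
    # Generate M perturbed samples near x' by randomly switching features off.
    masks = rng.integers(0, 2, size=(num_samples, d))
    preds = np.array([predict_fn(x_binary * m) for m in masks])
    # Weight perturbed samples by their closeness to the original x'.
    weights = np.exp(-np.sum(masks != 1, axis=1) / d)
    surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
    # Select the L features with the largest absolute surrogate coefficients.
    return np.argsort(-np.abs(surrogate.coef_))[:num_features]

# Usage sketch: top_features = explain_instance(x_prime, student_predict_fn)
```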
At a process 402, one or more teacher modules (e.g., teacher modules 230a-n) are trained, e.g., at the respective data providers 110 based on their own private data 105.
At a process 404, a student module (e.g., student module 240) is obtained, e.g., at the model provider 120, to be trained based on public data and perturbed knowledge from the one or more teacher modules.
At a process 406, a batch of queries from the public data is sent to the one or more teacher modules. In some examples, the batch of queries may be similar to the public data samples x_p described above.
At process 408, perturbed knowledge of the teacher modules is obtained based on the batch of queries. In some examples, the perturbed knowledge may include a perturbed knowledge distillation batch loss L̃_K generated at the perturbation module 330 and/or a perturbed hint batch loss L̃_H generated at the perturbation module 360, as described above.
At a process 410, the perturbed knowledge is used to update and/or improve the student module. In some examples, the perturbed knowledge, such as the perturbed knowledge distillation batch loss L̃_K and/or the perturbed hint batch loss L̃_H, is used to backpropagate the student module to update the student model parameters, as discussed above in relation to the backpropagation paths 350-351.
At process 412, a human understandable interpretation of the trained student module is provided to the model user. In some examples, a human understandable interpretation, such as super-pixels or keywords, is incorporated as part of the student module provided to the model user, as discussed above in relation to the interpretable learning module 365.
In some examples, algorithm 500 adopts a duration T_h for hint training. Specifically, a hint learning process may be iteratively performed to update the parameters Θ_a, Θ_g of the student model from time t=1 to T_h. The hint learning process is further discussed below in relation to method 600b.
In some examples, algorithm 500 adopts a duration T_d for knowledge distillation. Specifically, a knowledge distillation process may be iteratively performed to update the parameters Θ_s of the student model from time t=1 to T_d. The knowledge distillation process is further discussed below in relation to method 600a.
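As an illustration of this two-phase schedule, the skeleton below runs T_h epochs of hint learning followed by T_d epochs of knowledge distillation; the update functions are placeholders for the per-batch procedures described above (bound, aggregate, perturb, backpropagate) and are not defined by the disclosure:

```python
def train_student(public_batches, hint_epochs, distill_epochs,
                  hint_update_step, distill_update_step):
    """Two-phase private training schedule: hint learning, then distillation.

    hint_update_step(batch):    updates the guided/adaptation parameters from a
                                perturbed hint batch loss.
    distill_update_step(batch): updates the student parameters from a perturbed
                                knowledge distillation batch loss.
    """
    # Phase 1: hint learning for T_h epochs over the public data.
    for _ in range(hint_epochs):
        for batch in public_batches:
            hint_update_step(batch)

    # Phase 2: knowledge distillation for T_d epochs over the public data.
    for _ in range(distill_epochs):
        for batch in public_batches:
            distill_update_step(batch)
```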
Method 600a starts at process 602, at which a batch of public samples is obtained (e.g., from the public data source 302).
At process 604, the outputs of the last hidden layer of each teacher module, in response to inputs of the batch of public queries, are obtained. In some examples, the outputs may be similar to the teacher module outputs c_i^t discussed above.
At process 606, a knowledge distillation loss is computed for each teacher module based on the student model parameters and the output from the respective teacher in response to the batch of queries, e.g., see Eq. (4).
At process 608, the computed knowledge distillation loss for each teacher module is bounded according to a pre-defined threshold, e.g., see Eq. (5).
At process 610, an average bounded knowledge distillation loss is computed by taking an average of the computed knowledge distillation losses for all teachers. In some examples, a weighted average of the computed knowledge distillation losses for all teachers may be computed, e.g., with a set of weights defined by the model provider 120 or the model user 130 with preference or emphasis on knowledge from one or more particular teacher modules.
At process 612, a perturbed knowledge loss is computed by adding noise to the average bounded knowledge distillation loss, e.g., see Eq. (6).
At process 614, the student module is backpropagated based on the perturbed knowledge loss to update the parameters of the student module, e.g., similar to Θ_s discussed above.
At process 616, the method 600a determines whether there are more batches of queries from public data samples for training. When there are more batches of training samples, method 600a repeats processes 602-614 to update the student model parameters using the next batch of training samples. When no more batches of training samples are available, method 600a proceeds to process 618, at which method 600a determines whether the pre-defined learning epoch T_d has lapsed. When the learning epoch has lapsed, method 600a proceeds to process 620, where human understandable interpretations of the most updated student model are generated, e.g., according to Eq. (8). When the learning epoch has not lapsed, the method 600a repeats 602-616 to re-train the student model based on the available public data samples.
Method 600b starts at process 601, at which a batch of public samples is obtained, e.g., similar to process 602. In some examples, the batch of public samples may have a batch size of S that is similar to, or different from, the batch size used in the knowledge distillation learning at process 602.
At process 603, the hint outputs of each teacher module, in response to inputs of the batch of public sample queries, are obtained. In some examples, the outputs may be similar to the teacher module hint outputs o^h discussed above.
At process 605, a hint loss is computed for each teacher module based on student guided layer outputs, student guided model parameters, student adaptation layer parameters, and the output from the respective teacher in response to the batch of queries, e.g., according to Eq. (7).
At process 607, the computed hint loss for each teacher module is bounded according to a pre-defined threshold, e.g., according to Eq. (5).
At process 609, an average bounded hint loss is computed by taking an average of the computed hint losses for all teacher modules. In some examples, a weighted average of the computed hint losses for all teachers may be computed, e.g., with a set of weights defined by the model provider 120 or the model user 130 with preference or emphasis on hints from one or more particular teacher modules.
At process 611, a perturbed hint loss is computed by adding noise to the average bounded hint loss, e.g., according to Eq. (6).
At process 613, the student module is backpropagated based on the perturbed hint loss to update the parameters of the student guided model and the parameters of the adaptation layer, e.g., similar to Θ_g, Θ_a discussed above.
At process 615, the method 600b determines whether there are more batches of queries from public data samples for hint training. When there are more batches of training samples, method 600b repeats processes 601-613 to update the parameters of the student guided model and the parameters for the adaptation layer using the next batch of training samples. When no more batches of training samples are available, method 600b proceeds to process 617, at which method 600b determines whether the pre-defined learning epoch T_h has lapsed. When the learning epoch has lapsed, method 600b proceeds to process 619, where method 600b proceeds with knowledge distillation learning to further update parameters of the student module, e.g., starting at process 602. When the learning epoch has not lapsed, the method 600b repeats 601-617 to re-train the student model based on the available public data samples.
As discussed above, various training parameters, such as the hint learning epoch T_h, the distillation learning epoch T_d, the batch size S, the noise scale, and the number of teacher modules, may affect the performance of the student module; charts 700a-f illustrate examples of these impacts.
Chart 700a shows the impact of various hint learning epochs T_h on the accuracy of the student module. Specifically, without hint learning, i.e., T_h=0, the accuracy of the student module is determined by the distillation learning alone. In this case, a small value of the distillation learning epoch T_d significantly deteriorates the student module performance. However, this performance degradation may be mitigated by the hint learning, even with a small value of the hint learning epoch T_h. When T_h>10, the performance difference between T_d=72 and T_d=120 is negligible. Thus, this suggests that the hint learning helps to improve the student module performance with little privacy loss. In some examples, a hint learning epoch T_h=30 may be used to achieve satisfactory performance.
Chart 700b shows the impact of the distillation learning epoch T_d on the student module performance. As shown at chart 700b, the performance of the student module may improve as the value of the distillation learning epoch T_d increases, because more perturbed private knowledge is transferred from the teacher modules. In some examples, the distillation learning epoch T_d=72 may be used to achieve satisfactory performance.
Chart 700c shows the impact of the batch size S on the student module performance. The performance of the student module improves with a smaller batch size S, as shown in chart 700c. A large value of the batch size S leads to a smaller number of batch query requests to the teacher modules, and thus the privacy of the teacher modules may be better protected. In order to balance effectiveness and privacy protection, in some examples, the batch size may be set as 128.
Chart 700d shows the impact of the perturbation noise scale on the student module performance. A larger noise scale may help to protect data privacy, but also reduces the student module performance in general. However, as the norm bound and the additional perturbation noise may act as regularizers during training, a relatively large value of the noise scale may be used for privacy preservation.
Chart 700e shows the impact of the compression rate on the student module performance. The student module performance may improve with a larger size of the neural network that the student module is built on. A student module with a very large neural network, however, requires more public data and more queries for a stable and effective model.
Chart 700f shows the impact of the number of teachers on the student module performance. The performance of the student is shown with different numbers of teacher modules. In some examples, when a total of 40,000 sensitive data samples are used, the sensitive data samples may be split to train the teacher modules based on the number of teachers, such as one teacher trained with 40,000 samples, two teachers trained with 20,000 samples each, or four teachers trained with 10,000 samples each. The performance of each teacher module may increase with more training samples, but the student module performance may improve when trained with more teacher modules.
As shown in charts 800a-c, the student module achieves accuracies of 75.57%, 95.01%, and 98.68% on CIFAR, MNIST, and SVHN with (5.48, 10⁻⁵), (3.00, 10⁻⁶), and (2.74, 10⁻⁵) differential privacy, respectively. Charts 800a-c also show that the accuracy generally increases with a larger privacy budget. Meanwhile, the student modules may outperform the teacher modules on MNIST and SVHN in general, and perform very close to the teacher modules on CIFAR even with a small privacy budget.
The student module outperforms the average performance of its teachers on both the SVHN and MNIST datasets, and still obtains comparable performance on CIFAR. On MNIST, the interpretable student achieves an 11.9× compression ratio and 0.75× speed-up on interpretation with a 0.12% accuracy increase. On SVHN, the interpretable student model is also better than the teacher models in model size (3.8× compression ratio), efficiency (2.5× speed-up), and effectiveness (+0.67% accuracy). On CIFAR, the accuracy decreases by less than 1% while using only 39.5% of the time. Therefore, by applying hint learning and knowledge distillation to train the student module, the interpretable student module is effective and efficient.
Some examples of computing devices, such as computing device 200, may include non-transitory, tangible, machine readable media that include executable code that, when run by one or more processors (e.g., processor 210), may cause the one or more processors to perform the processes of methods 400, 600a-b, and/or algorithm 500 described above.
This description and the accompanying drawings that illustrate inventive aspects, embodiments, implementations, or applications should not be taken as limiting. Various mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of this description and the claims. In some instances, well-known circuits, structures, or techniques have not been shown or described in detail in order not to obscure the embodiments of this disclosure. Like numbers in two or more figures represent the same or similar elements.
In this description, specific details are set forth describing some embodiments consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in association with one embodiment may be incorporated into other embodiments unless specifically described otherwise or if the one or more features would make an embodiment non-functional.
Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the embodiments may be employed without a corresponding use of other features. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Thus, the scope of the invention should be limited only by the following claims, and it is appropriate that the claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.
The present disclosure claims the benefit of and priority to commonly-owned U.S. provisional application Nos. 62/810,345, filed Feb. 25, 2019, and 62/810,843, filed Feb. 26, 2019, both of which are hereby expressly incorporated herein by reference in their entirety.
Other Publications:

Abadi et al., "Deep Learning with Differential Privacy," Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1-14, 2016.
Bach et al., "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation," PLoS ONE, 10(7): e0130140, pp. 1-46, 2015.
Beimel et al., "Bounds on the Sample Complexity for Private Learning and Private Data Release," Machine Learning, 94(3):401-437, 2014.
Claerhout et al., "Privacy Protection for Clinical and Genomic Data: The Use of Privacy-Enhancing Techniques in Medicine," International Journal of Medical Informatics, 74(2):257-265, pp. 1-9, 2005.
Devlin et al., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 1-16, 2018.
Dwork et al., "The Algorithmic Foundations of Differential Privacy," Foundations and Trends in Theoretical Computer Science, 2014, vol. 9(3-4), pp. 211-407.
Dwork, "Differential Privacy," Encyclopedia of Cryptography and Security, 2011, pp. 338-340.
He et al., "Deep Residual Learning for Image Recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1-8.
Hinton et al., "Distilling the Knowledge in a Neural Network," Twenty-eighth Conference on Neural Information Processing Systems 2014 Deep Learning and Representation Learning Workshop, pp. 1-9, 2015.
Hitaj et al., "Deep Models under the GAN: Information Leakage from Collaborative Deep Learning," Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1-16, 2017.
Krizhevsky et al., "Learning Multiple Layers of Features from Tiny Images," pp. 1-60, 2009. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1 222.9220&rep=rep 1&type=pdf
Laine et al., "Temporal Ensembling for Semi-Supervised Learning," Fifth International Conference on Learning Representations, 2017, pp. 1-13.
Lecun et al., "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE, 1998, vol. 86(11), pp. 1-46.
Montavon et al., "Methods for Interpreting and Understanding Deep Neural Networks," Digital Signal Processing, 2018, vol. 73, pp. 1-15.
Netzer et al., "Reading Digits in Natural Images with Unsupervised Feature Learning," Proceedings of the 24th International Conference on Neural Information Processing Systems, 2011, pp. 1-9.
Papernot et al., "Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data," Fifth International Conference on Learning Representations, 2017, pp. 1-16.
Papernot et al., "Scalable Private Learning with PATE," Sixth International Conference on Learning Representations, 2018, pp. 1-34.
Park et al., "Adversarial Dropout for Supervised and Semi-Supervised Learning," 32nd AAAI Conference on Artificial Intelligence, 2017, pp. 3917-3924.
Ribeiro et al., "Why Should I Trust You?: Explaining the Predictions of Any Classifier," Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1-10.
Romero et al., "FitNets: Hints for Thin Deep Nets," Third International Conference on Learning Representations, pp. 1-13, 2015. arXiv:1412.6550.
Samek et al., "Explainable Artificial Intelligence: Understanding, Visualizing, and Interpreting Deep Learning Models," 2017, pp. 1-8. arXiv:1708.08296.
Silver et al., "Mastering the Game of Go Without Human Knowledge," Nature, 2017, 550:354-359, pp. 1-18.
Simonyan et al., "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps," Second International Conference on Learning Representations, 2014, pp. 1-8.
Triastcyn et al., "Generating Artificial Data for Private Deep Learning," pp. 1-8, 2018. arXiv:1803.03148.
Zhang et al., "Interpretable Convolutional Neural Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8827-8836.