The present invention generally relates to generative adversarial networks and more specifically relates to novel and inventive methods for training conditional generative adversarial networks.
Deep learning influences almost every aspect of machine learning and artificial intelligence. It gives superior results for classification and regression problems compared to classical machine learning approaches. Deep learning can also have significant impacts on generative models. One of the most interesting challenges in artificial intelligence is training conditional generative models (or generators), which are able to provide labeled adversarial samples drawn from a specific distribution. Conditional generative models are models which can generate a class-specific sample given the right latent input. As one example, these generators can learn the data distribution for male/female faces and produce outputs that match a single (male/female) class.
Systems and methods for training a conditional generator model in accordance with embodiments of the invention are described. One embodiment includes a method that receives a sample, and determines a discriminator loss for the received sample. The discriminator loss is based on an ability to determine whether the sample is generated by the conditional generator model or is a ground truth sample. The method determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
In a further embodiment, the steps of receiving, determining a discriminator loss, determining a secondary loss, and updating the conditional generator model are performed iteratively.
In still another embodiment, determining the secondary loss for a first iteration is performed using a first secondary loss model and determining the secondary loss for a second iteration is performed using a different second secondary loss model.
In a still further embodiment, the sample comprises an associated label, and determining the secondary loss comprises using a classification model trained to classify the sample into one of a plurality of classes.
In another embodiment, the classification model is a pre-trained model that is not modified during the training of the conditional generator model.
In yet another embodiment, determining the secondary loss comprises using a regression model trained to predict a continuous aspect of the sample.
In a yet further embodiment, determining the secondary loss comprises determining a plurality of secondary losses using a plurality of different secondary models.
In another additional embodiment, the aggregate is a weighted average of the discriminator loss and the secondary loss.
In a further additional embodiment, the steps of receiving, determining a discriminator loss, determining a secondary loss, and updating the conditional generator model are performed iteratively, wherein a weight used for calculating the weighted average is different between different iterations.
In another embodiment again, the method further generates a set of outputs using the trained conditional generator model. The set of outputs includes a set of samples and a set of associated output labels. The method trains a new model using the set of generated outputs, without using any ground truth samples.
Additional embodiments and features are set forth in part in the description that follows, and in part may be learned by the practice of the invention. A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings, which form a part of this disclosure.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The description and claims will be more fully understood with reference to the following figures and data graphs, which are presented as exemplary embodiments of the invention and should not be construed as a complete recitation of the scope of the invention.
Turning now to the drawings, systems and methods for training a conditional generative model and for generating conditional outputs are described below. In many embodiments, conditional generative models comprise a generator, a discriminator, and a secondary loss model. Generators and discriminators can be adversarially trained to generate outputs that approximate a “true” data set. A discriminator can be a binary classifier that determines whether a sample is generated or is a genuine sample coming from a database. Discriminators in accordance with some embodiments of the invention can be autoencoders. In a number of embodiments, generators can include a deep neural network which accepts a vector from a latent space (e.g., uniformly distributed noise) and outputs a sample of the same type as the true data set.
In a variety of embodiments, secondary loss models can be used to provide additional feedback for training a generator to be able to generate specific outputs. Secondary loss models in accordance with many embodiments of the invention can include, but are not limited to, classifiers and regressive models. In several embodiments, systems and methods train a deep conditional generator by placing a classifier in parallel with the discriminator and back propagate the classification error through the generator network. In some embodiments, secondary errors (e.g., classification error, regression error, etc.) are aggregated with a discriminator error and back propagated through a generator model. Systems and methods in accordance with a variety of embodiments of the invention can train a discriminator and a secondary loss model to generate separate losses, which can be combined to train a generator to generate conditional outputs. The method is versatile and is applicable to many variations of generative adversarial network (GAN) implementations, and also gives superior results compared to similar methods.
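By way of non-limiting illustration, the following sketch shows one generator update in which a classifier is placed in parallel with the discriminator and the classification error is back-propagated through the generator, as described above. The sketch is written in PyTorch; the network shapes, hyperparameters, and the weighting term gamma are illustrative assumptions rather than elements recited in this disclosure.

```python
# Minimal PyTorch sketch (illustrative): one generator update mixing an
# adversarial (discriminator) loss with a secondary (classifier) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim, n_classes = 64, 784, 2   # assumed sizes

G = nn.Sequential(nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())        # generator
D = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                  nn.Linear(256, 1), nn.Sigmoid())            # discriminator
C = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                  nn.Linear(256, n_classes))                  # secondary classifier

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce, ce = nn.BCELoss(), nn.CrossEntropyLoss()

def generator_step(batch_size=32, gamma=0.5):
    """One generator update: adversarial loss + weighted classification loss."""
    labels = torch.randint(0, n_classes, (batch_size,))
    # Condition the generator by concatenating a one-hot label to the latent code.
    z = torch.cat([torch.randn(batch_size, latent_dim),
                   F.one_hot(labels, n_classes).float()], dim=1)
    fake = G(z)
    adv_loss = bce(D(fake), torch.ones(batch_size, 1))  # fool the discriminator
    sec_loss = ce(C(fake), labels)                      # match the requested class
    loss = adv_loss + gamma * sec_loss                  # aggregate error
    opt_g.zero_grad()
    loss.backward()                                     # back-propagate through G
    opt_g.step()
    return loss.item()
```

In this sketch, conditioning is achieved by concatenating a one-hot class label to the latent vector; any comparable partitioning or conditioning of the latent space could be substituted.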
A system that provides for training a conditional generative model and for generating conditional outputs in accordance with some embodiments of the invention is shown in FIG. 1.
Server systems 110, 140, and 170 are connected to the network 160. Each of the server systems 110, 140, and 170 is a group of one or more servers communicatively connected to one another via internal networks that execute processes that provide cloud services to users over the network 160. For purposes of this discussion, cloud services are one or more applications that are executed by one or more server systems to provide data and/or executable applications to devices over a network. The server systems 110, 140, and 170 are shown each having three servers in the internal network. However, the server systems 110, 140 and 170 may include any number of servers and any additional number of server systems may be connected to the network 160 to provide cloud services. In accordance with various embodiments of this invention, a network that uses systems and methods that train and apply conditional generative models in accordance with an embodiment of the invention may be provided by a process (or a set of processes) being executed on a single server system and/or a group of server systems communicating over network 160.
Users may use personal devices 180 and 120 that connect to the network 160 to perform processes for providing and/or interacting with a network that uses systems and methods that train and apply conditional generative models in accordance with various embodiments of the invention. In the shown embodiment, the personal devices 180 are shown as desktop computers that are connected via a conventional wired connection to the network 160. However, the personal device 180 may be a desktop computer, a laptop computer, a smart television, an entertainment gaming console, or any other device that connects to the network 160 via a wired connection. The mobile device 120 connects to network 160 using a wireless connection. A wireless connection is a connection that uses Radio Frequency (RF) signals, Infrared signals, or any other form of wireless signaling to connect to the network 160.
A conditional generation element for training a conditional generative model in accordance with a number of embodiments is illustrated in FIG. 2.
Although a specific example of a conditional generation element is illustrated in FIG. 2, any of a variety of secure conditional generation elements can be utilized to perform processes similar to those described herein as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
A conditional generation application in accordance with a number of embodiments of the invention is illustrated in FIG. 3. In this example, the conditional generation application includes generator engine 305, discriminator engine 310, secondary loss engine 315, and error computation engine 325.
Generator engine 305 is a conditional generator that can be trained to generate specific types of outputs. Generator engines in accordance with various embodiments of the invention can include (but are not limited to) decoders and deconvolutional neural networks. Generator engines in accordance with many embodiments of the invention can take random samples as inputs to generate outputs similar to true samples, or to match a target distribution. In many embodiments, generator engines receive a random set of inputs, along with a specified class or a continuous aspect, in order to generate an output of the specified class or with the specified continuous aspect.
Discriminator engine 310 can be used to determine whether an input (or sample) is from a target distribution, or whether it is a generated output of a generator model. Adversarial training of discriminator engines and generator engines can simultaneously train a generator to produce more realistic outputs while also training a discriminator to more effectively distinguish between generated and true inputs.
Secondary loss engine 315 can be used to refine a generator engine, such that the generator engine not only generates outputs that are similar to the target distribution, but is able to generate specific types of outputs within the target distribution. For example, a target distribution may include human faces, while a secondary loss engine may include a classifier to classify human faces as male or female. Secondary loss engines in accordance with several embodiments of the invention can include one or more models, including (but not limited to) binary classifiers, multi-class classifiers, and/or regression models. In some embodiments, secondary loss engines are pre-trained models that are trained on a large corpus of training data to classify inputs into a number of different classes. The loss from a secondary loss engine can then be used to train a generator to generate better samples of a specific class or distribution within the target population.
Error computation engine 325 can be used to compute an aggregate loss for the conditional generation application. In many embodiments, aggregate losses are a combination of a discriminator loss and one or more secondary losses from one or more secondary models. Aggregate losses in accordance with certain embodiments of the invention use a weighted combination of the discriminator and secondary loss to control the effects of each model on the training of a generator model. In some embodiments, error computation engines can change the weightings of the different losses between different epochs of training (e.g., in an alternating fashion), allowing the model to emphasize different aspects of the training in different epochs.
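A minimal sketch of such an error computation engine follows, assuming a simple two-phase schedule; the specific weights (0.8/0.4) and the averaging of multiple secondary losses are illustrative assumptions, not requirements of the disclosure.

```python
# Illustrative error computation engine: weighted aggregate of discriminator
# and secondary losses, with a schedule that shifts emphasis between epochs.
def aggregate_error(disc_loss, secondary_losses, epoch):
    w_disc = 0.8 if epoch % 2 == 0 else 0.4              # alternate emphasis per epoch
    sec = sum(secondary_losses) / len(secondary_losses)  # average over several models
    return w_disc * disc_loss + (1.0 - w_disc) * sec
```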
Although a specific example of a conditional generation application is illustrated in FIG. 3, any of a variety of conditional generation applications can be utilized to perform processes similar to those described herein as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
When training a conditional generative model, other solutions are often not versatile enough to be applied to different GAN variations. In some embodiments, processes can mix losses from a discriminator and a secondary model to train a generator. However, mixing the loss of the discriminator and the classifier can alter the training convergence, especially if the output of the discriminator is of a different type compared to the classifier's output. For example, in certain cases, outputs from a discriminator include an image (2D matrix), while outputs of a classifier include a (1D) vector. Merging the losses for these output types into a single loss can alter the convergence of the network. Systems and methods for training a conditional generator model in accordance with a number of embodiments of the invention are independent of the generator and discriminator structure, i.e., the presented method can be applied to any model that is already converging. By separating the secondary term into a separate secondary model, the secondary loss can be separated from the discriminator's loss function, facilitating the discriminator's ability to converge.
An example of a process for training a conditional generator model is conceptually illustrated in FIG. 4. Process 400 receives (405) a sample and determines (410) a discriminator loss for the received sample, based on the ability of a discriminator to determine whether the sample is generated by the generator model or is a ground truth sample.
Process 400 determines (415) a secondary loss based on a secondary model. Secondary models can be pre-trained on a large corpus of inputs to compute an output (such as, but not limited to, a classification, predicted value, etc.). Secondary losses in accordance with many embodiments of the invention can be computed to determine the accuracy of the secondary model's outputs versus the expected output. In some embodiments, secondary losses can reflect a secondary model's accuracy and/or confidence in labeling an input correctly.
Process 400 computes (420) an aggregate error. Aggregate errors in accordance with some embodiments of the invention can include weighted averages of a discriminator loss and one or more secondary losses. In certain embodiments, aggregate errors can weight different losses based on a variety of factors including (but not limited to) a relative size of each loss, a desired effect for the discriminator and/or the secondary loss model, etc. In some embodiments, computation engines can change the weightings of the different losses between different epochs of training (e.g., in an alternating fashion), allowing the model to emphasize different aspects of the training in different epochs.
Process 400 updates (425) the generator model based on the computed aggregate error. In many embodiments, computed aggregate errors are back propagated through the generator model, allowing the generator to update the weights of the model to not only generate more realistic (or true) samples, but also to be able to generate specific types or classes of samples. Processes for training a conditional generator model in accordance with a variety of embodiments of the invention can be performed iteratively until the model converges. The process 400 then ends.
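A hedged sketch of one such iteration follows: a discriminator update, then a generator update driven by the aggregate error. It reuses the networks, optimizers, and `aggregate_error` helper from the earlier sketches; freezing the pre-trained classifier reflects the embodiment described above in which the classification model is not modified during training of the conditional generator.

```python
# Hedged sketch of the iterative process. G, D, C, opt_g, bce, ce, latent_dim,
# n_classes and aggregate_error come from the earlier sketches.
for p in C.parameters():
    p.requires_grad_(False)        # pre-trained classifier is kept frozen

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def training_iteration(real_batch, epoch):
    batch_size = real_batch.size(0)
    labels = torch.randint(0, n_classes, (batch_size,))
    z = torch.cat([torch.randn(batch_size, latent_dim),
                   F.one_hot(labels, n_classes).float()], dim=1)
    fake = G(z)
    # Discriminator loss: distinguish ground truth from generated samples.
    d_loss = (bce(D(real_batch), torch.ones(batch_size, 1)) +
              bce(D(fake.detach()), torch.zeros(batch_size, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # (415) Secondary loss, (420) aggregate error, (425) generator update.
    g_adv = bce(D(fake), torch.ones(batch_size, 1))
    g_sec = ce(C(fake), labels)
    g_loss = aggregate_error(g_adv, [g_sec], epoch)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```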
Although a specific example of a process for training a conditional generator is illustrated in FIG. 4, any of a variety of processes can be utilized to train conditional generators as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
In many embodiments, secondary losses can include losses for any of a variety of models that can be used to refine the outputs of a generator. Secondary losses can include (but are not limited to) losses for binary classifiers, multi-class classifiers, and/or regressors. Similar methods can be applied to any GAN framework regardless of the model structures and/or loss functions. In a number of embodiments, different secondary loss models can be used for different iterations, and systems in accordance with some embodiments of the invention can include multiple different secondary loss models, as shown in the sketch below. Details regarding examples of different secondary structures are also described below.
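For illustration only, the following sketch alternates between two secondary loss models across iterations. C is the classifier from the earlier sketch; C2 is a hypothetical second secondary model introduced here purely for illustration.

```python
# Illustration of using a different secondary loss model on different
# iterations. C2 is an assumed second secondary model, not a recited element.
C2 = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                   nn.Linear(128, n_classes))

def secondary_loss(iteration, fake, labels):
    model = C if iteration % 2 == 0 else C2   # alternate models per iteration
    return ce(model(fake), labels)
```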
In a number of embodiments, secondary losses for two-class problems are investigated. In some embodiments, classifiers can include a binary classifier with a binary cross-entropy loss function.
Binary Classifier: The notation used below is as follows: G, D, and C are the generator, discriminator, and classifier functions, $V(G,D)$ is the adversarial objective of the underlying GAN, z is a latent variable, $p_{Z_1}$ and $p_{Z_2}$ are the distributions over the two partitions of the latent space corresponding to the two classes, and $p_{X_1}$ and $p_{X_2}$ are the distributions of the generated samples $x_1 = G(z_1)$ and $x_2 = G(z_2)$ for the first and second classes, respectively. $\mathcal{L}_{ce}(C)$ denotes the binary cross-entropy loss of the classifier.

For a fixed Generator and Discriminator, the optimal Classifier is

$$C^*_{G,D}(x) = \frac{p_{X_1}(x)}{p_{X_1}(x) + p_{X_2}(x)} \quad (1)$$

wherein $C^*_{G,D}$ is the optimal classifier, and $p_{X_1}$ and $p_{X_2}$ are the distributions of the generated samples for the first and second classes, respectively.

The objective function for the model is given by:

$$O(G,D,C) = V(G,D) + \mathcal{L}_{ce}(C) \quad (2)$$

This can be rewritten as

$$O(G,D,C) = V(G,D) - \mathbb{E}_{z\sim p_{Z_1}}[\log C(G(z))] - \mathbb{E}_{z\sim p_{Z_2}}[\log(1 - C(G(z)))] \quad (3)$$

which is given by

$$O(G,D,C) = V(G,D) - \left\{\int p_{Z_1}(z)\log C(G(z))\,dz + \int p_{Z_2}(z)\log(1 - C(G(z)))\,dz\right\} \quad (4)$$

Considering $G(z_1) = x_1$ and $G(z_2) = x_2$,

$$O(G,D,C) = V(G,D) - \left\{\int p_{X_1}(x)\log C(x)\,dx + \int p_{X_2}(x)\log(1 - C(x))\,dx\right\} \quad (5)$$

The function $f \mapsto m\log(f) + n\log(1-f)$ reaches its maximum at $f = \frac{m}{m+n}$ for any $(m,n) \in \mathbb{R}^2 \setminus \{(0,0)\}$, which gives the optimal classifier of equation 1.

The maximum value for $\mathcal{L}_{ce}(C^*_{G,D})$ is $\log(4)$ and is achieved if and only if $p_{X_1} = p_{X_2}$, in which case $C^*_{G,D}(x) = \tfrac{1}{2}$. Substituting into

$$-\mathcal{L}_{ce}(C) = \mathbb{E}_{x\sim p_{X_1}}[\log C(x)] + \mathbb{E}_{x\sim p_{X_2}}[\log(1 - C(x))] \quad (6)$$

results in

$$\mathcal{L}_{ce}(C^*_{G,D}) = -\log(1/2) - \log(1/2) = \log(4) \quad (7)$$

To show that this is the maximum value, from equation 5, the loss of the optimal classifier is

$$\mathcal{L}_{ce}(C^*_{G,D}) = -\int p_{X_1}(x)\log\frac{p_{X_1}(x)}{p_{X_1}(x)+p_{X_2}(x)}\,dx - \int p_{X_2}(x)\log\frac{p_{X_2}(x)}{p_{X_1}(x)+p_{X_2}(x)}\,dx \quad (8)$$

which is equal to

$$\mathcal{L}_{ce}(C^*_{G,D}) = \log(4) - \int p_{X_1}(x)\log\frac{2\,p_{X_1}(x)}{p_{X_1}(x)+p_{X_2}(x)}\,dx - \int p_{X_2}(x)\log\frac{2\,p_{X_2}(x)}{p_{X_1}(x)+p_{X_2}(x)}\,dx \quad (9)$$

which results in

$$\mathcal{L}_{ce}(C^*_{G,D}) = \log(4) - KL\!\left(p_{X_1}\,\Big\|\,\frac{p_{X_1}+p_{X_2}}{2}\right) - KL\!\left(p_{X_2}\,\Big\|\,\frac{p_{X_1}+p_{X_2}}{2}\right) \quad (10)$$

where KL is the Kullback-Leibler divergence, which is always positive or equal to zero.

Minimizing the binary cross-entropy loss function $\mathcal{L}_{ce}$ for the classifier C increases the Jensen-Shannon divergence between $p_{X_1}$ and $p_{X_2}$, which is defined as

$$JSD(p_{X_1}\|p_{X_2}) = \frac{1}{2}KL\!\left(p_{X_1}\,\Big\|\,\frac{p_{X_1}+p_{X_2}}{2}\right) + \frac{1}{2}KL\!\left(p_{X_2}\,\Big\|\,\frac{p_{X_1}+p_{X_2}}{2}\right) \quad (11)$$

Considering equations 10 and 11, it gives

$$\mathcal{L}_{ce}(C^*_{G,D}) = \log(4) - 2\,JSD(p_{X_1}\|p_{X_2}) \quad (12)$$

Since the JSD term enters with a negative sign, minimizing $\mathcal{L}_{ce}$ is equal to maximizing $JSD(p_{X_1}\|p_{X_2})$.
As shown, placing the classifier C in parallel with the discriminator and adding its loss value to the generative framework in accordance with a number of embodiments of the invention can push the generator to increase the distance between samples drawn from one class and samples drawn from the other class. Processes in accordance with many embodiments of the invention can increase the Jensen-Shannon divergence (JSD) between classes generated by the deep generator so that the generator can produce samples drawn from a desired class. For example, in the male/female face scenario, one partition of the Z space can be used to generate male samples and another partition to generate female samples.
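The identity of equation 12 can be verified numerically for discrete distributions. The following sketch (numpy, with arbitrary example distributions chosen for illustration) checks that the binary cross-entropy of the optimal classifier equals log(4) − 2·JSD.

```python
# Numerical check of equation 12 on small discrete distributions.
import numpy as np

p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.1, 0.3, 0.6])
m = 0.5 * (p1 + p2)

kl = lambda p, q: np.sum(p * np.log(p / q))
jsd = 0.5 * kl(p1, m) + 0.5 * kl(p2, m)

c_star = p1 / (p1 + p2)                        # optimal classifier, equation 1
l_ce = -np.sum(p1 * np.log(c_star)) - np.sum(p2 * np.log(1 - c_star))

assert np.isclose(l_ce, np.log(4) - 2 * jsd)   # equation 12 holds
```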
In a number of embodiments, secondary losses for multi-class problems are investigated. In such cases, the classifier can be a multi-class classifier with a categorical cross-entropy loss function.

The notation used below is as follows: N is the number of classes, $C(c)$ is the c′th output of the classifier, $p_{Z_c}$ is the distribution over the partition of the latent space corresponding to class c, $p_{X_c}$ is the distribution of the generated samples for class c, and $\mathcal{L}_{cce}(C)$ denotes the categorical cross-entropy loss of the classifier.

In the multiple classes case, the classifier C has N outputs, where N is the number of the classes. In this approach, each output of the classifier can correspond to one class. For a fixed Generator and Discriminator, the optimal output for class c (the c′th output) is:

$$C^*_{G,D}(c)(x) = \frac{p_{X_c}(x)}{\sum_{i=1}^{N} p_{X_i}(x)} \quad (13)$$

Considering just one of the outputs of the classifier, the categorical cross-entropy can be reduced to a binary cross-entropy given by

$$\mathcal{L}_{ce}(C(c)) = -\mathbb{E}_{z\sim p_{Z_c}}[\log C(c)(G(z))] - \mathbb{E}_{z\sim p_{Z_{\bar c}}}[\log(1 - C(c)(G(z)))] \quad (14)$$

wherein $p_{Z_{\bar c}}$ is the distribution over the latent partitions of all classes other than c. This is equal to

$$\mathcal{L}_{ce}(C(c)) = -\int p_{Z_c}(z)\log C(c)(G(z))\,dz - \int p_{Z_{\bar c}}(z)\log(1 - C(c)(G(z)))\,dz \quad (15)$$

By considering $G(z_i) = x_i$,

$$\mathcal{L}_{ce}(C(c)) = -\int p_{X_c}(x)\log C(c)(x)\,dx - \int p_{X_{\bar c}}(x)\log(1 - C(c)(x))\,dx \quad (16)$$

wherein $p_{X_{\bar c}} = \sum_{i\neq c} p_{X_i}$. The function $f \mapsto m\log(f) + n\log(1-f)$ gets its maximum at $f = \frac{m}{m+n}$ for $(m,n) \in \mathbb{R}^2 \setminus \{(0,0)\}$, which gives the optimal output of equation 13. The maximum value for $\mathcal{L}_{cce}(C^*_{G,D})$ is $N\log(N)$ and is achieved if and only if $p_{X_1} = p_{X_2} = \ldots = p_{X_N}$.

From equation 13, the categorical cross-entropy of the optimal classifier is

$$\mathcal{L}_{cce}(C^*_{G,D}) = -\sum_{c=1}^{N}\int p_{X_c}(x)\log\frac{p_{X_c}(x)}{\sum_{i=1}^{N} p_{X_i}(x)}\,dx \quad (17)$$

Now consider $p_{\bar X}$, the mixture of the generated distributions,

$$p_{\bar X}(x) = \frac{1}{N}\sum_{i=1}^{N} p_{X_i}(x) \quad (18)$$

Equation 17 can be rewritten as

$$\mathcal{L}_{cce}(C^*_{G,D}) = N\log(N) - \sum_{c=1}^{N} KL\left(p_{X_c}\,\|\,p_{\bar X}\right) \quad (19)$$

where KL is the Kullback-Leibler divergence, which is always positive or equal to zero.

It can be shown that minimizing $\mathcal{L}_{cce}$ increases the Jensen-Shannon divergence between $p_{X_1}, \ldots, p_{X_N}$. The sum of the KL terms can be rewritten as

$$\sum_{c=1}^{N} KL(p_{X_c}\|p_{\bar X}) = \sum_{c=1}^{N}\int p_{X_c}(x)\log p_{X_c}(x)\,dx - \sum_{c=1}^{N}\int p_{X_c}(x)\log p_{\bar X}(x)\,dx \quad (20)$$

which is equal to

$$\sum_{c=1}^{N} KL(p_{X_c}\|p_{\bar X}) = N\,H(p_{\bar X}) - \sum_{c=1}^{N} H(p_{X_c}) \quad (21)$$

This equation can be rewritten as

$$\mathcal{L}_{cce}(C^*_{G,D}) = N\log(N) - N\,H(p_{\bar X}) + \sum_{c=1}^{N} H(p_{X_c}) \quad (22)$$

which equals

$$\mathcal{L}_{cce}(C^*_{G,D}) = N\log(N) - N\left[H(p_{\bar X}) - \frac{1}{N}\sum_{c=1}^{N} H(p_{X_c})\right] \quad (23)$$

wherein H(p) is the Shannon entropy of the distribution p. The Jensen-Shannon divergence between N distributions $p_{X_1},\ldots,p_{X_N}$ with uniform weights $\frac{1}{N}$ is given by

$$JSD(p_{X_1},\ldots,p_{X_N}) = H\!\left(\frac{1}{N}\sum_{c=1}^{N} p_{X_c}\right) - \frac{1}{N}\sum_{c=1}^{N} H(p_{X_c}) \quad (24)$$

From equations 23 and 24,

$$\mathcal{L}_{cce}(C^*_{G,D}) = N\log(N) - N\,JSD(p_{X_1},\ldots,p_{X_N}) \quad (25)$$

Accordingly, minimizing $\mathcal{L}_{cce}$ increases the JSD term, pushing the generated class distributions apart.
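Equation 25 can be checked numerically in the same fashion. The sketch below uses three arbitrary discrete distributions (illustrative values) and verifies that the categorical cross-entropy of the optimal outputs equals N·log(N) − N·JSD.

```python
# Numerical check of equation 25 for N = 3 discrete distributions.
import numpy as np

ps = [np.array([0.6, 0.3, 0.1]),
      np.array([0.2, 0.5, 0.3]),
      np.array([0.1, 0.1, 0.8])]
N = len(ps)
mix = sum(ps) / N                                        # mixture of equation 18

entropy = lambda p: -np.sum(p * np.log(p))
jsd_n = entropy(mix) - sum(entropy(p) for p in ps) / N   # equation 24

total = sum(ps)
l_cce = -sum(np.sum(p * np.log(p / total)) for p in ps)  # equations 13 and 17

assert np.isclose(l_cce, N * np.log(N) - N * jsd_n)      # equation 25 holds
```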
In some embodiments, secondary losses can include losses for a regression network. A new loss function in accordance with a variety of embodiments of the invention can be used in conjunction with a regression network. In numerous embodiments, the regression error can be back-propagated through the generator, allowing the generator to be trained while constraining the generated samples to any continuous aspect of the original database. For example, in a face generation application, given the right latent sequence, a generator can be trained to create faces with particular landmarks.

The following loss function is introduced for the regression network:

$$\mathcal{L}_R = \iint dp_z(z)\left(-\log\big(1 - (y - R(G(z)))\big)\right)dz \quad (26)$$

wherein z is the latent space variable, $dp_z(z)$ is the distribution of an infinitesimal partition of the latent space, y is the target variable (ground truth), R is the regression function, and G is the generator function. For the loss function in equation 26, the optimal regressor is

$$R^*(x) = y - 1 + \frac{p_x(x)}{c} \quad (27)$$

wherein $p_x(x)$ is the distribution of the generator's output, c is a post-integration constant, and y is the target function.

Considering the inner integration of equation 26 and by replacing G(z) = x, the extremum of the loss function with respect to R is found by setting the variational derivative of the integrand to zero,

$$-\frac{p_x(x)}{1 - (y - R(x))} + c = 0 \quad (28)$$

which can be written as

$$1 - (y - R(x)) = \frac{p_x(x)}{c} \quad (29)$$

this results in

$$R^*(x) = y - 1 + \frac{p_x(x)}{c} \quad (30)$$

Minimizing the loss function in equation 26 decreases the entropy of the generator's output. By replacing equation 27 in 26,

$$\mathcal{L}_R = \int p_x(x)\left(-\log\frac{p_x(x)}{c}\right)dx \quad (31)$$

which can be rewritten as

$$\mathcal{L}_R = -\int p_x(x)\log(p_x(x))\,dx + \log(c) = H(p_x(x)) + \log(c) \quad (32)$$

wherein H is the Shannon entropy. Minimizing $\mathcal{L}_R$ results in decreasing $H(p_x(x))$.

Adding the regressor to the model decreases the entropy of the generated samples. This is to be expected, since the idea is to constrain the output of the generator to obey particular criteria. For any two sets of samples and their corresponding targets ($y_1$ and $y_2$), the loss function in equation 26 increases the Jensen-Shannon divergence (JSD) between the generated samples for these two sets. Consider $z_1$ and $z_2$ drawn from two partitions of the latent space corresponding to two sets of samples with targets $y_1$ and $y_2$. In this case, the loss function in equation 26 is given by:

$$\mathcal{L}_R = \iint dp_{z_1}(z)\left(-\log\big(1-(y_1 - R(G(z)))\big)\right)dz + \iint dp_{z_2}(z)\left(-\log\big(1-(y_2 - R(G(z)))\big)\right)dz \quad (33)$$

Considering $G(z_1) = x_1$, $G(z_2) = x_2$, $c_1 = 1 - y_1$ and $c_2 = 1 - y_2$, equation 33 simplifies to

$$\mathcal{L}_R = -\int p_{x_1}(x)\log(c_1 + R(x))\,dx - \int p_{x_2}(x)\log(c_2 + R(x))\,dx \quad (34)$$

To find the optimum R(x), the derivative of the integrand is set to zero, given by

$$\frac{p_{x_1}(x)}{c_1 + R(x)} + \frac{p_{x_2}(x)}{c_2 + R(x)} = 0 \quad (35)$$

which results in

$$R^*(x) = -\frac{c_2\,p_{x_1}(x) + c_1\,p_{x_2}(x)}{p_{x_1}(x) + p_{x_2}(x)} \quad (36)$$

By replacing equation 36 in equation 34, it simplifies to

$$\mathcal{L}_R = -\int p_{x_1}(x)\log\frac{p_{x_1}(x)(c_1 - c_2)}{p_{x_1}(x)+p_{x_2}(x)}\,dx - \int p_{x_2}(x)\log\frac{p_{x_2}(x)(c_2 - c_1)}{p_{x_1}(x)+p_{x_2}(x)}\,dx \quad (37)$$

which can be rewritten as

$$\mathcal{L}_R = -\log(c_1 - c_2) - \log(c_2 - c_1) - \int p_{x_1}(x)\log\frac{p_{x_1}(x)}{p_{x_1}(x)+p_{x_2}(x)}\,dx - \int p_{x_2}(x)\log\frac{p_{x_2}(x)}{p_{x_1}(x)+p_{x_2}(x)}\,dx \quad (38)$$

which equals

$$\mathcal{L}_R = -\log(c_1 - c_2) - \log(c_2 - c_1) + \log(4) - 2\,JSD(p_{x_1}\|p_{x_2}) \quad (39)$$

Since the first terms are constants with respect to the generator, minimizing $\mathcal{L}_R$ increases $JSD(p_{x_1}\|p_{x_2})$, the Jensen-Shannon divergence between the two sets of generated samples.
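A hedged sketch of the regression secondary loss of equation 26 follows; the regression network R, its layer sizes, and the clamp that keeps the logarithm's argument positive are illustrative assumptions, not recited elements of the disclosure.

```python
# Sketch of the regression secondary loss -log(1 - (y - R(G(z)))) of
# equation 26, back-propagated through the generator via `fake`.
R = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

def regression_loss(fake, target):
    """target: continuous ground-truth aspect (e.g., a landmark coordinate)."""
    residual = target - R(fake)
    # Clamp guards the log's domain; an implementation choice, not from the source.
    return -torch.log(torch.clamp(1.0 - residual, min=1e-6)).mean()
```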
Although specific examples of loss functions are described above, one skilled in the art will recognize that other loss functions can be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
Sample outputs for a process that implements a binary classifier in accordance with an embodiment of the invention are illustrated in FIG. 5.
Sample outputs for a process that implements a multi-class model are illustrated in FIG. 6.
Sample outputs for a process that implements a regression model for a secondary loss in accordance with an embodiment of the invention are illustrated in FIG. 7.
Outputs generated by a conditional generator in accordance with numerous embodiments of the invention can be used to train a new model. Generated outputs in accordance with some embodiments of the invention can include (but are not limited to) images, text, labels, and other forms of data. In numerous embodiments, the new model is trained solely on generated outputs, and does not include any data from a ground truth database of real images. By using such generated images, sensitive information from a ground truth database (e.g., facial images, health information, etc.) can be protected and excluded from the training dataset, while still allowing new models to be trained on a realistic distribution of data.
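The following sketch illustrates this use case under the assumptions of the earlier sketches: a labeled synthetic dataset is drawn from the trained generator G, and a new classifier is trained without ever accessing ground truth samples. The downstream model architecture and training schedule are illustrative.

```python
# Illustrative sketch: train a new model purely on generated, labeled samples.
# Reuses G, latent_dim, data_dim, n_classes, F and nn from earlier sketches.
def synthesize_dataset(num_samples=10000):
    labels = torch.randint(0, n_classes, (num_samples,))
    z = torch.cat([torch.randn(num_samples, latent_dim),
                   F.one_hot(labels, n_classes).float()], dim=1)
    with torch.no_grad():                 # generation only; no gradients needed
        samples = G(z)
    return samples, labels

new_model = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                          nn.Linear(128, n_classes))
opt = torch.optim.Adam(new_model.parameters(), lr=1e-3)
samples, labels = synthesize_dataset()
for _ in range(5):                        # a few passes over the synthetic data
    loss = F.cross_entropy(new_model(samples), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```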
Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention may be practiced otherwise than specifically described. Thus embodiments of the present invention should be considered in all respects as illustrative and not restrictive.
The present invention claims priority to U.S. Provisional Patent Application Ser. No. 62/686,472 entitled “Generative adversarial network”, filed Jun. 18, 2018. The disclosure of U.S. Provisional Patent Application Ser. No. 62/686,472 is herein incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
10152970 | Olabiyi | Dec 2018 | B1 |
10650306 | Kumar | May 2020 | B1 |
10783875 | Hosseini-Asl | Sep 2020 | B2 |
11048999 | Miyato | Jun 2021 | B2 |
11106182 | Hosseini-Asl | Aug 2021 | B2 |
11188783 | Cricri | Nov 2021 | B2 |
11250329 | Karras | Feb 2022 | B2 |
11335118 | Kaneko | May 2022 | B2 |
11501438 | Xu | Nov 2022 | B2 |
20180314716 | Kim | Nov 2018 | A1 |
20190046068 | Ceccaldi | Feb 2019 | A1 |
20190130903 | Sriram | May 2019 | A1 |
20190147320 | Mattyus | May 2019 | A1 |
20190147333 | Kallur Palli Kumar | May 2019 | A1 |
20190197368 | Madani | Jun 2019 | A1 |
20190214135 | Wu | Jul 2019 | A1 |
20190236759 | Lai | Aug 2019 | A1 |
20190244681 | Gurcan | Aug 2019 | A1 |
20190279075 | Liu | Sep 2019 | A1 |
20190295302 | Fu | Sep 2019 | A1 |
20190333623 | Hibbard | Oct 2019 | A1 |
20190355103 | Baek | Nov 2019 | A1 |
20190385346 | Fisher | Dec 2019 | A1 |
20200134473 | Miyato | Apr 2020 | A1 |
20200210842 | Baker | Jul 2020 | A1 |
20200294201 | Planche | Sep 2020 | A1 |
20210249142 | Lau | Aug 2021 | A1 |
20210287073 | Miyato | Sep 2021 | A1 |
Entry |
---|
Miyato and Koyama, “cGANs with Projection Discriminator” Feb. 15, 2018, arXiv: 1802.05637v1, pp. 1-21. (Year: 2018). |
Chrysos et al., “Robust Conditional Generative Adversarial Networks” May 22, 2018, arXiv: 1805.08657v1, pp. 1-15. (Year: 2018). |
Bazrafkan et al., “Versatile Auxiliary Classifier + Generative Adversarial Network (VAC+GAN)” May 1, 2018, arXiv: 1805.00316v1, pp. 1-10. (Year: 2018). |
Bazrafkan et al., “Versatile Auxiliary Regressor with Generative Adversarial network (VAR+GAN)” May 28, 2018, arXiv: 1805.10864v1, pp. 1-14. (Year: 2018). |
Yang et al., “Unsupervised Text Style Transfer using Language Models as Discriminators” May 31, 2018, arXiv: 1805.11749v2, pp. 1-12. (Year: 2018). |
Ghafoorian et al., “EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection” Jun. 14, 2018, arXiv: 1806.05525v1, pp. 1-14. (Year: 2018). |
Zhu et al., “Toward Multimodal Image-to-Image Translation” May 2, 2018, arXiv: 1711.11586v3, pp. 1-12. (Year: 2018). |
Li et al., “Facial Image Attributes Transformation via Conditional Recycle Generative Adversarial Networks” May 11, 2018, pp. 511-521. (Year: 2018). |
Di et al., “GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks” Apr. 25, 2018, arXiv: 1710.00962v2. (Year: 2018). |
Jaiswal et al., “Bidirectional Conditional Generative Adversarial Networks” Mar. 15, 2018, arXiv: 1711.07461v2, pp. 1-17. (Year: 2018). |
Russo et al., “From source to target and back: Symmetric Bi-Directional Adaptive GAN” Nov. 29, 2017, arXiv: 1705.08824v2. (Year: 2017). |
Wang et al., “Every Smile is Unique: Landmark-Guided Diverse Smile Generation” Mar. 28, 2018, arXiv: 1802.01873v3. (Year: 2018). |
Belghazi et al., “Mutual Information Neural Estimation” Jun. 7, 2018, arXiv: 1801.04062v4. (Year: 2018). |
Juefei-Xu et al., “Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking” Apr. 17, 2017. (Year: 2017). |
Lin et al., “ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing” Mar. 5, 2018, arXiv: 1803.01837v1, pp. 1-13. (Year: 2018). |
Bazrafkan et al., “Face Synthesis with Landmark Points from Generative Adversarial Networks and Inverse Latent Space Mapping” Feb. 1, 2018, arXiv: 1802.00390, pp. 1-5. (Year: 2018). |
Huang et al., “Multimodal Unsupervised Image-to-Image Translation” Apr. 12, 2018, arXiv: 1804.04732v1, pp. 1-22. (Year: 2018). |
Wang et al., “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs” Nov. 30, 2017, arXiv: 1711.11585v1, pp. 1-13. (Year: 2017). |
Hwang et al., “Learning Beyond Human Expertise with Generative Models for Dental Restorations” Mar. 30, 2018, arXiv: 1804.00064v1, pp. 1-18. (Year: 2018). |
Johnson et al., “Composite Functional Gradient Learning of Generative Adversarial Models” Jun. 8, 2018, arXiv: 1801.06309v2, pp. 1-19. (Year: 2018). |
Lee et al., “Stochastic Adversarial Video Prediction” Apr. 4, 2018, arXiv: 1804.01523v1, pp. 1-26. (Year: 2018). |
Makhzani, Alireza “Implicit Autoencoders” May 24, 2018, arXiv: 1805.09804v1, pp. 1-11. (Year: 2018). |
McDermott et al., “Semi-supervised Biomedical Translation with Cycle Wasserstein Regression GANs” Apr. 26, 2018, pp. 1-8. (Year: 2018). |
Yoon et al., “RadialGAN: Leveraging multiple datasets to improve target-specific predictive models using Generative Adversarial Networks” Jun. 7, 2018, arXiv: 1802.06403v2, pp. 1-9. (Year: 2018). |
Wang et al., “High-Quality Facial Photo-Sketch Synthesis using Multi-Adversarial Networks” Mar. 3, 2018, arXiv: 1710.10182v2, pp. 1-8. (Year: 2018). |
Zhang et al., “Decoupled Learning for Conditional Adversarial Networks” Jan. 21, 2018, arXiv: 1801.06790v1, pp. 1-9. (Year: 2018). |
Lu et al., “Conditional CycleGAN for Attribute Guided Face Image Generation” May 28, 2017, arXiv: 1705.09966v1, pp. 1-9. (Year: 2017). |
Deng et al., “UV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face Recognition” Dec. 13, 2017, arxiv: 1712.04695v1, pp. 1-10. (Year: 2017). |
Hosseini-Asl et al., “A Multi-Discriminator CycleGAN for Unsupervised Non-Parallel Speech Domain Adaptation” May 14, 2018, arXiv: 1804.00522v3, pp. 1-7. (Year: 2018). |
Zhao et al., “Modular Generative Adversarial Networks” Apr. 10, 2018, arXiv: 1804.03343v1, pp. 1-18. (Year: 2018). |
“Face Detection using Haar Cascades”, OpenCV: Open Source Computer Vision, 2018, 3 pgs. |
Arjovsky et al., “Wasserstein Generative Adversarial Networks”, Proceedings of the 34th International Conference on Machine Learning, 2017, 10 pgs. |
Asthana et al., “Incremental Face Alignment in the Wild”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2014, 8 pgs. |
Bazrafkan et al., “Versatile Auxiliary Classifier + Generative Adversarial Network (VAC+GAN): Training Conditional Generators”, arXiv:1805.00316v1 [eess.IV], May 1, 2018, 10 pgs. |
Bergstra et al., “Theano: Deep Learning on GPUs with Python”, NIPS 2011, BigLearning Workshop, Granada, Spain, 2011, 4 pgs. |
Berthelot et al., “BEGAN: Boundary Equilibrium Generative Adversarial Networks”, arXiv: 1703.10717v1 [cs.LG], Mar. 31, 2017, 9 pgs. |
Chongxuan et al., “Triple Generative Adversarial Nets”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 11 pgs. |
Clevert et al., “Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)”, arXiv: 1511.07289v1 [cs.LG], Nov. 23, 2015, 14 pgs. |
Cristinacce et al., “Feature Detection and Tracking with Constrained Local Models”, Proceedings of the British Machine Conference, Edinburgh, Scotland, Sep. 4-7, 2006, 10 pgs. |
Dieleman et al., “Lasagne: First release”, Zenodo, Software, Aug. 13, 2015, URL: http://doi.org/10.5281/zenodo.27878, 6 pgs. |
Ekbatani et al., “Synthetic Data Generation for Deep Learning in Counting Pedestrians”, DOI: 10.5220/0006119203180323, In Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2017), pp. 318-323. |
Frans, “Variational Autoencoders Explained”, kvfrans.com, Aug. 6, 2016, Retrieved from: http://kvfrans.com/variational-autoencoders-explained/, 6 pgs. |
Goodfellow et al., “Generative Adversarial Nets”, Advances in Neural Information Processing Systems, 2014, 9 pgs. |
Kingma et al., “Auto-Encoding Variational Bayes”, arXiv:1312.6114v1 [stat.ML], Dec. 20, 2013, 9 pgs. |
Krizhevsky, “Learning Multiple Layers of Features from Tiny Images”, Apr. 8, 2009, 60 pgs. |
Lemley et al., “Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision”, IEEE Consumer Electronics Magazine, vol. 6, No. 2, Apr. 2017, 18 pgs. |
Liu et al., “Deep Learning Face Attributes in the Wild”, Proceedings of the IEEE International Conference on Computer Vision (ICCV), Dec. 7-13, 2015, pp. 3730-3738. |
Mirza et al., “Conditional Generative Adversarial Nets”, arXiv:1411.1784v1 [cs.LG], Nov. 6, 2014, 7 pgs. |
Odena et al., “Conditional Image Synthesis with Auxiliary Classifier GANs”, arXiv: 1610.09585v1 [stat.ML], Oct. 30, 2016, 14 pgs. |
Oord et al., “Pixel Recurrent Neural Networks”, arXiv: 1601.06759v1 [cs.CV], Jan. 25, 2016, 10 pgs. |
Radford et al., “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”, arXiv: 1511.06434v1 [cs.LG], Nov. 19, 2015, 15 pgs. |
Rezagholizadeh et al., “Semi-supervised Regression with Generative Adversarial Networks for End to End Learning in Autonomous Driving”, Under review as a conference paper at ICLR 2018, 9 pgs. |
Salimans et al., “Improved Techniques for Training GANs”, Proceedings of the 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016, 9 pgs. |
Wang et al., “A Hierarchical Generative Model for Eye Image Synthesis and Eye Gaze Estimation”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Utah, Jun. 18-23, 2018, 9 pgs. |
Wang et al., “A Universal Image Quality Index”, IEEE Signal Processing Letters, vol. 9, No. 3, Mar. 2002, 4 pgs. |
Wang et al., “Image Quality Assessment: From Error Visibility to Structural Similarity”, IEEE Transactions on Image Processing, Apr. 2004, vol. 13, No. 4, pp. 600-612. |
Xiong et al., “Supervised Descent Method and its Applications to Face Alignment”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2013, 8 pgs. |
Zhao et al., “Energy-based Generative Adversarial Network”, arXiv:1609.03126v1 [cs.LG], Sep. 11, 2016, 15 pgs. |
Number | Date | Country | |
---|---|---|---|
20190385019 A1 | Dec 2019 | US |
Number | Date | Country | |
---|---|---|---|
62686472 | Jun 2018 | US |