Hierarchical classification involves mapping input data into a taxonomic hierarchy of output classes. Many hierarchical classification approaches have been proposed. Examples include “flat” approaches, such as the one-against-one and the one-against-all schemes, which ignore the hierarchical structure and instead treat hierarchical classification as a multiclass classification problem that involves learning a binary classifier for each non-root node. Another approach is the “local” classification approach, which involves training a multiclass classifier locally at each node, each parent node, or each level in the hierarchy. A third common approach is the “global” classification approach, which involves training a single global classifier to assign each item to one or more classes in the hierarchy by considering the entire class hierarchy at the same time.
Many automated classification approaches rely on machine learning based classifiers that have been trained to perform specific classification tasks. The accuracy of such classifiers, however, depends on having sufficient labeled data to train reliable classification models. The ability to collect high-quality and stable training data (i.e., the inferred truth) is essential for powering many supervised algorithms. These algorithms are often the foundation for modern business solutions, such as search engine rankings, image recognition, news categorization, and so on.
Hand-annotated training data have been the basis of much machine learning research. In recent years, crowdsourcing has become a common practice for generating training data, enabling researchers to outsource tedious and labor-intensive labeling tasks to workers on crowdsourcing platforms. Crowdsourcing platforms provide large and inexpensive workforces for improved cost control and scalability. However, the unstable quality of the work produced by crowdsourcing workers is the main concern for crowdsourcing adopters.
Recent research shows that the best truth inference algorithm is highly domain-specific, and no single algorithm outperforms the others in most scenarios. Sometimes an intuitive approach, such as an Expectation-Maximization algorithm, can be a practical solution. In the literature, research advances focus on handling task difficulty, worker bias, and worker variance. Specifically, task difficulty describes the degree of ambiguity of a question for which an annotated answer is sought, whereas worker bias and worker variance model the quality of workers to determine how likely a worker is to give a wrong answer, assuming all tasks are equally difficult.
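As a concrete illustration of this framing, the following is a minimal sketch of a one-coin, Expectation-Maximization-style truth inference loop (a simplified Dawid-Skene variant, not necessarily the algorithm used by any particular system); the function name, data layout, and the 0.8 prior on worker accuracy are assumptions:

```python
# Hypothetical sketch: jointly estimate per-worker accuracy and per-task truth
# for binary labels, assuming all tasks are equally difficult.
from collections import defaultdict

def infer_truth(answers, n_iters=20):
    """answers: list of (task_id, worker_id, label) tuples with labels in {0, 1}."""
    workers = {w for _, w, _ in answers}
    accuracy = {w: 0.8 for w in workers}            # assumed prior on worker quality
    truth_prob = {}
    for _ in range(n_iters):
        # E-step: unnormalized P(truth=0), P(truth=1) for each task.
        posterior = defaultdict(lambda: [1.0, 1.0])
        for task, worker, label in answers:
            p = accuracy[worker]
            posterior[task][1] *= p if label == 1 else (1 - p)
            posterior[task][0] *= p if label == 0 else (1 - p)
        truth_prob = {t: p1 / (p0 + p1) for t, (p0, p1) in posterior.items()}
        # M-step: re-estimate each worker's accuracy against the soft truth.
        hits, counts = defaultdict(float), defaultdict(float)
        for task, worker, label in answers:
            hits[worker] += truth_prob[task] if label == 1 else (1 - truth_prob[task])
            counts[worker] += 1
        accuracy = {w: hits[w] / counts[w] for w in workers}
    return truth_prob, accuracy
```

The sketch models worker quality (bias and variance collapse into a single accuracy parameter here) while keeping the equal-difficulty assumption noted above.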
Even though research has unveiled the challenges of crowdsourcing labeling tasks, it is undeniable that cost-effectiveness and scalability make crowdsourcing an attractive approach to generate training data.
This specification describes systems implemented by one or more computers executing one or more computer programs that can classify an item according to a taxonomic hierarchy using one or more machine learning based classifiers and one or more crowdsourcing platforms.
Embodiments of the subject matter described herein include methods, systems, apparatus, and tangible non-transitory carrier media encoded with one or more computer programs for labeling items.
In accordance with particular embodiments, an item record that includes a description of an item is received. Based on one or more machine learning based classifiers, a classification in a hierarchical classification taxonomy is inferred for the item. The hierarchical classification taxonomy includes successive levels of nodes associated with respective class labels and the classification includes an ordered sequence of one or more of the class labels in the hierarchical classification taxonomy. A labeling task is issued over a communications network to a plurality of workers participating in a crowdsourcing system. The labeling task includes evaluating the classification based at least in part on the description of the item and the one or more class labels in the classification. Evaluation decisions are received from the crowdsourcing system. The classification is validated to obtain a validation result, where the validating includes applying at least one consensus criterion to an aggregation of the received evaluation decisions. Data corresponding to the one or more class labels in the classification is routed over a communications network to respective destinations based on the validation result.
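The flow just described can be summarized in pseudocode. The sketch below is only an assumption-laden outline (the classifier, crowdsourcing client, destination objects, and all of their method names are hypothetical), not the claimed implementation:

```python
# Hedged end-to-end sketch: infer a classification path, ask the crowd to
# evaluate it, validate by consensus, and route the labels accordingly.
def label_item(item_record, classifier, crowd, consensus_rule, destinations):
    # 1. Infer a classification path in the hierarchical taxonomy.
    classification = classifier.classify_item(item_record["description"])
    # e.g. ["Electronics", "Cameras", "DSLR Cameras"]

    # 2. Issue a labeling task asking workers to evaluate the full path.
    task = {"description": item_record["description"], "labels": classification}
    decisions = crowd.collect_decisions(task, num_workers=3)   # list of True/False votes

    # 3. Validate by applying a consensus criterion to the aggregated decisions.
    valid = (len(decisions) >= consensus_rule["min_answers"]
             and sum(decisions) >= consensus_rule["min_yes"])

    # 4. Route the class labels to a destination chosen by the validation result.
    target = destinations["training_data"] if valid else destinations["relabel_queue"]
    target.send({"item": item_record, "labels": classification})
    return valid
```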
Particular embodiments of the subject matter described herein include a computer-readable data storage apparatus comprising a memory component storing executable instructions that are operable to be executed by a processor. In accordance with particular embodiments, the memory component includes executable instructions to infer for the item a classification in a hierarchical classification taxonomy comprising successive levels of nodes associated with respective class labels based on one or more machine learning based classifiers, where the classification includes an ordered sequence of one or more of the class labels in the hierarchical classification taxonomy. The memory component further includes executable instructions to issue, over a communications network, a labeling task to a plurality of workers participating in a crowdsourcing system, where the labeling task includes evaluating the classification based at least in part on the description of the item and the one or more class labels in the classification. The memory component further includes executable instructions to receive evaluation decisions regarding the labeling task from the crowdsourcing system. The memory component further includes executable instructions to validate the classification to obtain a validation result, where the executable instructions to validate comprise executable instructions to apply at least one consensus criterion to an aggregation of the received evaluation decisions. The memory component further includes executable instructions to route, over a communications network, data corresponding to the one or more class labels in the classification to respective destinations based on the validation result.
In accordance with particular embodiments, a system includes a communication interface and a processor. The communication interface is arranged to: issue, over a communications network, a labeling task to a plurality of workers participating in a crowdsourcing system, where the labeling task includes evaluating an inferred classification that includes an ordered sequence of one or more class labels in successive levels of a hierarchical classification taxonomy based at least in part on a description of the item and the one or more class labels in the classification; and receive respective evaluation decisions from the crowdsourcing system. The processor is arranged to: validate the classification to obtain a validation result, where the validating comprises applying at least one consensus criterion to an aggregation of the received evaluation decisions; and route, over a communications network, data corresponding to the one or more class labels in the classification to respective destinations based on the validation result.
Other features, aspects, objects, and advantages of the subject matter described in this specification will become apparent from the description, the drawings, and the claims.
In the following description, like reference numbers are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.
The specification describes examples of an effective end-to-end multi-leveled hybrid solution for improving the quality of labeled training data obtained from one or more crowdsourcing platforms. These examples are described in the context of a machine learning based hierarchical classification system that is trained to classify items into a hierarchical classification taxonomy based on labeled training data.
In some examples, each data item is classified along a respective path through one or more levels of the taxonomic hierarchy 10. In some of these examples, an item is classified along a path that includes one respective node from each level in the hierarchy from one or more high-level broad classes, through zero or more progressively narrower classes, down to the leaf node level classes. In other examples, an item is classified along multiple paths through the taxonomic hierarchy 10. In some examples, an item is classified along a partial path or segment of nodes traversing different levels in the taxonomic hierarchy 10. In some of these examples, the path information improves classification performance.
In other examples, a data item is classified at each level in a taxonomic hierarchy 10 independently of the other levels by a respective classifier (e.g., a machine learning classifier, such as a neural network based classifier for learning word embeddings and text classification). In some of these examples, each machine learning model is trained on a respective set of training data (e.g., item description data) that is relevant to the respective level in the taxonomic hierarchy 10.
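For example, per-level classification can be sketched as a chain of independently trained models, one per taxonomy level, whose outputs are concatenated into an ordered classification path; `level_models` below is a hypothetical list of fitted scikit-learn-style text classification pipelines:

```python
# Hedged sketch: one classifier per taxonomy level; each model is assumed to be
# a fitted pipeline (e.g., vectorizer + classifier) exposing predict().
def classify_path(description, level_models):
    path = []
    for model in level_models:                 # level 0 = broadest classes
        label = model.predict([description])[0]
        path.append(label)
    return path                                # e.g. ["Grocery", "Beverages", "Coffee"]
```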
The system is designed to acquire high quality labeled training data through quality control strategies that dynamically and cost-effectively leverage the strengths of both crowdsourced workers and domain experts. In this way, machine learning models are trained on a combination of crowdsourced and expert labels.
In a first operational stage, labels for truth inference are collected cost-effectively from crowdsourcing workers in a way that is designed to reduce the likelihood of receiving answers with high bias and variance. In some examples, instead of asking crowdsourcing workers to evaluate item descriptions against a single node in a taxonomic hierarchy (e.g., a leaf node corresponding to an item type), embodiments of the solution ask workers to evaluate complete or partial classification paths through successive levels in the taxonomic hierarchy. This approach increases the classification context available for evaluating the item description (and potentially other data associated with the item) and, thereby, increases the likelihood of receiving high-quality and stable training data without increasing crowdsourcing costs.
In a second operational stage, when consensus in an aggregation of workers' answers for a particular task is not reached, the task is passed to one or more trained domain experts who are expected to perform labeling tasks with low worker bias and worker variance due to the training and financial incentives they receive. The trained experts are intimately familiar with the item classifications in the taxonomic hierarchy as well as the guidelines for assigning the most appropriate item category label to any given product item. In some examples, domain experts are instructed to mark high-difficulty tasks as “unsolvable” to circumvent ambiguous cases.
In some examples, collaboration between the well-trained domain experts and the crowdsourcing workers is facilitated by an automated integrated data labeling engine (IDLE) to deliver high-quality hand-annotated training data. The IDLE framework streamlines the workflow for generating high-quality training data by automating the process of filtering labeled data (by crowdsourcing) and the process of relabeling filtered data (by in-house domain experts). It also provides an integrated environment for managing training data generation tasks as well as for assessing the quality of classification results that are generated by the IDLE system.
The multi-level worker platform 32 has a unified interface that enables the job requester to submit a job through one or more adapters to various crowdsourcing platforms, such as MTurk and Crowdflower. Furthermore, the job requester can assign difficult labeling jobs to domain experts who sign into their IDLE system account to label data. The multi-level worker platform 32 also includes a uniform function interface for common features, such as worker exclusion and answer aggregation, across various crowdsourcing platforms 36.
The one or more adapters 38 provide respective interfaces through which a job requester can connect to an application programming interface (e.g., the MTurk API) of a supported crowdsourcing platform to (1) launch a job, (2) stop a job, and (3) retrieve results. Adapters enable easy integration with different crowdsourcing platforms without making significant changes to the user experience or the rest of the IDLE system 30.
Answers returned by crowdsourcing workers are not always consistent, and worker quality varies (e.g., master workers vs. non-master workers in MTurk). To address these challenges, the answer aggregation component 42 aggregates the responses received from the workers for a particular task to improve the ability to infer ground truth from the returned answers. In some examples, one or more of the following algorithms are used to aggregate task responses and assess consensus: majority voting, weighted majority voting, and Bayesian voting. In addition, an answer aggregation interface is provided to enable developers of the IDLE system 30 to easily implement customized answer aggregation algorithms. In some examples, a job requester can specify consensus rules in the form of [#answer, #yes] for determining the final answer. In some examples, a rule template defines a consensus criterion in terms of a #yes out of #answer level of consensus among a total of #answer answers. More elaborate answer aggregation strategies may be expressed through a sequence of consensus rules. For instance, a consensus rule [3, 3] followed by a rule [4, 3] collectively instructs the system to first seek unanimous consensus among 3 answers ([3, 3]) and, for questions whose answers fail to meet the first consensus criterion, to solicit an additional answer (#answer=3+1) according to the second [4, 3] consensus criterion. In some examples, more than two consecutive consensus criteria are applied to the workers' evaluation decisions received for the crowdsourcing job.
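A minimal sketch of how such a sequence of [#answer, #yes] rules could be evaluated is shown below; `ask_worker` is a hypothetical callback that returns one worker's yes/no decision, and the symmetric handling of a "no" consensus is an assumption rather than part of the rule format described above:

```python
# Hedged sketch of sequential consensus rules, e.g. [3, 3] followed by [4, 3].
def apply_consensus_rules(ask_worker, rules=((3, 3), (4, 3))):
    answers = []
    for n_answers, n_yes in rules:
        while len(answers) < n_answers:        # solicit only the additional answers needed
            answers.append(ask_worker())       # True = "label matches", False = "does not"
        yes = sum(answers)
        no = len(answers) - yes
        if yes >= n_yes:
            return "valid"
        if no >= n_yes:                        # assumed symmetric consensus on "no"
            return "invalid"
    return "uncertain"                         # escalate, e.g. to domain experts
```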
The worker quality assessment component 40 is configured to assess worker quality. Worker quality varies widely on crowdsourcing platforms, and the fact that this quality is unknown in advance makes assessing it even more important. Examples of the IDLE system 30 are configured to randomly select questions from a curated pool of questions with ground truth answers (called ‘golden tasks’) to estimate worker quality. A variety of different strategies can be used to assess worker quality. In some examples, the IDLE system 30 performs a qualification test that requires workers to pass the golden tasks before performing a job. In some examples, the IDLE system 30 performs a hidden test that mixes the golden tasks with regular job questions and assesses a worker's quality based on the golden tasks after the job is completed. In some examples, a job requester may use either one or both strategies to estimate worker quality.
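The hidden-test variant can be sketched as a post-hoc scoring pass over each worker's answers to the golden tasks; the data layout and the 0.8 accuracy threshold below are assumptions:

```python
# Hedged sketch: score each worker against golden tasks and flag low scorers
# as candidates for worker exclusion.
def score_workers(responses, golden_answers, min_accuracy=0.8):
    """responses: {worker_id: {task_id: answer}}; golden_answers: {task_id: answer}."""
    scores, excluded = {}, set()
    for worker, answered in responses.items():
        golden_hits = [answered[t] == truth
                       for t, truth in golden_answers.items() if t in answered]
        accuracy = sum(golden_hits) / len(golden_hits) if golden_hits else 0.0
        scores[worker] = accuracy
        if accuracy < min_accuracy:
            excluded.add(worker)
    return scores, excluded
```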
The sampling strategy interface 44 enables a job requester to choose among various statistical sampling strategies, and the IDLE system 30 includes a general interface for developers to implement the required sampling strategies. The goal is for the job requester to obtain a sample drawn from a diverse data set. In some examples, the IDLE system 30 includes a number of hierarchical sampling strategies, including data clustering followed by stratified sampling and topic modeling followed by stratified sampling.
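One of the hierarchical sampling strategies named above (data clustering followed by stratified sampling) might look roughly like the following scikit-learn-based sketch; the TF-IDF features, cluster count, and sampling budget are all assumptions:

```python
# Hedged sketch: cluster item descriptions, then sample from each cluster in
# proportion to its size (stratified sampling over clusters).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def stratified_sample(descriptions, n_clusters=10, budget=100, seed=0):
    rng = np.random.default_rng(seed)
    X = TfidfVectorizer().fit_transform(descriptions)
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    sample = []
    for c in range(n_clusters):
        members = np.flatnonzero(clusters == c)
        take = max(1, int(round(budget * len(members) / len(descriptions))))
        sample.extend(rng.choice(members, size=min(take, len(members)), replace=False))
    return sample                               # indices of the sampled items
```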
The job processing interface 46 enables a job requester to launch jobs of various types (e.g., filter jobs 48, re-label jobs 50, and audit jobs 52).
In a filter job 48, a small set of data are sampled from pre-labeled data and sent to one or more crowdsourcing platforms to confirm their labels. In some examples, filter job questions are presented either as yes/no questions (e.g., “Does the given label match this datum?”) or multiple-choice questions (e.g., “Which of the following labels best matches this datum?”). A filter job also may include one or more golden task questions for the purpose of identifying poor-quality workers to exclude from participating in the job. After the workers submit their answers, the results are collected in the answer aggregation component 42, where they are aggregated according to the prescribed technique and the aggregated results are assessed according to one or more consensus criteria, as described above. The results that are associated with high confidence levels are used as new training data for the machine learning model 56. The remaining (filtered-out) data are treated as mislabeled data and become input data for re-label jobs that are handled by domain experts, as described above. It is expected that data that are trivial for crowdsourcing workers quickly pass through and data that are difficult to label are filtered out; hence the name ‘filter’ job. The cost of domain experts is much higher than that of crowdsourcing workers, which is why it is more cost-effective to have the crowdsourcing workforce perform filter jobs on a large number of trivial questions first and leave a small number of more challenging re-label jobs to the domain experts.
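A filter job's question set could be assembled along the following lines; the field names, the yes/no question template, and the golden-task ratio are assumptions for illustration:

```python
# Hedged sketch: turn sampled, pre-labeled items into yes/no questions and mix
# in a few golden tasks so poor-quality workers can be identified later.
import random

def build_filter_job(sampled_items, golden_tasks, golden_ratio=0.1, seed=0):
    questions = [{"id": item["id"],
                  "text": f'Does the label "{item["label"]}" match: {item["description"]}?',
                  "golden": False}
                 for item in sampled_items]
    n_golden = max(1, int(golden_ratio * len(questions)))
    questions += [{"id": g["id"], "text": g["question"], "golden": True}
                  for g in random.Random(seed).sample(golden_tasks, n_golden)]
    random.Random(seed).shuffle(questions)      # hide golden tasks among regular ones
    return questions
```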
After the data passes through the filter job component 48, the IDLE system 30 automatically collects the filtered-out mislabeled data and makes that mislabeled data available to domain experts for relabeling. As explained above, the domain experts are trained to assign correct labels to the mislabeled data. Thus, data that are relabeled by domain experts do not require quality control or truth inference measures before they become training data for the machine learning model 56. That said, there might be some data that even domain experts cannot label; these data are regarded as rejected data and recorded for further analysis.
In some examples, after the filter job and the re-label job are completed, all the sampled data are either identified as new training data for the machine learning model or as rejected data for analysis. After the machine learning model 56 in the classification engine is retrained with the new training data, the model processes the data and updates the product category labels. In some examples, an audit job 52 is performed to assess the accuracy of the retrained machine learning model. As in a filter job, a small set of data is sampled and sent to one or more crowdsourcing platforms 36 to identify correctly labeled data. The answer aggregation component 42 is applied to the crowdsourced answers to identify data with high confidence levels and calculate model accuracy, while the mislabeled data are simply discarded.
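The audit-job accuracy estimate can be sketched as a simple computation over the crowd-validated sample, with low-confidence answers discarded; the field names and confidence threshold are assumptions:

```python
# Hedged sketch: estimate retrained-model accuracy from audited items, keeping
# only answers whose crowd consensus reached a high confidence level.
def audit_accuracy(audited_items, min_confidence=0.9):
    """audited_items: list of dicts with assumed 'confidence' and 'confirmed' fields."""
    confident = [a for a in audited_items if a["confidence"] >= min_confidence]
    if not confident:
        return None                            # nothing reached consensus
    return sum(a["confirmed"] for a in confident) / len(confident)
```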
To maximize the effectiveness of crowdsourcing and minimize the costs, the IDLE system 30 includes a data reporter 54 that includes a data visualization dashboard for administrators and analysts to evaluate the effectiveness of crowdsourcing and the performance of the machine learning algorithms. For example, the data reporter 54 enables an analyst to determine the ratio of filter job questions that need to be handled by a re-label job. In some examples, the data reporter 54 includes a crowdsourcing report and a machine learning model report.
The crowdsourcing report provides an evaluation of the effectiveness and efficiency of crowdsourcing. It is designed to provide insights such as the answer distribution and processing time, and it includes the statistics and results of crowdsourcing jobs. For filter jobs and audit jobs, the statistics include the ratio of YES to NO answers and the job completion time. For re-label jobs, the report displays the relabel rate and the job completion time. To estimate the overall performance of crowdsourcing for each job, the dashboard also shows the ratio of mislabeled data to data with high confidence levels, in addition to the total processing time.
The machine learning report tracks the rate of improvement for the machine learning model. Thus, the machine learning report shows not only the history of accuracy for the model but also the ratio of data processed through crowdsourcing.
In accordance with this process, a training data database component 62 of the IDLE system 30 shown in
Based on one or more machine learning based classifiers, the machine learning component 56 of the IDLE system 30 infers for the item a classification path in a hierarchical classification taxonomy that includes successive levels of nodes associated with respective class labels, where the classification includes an ordered sequence of one or more of the class labels in the hierarchical classification taxonomy.
In addition to inferring a single discrete classification path through a hierarchical classification structure for each item record, examples of the machine learning component 56 also can be trained to classify an item based on one or more record values 72 associated with the item (e.g., a product description) into multiple paths in a hierarchical classification structure (i.e., a multi-label classification).
In some examples, the IDLE system 30 issues an interface specification for presenting the labeling task on workers' respective computing devices and receiving workers' responses to the labeling task (e.g., validation or invalidation responses).
After publishing a job to one or more crowdsourcing platforms 36, the job processing component 46 of the IDLE system 30 receives evaluation decisions from the one or more crowdsourcing systems.
After receiving the evaluation decisions for the crowdsourcing job, the job processing component 46 validates the classification to obtain a validation result, where the validating comprises applying at least one consensus criterion to an aggregation of the received evaluation decisions.
In some examples, the validation result may be one of the following: a valid classification, an invalid classification, and an uncertain classification. In some examples, the validating includes, responsive to failure to satisfy a first consensus criterion, issuing the labeling task to at least one additional worker participating in the crowdsourcing system and receiving a respective evaluation decision from the at least one additional worker. In these examples, a second consensus criterion is applied to an aggregation of the received evaluation decisions, including the evaluation decision received from the at least one additional worker. As mentioned above, in some examples, more than two consecutive consensus criteria are applied to the workers' evaluation decisions received for the crowdsourcing job.
After validating the classification, the job processing component 46 of the IDLE system 30 routes, over a communications network, data corresponding to the one or more class labels in the classification to respective destinations based on the validation result.
A user may interact (e.g., input commands or data) with the computer apparatus 320 using one or more input devices 330 (e.g. one or more keyboards, computer mice, microphones, cameras, joysticks, physical motion sensors, and touch pads). Information may be presented through a graphical user interface (GUI) that is presented to the user on a display monitor 332, which is controlled by a display controller 334. The computer apparatus 320 also may include other input/output hardware (e.g., peripheral output devices, such as speakers and a printer). The computer apparatus 320 connects to other network nodes through a network adapter 336 (also referred to as a “network interface card” or NIC).
A number of program modules may be stored in the system memory 324, including application programming interfaces 338 (APIs), an operating system (OS) 340 (e.g., the Windows® operating system available from Microsoft Corporation of Redmond, Wash. U.S.A.), software applications 341 including one or more software applications programming the computer apparatus 320 to perform one or more of the steps, tasks, operations, or processes of the hierarchical classification systems described herein, drivers 342 (e.g., a GUI driver), network transport protocols 344, and data 346 (e.g., input data, output data, program data, a registry, and configuration settings).
Examples of the subject matter described herein, including the disclosed systems, methods, processes, functional operations, and logic flows, can be implemented in data processing apparatus (e.g., computer hardware and digital electronic circuitry) operable to perform functions by operating on input and generating output. Examples of the subject matter described herein also can be tangibly embodied in software or firmware, as one or more sets of computer instructions encoded on one or more tangible non-transitory carrier media (e.g., a machine readable storage device, substrate, or sequential access memory device) for execution by data processing apparatus.
The details of specific implementations described herein may be specific to particular embodiments of particular inventions and should not be construed as limitations on the scope of any claimed invention. For example, features that are described in connection with separate embodiments may also be incorporated into a single embodiment, and features that are described in connection with a single embodiment may also be implemented in multiple separate embodiments. In addition, the disclosure of steps, tasks, operations, or processes being performed in a particular order does not necessarily require that those steps, tasks, operations, or processes be performed in the particular order; instead, in some cases, one or more of the disclosed steps, tasks, operations, and processes may be performed in a different order or in accordance with a multi-tasking schedule or in parallel.
Other embodiments are within the scope of the claims.