Many applications, such as relevance ranking, intent identification, image classification, and handwriting classification, employ machine learning techniques over manually labeled data. In such applications, which use supervised learning techniques, a first step is to obtain manually labeled data. To this end, human judges are provided with guidelines as to how to label a set of items (these items may be documents, images, queries and so forth, depending on the application).
These guidelines can be anywhere from a few sentences to tens of pages. While detailed guidelines serve to clarify the labeling criteria, in practice, it is often not possible for human judges to assimilate and apply all the guidelines consistently and correctly. The difficulty increases as the guidelines get longer and more complex. Further, most judges need to label a large number of items within a short span of time.
This results in noisy labels, which hinders the performance of the machine learning techniques and directly impacts the businesses that depend on these techniques. It also limits any ability to evaluate and compare against the competition, as these labels are also used during evaluation time.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards a technology by which objective (e.g., binary yes/no) questions are developed (e.g., by experts) and provided to judges for evaluating against data samples to obtain answers. The answers are input to a label assignment mechanism (algorithm) that determines a label for the data sample based upon the answers. The label is then associated with the data sample.
In one aspect, the questions may be arranged in a tree-like structure in which the answer to a question determines whether to ask a subsequent question, or determines which branch to take to ask a subsequent question. The label assignment algorithm may be constructed by performing a depth-first traversal of the tree-like structure.
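By way of illustration only, the following minimal Python sketch shows one way such a question tree may be represented, together with a depth-first traversal that compiles the tree into a flat set of label assignment rules; the node layout, question wording, and function names are illustrative assumptions rather than a prescribed implementation.

```python
# Illustrative sketch only; the node layout, question wording and function
# names are assumptions for exposition, not a prescribed implementation.

class Node:
    def __init__(self, question=None, yes=None, no=None, label=None):
        self.question = question  # question posed to the judge (None at a leaf)
        self.yes = yes            # subtree taken on a "yes" answer
        self.no = no              # subtree taken on a "no" answer
        self.label = label        # label held by a leaf node

def assign_label(node, answers):
    """Follow the branch chosen by each stored answer until a label is reached."""
    while node.label is None:
        node = node.yes if answers[node.question] else node.no
    return node.label

def compile_rules(node, path=()):
    """Depth-first traversal of the tree, emitting one (answers, label) rule
    per root-to-leaf path -- one way to construct a flat label assignment
    algorithm from the tree-like structure."""
    if node.label is not None:
        yield dict(path), node.label
        return
    yield from compile_rules(node.yes, path + ((node.question, True),))
    yield from compile_rules(node.no, path + ((node.question, False),))

# A toy two-question tree: Q2 is asked only when Q1 is answered "yes".
tree = Node("Q1", yes=Node("Q2", yes=Node(label="A"), no=Node(label="B")),
                  no=Node(label="C"))
print(assign_label(tree, {"Q1": True, "Q2": False}))  # -> B
print(list(compile_rules(tree)))                      # three rules, one per leaf
```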
In one aspect, the objective questions may be based upon a set of guidelines. If the guidelines are modified and previous answers to the binary questions are maintained, at least some of the previous answers may be used in re-labeling the samples in view of the modification. For example, questions may be added, deleted and/or changed, while the answers to the unchanged questions remain valid. Also, if a guideline change results in a label change, the label assignment algorithm may be re-run on the previous answers with the new label to re-label the samples.
Other advantages may become apparent from the following detailed description when taken in conjunction with the drawings.
The present invention is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
Various aspects of the technology described herein are generally directed towards removing the need for the judges to work with guidelines, by asking a series of questions with more definite answers. For example, the questions may be framed such that answers are binary, either true or false. At the same time, the questions are generally designed in such a way that they require no intrinsic or new knowledge, instead requiring only common sense. Once the answers are obtained, an automated procedure uses these answers to infer the labels.
It should be understood that any of the examples herein are non-limiting. As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in sample labeling and data processing in general.
Thus, once the set of guidelines 102 for labeling are decided, the guidelines are converted into the set of binary questions 104. This conversion is typically performed by manual operations, although some assistance from automated technology may be used, such as to merge candidate questions from a number of experts into a final set.
In addition, the questions 104 are considered in conjunction with the guidelines to produce a label assignment algorithm 106. In general, and as exemplified below, the label assignment algorithm 106 evaluates the yes/no (or other) answers 108 to the questions 104 with respect to each sample of data (samples 110(1)-110(N)) to be labeled, to come up with a label 112(1)-112(N) for each. Note that the mapping of the set of guidelines 102 to the questions 104 and to the label assignment algorithm 106 is done once for a given set of guidelines.
For every sample 110(1)-110(N) to be labeled, each judge 116 of a set of one or more judges is asked to answer these binary questions 104, providing the set of answers 108 from each judge. In other words, the answers 108 are obtained per sample and per judge.
For each judged sample 110(1)-110(N), the label assignment algorithm 106 assigns a label 112(1)-112(N) based upon the various answers 108 for that sample/judge in an automated post-processing step. A composite label for each sample may be obtained by combining the separate labels from the judges, e.g., by using majority voting. Note that in theory, with a proper question set and correct answers, the label assigned by the algorithm will be the same as one provided by a judge who directly and correctly applied the guidelines.
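For instance, the majority-voting step may be sketched as follows (the helper name and tie handling here are illustrative assumptions):

```python
from collections import Counter

def composite_label(judge_labels):
    """Combine the per-judge labels for one sample by majority vote.
    Ties are broken arbitrarily here; a real deployment might instead
    discard or re-judge tied samples."""
    label, _votes = Counter(judge_labels).most_common(1)[0]
    return label

print(composite_label(["commercial", "commercial", "noncommercial"]))
# -> commercial
```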
By way of example, consider a first scenario in which human labels are required to train a machine learning algorithm, such as to label whether a query has commercial intent. This first scenario is concerned with the articulation of guidelines to enable identification of a dominant label for items that can be ambiguously labeled. For this task, the judges are asked questions that identify whether a given query reflects the intention (for a majority of users) to buy a tangible product, so that commerce-related content can be shown to such users.
To infer whether a query posed to a search engine has commercial intent, large amounts of training data labeled as commercial or noncommercial are typically needed, because of large variations in the search queries. The labeling task is inherently very difficult, and indeed, it is not possible to precisely specify which queries have commercial intent because the same query can have multiple interpretations. For example, consider the query ‘digital camera’. If the user intent is to buy a digital camera, then this query has commercial intent; however, when the intent is to find the history of digital cameras, then the same query has noncommercial intent.
In general, a query has commercial intent if most of the users who submit the query have the intention to buy a tangible product. Examples of tangible products include items such as books, furniture, clothing, jewelry, household goods, vehicles, and so forth; services are not considered to be tangible products. For example, “medical insurance” and “cleaning services” are queries that are not considered commercial in this particular scenario. However, commercial intent does include queries that a user submits when researching a product before buying it, e.g., “digital camera reviews” and “digital camera price comparison” reflect an intention to buy. Intention to buy also means that money will be spent on the product, and thus excludes products that can be obtained for free. For example, a “free ringtones” query does not have commercial intent.
Binary questions corresponding to such a set of guidelines may be used to explicitly force the judges to think of certain situations, namely one where the user may intend to buy a tangible product, and another where the user may not have such intent. The judges thus compare the two situations, and choose the one believed to be more frequent. For example, the following questions may be asked:

Question 1: Can you imagine a situation in which a user issuing this query intends to buy a tangible product?

Question 2: Can you imagine a situation in which a user issuing this query does not intend to buy a tangible product?
Note that Question 2 may be answered either way, even if the answer to Question 1 is Yes. For example, consider the query “chocolates”. The correct answer to Question 1 is “Yes” because a person may be planning to buy chocolates for a gift. The correct answer to Question 2 is also “Yes” because a person may be trying to learn about different types of chocolates, rather than buying chocolates.
Thus a third question (which may be asked contingent only upon both answers being “Yes”) may be posed:

Question 3: Which of the two situations that you imagined is more likely?
As can be readily appreciated, the third question makes a judge consider whether commercial intent is more likely with respect to a query. For example, if the query is for a brand name of a popular product, then most judges will likely consider the query to have commercial intent.
The label assignment algorithm evaluates the answers. A given query is considered commercial if the answer to Question 1 is positive and, whenever the answer to Question 2 is also positive, the answer to Question 3 indicates that the situation imagined for Question 1 is more likely than the one imagined for Question 2. This assignment algorithm can be applied to automatically assign a label indicating whether the query has commercial intent. Note that the approach also allows easy identification of bad judges via inconsistent (invalid) answer combinations; that is, there is a “bad-judge” label among the set of possible labels. A sample is not ultimately given a “bad-judge” label; if, for example, the majority of labels for a particular sample are “bad-judge,” such a sample may instead be discarded or further analyzed. Alternatively, if the “bad-judge” label appears too often across a plurality of judges and a plurality of samples, it is likely that one or more of the questions is confusing the judges.
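By way of a non-limiting sketch, this assignment logic, including a “bad-judge” output for answer combinations assumed to be invalid, may be expressed as follows; exactly which combinations count as invalid is an assumption made for illustration.

```python
def commercial_intent_label(q1, q2, q3=None):
    """q1: a situation can be imagined where the user intends to buy
           a tangible product (Question 1).
    q2: a situation can be imagined where the user does not intend
        to buy (Question 2).
    q3: answered only when q1 and q2 are both "yes"; True when the
        buying situation is judged the more likely one (Question 3).
    Which combinations are 'invalid' is an illustrative assumption."""
    if not q1 and not q2:
        return "bad-judge"      # neither situation imaginable: inconsistent
    if q1 and q2:
        if q3 is None:
            return "bad-judge"  # Question 3 was required but left unanswered
        return "commercial" if q3 else "noncommercial"
    if q3 is not None:
        return "bad-judge"      # Question 3 answered although not applicable
    return "commercial" if q1 else "noncommercial"

print(commercial_intent_label(True, True, q3=True))  # -> commercial
print(commercial_intent_label(False, True))          # -> noncommercial
```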
Note that this approach further allows identification of bad or ambiguous questions. To achieve this, a ground truth set may be created, comprising pairs of questions and samples for which there is only one true answer.
Typically, the creator of the guidelines also produces the ground truth set, and/or the set may be used by a (separate) set of judges. If the majority of the judges' answers are inconsistent with the true answer, this indicates that the question is bad or ambiguous; inconsistencies across multiple judges are thus used to decide if a question is bad or ambiguous.
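One illustrative way to operationalize this check is sketched below; the data layout and the disagreement threshold are assumptions for exposition.

```python
from collections import defaultdict

def flag_bad_questions(ground_truth, judge_answers, threshold=0.5):
    """ground_truth: {(question, sample): true_answer}
    judge_answers: {judge: {(question, sample): answer}}
    Flags a question as bad/ambiguous when more than `threshold` of the
    judges' answers on its ground-truth pairs disagree with the truth."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for answers in judge_answers.values():
        for (question, sample), truth in ground_truth.items():
            if (question, sample) in answers:
                total[question] += 1
                wrong[question] += int(answers[(question, sample)] != truth)
    return [q for q in total if wrong[q] / total[q] > threshold]
```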
Turning to another aspect, modifications to the guidelines can be easily incorporated without throwing away the data collected so far. By way of example, consider a revision to the guidelines that excludes queries about vehicles/automobiles from being commercial queries. To this end, another question is added:

Question 4: Is the query about a vehicle or automobile?
The label assignment algorithm can combine the answers to the previous questions with the answer to this new question, without re-asking the previous questions.
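For instance, reusing the commercial_intent_label sketch above, only the one new answer needs to be collected from the judges:

```python
def revised_label(q1, q2, q3, q4_vehicle):
    """q4_vehicle: answer to the added Question 4 ("Is the query about a
    vehicle or automobile?"). Only this answer is newly collected; q1-q3
    are the answers already stored for the sample."""
    if q4_vehicle:
        return "noncommercial"  # revised guideline: vehicle queries excluded
    return commercial_intent_label(q1, q2, q3)  # earlier logic unchanged

print(revised_label(True, False, None, q4_vehicle=True))   # -> noncommercial
print(revised_label(True, False, None, q4_vehicle=False))  # -> commercial
```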
It should be noted that by separately maintaining the answers to the questions, other types of modifications are feasible. Thus, not only may a question (or questions) be added, but a question (or questions) may be deleted or substituted. Only the changed questions need to be asked, with the new answers merged with the remaining answers.
Turning to another example, a second scenario is directed towards meeting multiple conditions for assigning a particular label, exemplified herein as determining the relevance of a document (representative of a product) to a query. Given a commercial query and a product, the labeling goal is to assign a label (such as perfect, good, fair, or bad) indicating the relevance of the product for the query. In order to assign one of these labels, multiple conditions need to be satisfied.
To determine the relevance of a product for a given query (with commercial intent), each judge is shown a <query, product> tuple, that is, a query with commercial intent and a product description page (containing title, description, and image of the product). The goal for the judge is to assign one of the following labels: perfect, good, fair, bad. The training set formed using these labels can be used to learn a ranking model that ranks products given a query. The quality of the ranking model depends on the quality of the training data, and thus precise labels are needed.
The difficulty of determining the relevance of a product to a query and thereby assigning a label arises because there are many different classes of queries with commercial intent. A query may broadly refer to a product class, e.g., ‘digital camera’, or may be about a specific product, e.g. ‘40 GB black zune mp3 player’ (note capitalization was intentionally omitted to reflect how users typically input queries).
Under one possible set of guidelines, the nature of the query is first evaluated, i.e., whether the query is specific to a brand, a product, a product class (e.g. video games or digital cameras), or a line of products (e.g. Microsoft Office). Consider first the case in which the query is specific to a brand; here the label cannot be ‘perfect’ for any product. If the product shown matches the brand exactly, the label should be ‘good’. On the other hand, if the product can be associated with a competing brand, the label should be ‘fair’. Otherwise the label should be ‘bad’.
Next consider the case in which the query is specific to a product class. If the product belongs to the same product class, the label should be ‘good’. If the product belongs to a similar class (e.g. digital camera versus disposable camera), the label should be ‘fair’; otherwise, ‘bad’.
Further, consider the case in which the query is specific to a line of products. If the product matches the product line exactly, the label should be ‘perfect’. However, if the product is of the same brand but not the same product line (e.g. a query about ‘Microsoft Excel’ but a product about ‘Microsoft Word’), the label should be ‘good’. If the product is an accessory for the product line, the label should be ‘fair’; otherwise, ‘bad’. Finally, consider the case in which the query is specific to a product. If the product shown matches the product of the query exactly, the label should be ‘perfect’. If the product shown is somewhat related to the query (e.g. a 40 GB zune versus an 8 GB zune), the label should be ‘good’. If the product shown is of a different brand (e.g. a 40 GB zune versus an 80 GB generic MP3 player) or is an accessory, the label should be ‘fair’; otherwise, ‘bad’.
Example binary questions corresponding to the above set of guidelines may be defined. The guidelines are thus converted into simple questions so that a judge need not comprehend and remember the complex guidelines in their entirety. Instead, the set of simple questions can be posed to the judge in an adaptive manner, arranged in a tree in which the answer to each question determines the next question to ask.
Based on the guidelines, an algorithm for assigning the label is produced. The label assignment algorithm takes as input the binary answers provided by the judge (note that the judge need not be aware of the label assignment algorithm). Thus, in the process of posing binary questions, the burden on the judge may be further reduced.
As can be seen, the label assignment algorithm traverses the binary question tree, following the branch selected by each answer until a label is determined.
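A non-limiting sketch of such an assignment, following the four query-type cases in the guidelines above, may look as follows; the query-type values and answer keys are assumed names for the binary questions, and questions not reached in the adaptive flow default to “no”.

```python
def relevance_label(query_type, answers):
    """query_type: 'brand', 'product_class', 'product_line' or 'product'.
    answers: the judge's yes/no answers, keyed by assumed question names;
    questions not asked in the adaptive flow simply default to False."""
    a = lambda key: answers.get(key, False)
    if query_type == "brand":                 # 'perfect' is never possible here
        if a("matches_brand"):      return "good"
        if a("competing_brand"):    return "fair"
        return "bad"
    if query_type == "product_class":
        if a("same_class"):         return "good"
        if a("similar_class"):      return "fair"
        return "bad"
    if query_type == "product_line":
        if a("matches_line"):       return "perfect"
        if a("same_brand"):         return "good"
        if a("line_accessory"):     return "fair"
        return "bad"
    if query_type == "product":
        if a("exact_product"):      return "perfect"
        if a("related_product"):    return "good"
        if a("different_brand") or a("accessory"):
            return "fair"
        return "bad"
    raise ValueError("unknown query type: " + query_type)

print(relevance_label("product_line", {"matches_line": False, "same_brand": True}))
# -> good  (e.g., query 'Microsoft Excel', product 'Microsoft Word')
```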
The guidelines for determining the relevance of a product to a query are likely to be modified over time; also, better ways of phrasing a question may be developed (such as if a particular question seems to confuse judges). As in the case of detecting commercial intent, modifications to the guidelines may be made without having to abandon the data collected so far. For example, if a new criterion is to be included (e.g. if the age of the product is more than five years, the label should always be ‘bad’), the sequence of binary questions and the label assignment algorithm may be modified without invalidating the previously collected data. Similarly, if the outcome of an existing criterion is changed (e.g., if the query is specific to a brand and the product matches the brand exactly, the label should be ‘perfect’ instead of ‘good’), only the label assignment algorithm needs to be updated, while still making use of the data already collected.
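To illustrate the second kind of modification, the changed outcome may be applied by re-running the stored answers through an updated mapping (again a non-limiting sketch that builds on the relevance_label function above):

```python
def relevance_label_v2(query_type, answers):
    """Updated guideline: a brand query whose product matches the brand
    exactly is now labeled 'perfect' rather than 'good'. Only this mapping
    changes; no new answers are collected from the judges."""
    if query_type == "brand" and answers.get("matches_brand", False):
        return "perfect"        # was 'good' under the earlier guidelines
    return relevance_label(query_type, answers)

# Re-label previously collected data without any re-judging.
stored = [("brand", {"matches_brand": True}),
          ("product", {"exact_product": True})]
print([relevance_label_v2(t, a) for t, a in stored])  # -> ['perfect', 'perfect']
```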
As can be seen, there is described the conversion of guidelines to questions regarding a data sample, with the answers to those questions processed to infer a label for that sample. This makes it easy to obtain large-scale data sets, as the amount of time a judge needs to spend answering a question is relatively small. Further, the data collected by this approach is less error prone and more consistent since the questions can be easily answered by the judges.
Moreover, when the guidelines change, it is straightforward to re-use some or all of the existing collected data to adjust for the change. This may involve removing some of the earlier questions or asking more or different questions, and then automatically re-computing the label (note that earlier labeling approaches discard the data when the guidelines change). Also, a label may be changed without needing to collect any new data.
The technology also makes it easy to identify specific criteria that need clarification or are causing confusion amongst the judges, facilitating quick and focused refinements to the questions and/or the original guidelines. This can potentially be done without having to completely discard the previously collected data. The technology also allows effective identification of bad judges, because a judge who consistently performs badly on questions relative to other judges can be quickly identified as likely unsuited for the judgment process.
Exemplary Operating Environment
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to: personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
With reference to the figures, an exemplary system for implementing various aspects may include a general-purpose computing device in the form of a computer 310. Components of the computer 310 may include, but are not limited to, a processing unit 320, a system memory 330, and a system bus 321 that couples various system components including the system memory to the processing unit 320.
The computer 310 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 310 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 310. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above may also be included within the scope of computer-readable media.
The system memory 330 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 331 and random access memory (RAM) 332. A basic input/output system 333 (BIOS), containing the basic routines that help to transfer information between elements within computer 310, such as during start-up, is typically stored in ROM 331. RAM 332 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 320.
The computer 310 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
The drives and their associated computer storage media, described above, provide storage of computer-readable instructions, data structures, program modules and other data for the computer 310.
The computer 310 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 380. The remote computer 380 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 310, although only a memory storage device 381 has been illustrated. The logical connections depicted include a local area network (LAN) 371 and a wide area network (WAN) 373, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 310 is connected to the LAN 371 through a network interface or adapter 370. When used in a WAN networking environment, the computer 310 typically includes a modem 372 or other means for establishing communications over the WAN 373, such as the Internet. The modem 372, which may be internal or external, may be connected to the system bus 321 via the user input interface 360 or other appropriate mechanism. A wireless networking component 374, such as one comprising an interface and antenna, may be coupled through a suitable device such as an access point or peer computer to a WAN or LAN. In a networked environment, program modules depicted relative to the computer 310, or portions thereof, may be stored in the remote memory storage device.
An auxiliary subsystem 399 (e.g., for auxiliary display of content) may be connected via the user input interface 360 to allow data such as program content, system status and event notifications to be provided to the user, even if the main portions of the computer system are in a low power state. The auxiliary subsystem 399 may be connected to the modem 372 and/or network interface 370 to allow communication between these systems while the main processing unit 320 is in a low power state.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.