The present invention relates to text classifiers. In particular, the present invention relates to the classification of user queries.
In the past, search tools have been developed that classify user queries to identify one or more tasks or topics that the user is interested in. In some systems, this was done with simple keyword matching, in which each keyword was assigned to a particular topic. In other systems, more sophisticated classifiers have been used that use the entire query to determine the most likely topic or task that the user may be interested in. Examples of such classifiers include support vector machines that provide a binary classification relative to each of a set of tasks. Thus, for each task, the support vector machine is able to decide whether the query belongs to the task or not.
Such sophisticated classifiers are trained using a set of queries that have been classified by a librarian. Based on the queries and the classifications given by the librarian, the support vector machine generates a hyper-boundary between those queries that match the task and those that do not. Later, when a query is applied to the support vector machine for a particular task, the distance between the query and the hyper-boundary determines the confidence level with which the support vector machine identifies the query as either belonging or not belonging to the task.
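To make this background concrete, the following is a minimal sketch of per-task binary classification with margin-based confidence, assuming scikit-learn and TF-IDF features; the patent names no particular library, and the tasks, example queries, and function names below are hypothetical illustrations rather than the patented implementation.

```python
# Sketch only: one binary SVM per task, with the signed distance from the
# learned boundary used as the confidence of the classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical librarian-labeled training data: (query, tasks it belongs to).
labeled_queries = [
    ("change my desktop background", {"personalize"}),
    ("set wallpaper picture", {"personalize"}),
    ("printer will not print", {"troubleshoot_printer"}),
    ("fix paper jam in printer", {"troubleshoot_printer"}),
]
tasks = ["personalize", "troubleshoot_printer"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform([query for query, _ in labeled_queries])

# One support vector machine per task: does a query belong to the task or not?
task_models = {}
for task in tasks:
    y = [task in labels for _, labels in labeled_queries]
    task_models[task] = LinearSVC().fit(X, y)

def classify(query):
    """Return (task, margin) pairs, highest margin first. The margin is the
    signed distance from the hyper-boundary and serves as the confidence."""
    x = vectorizer.transform([query])
    scores = [(task, float(model.decision_function(x)[0]))
              for task, model in task_models.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(classify("my printer shows a paper jam error"))
```

In practice the raw margin would typically be calibrated into a percentage-style confidence before being compared across tasks.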
Although the training data provided by the librarian is essential to initially training the support vector machine, such training data limits the performance of the support vector machine over time. In particular, training data that includes current-events queries becomes dated over time and results in unwanted topics or tasks being returned to the user. Although additional librarian-created training data can be added over time to keep the support vector machines current, such maintenance of the support vector machines is time-consuming and expensive. As such, a system is needed for updating search classifiers that requires less human intervention while still maintaining a high standard of precision and recall.
A method and computer-readable medium are provided for constructing a classifier for classifying search queries. The classifier is constructed by receiving a query from a user and applying the query to an existing classifier to identify a task. An unsupervised mapping between the query and the task is then identified and is used to train a new classifier. Under one embodiment, the unsupervised mapping is identified based on the user's selection of the task.
The present invention may be practiced within a single computing device or in a client-server architecture in which the client and server communicate through a network.
The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 1, an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110. Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120.
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120.
The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
The drives and their associated computer storage media discussed above and illustrated in FIG. 1 provide storage of computer readable instructions, data structures, program modules and other data for the computer 110.
A user may enter commands and information into the computer 110 through input devices such as a keyboard 162, a microphone 163, and a pointing device 161, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. In addition to the monitor, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110. The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device.
As shown in the flow diagram of
At step 304 of
In step 306 of
At step 308 of
Over time, log 210 grows in size to include log entries from many users over many different search sessions. After a period of time, typically a week, log 210 is used to build a new classifier as shown in the steps of FIG. 5.
At step 500 of FIG. 5, log parser 212 parses the entries in log 210 to identify each query for which the user selected a task.
At step 502, log parser 212 applies each query that resulted in a selected task to the classifier model stored in storage 208 to determine the confidence level of the task selected by the user. The query, task and confidence level are then stored in a database 214.
The query and selected task represent an unsupervised query-to-task mapping. This mapping is unsupervised because it is generated automatically without any supervision as to whether the selected task is appropriate for the query.
Under one embodiment, query-to-task mappings stored in database 214 are stored with a confidence bucket indicator that indicates the general confidence level of the query-to-task mapping. In particular, a separate bucket is provided for each of the following ranges of confidence levels: 50-60%, 60-70%, 70-80%, 80-90% and 90-100%. These confidence buckets are shown as buckets 216, 218, 220, 222 and 224 in FIG. 2.
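A short sketch of that bucketing follows, in the same hypothetical Python setting as above; the bucket boundaries come from the text, while the record layout and the handling of confidences below 50% (left unbucketed here) are assumptions.

```python
# Confidence buckets from the text: 50-60%, 60-70%, 70-80%, 80-90%, 90-100%.
BUCKETS = [(50, 60), (60, 70), (70, 80), (80, 90), (90, 100)]

def bucket_indicator(confidence_pct):
    """Return the bucket indicator for a mapping, or None if the confidence
    falls below 50% (the text does not say how such mappings are treated)."""
    for low, high in BUCKETS:
        if low <= confidence_pct < high or (high == 100 and confidence_pct == 100):
            return f"{low}-{high}%"
    return None

# Each unsupervised query-to-task mapping is stored with its indicator.
mapping = {"query": "fix paper jam", "task": "troubleshoot_printer",
           "confidence": 83.5}
mapping["bucket"] = bucket_indicator(mapping["confidence"])  # "80-90%"
```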
Using a build interface 230, a build manager 232 selects a combination of training data at step 506.
Under the embodiment of FIG. 6, build interface 230 provides a separate check box for each source of training data, including each of the confidence buckets.
Check box 614 allows build manager 232 to select training data that has been newly created by a librarian. In other words, a librarian has associated a task with a query, and that mapping has been stored as new manual training data 236 of FIG. 2.
Under one embodiment, build interface 230 uses the selections made in the check boxes of FIG. 6 to generate a build script 238.
Build interface 230 also includes a freshness box 652, which allows the build manager to designate the percentage of the training data that is to be used in constructing the classifier. This percentage selects the most recent portion of the training data that was stored in the log. For example, if the percentage is set at twenty percent, the latest twenty percent of the query-to-task mappings found in the database are used to construct the classifier. Thus, the freshness box allows the build manager to select the training data based on when the mappings were produced.
Freshness box 652 thus allows the build manager to tailor how much older training data will be used to construct the classifier. In addition, in embodiments where the training data is specified on a per-task basis using task selection box 650, it is possible to set different freshness levels for different tasks. This is helpful because some tasks are highly time-specific and their queries change significantly over time, making it desirable to use only the latest training data. Other tasks are not time-specific and their queries change little over time; for these tasks, it is desirable to use as much training data as possible to improve the performance of the classifier.
Based on the check boxes selected in build interface 230, build script 238 retrieves the query-to-task mappings with the appropriate designations 216, 218, 220, 222, 224, 233, 234 and/or 236 and uses those query-to-task mappings to build a candidate classifier 240 at step 508.
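The selection logic might look like the following sketch, which filters mappings by their checked designation and then applies a per-task freshness percentage; the field names, timestamps, and in-memory representation are assumptions, since the patent keeps this data in database 214.

```python
from collections import defaultdict

def select_training_data(mappings, selected_designations, freshness_by_task):
    """Keep mappings whose designation was checked in the build interface,
    then keep only the latest `freshness` percent of mappings per task."""
    chosen = [m for m in mappings if m["designation"] in selected_designations]

    by_task = defaultdict(list)
    for m in chosen:
        by_task[m["task"]].append(m)

    training_set = []
    for task, items in by_task.items():
        items.sort(key=lambda m: m["timestamp"])    # oldest first
        pct = freshness_by_task.get(task, 100)      # default: use all data
        keep = max(1, round(len(items) * pct / 100))
        training_set.extend(items[-keep:])          # the freshest pct percent
    return training_set
```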
Candidate classifier 240 is provided to a tester 242, which at step 510 of FIG. 5 tests the candidate classifier to determine its precision, recall and FeelGood performance.
Under one embodiment, the step of testing the candidate classifier at step 510 is performed using a “holdout” methodology. Under this method, the selected training data is divided into N sets. One of the sets is held out and the remaining sets are used to construct a candidate classifier. The held-out set is then applied to the classifier to determine its precision, recall and FeelGood performance. This is repeated so that a separate classifier is built with each set of data held out in turn. The performance of the candidate classifier is then determined as the average precision, recall, and FeelGood performance across the classifiers built for the held-out sets.
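The following sketch shows one way to run that N-set holdout evaluation with scikit-learn; because the FeelGood metric is the patent's own measure and is not defined here, the sketch averages precision and recall only, and all names are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def holdout_performance(queries, task_labels, n_sets=5):
    """Split the training data into N sets; hold each set out in turn,
    build a classifier on the rest, and score it on the held-out set."""
    precisions, recalls = [], []
    for train_idx, held_out_idx in KFold(n_splits=n_sets, shuffle=True).split(queries):
        model = make_pipeline(TfidfVectorizer(), LinearSVC())
        model.fit([queries[i] for i in train_idx],
                  [task_labels[i] for i in train_idx])
        predicted = model.predict([queries[i] for i in held_out_idx])
        actual = [task_labels[i] for i in held_out_idx]
        precisions.append(precision_score(actual, predicted,
                                          average="macro", zero_division=0))
        recalls.append(recall_score(actual, predicted,
                                    average="macro", zero_division=0))
    # The candidate's performance is the average over the N held-out sets.
    return sum(precisions) / n_sets, sum(recalls) / n_sets
```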
At step 512, the build interface 230 is provided to build manager 232 once again so that the build manager may change the combination of training data used to construct the candidate classifier. If the build manager selects a new combination of training data, the process returns to step 506 and a new candidate classifier is constructed and tested.
When the build manager has tested all of the desired combinations of training data, the best candidate classifier is selected at step 514. The performance of this best candidate is then compared to the performance of the current classifier at step 516. If the performance of the current classifier is better than the performance of the candidate classifier, the current classifier is kept in place at step 518. If, however, the candidate classifier performs better than the current classifier, the candidate classifier is designated as a release candidate 243 and is provided to a rebuild tool 244. At step 520, rebuild tool 244 replaces the current classifier with release candidate 243 in model storage 208. In many embodiments, the changing of the classifier stored in model storage 208 is performed during non-peak times. When the search classifier is operated over multiple servers, the change in classifiers is performed in a step-wise fashion across each of the servers.
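Steps 516 through 520 amount to a guarded swap, sketched below; the performance comparison, server objects, and method names are assumptions rather than anything the patent specifies.

```python
def promote_release_candidate(current, candidate, servers):
    """Keep the current classifier unless the candidate scores better;
    on promotion, roll the new classifier out one server at a time."""
    if candidate.performance <= current.performance:
        return current                       # step 518: keep current classifier
    for server in servers:                   # step 520: step-wise rollout,
        server.load_classifier(candidate)    # performed during non-peak times
    return candidate
```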
Thus, the present invention provides a method by which a search classifier may be updated using query-to-task mappings that the user has implicitly designated as useful. As a result, the classifier improves in performance and is able to change over time with new queries, so that it is no longer limited by the original training data used during the initial construction of the search classifier. Consequently, less manually entered training data is needed under the present invention in order to update and expand the performance of the classifier.
While the present invention has been described with reference to queries and tasks, those skilled in the art will recognize that a query is simply one type of example that can be used by an example-based categorizer such as the one described above and a task is just one example of a category. Any type of example and any type of category may be used with the present invention.
Although the present invention has been described with reference to particular embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.