A. Field of the Invention
The invention relates to artificial intelligence applications that require training sets having positive and negative examples, especially recommender systems and particularly for use with television. The invention relates even more particularly to such applications that use statistically valid techniques for choosing negative examples for training sets.
B. Related Art
U.S. patent application Ser. No. 09/498,271 filed Feb. 4, 2000 (US 000018), incorporated herein by reference, discloses a television recommender system. In that system, recommendations are deduced based on a pattern of shows watched or not watched. Of course, the number of shows not watched necessarily dwarfs the number of shows watched. Accordingly, a heuristic was developed for selecting shows not watched. The heuristic was to select a not watched show for each watched show, the not watched show being taken at random from time slots other than the slot in which the corresponding watched show occurred.
In general, many artificial intelligence applications have training sets with positive and negative examples. The heuristic for selecting negative examples needs improvement over the approach of selecting negative examples at random, one-by-one, each with reference to a respective individual positive example.
It is an object of the invention to improve heuristics for choosing negative examples for a training set for an artificial intelligence application.
This object is achieved in that a group of negative examples is selected corresponding to a group of positive examples, rather than one-by-one.
This object is further achieved in that the group of positive examples is analyzed according to a feature presumed to be dominant. Then a first fraction of the negative examples is taken from the non-positive possible examples sharing the feature with the positive examples.
This object is still further achieved in that a second fraction of the shows is taken from slots within a predetermined range in feature space with respect to the feature.
This object is yet still further achieved in that no negative example is taken more than once.
Advantageously the application in question is a recommender for content, such as television, where the positive examples are selected content and the negative examples are non-selected content. Advantageously, also, the feature is time of day of broadcast.
Further objects and advantages will be apparent in the following.
The invention will now be described by way of non-limiting example with reference to the following drawings.
Herein the invention is described with respect to recommenders for television, but it might be equally applicable to training sets for any artificial intelligence application, including recommenders for other types of content. The term “show” is intended to include any other type of content that might be recommended by a recommender, including audio, software, and text information. The term “watch” or “watched” is intended to include any type of positive example selection, including experiencing of any type of content, e.g. listening or reading. The invention is described also with the assumption that time is the dominant feature distinguishing watched from not-watched content; however other dominant features might be used as parameters for selecting negative examples for a training set.
Commonly there will be at least one memory device 6, such as a CD ROM drive, floppy disk drive, hard drive, or any other type of memory device. The memory device 6 can store data, software, or both.
There may be other peripherals not shown, such as a voice recognition system, a PC camera, speakers, and/or a printer.
At 1401, a population of watched shows of statistically significant size is accumulated. In the examples of Users H & C, the sizes of the population are over 275 and 175, respectively; however, other size populations are usable, so long as they are statistically significant.
At 1402 the distribution of watched shows with respect to time is determined, and preferred time slots are determined. The distribution can be taken in the form of a histogram, see e.g.
Then at 1403, a first fraction of the negative examples is chosen in the preferred time slots of this user. In the preferred embodiment the fraction is 50%.
At 1404, optionally, a second fraction of negative examples is taken from a predetermined time interval around the preferred time slot or slots. In the preferred embodiment, the second fraction will be taken from the hour immediately before and the hour immediately after the single most preferred time slot. If optional step 1404 is omitted, then all of the negative examples should be taken either from the preferred time slots or from all the time slots viewed by that user. Thus, the option at 1402 of using all the time slots used by the user is most likely to be chosen when step 1404 is to be omitted.
The negative example set is then taken at 1405 to include the first fraction and any second fraction. In the preferred embodiment, the negative example set in fact is just the first and second fractions.
At 1406, the recommender is trained using positive and negative example sets.
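The selection steps 1401 through 1405 above can be sketched in code. This is a minimal illustrative sketch, not the actual embodiment: the function and variable names are hypothetical, shows are represented as (show_id, hour-of-day) pairs, and the "preferred" slots are simplified here to the most-watched slot(s) in the histogram.

```python
import random
from collections import Counter

def select_negative_examples(watched, candidates, first_frac=0.5,
                             adjacent_hours=1):
    """Select a set of negative examples sized like the positive set.

    watched    -- (show_id, hour) pairs the user watched (positive examples)
    candidates -- (show_id, hour) pairs the user did not watch
    """
    n = len(watched)

    # Step 1402: distribution of watched shows over time slots, as a
    # histogram; here "preferred" slots are those with the peak count
    # (a simplifying assumption for this sketch).
    histogram = Counter(hour for _, hour in watched)
    peak = max(histogram.values())
    preferred = {h for h, c in histogram.items() if c == peak}

    # Step 1403: first fraction (50% in the preferred embodiment) of the
    # negative examples, drawn from unwatched shows in preferred slots.
    pool_pref = [c for c in candidates if c[1] in preferred]
    first = random.sample(pool_pref, min(int(first_frac * n), len(pool_pref)))

    # Step 1404 (optional): second fraction from the hour immediately
    # before and after the single most preferred slot.
    top = max(histogram, key=histogram.get)
    near = {(top - adjacent_hours) % 24, (top + adjacent_hours) % 24}
    taken = set(first)  # ensures no negative example is taken twice
    pool_near = [c for c in candidates if c[1] in near and c not in taken]
    second = random.sample(pool_near, min(n - len(first), len(pool_near)))

    # Step 1405: the negative example set is the two fractions together.
    return first + second
```

Because both fractions are drawn with `random.sample` and the second pool excludes shows already taken, no negative example appears more than once, as required.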
The technique of
1. The system predicts a yes answer and there actually is a yes answer (TP)
2. The system predicts a no answer and there is actually a yes answer (FN)
3. The system predicts a yes answer and there actually is a no answer (FP)
4. The system predicts a no answer and there actually is a no answer (TN)
Then the “hit rate” is defined in accordance with the following equation:

hit rate=TP/(TP+FN)
And the false positive rate is calculated in accordance with the following equation:

false positive rate=FP/(FP+TN)
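These two rates follow directly from the four outcome counts defined above. The sketch below is illustrative only; the function names are hypothetical, and predictions and actual answers are represented as parallel lists of booleans (True for “yes”).

```python
def confusion_counts(predicted, actual):
    # Tally the four outcomes: TP, FN, FP, TN.
    tp = sum(p and a for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    return tp, fn, fp, tn

def hit_rate(tp, fn):
    # Fraction of actual "yes" answers the system predicted correctly.
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    # Fraction of actual "no" answers wrongly predicted as "yes".
    return fp / (fp + tn)
```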
Usually a content recommender will first assign a probability of success for each piece of content, with respect to a user. Then the content will be recommended if its probability of success exceeds some threshold. The points on the curve of
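Each choice of threshold yields one (false positive rate, hit rate) pair, and sweeping the threshold traces out such a curve. A minimal sketch of that sweep follows; the function name is hypothetical, and probabilities and actual answers are given as parallel lists.

```python
def roc_points(probabilities, actual, thresholds):
    # For each threshold, recommend content whose predicted probability
    # of success exceeds it, then record (false positive rate, hit rate).
    points = []
    for t in thresholds:
        predicted = [p > t for p in probabilities]
        tp = sum(p and a for p, a in zip(predicted, actual))
        fn = sum((not p) and a for p, a in zip(predicted, actual))
        fp = sum(p and (not a) for p, a in zip(predicted, actual))
        tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
        hit = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        points.append((fpr, hit))
    return points
```

A high threshold drives both rates toward zero (few recommendations), while a low threshold drives both toward one, which is why the curve runs between those two corners.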
In the examples given above, the set of negative examples is generally chosen to have the same number of members as the set of positive examples. However, those of ordinary skill in the art can design training sets in accordance with the invention where the number of negative examples is more or less than the number of positive examples.
From reading the present disclosure, other modifications will be apparent to persons skilled in the art. Such modifications may involve other features which are already known in the design, manufacture and use of training sets for artificial intelligence applications and which may be used instead of or in addition to features already described herein. Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure of the present application also includes any novel feature or novel combination of features disclosed herein either explicitly or implicitly or any generalization thereof, whether or not it mitigates any or all of the same technical problems as does the present invention. The applicants hereby give notice that new claims may be formulated, including method, software embodied in a storage medium, and “means for” claims, to such features during the prosecution of the present application or any further application derived therefrom.
The word “comprising”, “comprise”, or “comprises” as used herein should not be viewed as excluding additional elements. The singular article “a” or “an” as used herein should not be viewed as excluding a plurality of elements.
Number | Name | Date | Kind |
---|---|---|---|
5444499 | Saitoh | Aug 1995 | A |
5749081 | Whiteis | May 1998 | A |
5758257 | Herz et al. | May 1998 | A |
5801747 | Bedard | Sep 1998 | A |
5867226 | Wehmeyer et al. | Feb 1999 | A |
6000018 | Packer et al. | Dec 1999 | A |
6020883 | Herz et al. | Feb 2000 | A |
6119112 | Bush | Sep 2000 | A |
6134532 | Lazarus et al. | Oct 2000 | A |
6177931 | Alexander et al. | Jan 2001 | B1 |
6370513 | Kolawa et al. | Apr 2002 | B1 |
6418424 | Hoffberg et al. | Jul 2002 | B1 |
6721953 | Bates et al. | Apr 2004 | B1 |
6727914 | Gutta | Apr 2004 | B1 |
6862253 | Blosser et al. | Mar 2005 | B2 |
6898762 | Ellis et al. | May 2005 | B2 |
6934964 | Schaffer et al. | Aug 2005 | B1 |
7051352 | Schaffer | May 2006 | B1 |
20020046402 | Akinyanmi et al. | Apr 2002 | A1 |
20020075320 | Kurapati | Jun 2002 | A1 |
20020104087 | Schaffer et al. | Aug 2002 | A1 |
Number | Date | Country |
---|---|---|
WO 0115449 | Mar 2001 | WO |
Number | Date | Country | |
---|---|---|---|
20020162107 A1 | Oct 2002 | US |