This patent application is related to U.S. Patent Application entitled “CLUSTERING AND CLASSIFICATION OF CATEGORY DATA”, Ser. No. 11/436,142, now U.S. Pat. No. 7,774,288, issued Aug. 10, 2010, and assigned to the same assignee as the present application.
This invention relates generally to multimedia, and more particularly to sorting multimedia objects by similarity.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings hereto: Copyright © 2005, Sony Electronics, Incorporated, All Rights Reserved.
Clustering and classification tend to be important operations in certain data mining applications. For instance, data within a dataset may need to be clustered and/or classified in a data system with a purpose of assisting a user in searching and automatically organizing content, such as recorded television programs, electronic program guide entries, and other types of multimedia content.
Generally, many clustering and classification algorithms work well when the dataset is numerical (i.e., when data within the dataset are all related by some inherent similarity metric or natural order). Categorical datasets, by contrast, describe multiple attributes or categories that are often discrete, and therefore lack a natural distance or proximity measure between them.
It may be desirable to display a set of multimedia objects that may interest a user, given a multimedia object in which the user has already shown interest.
Weights are assigned for attributes of multimedia objects by sorting the attributes into preference levels, and computing a weight for each preference level. A similarity value of a multimedia object to an object of interest is computed based on the attribute weights.
The present invention is described in conjunction with systems, clients, servers, methods, and machine-readable media of varying scope. In addition to the aspects of the present invention described in this summary, further aspects of the invention will become apparent by reference to the drawings and by reading the detailed description that follows.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings in which like references indicate similar elements, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, functional, and other changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
A user interface 15 also shown in
The category data 11 is grouped into clusters, and/or classified into folders, by the clustering/classification module 12. Details of the clustering and classification performed by module 12 are described below. The output of the clustering/classification module 12 is an organizational data structure 13, such as a cluster tree or a dendrogram. A cluster tree may be used as an indexed organization of the category data or to select a suitable cluster of the data.
Many clustering applications require identification of a specific layer within a cluster tree that best describes the underlying distribution of patterns within the category data. In one embodiment, organizational data structure 13 includes an optimal layer that contains a unique cluster group containing an optimal number of clusters.
A data analysis module 14 may use the folder-based classifiers and/or classifiers generated by clustering operations for automatic recommendation or selection of content. The data analysis module 14 may automatically recommend or provide content that may be of interest to a user or may be similar or related to content selected by a user. In one embodiment, a user identifies multiple folders of category data records that categorize specific content items, and the data analysis module 14 assigns category data records for new content items to the appropriate folders based on similarity. In another embodiment, the data analysis module 14 comprises an interest/influence module 17 that orders the artists associated with the category data by artist influence. The data analysis module 14 also comprises a similarity module 18 that sorts media objects by similarity. Sorting multimedia objects by similarity is further described in
Clustering is a process of organizing category data into a plurality of clusters according to some similarity measure among the category data. The module 12 clusters the category data by using one or more clustering processes, including seed based hierarchical clustering, order-invariant clustering, and subspace bounded recursive clustering. In one embodiment, the clustering/classification module 12 merges clusters in a manner independent of the order in which the category data is received.
In one embodiment, the group of folders created by the user may act as a classifier such that new category data records are compared against the user-created group of folders and automatically sorted into the most appropriate folder. In another embodiment, the clustering/classification module 12 implements a folder-based classifier based on user feedback. The folder-based classifier automatically creates a collection of folders, and automatically adds and deletes folders to or from the collection. The folder-based classifier may also automatically modify the contents of other folders not in the collection.
In one embodiment, the clustering/classification module 12 may augment the category data prior to or during clustering or classification. One method of augmentation is to impute attributes of the category data. The augmentation may reduce any scarceness of the category data while increasing its overall quality, to aid the clustering and classification processes.
Although shown in
A filtering system is provided that presents the user with media objects of potential interest. The user provides active and/or passive feedback to the system relating to some presented objects. The feedback is used to find media objects that are similar to the media objects viewed by the user.
Category data describes the different categories associated with the content. For example, category data 158 comprises the terms: Best, Underway, Sports, GolfCategory, Golf, Art, 0SubCulture, Animation, Family, FamilyGeneration, Child, Kids, Family, FamilyGeneration, and Child. As illustrated, category data 158 comprises fifteen terms describing the program. Some of the terms are related; for example, “Sports, GolfCategory, Golf” are related to sports, and “Family, FamilyGeneration, Child, Kids” are related to family. Furthermore, category data 158 includes duplicate terms and possibly undefined terms (0SubCulture). An undefined term may be associated with only one program, because its definition is unknown and, therefore, not very useful.
One embodiment of a method 200 to be performed by the data analysis module 14 to sort multimedia objects by similarity is described with reference to a flowchart shown in
At block 201, an ordering of attributes is obtained. This ordering may be obtained in a number of ways. In one embodiment, the ordering is obtained by the data analysis module 14 from a user profile created by the user. In another embodiment, the ordering is obtained by the data analysis module 14 when a user enters search criteria. In yet another embodiment, the ordering is obtained by the data analysis module 14 by learning the user's preferences. Accordingly, two attributes that are equally important to a user belong to the same preference level, and preference levels can have values starting from zero. A preference level value of zero indicates that the user does not consider the attribute(s) in that level to be at all important.
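By way of illustration only, such an ordering may be represented as a mapping from attribute names to preference levels. The following minimal Python sketch uses hypothetical attribute names that are not part of the specification:

    # Hypothetical preference ordering: attribute name -> preference level.
    # Attributes at level 0 are not considered important at all.
    preference_levels = {
        "female_actors": 2,  # very important
        "male_actors": 2,    # equally important, hence the same level
        "directors": 1,      # important
        "year": 0,           # not important; ignored when weighting
    }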
At block 211, attribute weights are computed. One embodiment of a method 300 to be performed to compute attribute weights is described with reference to a flowchart shown in
At block 221, a user input of a chosen object is received. Other inputs, such as object metadata, and object to rank may also be received. For instance, the metadata related to a song may be artist name, genre, name of producer, song writer name, and so on. The metadata is categorical in nature, and may be obtained from one or more sources, such as American Media Communications.
At block 231, similarities between the chosen object and other objects to be ranked are calculated. One embodiment of a method 400 to be performed to compute similarities is described with reference to a flowchart shown in
At block 241, the objects are sorted based on a measure of their similarity to the object of interest to the user, and at block 261, a sorted list is displayed to the user.
One embodiment of a method 300 to be performed to compute attribute weights is described with reference to a flowchart shown in
Method 300 receives as inputs the user attribute ordering (e.g., from block 201) and certain data statistics. The data statistics may include the maximum number of values that each attribute can have. For example, the attribute “directors” of a movie may have more than one value, but a maximum of five.
At block 311, the user attribute ordering is used to sort the attributes and a “current_weight” value that is not yet assigned to any attribute is set to 1.
The method 300 computes attribute weights for the preference levels greater than zero. One way of computing attribute weights is to loop over all preference levels greater than zero. At block 321, for a first preference level greater than zero, the level_weight is set to “current_weight+1” at block 331. Thus, when the loop over the preference levels starts, the level_weight is equal to two since current_weight was set to one at block 311.
At block 351, the “attribute_weight” for an attribute is set to the level_weight, and current_weight is incremented by the value of level_weight multiplied by the maximum number of values in the attribute. The loop continues for more attributes at the same preference level.
If there are no more attributes at the same preference level, the loop goes back to decision block 321, where, if there are more preference levels greater than zero, the loop starts again. At block 361, the attribute weights have been calculated and are returned.
Thus, in the embodiment shown in
Further, the current_weight depends on the maximum number of values of the attribute that was last examined. Thus, depending on which attribute within the previous level was used last, the level_weight will vary. Accordingly, no matter how well the attributes at a lower level match, they should not be stronger than the next higher level. The weight of the current level therefore has to depend on the previous (lower) level, and should be high enough for the current level to win over the lower ones. For example, take two levels A and B, where A is lower than B, and three objects O1, O2, and O3. Say O1 and O2 match 100% on the attributes in level A and 0% on the attributes in level B, while O1 and O3 match 0% on A but have just one match on B, which could be as little as 0.0001%. The weights are computed such that, when multiplied by the number of matches to find the similarities, O3 ends up being more similar to O1 than O2 is.
An example computation of attribute weights is now described. Say, e.g., that the method 300 receives the following information: user “U” rates attribute “A” as very important, attribute “B” as very important, attribute “C” as not important, and attribute “D” as important. Of course, other ways of rating attributes may also be used. Attribute A has a maximum of 10 values, attribute B a maximum of 5 values, attribute C a maximum of 10 values, and attribute D a maximum of 2 values.
As an example, attributes for a “song” may include “song writer name(s)”, “performer name(s)”, year of production, genre, name of album, and so on. Each attribute may have one or more values. For example, the attribute “song writer name(s)” for the song “Birthday” may have two values: Paul McCartney and John Lennon. The attribute “performer name(s)” for that song may have just one value: the Beatles. The year of production for this song has one value: 1968. The name of the album for this song has one value: The White Album.
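Purely as an illustration, such a record may be pictured as the following Python structure (the key names are hypothetical, not part of the specification):

    # Hypothetical metadata record for the song "Birthday".
    # Every attribute maps to a list, since an attribute may have several values.
    song_metadata = {
        "song_writer_names": ["Paul McCartney", "John Lennon"],
        "performer_names": ["The Beatles"],
        "year_of_production": [1968],
        "album_name": ["The White Album"],
    }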
The method 300 sorts the attributes by preferences. Accordingly, the method 300 may assign a preference level of “0” to attribute “C”, a preference level of “1” to attribute “D”, and a preference level of “2” to attributes A and B. Also, current_weight is set to 1. Level_weight for level 1 is set to 2. For attribute D, attribute_weight is set to 2 (the value of level_weight). The value of current_weight is equal to 1 plus 4 (the value of level_weight multiplied by 2). Thus, current_weight is equal to 5.
Because there are no more attributes at this level, and there are more preference levels greater than zero (preference level 2), level_weight is set to 6 (current_weight+1). Because there are more attributes at this level (attribute A), attribute_weight of A is set to 6 (level_weight). The value of current_weight is equal to 1 plus 60 (the value of level_weight multiplied by 10). Thus, current_weight is equal to 61.
Because there are more attributes at this level (attribute B), attribute_weight of B is set to 6 (level_weight). The value of current_weight is equal to 1 plus 30 (the value of level_weight multiplied by 5). Thus, current_weight is equal to 31.
Because there are no more attributes and no more preference levels, the following attribute weights are returned to method 200: attribute A weight=61, attribute B weight=31, and attribute D weight=5.
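The computation walked through above may be sketched in Python as follows. This is a minimal illustration rather than the claimed method: consistent with the worked example and the values returned above, it treats the weight ultimately returned for each attribute as the final current_weight computed for that attribute, and the function and parameter names are assumptions:

    def compute_attribute_weights(preference_levels, max_values):
        # preference_levels: attribute -> preference level (0 = not important).
        # max_values: attribute -> maximum number of values the attribute can have.
        weights = {}
        current_weight = 1
        for level in sorted({lvl for lvl in preference_levels.values() if lvl > 0}):
            # Each new level must be able to win over everything below it.
            level_weight = current_weight + 1
            for attr, lvl in preference_levels.items():
                if lvl == level:
                    current_weight = 1 + level_weight * max_values[attr]
                    weights[attr] = current_weight
        return weights

    # Reproduces the example: C at level 0 (dropped), D at level 1, A and B at level 2.
    weights = compute_attribute_weights(
        {"A": 2, "B": 2, "C": 0, "D": 1},
        {"A": 10, "B": 5, "C": 10, "D": 2},
    )
    assert weights == {"D": 5, "A": 61, "B": 31}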
One embodiment of a method 400 to be performed to compute similarities between an object of interest and objects to be ranked is described with reference to a flowchart shown in
At block 401, the method 400 receives the following inputs: objects to rank, object metadata, attribute weights, and an object of interest. The objects to rank may include all or some objects from a collection of objects. In one embodiment, the objects to rank may be received by filtering objects from the collection based on one or more criteria, including, e.g., a user query. The object metadata may be read in, as described with respect to block 201 of
At block 411, as long as there are objects to rank, the process continues to block 421, where, for an object, a similarity value is set to zero and a value for “num_matches” is set to zero. The value “num_matches” represents the number of matches between the values of the attributes of an object to rank and those of the object of interest. At blocks 431 and 441, for each attribute of the object and for each value in the attribute, it is determined at block 451 whether the object of interest has the same value. If the object of interest is determined to have the same value as the value of the attribute of the object to be ranked, then at block 461 the value of num_matches is incremented by one. The flow returns to block 441 when the object of interest does not have the same value, or after the value of num_matches has been incremented.
At block 441, the flow continues to block 451 if the attribute of the object to be ranked has more values. Otherwise, the flow returns to block 431, where, if the object to be ranked has more attributes, the flow continues to block 441. Otherwise, if all the attributes of the object to be ranked have been exhausted, then at block 471 the similarity value for each object to be ranked is computed as the number of matches (num_matches) of the values within the attributes of the object, multiplied by the attribute weight and divided by the number of values for the object. Accordingly, this measure of similarity between an object of the plurality of objects and the object of interest is calculated based on the number of matches between the values of the attributes of the two objects.
Accordingly, the number of matches an object to be ranked has with the object of interest is normalized over the total number of values in the object. Other normalizing factors may also be used. For example, the value of similarity may be normalized for each individual value of each object.
An example to compute similarities between an object of interest and objects to be ranked is now described.
Say, e.g., that method 400 receives the following inputs: objects P and Q to rank, object metadata, attribute weights (attribute A weight=61, attribute B weight=31, and attribute D weight=5, per the example above), and object R of interest. The objects P, Q, and R may be, e.g., movies, and attributes A, B, and D may respectively be female actors, male actors, and directors. For object P, attribute A has 2 values, attribute B has 2 values, and attribute D has 1 value. For object Q, attribute A has 2 values, attribute B has 4 values, and attribute D has 6 values.
For object P, a similarity value is set to zero and a value for “num_matches” is set to zero. The value “num_matches” represents the number of matches between the values of the attributes of an object to rank and those of the object of interest. For attribute A of object P, and for each of the two values in attribute A, it is determined whether the object of interest has the same value. If the object of interest is determined to have the same value as the value of the attribute of the object to be ranked, then the value of num_matches is incremented by one. Here, suppose one of the values of attribute A of object P matches one of the values of object R (e.g., both movies have Julia Roberts as one of the female actors). Therefore, num_matches=1.
For attribute B of object P, it is determined that neither of the two values for attribute B matches the values of the attributes of object R. The value of num_matches is not incremented.
For attribute D of object P, it is determined that the value for attribute D matches a value of the attributes of object R, and the value of num_matches is incremented. Since there are no more attributes for object P, object P's similarity to object R is calculated as the number of matches (2) multiplied by the sum of the attribute weights (61+31+5), divided by the number of values in object P (5), and thus equals 38.8.
The procedure continues for object Q. Here, assuming that 1 value of attribute A, 1 value of attribute B, and 2 values of attribute D of object Q match values of object R, object Q's similarity to object R is calculated as the number of matches (4) multiplied by the sum of the attribute weights (61+31+5), divided by the number of values in object Q (12), and thus equals 32.33. The normalization helps to assure that an object having a large number of values, and thus a higher probability of matching values with the object of interest, is penalized accordingly.
Thus, once all the attributes of the object to be ranked have been exhausted, the similarity value for the object is equal to num_matches multiplied by the sum of the attribute weights for the object, divided by the total number of values in the object.
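A corresponding Python sketch of this similarity computation follows. Again, this is illustrative only and follows the worked example (total matches multiplied by the sum of the attribute weights, normalized by the object's total number of values); the function names and the metadata layout are assumptions, and compute_attribute_weights refers to the sketch given earlier:

    def rank_by_similarity(objects_to_rank, object_of_interest, metadata, weights):
        # metadata: object -> {attribute: [values]}.
        # weights: attribute -> weight, e.g., from compute_attribute_weights.
        weight_sum = sum(weights.values())  # e.g., 61 + 31 + 5 = 97
        scored = []
        for obj in objects_to_rank:
            num_matches = 0
            num_values = 0
            for attr, values in metadata[obj].items():
                interest_values = set(metadata[object_of_interest].get(attr, []))
                num_values += len(values)
                # Count the values shared with the object of interest.
                num_matches += sum(1 for v in values if v in interest_values)
            # Normalize over the object's total number of values.
            similarity = num_matches * weight_sum / num_values if num_values else 0.0
            scored.append((obj, similarity))
        # Most similar objects first.
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

With the example above, object P scores 2 × 97 / 5 = 38.8 and object Q scores 4 × 97 / 12 ≈ 32.33, so P is ranked ahead of Q.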
In practice, the methods described herein may constitute one or more programs made up of machine-executable instructions. Describing the method with reference to the flowchart in
The web server 1108 is typically at least one computer system which operates as a server computer system and is configured to operate with the protocols of the World Wide Web and is coupled to the Internet. Optionally, the web server 1108 can be part of an ISP which provides access to the Internet for client systems. The web server 1108 is shown coupled to the server computer system 1110 which itself is coupled to web content 842, which can be considered a form of a media database. It will be appreciated that while two computer systems 1108 and 1110 are shown in
Client computer systems 1112, 1116, 1124, and 1126 can each, with the appropriate web browsing software, view HTML pages provided by the web server 1108. The ISP 1104 provides Internet connectivity to the client computer system 1112 through the modem interface 1114 which can be considered part of the client computer system 1112. The client computer system can be a personal computer system, a network computer, a Web TV system, a handheld device, or other such computer system. Similarly, the ISP 1106 provides Internet connectivity for client systems 1116, 1124, and 1126, although as shown in
Alternatively, as well-known, a server computer system 1128 can be directly coupled to the LAN 1122 through a network interface 1134 to provide files 1136 and other services to the clients 1124, 1126, without the need to connect to the Internet through the gateway system 1120. Furthermore, any combination of client systems 1112, 1116, 1124, 1126 may be connected together in a peer-to-peer network using LAN 1122, Internet 1102 or a combination as a communications medium. Generally, a peer-to-peer network distributes data across a network of multiple machines for storage and retrieval without the use of a central server or servers. Thus, each peer network node may incorporate the functions of both the client and the server described above.
The following description of
Network computers are another type of computer system that can be used with the embodiments of the present invention. Network computers do not usually include a hard disk or other mass storage, and the executable programs are loaded from a network connection into the memory 1208 for execution by the processor 1204. A Web TV system, which is known in the art, is also considered to be a computer system according to the embodiments of the present invention, but it may lack some of the features shown in
It will be appreciated that the computer system 1200 is one example of many possible computer systems, which have different architectures. For example, personal computers based on an Intel microprocessor often have multiple buses, one of which can be an input/output (I/O) bus for the peripherals and one that directly connects the processor 1204 and the memory 1208 (often referred to as a memory bus). The buses are connected together through bridge components that perform any necessary translation due to differing bus protocols.
It will also be appreciated that the computer system 1200 is controlled by operating system software, which includes a file management system, such as a disk operating system, that is part of the operating system software. One example of operating system software with its associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. The file management system is typically stored in the non-volatile storage 1214 and causes the processor 1204 to execute the various acts required by the operating system to input and output data and to store data in memory, including storing files on the non-volatile storage 1214.
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.