This application is related to U.S. patent application Ser. No. 12/192,789, entitled “Techniques for Automatically Distinguishing Between Users of a Handheld Device” filed on Aug. 15, 2008.
Any given consumer may have access to a broad range of multimedia content, whether through broadcast television, subscription television, or the Internet. For a number of reasons it may be desirable to tailor content delivery for individual users. It would simplify the user experience, for example, if the range of possibilities were narrowed to channels and content that are consistent with the user's preferences. Such tailoring could also conserve bandwidth, memory, and other transmission and computing resources. Moreover, some content may not be appropriate for all users. Some content may be restricted to adults, for example, and should not be made available to children. Tailoring of content for children should reflect such considerations. In addition, media providers may wish to include advertising in the delivery of content. In this situation, advertising resources would be used more efficiently if the advertising were targeted to specific groups of users. For reasons such as these, tailoring content to specific users or to specific sets of users may be desirable.
In order to enjoy the benefits of such tailoring, the user typically needs to identify himself to the content provider when accessing content. This may take the form of logging in at the user's television or set-top box. A profile of the user can then be accessed, allowing for decisions to be made regarding the content to be provided to the user. Such an identification process may be cumbersome to the user. A typical user does not generally want to have to log in every time he sits down in front of the television or computer, for example. Such a process represents a burdensome extra step that must be performed before the user can access content.
In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears.
An embodiment is now described with reference to the figures, where like reference numbers indicate identical or functionally similar elements. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the description. It will also be apparent to a person skilled in the relevant art that the systems and methods described herein can be employed in a variety of systems and applications beyond those described herein.
The system, method, and computer program product described herein may allow a user to identify himself to a content provider without having to explicitly perform a log-in process or other deliberate self-identification process. As the user manipulates a user device, such as a remote control, a profile of the user may be constructed, where the profile may include a representation of how the individual user typically manipulates the device. The profile may include a feature vector that may be a function of the number of times that individual buttons are pressed by the user. The construction of the feature vector may be part of a training or learning phase. Once the feature vector is constructed, the user's interaction with the device in a subsequent session may be captured and compared with the profile. This may allow identification of the user, in turn allowing content to be tailored in a manner specific to this user.
The input may take the form of button presses as applied to an array of buttons 140. In a typical household, there may be more than one user. Here, the set of users is illustrated as users 150a through 150n. As will be described in greater detail below, the systems and methods described herein may capture a user's button presses on remote control 130 in a training phase. This may be used to create a profile for the user, where the profile includes a feature vector that may be representative of the user's characteristic use of the remote control 130. In an embodiment, a profile may include multiple feature vectors, where each feature vector may represent a sample of the user's remote control usage. Other profiles may also be created for each of the other respective users. Subsequent to the training phase, the user's use of the remote control 130 may be compared to the stored profile of each user in the set of users 150, in order to determine the profile that is associated with the current user. This in turn may allow identification of the current user in a prediction phase.
The operation of the system and method described herein is illustrated generally in FIG. 2.
Mapping 260 may be provided to a prediction module 230. Profiles that are generated on behalf of additional users (not shown) may also be provided to prediction module 230. Subsequent to the training phase, one or more button presses 270 may be provided by the user during manipulation of the remote control. Button presses 270 may be provided in the course of operating the television and set-top box, for example. The prediction module 230 may process the button presses 270 and compare the result to the feature vectors of the respective profiles generated by training module 220 for the larger set of users. Prediction module 230 may then associate button presses 270 with a particular profile. The profile may be associated with a particular user, using mapping 260. The identity of this user is shown as user ID 280. This user ID 280 may then be used by the STB or the content provider to tailor the content that may be made available to the user associated with that user ID 280.
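By way of illustration only, the following sketch shows one way the stored data behind this flow might be organized, with each profile carrying a user ID (corresponding to mapping 260) and one or more feature-vector samples. The struct and constant names (user_profile, N_BUTTONS, MAX_SAMPLES) are assumptions introduced for the sketch, not elements of the figures.

/* Illustrative layout of a per-user profile; sizes are assumptions. */
#include <stddef.h>

#define N_BUTTONS   64   /* n: number of buttons on the remote control      */
#define MAX_SAMPLES 16   /* feature-vector samples retained per profile     */

struct user_profile {
    int    user_id;                               /* profile-to-user mapping      */
    size_t num_samples;                           /* samples gathered in training */
    float  feature_vecs[MAX_SAMPLES][N_BUTTONS];  /* one feature vector per sample*/
};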
The classification, training, and prediction modules may be implemented in software, firmware, hardware, or any combination thereof. Moreover, these modules may be built into the user device, e.g., a remote control. Alternatively, these modules may be implemented in an STB, a cable headend, or elsewhere, provided that the classification module may receive information indicating button presses and/or other user inputs, and provide a user ID to a content provider or its proxy so as to allow tailoring of content delivery.
In an embodiment, the feature vector may be implemented as follows. The feature vector may be n-dimensional, where n may correspond to the number of buttons on the user device. If the user device is a remote control having n buttons, then the feature vector may have n entries. The number of sessions that have taken place may be represented by an index last_session. Sessions may be defined as intervals in which significant remote control activity (e.g., button presses) takes place, separated by time intervals of no activity. Another n-dimensional vector, button_cum_vec, may also be used. Each value button_cum_vec[i] in this vector may represent the cumulative number of times that button i has been pressed. The feature vector may then be expressed as
log((float)(last_session+2)/(button_cum_vec[i]+1))
for each i, 0 ≤ i < n.
Note that each value in the feature vector may be viewed as an inverse frequency of presses of button i relative to the number of sessions.
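As a purely hypothetical illustration of this formula, suppose last_session is 8 and button i has been pressed 4 times across those sessions; the corresponding entry would be log((8+2)/(4+1)) = log 2, or approximately 0.69. A button that has never been pressed would yield log(10/1), approximately 2.30, while a button pressed in nearly every session would yield a value near or below zero. Rarely used buttons therefore contribute large entries and habitually used buttons contribute small ones, which is what allows the vector to reflect an individual user's characteristic usage.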
Training is illustrated in FIG. 3.
At 360, training may take place using the feature vector and current profile of the user that corresponds to the user ID as presented at 370. At 380, the user ID and the current profile may be associated, if this has not already taken place.
As is known to persons of ordinary skill in the art, a number of training algorithms may be used. In an embodiment, a support vector machine (SVM) may be used. Such learning may take the form of max-margin learning, and may be supervised or semi-supervised. Moreover, online max-margin learning may be used, where such online learning may be formulated as an online convex programming process.
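As one concrete possibility, and only as a sketch, the online max-margin idea might be realized with a hinge-loss subgradient update over per-user linear models, in the spirit of online convex programming. The function and constant names (train_step, N_USERS, eta, lambda) and the particular update rule are assumptions of this sketch rather than an algorithm prescribed by the description.

/* Sketch of an online max-margin update over button-press feature vectors. */
#include <stddef.h>

#define N_BUTTONS 64   /* feature vector length n                */
#define N_USERS   4    /* number of user profiles being learned  */

static float weights[N_USERS][N_BUTTONS];   /* one linear model per user */

static float score(const float w[N_BUTTONS], const float x[N_BUTTONS])
{
    float s = 0.0f;
    for (size_t i = 0; i < N_BUTTONS; i++)
        s += w[i] * x[i];
    return s;
}

/* One online step on a feature vector x labeled with the true user: if the
 * true user's score does not beat the best rival's score by a unit margin,
 * take a hinge-loss subgradient step with L2 shrinkage. */
void train_step(const float x[N_BUTTONS], int true_user, float eta, float lambda)
{
    float true_score  = score(weights[true_user], x);
    int   rival       = -1;
    float rival_score = 0.0f;

    for (int u = 0; u < N_USERS; u++) {
        if (u == true_user)
            continue;
        float s = score(weights[u], x);
        if (rival < 0 || s > rival_score) {
            rival       = u;
            rival_score = s;
        }
    }

    int violated = (true_score - rival_score < 1.0f);

    /* L2 regularization: shrink every weight slightly */
    for (int u = 0; u < N_USERS; u++)
        for (size_t i = 0; i < N_BUTTONS; i++)
            weights[u][i] *= (1.0f - eta * lambda);

    /* margin violated: pull the true user's model toward x, push the rival away */
    if (violated && rival >= 0) {
        for (size_t i = 0; i < N_BUTTONS; i++) {
            weights[true_user][i] += eta * x[i];
            weights[rival][i]     -= eta * x[i];
        }
    }
}

Here eta is a step size and lambda a regularization constant; other online SVM formulations could be substituted without changing the surrounding training flow.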
Prediction is illustrated in FIG. 4.
At 460, prediction may take place using the updated feature vector. In an embodiment, this feature vector may be compared to feature vectors of possible users as generated previously during their respective training phases. The previous feature vector that most closely resembles the current feature vector may then be identified. A number of statistical tests may be used to determine the degree of resemblance between a current feature vector and previous feature vectors, as would be known to a person of ordinary skill in the art. A mapping of users and their respective profiles (including their respective previous feature vectors) may then be used to identify or predict the particular user. The ID of this user may be output at 470.
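As an illustration of one such resemblance test, and not a prescribed method, the sketch below matches the current feature vector to each stored profile's feature vector by cosine similarity and returns the associated user ID. The names (predict_user, struct profile) and the choice of cosine similarity are assumptions of the sketch.

/* Sketch of the prediction step: nearest stored profile by cosine similarity. */
#include <math.h>
#include <stddef.h>

#define N_BUTTONS 64

struct profile {
    int   user_id;
    float feature_vec[N_BUTTONS];
};

static float cosine_similarity(const float a[N_BUTTONS], const float b[N_BUTTONS])
{
    float dot = 0.0f, na = 0.0f, nb = 0.0f;
    for (size_t i = 0; i < N_BUTTONS; i++) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    if (na == 0.0f || nb == 0.0f)
        return 0.0f;
    return dot / (sqrtf(na) * sqrtf(nb));
}

/* Returns the user ID of the profile whose feature vector most closely
 * resembles the current one, or -1 if no profiles are available. */
int predict_user(const float current[N_BUTTONS],
                 const struct profile *profiles, size_t n_profiles)
{
    int   best_id  = -1;
    float best_sim = -2.0f;   /* below the minimum possible similarity */

    for (size_t p = 0; p < n_profiles; p++) {
        float sim = cosine_similarity(current, profiles[p].feature_vec);
        if (sim > best_sim) {
            best_sim = sim;
            best_id  = profiles[p].user_id;
        }
    }
    return best_id;
}

Other measures, such as Euclidean distance or a likelihood-based test, could be substituted here without changing the surrounding prediction flow.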
An example of a session-add routine is illustrated in FIG. 5.
At 525, a determination may be made as to whether the last session was a prediction session, where the button presses of that session were used to determine the identity of a user. If not, then the last session was a training session, and the process may continue to 555. Here button_cum_vec[i] may be incremented where button_in represents the ith button. If, at 525, it is determined that the last session was a prediction session, then at 530 the array buffer button_map may be purged, except for the last row. This array buffer may be used to store previous instances of the vector button_map. The process may then continue to 555, where button_cum_vec[i] may be incremented.
Returning to 505, if it is determined that the current session is a new session, then at 535 a new button_map vector may be initialized to all 0's, except that button_map [i] may be set to 1 where button_in represents the ith button. At 540 a determination may be made as to whether the last session was a prediction session. If not, then the process may continue at 550, where the newly created button_map vector may be inserted into the button_map array buffer as the last entry. If it is determined at 540 that the last session was a prediction session, then the process continues at 545. Here, the button_map array buffer may be cleared. Then at 550 the newly created button_map vector may be inserted into the button_map array buffer as the last entry. At 555, button_cum_vec[i] may be incremented where button_in represents the ith button.
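The following sketch traces the flow just described (505 through 555). The buffer sizes, the boolean session-state flags, and the storage layout are assumptions introduced for the sketch; only the branching follows the description above.

/* Sketch of the session-add routine; assumes 0 <= button_in < N_BUTTONS. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define N_BUTTONS    64
#define MAX_SESSIONS 128

static int    button_cum_vec[N_BUTTONS];               /* cumulative press counts        */
static int    button_map_buf[MAX_SESSIONS][N_BUTTONS]; /* one button_map row per session */
static size_t n_rows;                                   /* rows currently in the buffer   */

void session_add(int button_in, bool new_session, bool last_was_prediction)
{
    if (new_session) {                                   /* 505: current session is new   */
        if (last_was_prediction)                         /* 540/545: clear the buffer     */
            n_rows = 0;
        if (n_rows < MAX_SESSIONS) {                     /* 535/550: new button_map row   */
            memset(button_map_buf[n_rows], 0, sizeof(button_map_buf[n_rows]));
            button_map_buf[n_rows][button_in] = 1;
            n_rows++;
        }
    } else if (last_was_prediction && n_rows > 1) {      /* 525/530: purge all but last   */
        memcpy(button_map_buf[0], button_map_buf[n_rows - 1],
               sizeof(button_map_buf[0]));
        n_rows = 1;
    }
    button_cum_vec[button_in]++;                         /* 555: cumulative count          */
}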
The creation of a feature vector is illustrated in FIG. 6.
button_isf_vec[i] = log((float)(last_session+2)/(button_cum_vec[i]+1)).
At 630, the index i may be incremented. At 640 it may be determined whether the value of i has reached the number of buttons. If not, then the process may return to 620 for calculation of the next value in button_isf_vec. The process may conclude at 650.
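A minimal sketch of this loop, assuming fixed array sizes and the formula given earlier, might look as follows; the function name build_feature_vector and the dimension constant are illustrative.

/* Sketch of the feature-vector computation at 620-650. */
#include <math.h>

#define N_BUTTONS 64

void build_feature_vector(const int button_cum_vec[N_BUTTONS],
                          int last_session,
                          float button_isf_vec[N_BUTTONS])
{
    for (int i = 0; i < N_BUTTONS; i++) {    /* 620-640: loop over the buttons */
        button_isf_vec[i] =
            logf((float)(last_session + 2) / (float)(button_cum_vec[i] + 1));
    }                                        /* 650: all entries computed      */
}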
One or more features disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term software, as used herein, refers to a computer program product including a computer readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein. The computer readable medium may be transitory or non-transitory. An example of a transitory computer readable medium may be a digital signal transmitted over a radio frequency or over an electrical conductor, through a local or wide area network, or through a network such as the Internet. An example of a non-transitory computer readable medium may be a compact disk, a flash memory, a hard disk, or other data storage device.
A software or firmware embodiment of the processing described above with reference to the preceding figures may be implemented using such a computer program product, with the computer program logic executed by one or more processors of a computing system.
In one embodiment, for example, a device that implements the system, method, and/or computer program product described herein may comprise a processing system, computing system, mobile computing system, mobile computing device, mobile wireless device, computer, computer platform, computer system, computer sub-system, server, workstation, terminal, personal computer (PC), laptop computer, ultra-laptop computer, portable computer, handheld computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, smart phone (e.g., a Blackberry® device), pager, one-way pager, two-way pager, messaging device, MID, MP3 player, and so forth. The embodiments are not limited in this context.
In one embodiment, such a device may be implemented as part of a wired communication system, a wireless communication system, or a combination of both. In one embodiment, for example, such a device may be implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries, for example. Examples of such a mobile computing device may include a laptop computer, ultra-mobile PC, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, smart phone, pager, one-way pager, two-way pager, messaging device, data communication device, MID, MP3 player, and so forth.
Methods and systems are disclosed herein with the aid of functional building blocks illustrating the functions, features, and relationships thereof. At least some of the boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.
While various embodiments are disclosed herein, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail may be made therein without departing from the spirit and scope of the methods and systems disclosed herein. Thus, the breadth and scope of the claims should not be limited by any of the exemplary embodiments disclosed herein.
Number | Name | Date | Kind |
---|---|---|---|
4827428 | Dunlop et al. | May 1989 | A |
7787869 | Rice et al. | Aug 2010 | B2 |
7800592 | Kerr et al. | Sep 2010 | B2 |
20020099657 | Black et al. | Jul 2002 | A1 |
20040125993 | Zhao et al. | Jul 2004 | A1 |
20040137416 | Ma et al. | Jul 2004 | A1 |
20060250213 | Cain, Jr. et al. | Nov 2006 | A1 |
20070014536 | Hellman | Jan 2007 | A1 |
20070073799 | Adjali et al. | Mar 2007 | A1 |
20070079137 | Tu | Apr 2007 | A1 |
20080004951 | Huang et al. | Jan 2008 | A1 |
20080113787 | Alderucci et al. | May 2008 | A1 |
20080128182 | Westerman et al. | Jun 2008 | A1 |
20080168267 | Bolen et al. | Jul 2008 | A1 |
20080249969 | Tsui et al. | Oct 2008 | A1 |
20090117951 | Alameh et al. | May 2009 | A1 |
20100008643 | Rakib et al. | Jan 2010 | A1 |
20100042564 | Harrison et al. | Feb 2010 | A1 |
Number | Date | Country |
---|---|---|
101651819 | Feb 2010 | CN |
2154882 | Feb 2010 | EP |
2007131069 | Nov 2007 | WO |
2010019415 | Feb 2010 | WO |
2010019415 | May 2010 | WO |
Entry |
---|
Office Action received for European Patent Application No. 09251950.3, mailed on Apr. 23, 2012, 3 pages. |
Search Report received for European Patent Application No. 09251950.3, mailed on Nov. 5, 2009, 3 pages. |
Office Action received for European Patent Application No. 09251950.3, mailed on Dec. 5, 2011, 4 pages. |
Office Action received for Chinese Patent Application No. 200910170419.0, mailed on Jan. 6, 2012, 18 pages including 9 pages of English translation. |
International Search Report and Written Opinion received for International PCT Application No. PCT/US2009/052721, mailed on Mar. 23, 2010, 11 pages. |
Office Action received for European Patent Application No. 09251950.3, mailed on May 11, 2010, 5 pages. |
Office Action received for Chinese Patent Application No. 200910170419.0, mailed on Sep. 7, 2011, 16 pages including 8 pages of English translation. |
International Preliminary Report on Patentability and Written Opinion received for International Patent Application No. PCT/US2009/052721, mailed on Feb. 24, 2011, 6 pages. |
Office Action received for Chinese Patent Application No. 200910170419.0, mailed on Nov. 24, 2010, 10 pages of English translation. |
Office Action received for European Patent Application No. 09251950.3, mailed on Sep. 30, 2010, 6 pages. |
Chang, K.; Hightower, J.; and Kveton, B. 2009. Inferring identity using accelerometers in television remote controls. In Proceedings of the 7th International Conference on Pervasive Computing, 151-167. |
Ratliff, N.; Bagnell, A.; and Zinkevich, M. 2007. (Online) subgradient methods for structured prediction. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics. |
Taskar, B.; Guestrin, C.; and Koller, D. 2004. Max-margin Markov networks. In Advances in Neural Information Processing Systems 16. |
Office Action received for U.S. Appl. No. 12/192,789, mailed on Mar. 1, 2013, 20 pages. |
Number | Date | Country | |
---|---|---|---|
20130006898 A1 | Jan 2013 | US |