The field of the present invention is digital media players.
Often, multiple users share the same entertainment device. Family members, for example, may share the same television, the same computer, and the same stereo deck. Some entertainment devices are programmable to automatically play content selected by a user. A user may manually select content to be automatically played, or indicate preferences such as artist, title and genre. When there are multiple users, they may define multiple profiles, each profile indicating preferences of a corresponding user.
A Microsoft Windows user profile, for example, is used to configure personal computer parameter settings for a user, including settings for the Windows environment and settings relating to pictures, video and other such media. To configure his parameter settings, the user defines his profile and stores his preferred content files in a designated directory. Thus, if the user wants his computer screensaver to present a slideshow of his pictures, he must store his pictures in the designated directory. If another user wants the screensaver to present a slideshow of different pictures, then that user must store his pictures in a different directory, configure his profile accordingly, and change the currently active profile of the computer.
It would thus be of advantage for a shared entertainment device to be able to automatically personalize its content presentation according to the preferences of the person viewing the content, without manual intervention.
Aspects of the present invention relate to a method and apparatus for automatically personalizing presentation of media content based on the identity of the person enjoying the presentation, without manual intervention, so that the content being presented is the person's preferred content. Embodiments of the present invention may be implemented in a variety of presentation devices, including inter alia digital picture frames, stereo decks, video decks, radios, televisions, computers, and other such entertainment appliances which often play content continuously and uninterruptedly over periods of time.
Embodiments of the present invention detect identities of one or more people enjoying content on a player device at any given time, from IDs received from their transmitters, from biometrics, from voice recognition, or from facial or other images captured by one or more cameras.
Embodiments of the present invention associate media content with people based on their playlists, based on their preferences, based on metadata tags in content files, and by applying face and voice recognition to images and videos.
There is thus provided in accordance with an embodiment of the present invention a method for dynamic real-time content personalization and display, including collecting data about at least one viewer in the vicinity of a content presentation device, identifying the at least one viewer from the collected data, locating content associated with the identified at least one viewer, and automatically presenting the located content.
There is further provided in accordance with an embodiment of the present invention a content presentation device with content personalization functionality, including a data collector, for collecting data about at least one viewer in the vicinity of a content presentation device, a viewer identifier communicatively coupled with the data collector, for identifying the at least one viewer from the data collected by the data collector, a content locator communicatively coupled with the viewer identifier, for locating content associated with the at least one viewer identified by the viewer identifier, and a media player communicatively coupled with the content locator, for automatically presenting the content located by the content locator.
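For purposes of illustration only, the method steps and the four device components recited above may be sketched in software as follows. This is a minimal sketch; the class and method names (DataCollector, ViewerIdentifier, ContentLocator, MediaPlayer, collect, identify, locate, present) are hypothetical and do not form part of any recited embodiment.

```python
# Minimal illustrative sketch of the four recited components; all names are hypothetical.
from abc import ABC, abstractmethod
from typing import List


class DataCollector(ABC):
    """Collects raw data (electronic IDs, biometrics, images) about nearby viewers."""
    @abstractmethod
    def collect(self) -> List[bytes]: ...


class ViewerIdentifier(ABC):
    """Identifies viewers from the data produced by a DataCollector."""
    @abstractmethod
    def identify(self, samples: List[bytes]) -> List[str]: ...


class ContentLocator(ABC):
    """Locates content items associated with the identified viewers."""
    @abstractmethod
    def locate(self, viewer_ids: List[str]) -> List[str]: ...


class MediaPlayer(ABC):
    """Automatically presents the located content."""
    @abstractmethod
    def present(self, content_items: List[str]) -> None: ...
```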
The following definitions are employed throughout the specification.
The present invention will be more fully understood and appreciated from the following detailed description, taken in conjunction with the accompanying drawings.
Embodiments of the present invention relate to a media presentation device with functionality for automatically presenting content that is associated with one or more viewers of the content. The presentation device identifies the viewer, and in turn presents content that is determined to be associated with the identified viewer.
Reference is now made to the accompanying drawings, which illustrate a method for dynamic real-time content personalization and a presentation device 200 that performs the method, in accordance with an embodiment of the present invention.
According to an embodiment of the present invention, presentation device 200 is a passive device, which automatically presents content uninterruptedly without manual intervention. Presentation device 200 may be a digital picture frame, which automatically presents a slide show of pictures. Presentation device 200 may be a stereo or video deck, which automatically plays music or movies. Presentation device 200 may be a radio, which automatically plays broadcast audio. Presentation device 200 may be a television, which automatically plays broadcast TV shows. Presentation device 200 may be a computer, which automatically presents a screensaver when it is in an idle state.
At step 110, data collector 210 collects data about viewers 250 in its vicinity. Step 110 may be implemented in many different ways.
Step 110 may be implemented by receiving electronic IDs from viewers' devices. For example, viewers 250 may have cell phones with Bluetooth IDs, near-field communication (NFC) IDs, radio frequency IDs (RFID), bar code IDs, or other such identifiers. For such implementation, data collector 210 is a corresponding Bluetooth receiver, NFC receiver, RFID receiver, bar code scanner, or such other receiver or scanner.
Step 110 may be implemented by scanning viewer biometrics; e.g., by scanning an eye iris, scanning a fingerprint, or scanning a palm. Alternatively, step 110 may be implemented by recording a voice. For such implementation, data collector 210 is a corresponding iris scanner, fingerprint scanner, palm scanner, or voice recorder.
Step 110 may be implemented by analyzing images captured by a still or video camera located on or near the player device. For such implementation, data collector 210 is a camera.
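Purely as an illustrative sketch, the step 110 implementations described above (electronic ID receivers, biometric scanners and voice recorders, and cameras) may be realized as interchangeable collectors behind the DataCollector abstraction sketched earlier. The driver callbacks below are hypothetical stand-ins for actual Bluetooth, NFC, RFID, bar code, biometric or camera hardware interfaces.

```python
# Illustrative step 110 collectors; the underlying driver callbacks are hypothetical.
from typing import Callable, List


class ElectronicIdCollector:
    """Wraps a Bluetooth, NFC, RFID or bar code receiver that yields raw IDs."""
    def __init__(self, scan_fn: Callable[[], List[bytes]]):
        self._scan_fn = scan_fn  # hypothetical receiver/scanner callback

    def collect(self) -> List[bytes]:
        return self._scan_fn()


class BiometricCollector:
    """Wraps an iris, fingerprint or palm scanner, or a voice recorder."""
    def __init__(self, sample_fn: Callable[[], List[bytes]]):
        self._sample_fn = sample_fn  # hypothetical scanner/recorder callback

    def collect(self) -> List[bytes]:
        return self._sample_fn()


class CameraCollector:
    """Wraps a still or video camera that yields captured frames."""
    def __init__(self, capture_fn: Callable[[], List[bytes]]):
        self._capture_fn = capture_fn  # hypothetical camera callback

    def collect(self) -> List[bytes]:
        return self._capture_fn()
```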
If no viewers are detected at step 110, then the method repeats step 110 periodically, until one or more viewers are detected.
At step 120 viewer identifier 220 identifies the viewers 250 in its vicinity from the data collected at step 110. For example, viewer identifier 220 may look up an electronic ID in a viewer database 260. Alternatively, viewer identifier 220 may employ iris recognition, fingerprint recognition, palm recognition or voice recognition software. Alternatively, viewer identifier 220 may employ face recognition software to identify one or more persons in captured images. E.g., viewer identifier 220 may use the OKAO Vision™ face sensing software developed and marketed by OMRON Corporation of Kyoto, Japan, or the Face Sensing Engine developed and marketed by Oki Electric Industry Co., Ltd. of Tokyo, Japan.
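As an illustrative sketch only, step 120 may be realized as a lookup of electronic IDs against viewer database 260, or as a delegation to a recognition engine. The recognize callback below is a hypothetical stand-in for a face, iris, fingerprint, palm or voice recognition engine such as those mentioned above, and does not reflect the actual API of any named product.

```python
# Illustrative step 120 identifiers; database contents and engine callbacks are hypothetical.
from typing import Callable, Dict, List, Optional


class ViewerDatabaseIdentifier:
    """Looks up electronic IDs (Bluetooth, NFC, RFID, bar code) against viewer database 260."""
    def __init__(self, id_to_viewer: Dict[bytes, str]):
        self._id_to_viewer = id_to_viewer  # hypothetical in-memory view of database 260

    def identify(self, samples: List[bytes]) -> List[str]:
        return [self._id_to_viewer[s] for s in samples if s in self._id_to_viewer]


class RecognitionIdentifier:
    """Delegates to a face, iris, fingerprint, palm or voice recognition engine."""
    def __init__(self, recognize: Callable[[bytes], Optional[str]]):
        self._recognize = recognize  # hypothetical engine callback

    def identify(self, samples: List[bytes]) -> List[str]:
        matches = (self._recognize(sample) for sample in samples)
        return [viewer for viewer in matches if viewer is not None]
```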
At step 130 content locator 230 locates content associated with the viewers 250 identified at step 120. Content locator 230 may consult a content database 270 that indexes content according to viewer association. Association of content with viewers may be performed in many different ways. Audio files may be associated with viewers based on existing playlists associated with viewers, based on preset viewer preferences such as viewer preferred genres, and by identifying the viewer's voice in the files. Image and video files may be associated with viewers by cataloging the files according to people included in the images and videos, based on face recognition and other such recognition techniques. Software, such as the content-based image organization application developed and marketed by Picporta, Inc. of Ahmedabad, India, and the visual search application developed by Riya, Inc. of Bangalore, India, may be used to do the cataloging. Alternatively, audio, image and video files may be associated with viewers based on informational metadata tags in the files. The Facebook® system, developed and marketed by Facebook, Inc. of Palo Alto, Calif., enables users to tag people in photos by marking areas in the photos.
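Purely as an illustrative sketch, content database 270 may be modeled as an index of content items keyed by associated viewers, populated from playlists, preferences, metadata tags or recognition results. The class and method names below are hypothetical.

```python
# Illustrative step 130 content index; a hypothetical stand-in for content database 270.
from typing import Dict, List, Set


class ContentIndex:
    """Indexes content items by the viewers associated with them."""
    def __init__(self) -> None:
        self._by_viewer: Dict[str, Set[str]] = {}

    def associate(self, content_item: str, viewer_id: str) -> None:
        # Associations may derive from playlists, preferences, metadata tags,
        # or face/voice recognition applied to the content itself.
        self._by_viewer.setdefault(viewer_id, set()).add(content_item)

    def locate(self, viewer_ids: List[str]) -> List[str]:
        located: Set[str] = set()
        for viewer_id in viewer_ids:
            located |= self._by_viewer.get(viewer_id, set())
        return sorted(located)
```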
A summary and evaluation of face recognition technologies that may be used for automatic cataloging of image collections is presented in Corcoran, P. and Costache, G., "The automated sorting of consumer image collections using face and peripheral region image classifiers", IEEE Trans. Consumer Electronics, Vol. 51, No. 3, August 2005, pp. 747-754.
It will be appreciated by those skilled in the art that viewer database 260 may be local to presentation device 200, as indicated in the drawings, or may alternatively be remote from presentation device 200 and accessed over a communication network.
It will further be appreciated by those skilled in the art that content database 270 may be local to presentation device 200, as indicated in the drawings, or may alternatively be remote from presentation device 200 and accessed over a communication network.
At step 140 the content located at step 130 by content locator 230 is automatically presented by media player 240. If more than one viewer is identified at step 120, media player 240 gives priority to content that is associated with more than one of the identified viewers. Additionally or alternatively, media player 240 rotates its presentation between content associated with each identified viewer. As such, the presentation time is divided between content presented for the multiple viewers.
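A minimal sketch of one possible step 140 scheduling policy for multiple identified viewers follows: content associated with more than one identified viewer is presented first, and the remaining content is then rotated viewer by viewer. The function name and data layout are hypothetical.

```python
# Illustrative scheduling of content when several viewers are identified.
from itertools import zip_longest
from typing import Dict, List, Set


def schedule(content_by_viewer: Dict[str, Set[str]]) -> List[str]:
    """Shared content is presented first, then a round-robin rotation over each viewer's content."""
    if not content_by_viewer:
        return []
    all_items: Set[str] = set().union(*content_by_viewer.values())
    # Content associated with more than one identified viewer gets priority.
    shared = {item for item in all_items
              if sum(item in items for items in content_by_viewer.values()) > 1}
    # Remaining content is rotated viewer by viewer.
    per_viewer = [sorted(items - shared) for items in content_by_viewer.values()]
    rotation = [item for group in zip_longest(*per_viewer)
                for item in group if item is not None]
    return sorted(shared) + rotation


# Example: schedule({"alice": {"a1", "ab"}, "bob": {"b1", "ab"}}) returns ["ab", "a1", "b1"]
```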
In accordance with another embodiment of the present invention, specific content is designated as default content, and when one or more viewers are identified, media player 240 rotates its presentation between default content and content associated with each identified viewer.
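A corresponding sketch of this default-content variant follows: the presentation is rotated between designated default content and content associated with each identified viewer. The function name is hypothetical.

```python
# Illustrative rotation between default content and each identified viewer's content.
from itertools import zip_longest
from typing import List


def rotate_with_default(default_items: List[str],
                        viewer_playlists: List[List[str]]) -> List[str]:
    """Interleave designated default content with each identified viewer's content."""
    lists = [default_items] + viewer_playlists
    return [item for group in zip_longest(*lists)
            for item in group if item is not None]
```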
In an embodiment of the present invention, media player 240 uses predefined rules for content to be presented, based on viewers identified at step 120. For example, pre-designated content is prevented from being presented if one or more pre-designated viewers are identified as being in the vicinity of media player 240.
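Purely as an illustrative sketch, such a predefined rule may be applied as a filter over the playlist before presentation, for example as follows; the function and parameter names are hypothetical.

```python
# Illustrative presentation rule: block pre-designated content whenever any
# pre-designated viewer is identified in the vicinity of media player 240.
from typing import List, Set


def apply_rules(playlist: List[str],
                present_viewers: Set[str],
                blocked_content: Set[str],
                blocking_viewers: Set[str]) -> List[str]:
    """Remove blocked content when at least one blocking viewer is present."""
    if present_viewers & blocking_viewers:
        return [item for item in playlist if item not in blocked_content]
    return playlist
```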
The method of the drawings is performed repeatedly; after content is presented at step 140, processing returns to step 110, so that data about viewers currently in the vicinity of presentation device 200 is collected anew.
As such, it will be appreciated by those skilled in the art that embodiments of the present invention dynamically search for new viewers in the vicinity of presentation device 200, and present relevant content in real time, promptly in response to identification of such new viewers.
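The illustrative sketches above may be tied together in a main loop, shown below for purposes of illustration only, in which data is collected periodically (repeating step 110 when no viewers are detected) and the presentation is updated whenever the set of identified viewers changes. All names are hypothetical.

```python
# Illustrative main loop tying steps 110-140 together; component objects are
# instances of the hypothetical sketches above.
import time


def run(collector, identifier, locator, player, poll_seconds: float = 5.0) -> None:
    current_viewers: frozenset = frozenset()
    while True:
        samples = collector.collect()                       # step 110: collect data
        viewers = frozenset(identifier.identify(samples))   # step 120: identify viewers
        if viewers and viewers != current_viewers:
            current_viewers = viewers
            content = locator.locate(sorted(viewers))       # step 130: locate content
            player.present(content)                         # step 140: present content
        time.sleep(poll_seconds)                            # repeat periodically
```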
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.