Embodiments of the invention relate generally to media playback systems, and more specifically, to user awareness detection systems for televisions, computer monitors, and other media display devices.
Display devices, such as televisions, computer monitors, personal digital devices, and the like, are the principal means of delivering electronic content. Content providers can deliver virtually any type of visual content through a myriad of display devices. Television has traditionally been the most common display means; however, the advent of the Internet and other networks has led to an increase in viewing through computers, game devices, and other media playback units. Although certain user activity related to content delivery can be tracked and measured, such as network sites visited or television shows tuned into, there is at present no way of knowing whether a person is actually viewing, reading, or otherwise perceiving what is displayed when a television or computer monitor is turned on.
A significant disadvantage of current media research is its reliance on knowing the number of viewers who are watching a specific piece of media, for example a show or commercial on television. Current technologies can record only when a television is on; they cannot account for the fact that, for much of the time a television or web page is visible, people are not looking at it but are instead out of the room or otherwise engaged.
Likewise, with computer systems, it may be possible to determine what content or network sites a user may access, but it is generally not possible to know whether or not the user is actually attending to or perceiving the information on the screen.
Each patent, patent application, and/or publication mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual patent, patent application, and/or publication was specifically and individually indicated to be incorporated by reference.
Embodiments of the present invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
Embodiments of a system to accurately record whether viewers are actually watching, listening to, interacting with, or otherwise perceiving a media delivery device, such as a television, computer monitor, or other display mechanism, at any given moment are described. The system is configured to sense when a viewer is actually watching television or another electronic device, making it possible to know when the viewer can be meaningfully engaged by the media. This knowledge can be used by market research entities to measure what media is being viewed and how actively it is being viewed, which can range from passively watching the screen, to actively paying attention to it, to not viewing it at all. The system includes means to sense whether a viewer is oriented towards a TV, radio, monitor, or other media delivery device. Such a system can overcome the disadvantages of present systems, which generally have difficulty producing accurate models of viewership.
In one embodiment, an emitter is attached to each viewer. The emitter sends out a signal only in the direction the viewer is facing. The system has a receiver for this signal placed in close proximity to the media device, such as a TV, monitor, or radio. If the signal is received, it is assumed that the viewer's head is oriented in the right direction to view the monitor. If the user leaves the room or looks the other way, the signal diminishes and disappears.
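The receiver-side decision described above can be sketched in a few lines. This is a hypothetical illustration only, not the patented implementation; the normalized signal strengths, the threshold value, and the function names are all assumptions made for the sake of the example.

```python
# Sketch of receiver-side logic for the head-mounted directional emitter.
# The receiver sits near the media device; signal strength is high only
# when the viewer's head (and thus the emitter) points toward it.
# The 0.5 threshold and the normalized sample format are assumptions.

SIGNAL_THRESHOLD = 0.5  # assumed normalized strength cutoff

def classify_samples(strengths, threshold=SIGNAL_THRESHOLD):
    """Map a series of received signal strengths to viewing states."""
    return ["viewing" if s >= threshold else "away" for s in strengths]

def viewing_runs(strengths):
    """Count contiguous stretches of 'viewing' samples, i.e. distinct
    periods during which the viewer's head faced the media device."""
    runs = 0
    prev = "away"
    for state in classify_samples(strengths):
        if state == "viewing" and prev != "viewing":
            runs += 1
        prev = state
    return runs
```

For example, the sample series `[0.9, 0.8, 0.1, 0.0, 0.7]` would be read as the viewer facing the device, turning away (signal diminishing and disappearing), then facing it again: two distinct viewing runs.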
For the embodiment of
In an alternative embodiment, the emitter may be placed on the media device, with the receiver placed on the user that measures if the signal is visible to the viewer. The user-based receiver can then transmit this information back to a base station either through wired or wireless means.
For the embodiments of
The embodiments of
The field of view 301 imaged by the camera 320 corresponds to an optimum line-of-sight 303 when a user 304 is viewing the monitor 302 from a head-on or nearly head-on orientation. The camera 320 is configured to detect whether there is a person in front of the monitor, and more specifically whether the user's face is pointed towards the monitor. The camera images within a specific field of focus and transmits images to an image processor component 310. The image processor component includes functions, such as face recognition software, that determine whether the user is looking at the monitor screen. In certain implementations, the direction of the user's eyes can be determined to make sure that the user is focusing on the screen, rather than merely having their face turned in the direction of the screen. In one embodiment, the image data from the image processor 310 is passed on to an attention detector processor 308 for further processing.
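The distinction the passage draws between a face merely pointed at the screen and eyes actually focused on it can be expressed as a small classification step over the image processor's output. This is a minimal sketch under assumed interfaces: the `FaceObservation` fields, the gaze tolerance, and the state names are all hypothetical, not the actual face recognition software of component 310.

```python
# Hypothetical post-processing of face-recognition output, as the image
# processor (310) might hand results to the attention detector (308).
# Field names and the 15-degree gaze tolerance are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FaceObservation:
    face_detected: bool      # a frontal face is visible in the frame
    gaze_offset_deg: float   # estimated angle between gaze and screen normal

GAZE_TOLERANCE_DEG = 15.0    # assumed cutoff for "focused on screen"

def attention_state(obs: FaceObservation) -> str:
    """Classify a single frame from the camera's field of view."""
    if not obs.face_detected:
        return "absent_or_turned_away"
    if abs(obs.gaze_offset_deg) <= GAZE_TOLERANCE_DEG:
        return "focused_on_screen"
    return "facing_but_not_focused"
```

The three-way result mirrors the range of states the system distinguishes: no viewer present, a viewer facing the screen without focusing on it, and a viewer actively attending.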
It should be noted that any of the connections between the components in any of
In one embodiment, the user may be outfitted with an accelerometer that is attached to a portion of his or her body, such as the head, face, neck, torso, etc. The orientation of the accelerometer can be detected by the attention detector processor 308 to determine if the user is facing the monitor 302 screen. For this embodiment, the accelerometer circuit is attached to a portion of a user positioned proximate the media delivery device at a distance suitable to perceive the monitor. The accelerometer is configured to provide an indication of the position of the user's head relative to the media delivery device. A detector circuit can be coupled to the monitor to receive a signal transmitted from the accelerometer. An attention detector processor coupled to the detector circuit can be configured to determine whether the user is perceiving content provided by the monitor based on one or more signals from the accelerometer.
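The accelerometer embodiment can be sketched as follows. A resting three-axis accelerometer measures the gravity vector, from which head tilt (pitch) can be derived; a single accelerometer cannot supply heading, so this is only a partial orientation cue of the kind the attention detector processor would combine with other signals. The axis convention and the tilt range are assumptions for the sketch.

```python
# Sketch of head-orientation estimation from a head-worn 3-axis
# accelerometer, as the attention detector processor (308) might use it.
# Assumed axis convention: x forward from the face, z down; the +/-20
# degree viewing range is an illustrative assumption.

import math

def pitch_deg(ax, ay, az):
    """Head pitch (up/down tilt) from the measured gravity vector."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

PITCH_RANGE_DEG = (-20.0, 20.0)  # assumed tilt range for screen viewing

def head_level_enough(ax, ay, az,
                      lo=PITCH_RANGE_DEG[0], hi=PITCH_RANGE_DEG[1]):
    """True when head tilt is consistent with facing a screen at eye level.
    Heading (yaw) is not observable from gravity alone, so this is only
    one input to the overall attention determination."""
    return lo <= pitch_deg(ax, ay, az) <= hi
```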
In general, the viewer attention detection system according to embodiments can detect whether a viewer is oriented directly towards the media delivery device. This provides a reasonable indication that the user is paying attention to the media being delivered, and can also help to indicate instances when the user is not. This information can be utilized by content providers for various purposes. For example, the percentage of time that a user is actively watching the media delivery device relative to the total time the device is powered on can define an "engagement" metric. Engaging media will typically make people want to watch it, keeping them glued to their media delivery devices, while less engaging media, even if it is being transmitted to the viewer, may not be actively watched. This is a key new metric for media analysis.
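The engagement metric defined above reduces to a simple ratio. A minimal sketch, with the time units assumed to be seconds:

```python
def engagement(active_seconds, powered_on_seconds):
    """Engagement metric as described above: the fraction of powered-on
    time during which the viewer was actively watching the media
    delivery device. Returns 0.0 if the device was never on."""
    if powered_on_seconds <= 0:
        return 0.0
    return active_seconds / powered_on_seconds
```

A viewer who actively watched for 30 minutes of a one-hour broadcast would score an engagement of 0.5.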
Another advantage of the attention detection system is that this viewer "engagement" and watching time can be aggregated over very large numbers of participants to create models of viewership for given media types. This information can then be used as a baseline to identify how engaging each type of media is relative to competing sources. For example, knowing that a piece of media engages viewers, actively watching or listening, for 60% of its duration is an important measure. The key information, however, is its engagement relative to competing media of the same type, with the competition average providing a benchmark. If the media is, for example, a TV broadcast of a round of golf, and viewers watching golf are on average engaged 30% of the time, then a 60% engagement measure would be good. On the other hand, if the content was a thriller and the average engagement for thrillers is 90% or more, then a 60% measure would indicate that the show was not particularly engaging.
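The golf-versus-thriller comparison above amounts to normalizing a measured engagement against a per-genre benchmark. A sketch, using the example figures from the text as an assumed baseline table:

```python
# Relative engagement against a genre benchmark, using the example
# averages from the text (golf ~30%, thrillers ~90%) as assumed data.

GENRE_BASELINES = {
    "golf": 0.30,
    "thriller": 0.90,
}

def relative_engagement(measured, genre, baselines=GENRE_BASELINES):
    """Ratio of measured engagement to the genre's competitive average.
    Values above 1.0 indicate the media outperforms its benchmark;
    values below 1.0 indicate it underperforms."""
    return measured / baselines[genre]
```

With these figures, a 60% engagement scores 2.0 for a golf broadcast (twice the benchmark) but only about 0.67 for a thriller, matching the conclusion drawn in the text.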
This information can then be used to rate show viewership very accurately and provide a measure of the overall engagement by viewers. In one embodiment, the attention detection processing system can be deployed in viewers' homes as part of the usual delivery devices, such as the television. This would allow a great number of users' responses to be simultaneously measured and aggregated. Such a system can be used by television rating services to provide a more accurate measure of actual user interest, rather than just television tuning measurements.
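The aggregation step described above, combining many households' measurements into a per-show rating, can be sketched as follows. The report format is an assumption; a deployed system would carry more metadata.

```python
# Sketch of aggregating per-household engagement reports into show
# ratings, as a rating service might. The mapping of show name to a
# list of engagement fractions is an assumed report format.

from statistics import mean

def aggregate_show_engagement(household_reports):
    """Average the engagement fractions reported by deployed attention
    detectors, producing one rating per show. Shows with no reports
    are omitted."""
    return {show: mean(vals)
            for show, vals in household_reports.items() if vals}
```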
Aspects of the embodiments described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (“PLDs”), such as field programmable gate arrays (“FPGAs”), programmable array logic (“PAL”) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits. Some other possibilities for implementing aspects of the method include: microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the described method may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. The underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (“MOSFET”) technologies like complementary metal-oxide semiconductor (“CMOS”), bipolar technologies like emitter-coupled logic (“ECL”), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, and so on.
It should also be noted that the various functions disclosed herein may be described using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the Internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, and so on).
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.
The above description of illustrated embodiments is not intended to be exhaustive or to limit the embodiments to the precise form or instructions disclosed. While specific embodiments of, and examples for, the disclosed system are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the described embodiments, as those skilled in the relevant art will recognize.
The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the viewer attention detection system in light of the above detailed description.
In general, in any following claims, the terms used should not be construed to limit the described system to the specific embodiments disclosed in the specification and the claims, but should be construed to include all operations or processes that operate under the claims. Accordingly, the described system is not limited by the disclosure, but instead the scope of the recited method is to be determined entirely by the claims.
While certain aspects of the system may be presented in certain claim forms, the inventor contemplates the various aspects of the methodology in any number of claim forms. For example, while only one aspect of the system is recited as embodied in machine-readable medium, other aspects may likewise be embodied in machine-readable medium. Accordingly, the inventor reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the described systems and methods.
This application is a continuation in part application of U.S. patent application Ser. No. 11/681,265, filed Mar. 2, 2007. This application is a continuation in part application of U.S. patent application Ser. No. 11/804,517, filed May 17, 2007. This application claims the benefit of U.S. Patent Application No. 60/970,898, filed Sep. 7, 2007. This application claims the benefit of U.S. Patent Application No. 60/970,900, filed Sep. 7, 2007. This application claims the benefit of U.S. Patent Application No. 60/970,905, filed Sep. 7, 2007. This application claims the benefit of U.S. Patent Application No. 60/970,908, filed Sep. 7, 2007. This application claims the benefit of U.S. Patent Application No. 60/970,913, filed Sep. 7, 2007. The present application claims the benefit of the U.S. Provisional Application No. 60/970,916 entitled “Methods and Systems for Media Viewer Attention Detection Using Means for Improving Information About Viewer's Preferences, Media Viewing Habits, and Other Factors,” and filed on Sep. 7, 2007.
Number | Date | Country
---|---|---
60970898 | Sep 2007 | US
60970900 | Sep 2007 | US
60970905 | Sep 2007 | US
60970908 | Sep 2007 | US
60970913 | Sep 2007 | US
60970916 | Sep 2007 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 11681265 | Mar 2007 | US
Child | 12206700 | | US
Parent | 11804517 | May 2007 | US
Child | 11681265 | | US