The present invention relates generally to providing multimedia content over a communications network, and more particularly, to an automated system for rating such multimedia content based on cues that are passively gathered from the user.
The delivery of multimedia and other content over communications networks is well known in the art. Examples include, but are not limited to, web browsing, File Transfer Protocol (FTP), Internet Protocol (IP) services such as Voice-over-IP (VoIP), and even conventional Cable Television (CATV) over Hybrid Fiber Coax (HFC).
In the context of television programming, delivered either via HFC, IP or the like, current technology enables users to provide ratings for such programming (or other dynamic media such as radio, CD, audio books, etc.). However, the current state of the art requires that the users actively provide such feedback. This is often accomplished by the user manipulating a remote control or keyboard. For example, the well-known TiVo® remote has data input controls for accepting such user input. However, the need for active user participation decreases the likelihood that the typical TV audience will provide any feedback.
It would therefore be desirable to provide a system and methodology whereby a viewer of multimedia content can provide feedback to a service provider or other entity in a transparent, non-invasive way that obviates the need for explicit viewer participation.
In accordance with aspects of the invention(s), methods and systems for capturing, transmitting and processing data for generating ratings relating to multimedia programming based on passively obtained user cues are disclosed herein.
An exemplary method, in the broadest sense, generally comprises the step of receiving data over a communications network, the data comprising cues providing feedback regarding the multimedia content from at least one end user in a manner transparent to that user.
In accordance with another aspect of the invention, a system for gathering user feedback, which can be used, for example, to rate multimedia content that is distributed to end users over a communications network, comprises: a network element adapted for receiving data over the communications network, the data comprising cues providing feedback regarding the multimedia content from at least one of the end users in a manner transparent to the user.
In accordance with still another aspect of the invention, a memory medium contains programmable machine readable instructions which, when executed by a processor, enable a network element, system, device, or other entity or apparatus to obtain rating data regarding multimedia content that is communicated to end users over a communications network by receiving data over the communications network, where the data comprises cues providing feedback regarding the multimedia content from at least one of the end users in a manner transparent to the user.
These and further advantages will become apparent to those skilled in the art as the present invention is described with particular reference to the accompanying drawings.
Embodiments of the invention will be described with reference to the accompanying drawing figures wherein like numbers represent like elements throughout. Before embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of the examples set forth in the following description or illustrated in the figures. The invention is capable of other embodiments and of being practiced or carried out in a variety of applications and in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
In accordance with a first aspect of the invention, a method and system for rating multimedia content is disclosed.
The cues, as previously discussed, may be aural, visual or audio-visual in nature and can be measured in terms of intensity and/or duration. Such cues may further be processed with specific regard to the multimedia content stream to enable a service provider or other entity to generate “ratings” for the programming in step 304. In this context, the cues may be temporally mapped against the content, and the resulting ratings data could then be distributed in step 306 for multiple purposes, including but not limited to creating virtual audiences 308, providing show recommendations 310, enabling producers to understand which content is most likely to generate the best revenue 312, and informing advertisers 314 and service providers (e.g., TV networks) 316.
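Purely as an illustrative sketch of such temporal mapping (the CueEvent structure, the sixty-second segment length and the intensity-times-duration weighting below are assumptions made for this example and are not taken from the disclosure), cue events carrying a type, an intensity, a duration and a timestamp could be bucketed against the program timeline to yield per-segment scores:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CueEvent:
    """A passively captured viewer cue (hypothetical structure)."""
    cue_type: str      # e.g., "laughter", "scream", "dialog"
    intensity: float   # normalized to the range 0.0-1.0
    duration_s: float  # how long the cue lasted, in seconds
    offset_s: float    # seconds from the start of the program

def segment_ratings(events, segment_len_s=60.0):
    """Temporally map cue events onto fixed-length program segments.

    Each segment's score is the sum of intensity * duration for the cues
    that fall within it; the weighting is illustrative only.
    """
    scores = defaultdict(float)
    for e in events:
        segment = int(e.offset_s // segment_len_s)
        scores[segment] += e.intensity * e.duration_s
    return dict(scores)

# Example: two bursts of laughter early in a comedy and one later burst.
events = [
    CueEvent("laughter", 0.8, 4.0, 95.0),
    CueEvent("laughter", 0.9, 6.0, 110.0),
    CueEvent("laughter", 0.5, 2.0, 610.0),
]
print(segment_ratings(events))  # segment 1 scores highest; segment 10 gets the later cue
```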
For example, a particular “comedy” might cause a viewer to laugh, an interesting documentary might elicit a thought-provoking discussion, or a horror “flick” might cause shock or fear. These aural, visual or audio-visual cues may be identified, captured and processed by the network access device in real or quasi-real time. A variety of acoustic models can be created to monitor different aural cues, such as a scream, which, obviously, has different properties than laughter. In the case of conversation, the inventive method can identify dialog without the need for complicated speech recognition technology, or even the need to understand the content at all. Such mapping of a user's audio or visual expressions for the purpose of authentication is known in the art.
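By way of illustration only, and not as a description of the acoustic models contemplated above, a crude frame-level heuristic might separate loud, high-frequency cues from loud, lower-frequency ones and from ordinary speech using short-time energy and zero-crossing rate; the thresholds, labels and synthetic input below are invented for this sketch:

```python
import numpy as np

def frame_features(samples, frame_len=1024):
    """Short-time energy and zero-crossing rate for each audio frame."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
        feats.append((energy, zcr))
    return feats

def label_frame(energy, zcr, loud=0.1, bright=0.3):
    """Toy decision rule: loud and bright -> scream-like, loud -> laughter-like,
    quieter but voiced -> speech-like. The thresholds are arbitrary placeholders."""
    if energy > loud and zcr > bright:
        return "scream-like"
    if energy > loud:
        return "laughter-like"
    if zcr > 0.05:
        return "speech-like"
    return "silence"

# Example with synthetic audio: a loud, noisy burst stands in for captured sound.
rng = np.random.default_rng(0)
burst = rng.normal(0.0, 0.5, 4096)
for energy, zcr in frame_features(burst):
    print(label_frame(energy, zcr))
```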
In addition to monitoring the frequency and duration of such cues, in accordance with aspects of the invention, an interested entity or device can record when such viewer feedback is generated. For example, a set-top box (in the HFC context) knows what multimedia content (i.e., TV show) is being shown and, if and when the viewer reacts, the points in the show at which each type of reaction occurs. This data, as described above, can be temporally correlated with the media content, thereby enabling the generation of a continuous ratings profile.
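As a rough sketch of how such temporal correlation might be realized (the record layout, the one-second resolution and the simple per-second count are assumptions for illustration only), reaction events logged by a set-top box against the program being shown could be rolled up into a continuous profile spanning the show:

```python
from dataclasses import dataclass

@dataclass
class ReactionRecord:
    """One logged reaction event from a set-top box (hypothetical layout)."""
    program_id: str   # identifier of the show being watched
    offset_s: int     # seconds into the program when the reaction began
    duration_s: int   # how long the reaction lasted, in seconds
    reaction: str     # e.g., "laughter", "scream", "dialog"

def continuous_profile(records, program_id, program_len_s):
    """Build a per-second tally of reactions for one program.

    Each second covered by a reaction increments that second's count,
    yielding a continuous ratings profile across the whole show.
    """
    profile = [0] * program_len_s
    for r in records:
        if r.program_id != program_id:
            continue
        for t in range(r.offset_s, min(r.offset_s + r.duration_s, program_len_s)):
            profile[t] += 1
    return profile

log = [
    ReactionRecord("sitcom-ep12", 95, 5, "laughter"),
    ReactionRecord("sitcom-ep12", 97, 3, "laughter"),  # overlapping reaction
    ReactionRecord("news-0800", 10, 2, "dialog"),      # different program, ignored
]
profile = continuous_profile(log, "sitcom-ep12", 1800)
print(max(profile), profile.index(max(profile)))  # peak reaction count and when it occurs
```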
Another aspect of the invention comprises a memory medium storing machine readable instructions which, when executed by a processor, enable a system, device or other entity/apparatus to generate the above-described ratings from passively obtained user cues. The memory medium may be part of the network access device described above, disposed anywhere within the network, or part of a separate entity responsible for generating ratings for multimedia programming. The memory medium and instructions may be embodied in software, hardware or firmware, as is well understood by those of ordinary skill in the art.
The foregoing detailed description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the description of the invention, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that various modifications will be implemented by those skilled in the art, without departing from the scope and spirit of the invention.
This application is a continuation of U.S. patent application Ser. No. 15/434,444 filed Feb. 16, 2017, which is a continuation of U.S. patent application Ser. No. 14/170,895 filed Feb. 3, 2014 (now U.S. Pat. No. 9,606,768), which is a continuation of U.S. patent application Ser. No. 12/006,311 filed Jan. 2, 2008 (now U.S. Pat. No. 8,677,386). All sections of the aforementioned applications and patents are incorporated herein by reference in their entirety.
References Cited: U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
7237250 | Kanojia | Jun 2007 | B2
20030093784 | Dimitrova | May 2003 | A1
20050055708 | Gould | Mar 2005 | A1
20050071865 | Martins | Mar 2005 | A1
20050256712 | Yamada et al. | Nov 2005 | A1
20050281410 | Grosvenor et al. | Dec 2005 | A1
20060048189 | Park | Mar 2006 | A1
20060136965 | Ellis | Jun 2006 | A1
20060206379 | Rosenberg | Sep 2006 | A1
20070186228 | Ramaswamy | Aug 2007 | A1
20090043586 | MacAuslan et al. | Feb 2009 | A1
20170164054 | Amento et al. | Jun 2017 | A1
Publication Data

Number | Date | Country
---|---|---
20190379937 A1 | Dec 2019 | US
Related U.S. Application Data

Relation | Number | Date | Country
---|---|---|---
Parent | 15434444 | Feb 2017 | US
Child | 16550383 | | US
Parent | 14170895 | Feb 2014 | US
Child | 15434444 | | US
Parent | 12006311 | Jan 2008 | US
Child | 14170895 | | US