There is a great deal of program content available from television networks and other sources. On a television set-top box or the like, users receive and may be able to record pre-programmed, manually produced television shows. Often the user is not interested in the entire content of each show, and thus has to fast forward through uninteresting content. Sometimes, in order to find desired content such as a particular highlight, a user has to fast forward through multiple recorded shows, or possibly wait during a live show until the content can be viewed.
On the web, to find desired content, users have to hunt around for a video clip that they think contains desired content, and similarly play/fast forward through that clip. Again, to find something specific, a user often needs to click through multiple video clips.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards a technology by which personalization data is used to determine a set of video clips, based upon the personalization data and metadata associated with the video clips. The video clips of the set are ordered into a catalog based upon the personalization data and at least some of the metadata. The video clips may be integrated with other content into a narrative presentation. In one aspect, the other content includes transition content, which upon playing of the narrative is played before at least one video clip, and/or an advertisement that upon playing of the narrative is played in association with at least one video clip.
In one aspect, a content scoring component determines scores for a set of personalized video clips that are selected based upon at least some personalization data. A sorting component orders the video clips based upon their scores, and a mechanism arranges the video clips and other content into a narrative for playing. The content scoring component may further determine the scores based upon one or more criteria including at least one of popularity data, user behavior history data or state data. A filter set, comprising filters and/or a querying mechanism, may be used to determine the set of personalized video clips from a larger set of available video clips.
In one aspect, information corresponding to a set of video clips is ordered into a catalog based upon personalization data. The catalog is arranged into a narrative based upon the ordering, including adding at least some other content to the narrative. At least part of the narrative is played, including playing at least one of the video clips. The narrative may be dynamically rearranged based upon an event.
Other advantages may become apparent from the following detailed description when taken in conjunction with the drawings.
The present invention is illustrated by way of example and not limitation in the accompanying figures, in which like reference numerals indicate similar elements and in which:
Various aspects of the technology described herein are generally directed towards automatically providing personalized content for a user. In one aspect, a user's personalization data is stored and used (along with possibly other criteria) to drive queries for program content. A video player application may organize and display the program content in a series of video clips, sometimes referred to as a highlight reel or highlights reel, including playing each video clip in succession as a narrative. The clips may be played in association with other content, such as “programmed” video bumpers, transition content, advertisements and/or other effects. The application may be configured to tweak, filter, or sort the personalization data to dynamically re-present a modified highlights reel, such as by performing real-time reorganization, sorting, and querying for content to present to the user.
It should be understood that any of the examples herein are non-limiting. For example, the technology may work with audio, and may use any computing device, such as a gaming system, personal computer, digital video recorder (DVR), set-top box, smartphone and/or tablet. As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in content presentation in general.
As will be understood, having the service 104 in the cloud allows any user's personalized information (as well as the program content and metadata) to be accessed from different platforms and devices. However, any of the components and/or data described herein may be remote relative to the device on which the content is viewed, or alternatively, at least some components that are represented as remote in
In the example implementation of
Based upon the user's personalization data 108, e.g., which may be in the form of a list, the application may create a set (e.g., a series) of one or more filters, such as one for each item in the personalization data 108. The filter set 110, which in
Time is one filtering criterion; e.g., only clips that are newer than forty-eight hours old or some other timeframe may be considered for the reel. Other filtering criteria also may be applied at this stage; for example, as indicated within the user's personalization data 108, a user may be interested in professional football but not college football, and thus from among the video clips identified via their metadata as containing football video, the filter set 110 eliminates video clips containing college football, and so on with other undesired clips.
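As a purely illustrative sketch (not part of the described implementation; the field names and filter structure are assumptions), a filter set built from personalization data might be expressed as one predicate per criterion:

```python
def build_filter_set(interests, exclusions, max_age_hours=48):
    """Create one predicate per filtering criterion: a time filter plus
    interest/exclusion filters derived from the personalization data."""
    return [
        lambda clip: clip["age_hours"] <= max_age_hours,   # time criterion
        lambda clip: clip["topic"] in interests,           # desired topics
        lambda clip: clip["topic"] not in exclusions,      # e.g., no college football
    ]

def apply_filters(clips, filters):
    """Keep only clips whose metadata passes every filter in the set."""
    return [clip for clip in clips if all(f(clip) for f in filters)]
```

A clip older than the timeframe, or whose topic is excluded, is eliminated before any scoring occurs.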
In one implementation, after filtering, the result is a set of clips (such as a list) in which the user is likely interested. As described below, a scoring mechanism 115 and a sorting mechanism 116 (which may be the same component) compute a score for each clip and rank the clips according to the score, respectively. For example, based in part on the metadata, clips of user-selected favorite teams and newer clips may be given more weight and thus trend toward the head of the playback queue. Weights used in scoring may be manually set, learned via machine learning, tuned based upon feedback, and so forth.
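The scoring and sorting described above may be sketched as follows; this is a hypothetical illustration, and the particular weights, field names and recency formula are assumptions rather than the described mechanisms 115 and 116:

```python
def score_clip(clip, favorites, weights=None):
    """Weighted score from clip metadata; the weights might be manually
    set, machine learned, or tuned from feedback."""
    w = weights or {"favorite": 2.0, "recency": 1.0}
    score = w["favorite"] if clip["team"] in favorites else 0.0
    # Newer clips trend toward the head of the playback queue.
    score += w["recency"] * max(0.0, 1.0 - clip["age_hours"] / 48.0)
    return score

def rank_clips(clips, favorites):
    """Sort highest score first, forming the playback order."""
    return sorted(clips, key=lambda c: score_clip(c, favorites), reverse=True)
```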
Also shown in
Another possible factor in scoring that is exemplified in
History 120 may also be used for other considerations, and may be obtained from other sources. For example, if a user regularly watches a particular show involving a certain celebrity, the user may be asked whether he or she would like to modify the personalization data 108 to include news stories regarding that celebrity with respect to clip selection and/or filtering. The adjustment may be automatic to an extent, e.g., a user may have been watching golf a significant amount of time recently, and thus the filtering and/or scoring may be automatically adjusted to include some golf highlights even if the user's personalization data 108 did not explicitly mention golf. If the user then tends to view (rather than skip over) golf highlights in the reel, the scoring may go up to include more golf and/or weight golf higher over time relative to highlights the user does skip over.
Still another factor in scoring may be state data 122. As described above, state data in the form of the current time is one such factor, e.g., older clips (those remaining after the example forty-eight hour filtering) may be given less weight in scoring than newer clips. Another such factor represented in the state data 122 may be game results. For example, when combined with the history 120, the application 102 may learn that the user skips over video highlight clips of his favorite team whenever his team lost its game, and skips ahead to any such clips when they win. Thus, as one example, the weight given to a video highlight clip may be adjusted based upon whether the user's team won and/or lost the game with which the clip is associated. Note that
As can be readily appreciated, numerous other factors may be used in scoring. The device being used to display the highlight reel (e.g., whether the device is a smartphone with a small screen versus a set-top box coupled to a high definition television) may be one such factor. Such factors and relative weights may be machine learned for many users, for example, and adjusted for a user's particular viewing history.
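The history- and state-based adjustments described above (boosting what the user tends to view, damping what is skipped, and damping a favorite team's clips after a loss) can be sketched as a simple feedback rule; the specific formula and scaling factors here are illustrative assumptions:

```python
def adjust_weight(base_weight, watched, skipped, team_won=None):
    """Scale a topic's base scoring weight by observed user behavior
    (history 120) and by game results (state data 122)."""
    view_rate = watched / max(1, watched + skipped)  # fraction actually viewed
    weight = base_weight * (0.5 + view_rate)         # maps to 0.5x .. 1.5x
    if team_won is False:
        weight *= 0.5                                # user tends to skip losses
    return weight
```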
Following scoring and sorting, a sorted playlist catalog is assembled for the user. The catalog may be limited to a certain total duration, by default or if desired by the user, or by some other criterion such as a total number of clips. The catalog may be a separate data structure (e.g., a list or the like) that gets integrated into a narrative comprising the video clips and other content, or may be directly built into a data structure that contains the narrative, e.g., identifying the clips and other content to play in association with those clips.
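The catalog assembly step, capped by a total duration or a total number of clips, might be sketched as below (a minimal illustration; the data shapes are assumptions):

```python
def build_catalog(ranked_clips, max_seconds=None, max_clips=None):
    """Assemble the sorted playlist catalog, optionally limited by a
    total duration and/or a total number of clips."""
    catalog, total = [], 0
    for clip in ranked_clips:
        if max_clips is not None and len(catalog) >= max_clips:
            break
        if max_seconds is not None and total + clip["seconds"] > max_seconds:
            continue  # this clip would exceed the duration budget
        catalog.append(clip["id"])
        total += clip["seconds"]
    return catalog
```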
A video catalog presentation mechanism 124 plays the clips according to the catalog, whereby the user sees what is (to a very high likelihood) desirable personalized content in the form of a highlight reel. Note however that the catalog may be dynamically reconfigured/rearranged as described below.
As represented in
Via the metadata 114, the transitions, effects and other content insertions such as advertisements may be selected for relevance to the current clip or clips before and after the effect/content insertion. For example, a transition from a current news story to a story about a celebrity may include an image of that celebrity and text to indicate that a highlight video regarding that celebrity is upcoming. An advertisement related to buying a particular sports team's merchandise may follow (or accompany) a highlight reel containing a big win for that sports team, for example. As another example, a transition for an upcoming sports video clip may include metadata-based information, such as a title, score of the game, team statistics, standings, key player statistics and so forth. In one alternative, the user may personalize transitions and/or other effects by specifying certain statistics to view, e.g., related to the user's fantasy sports team.
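Composing transition content from the upcoming clip's metadata might look like the following sketch; the metadata field names are assumptions for illustration:

```python
def make_transition(next_meta):
    """Build transition text (e.g., title, game score) for the clip
    that is about to play, from its associated metadata."""
    parts = ["Up next: " + next_meta["title"]]
    if "score" in next_meta:
        parts.append("Final: " + next_meta["score"])
    if "standings" in next_meta:
        parts.append(next_meta["standings"])
    return " | ".join(parts)
```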
In one implementation, the catalog may be dynamically configured or reconfigured at any time, such as while one clip is playing. For example, a new clip may become available while another clip is playing, causing the catalog to be reconfigured (or a new catalog to be built); clips in the reconfigured catalog that have already been played may be marked as such to avoid replaying them unless requested by the user. Other content in the narrative may also change (or be changed instead of the catalog); as used herein, the term “rearranging” with respect to the narrative includes reconfiguring any or all of the catalog and/or changing any or all of the other content.
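Marking already-played clips during a dynamic rebuild, so they are not replayed unless requested, can be sketched as (hypothetical structures, not the described implementation):

```python
def reconfigure_catalog(new_order, played_ids):
    """Rebuild the catalog from a fresh sorted order when, e.g., a new
    clip becomes available, flagging clips that have already played."""
    return [{"id": cid, "played": cid in played_ids} for cid in new_order]

def next_unplayed(catalog):
    """Pick the next clip to play, skipping flagged entries."""
    for entry in catalog:
        if not entry["played"]:
            return entry["id"]
    return None
```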
Dynamic reconfiguration or rebuilding, as well as other effects, may occur in response to an event, such as time, existence of a new clip, user interaction, new state data, a breaking news story, and so forth. For example, user interaction with a currently displayed score in a text ticker may add a highlight clip (that is otherwise not there) to the catalog, or change an effect such as split screen or picture-in-picture to show that clip without necessarily changing the catalog. Interaction may be via a controller, voice, gesture, touch, and/or the like depending on what input mechanisms are available. A device may be complemented by a companion device, e.g., a smartphone may be used to help select content shown on a high-definition television set.
Note however that reconfiguration may not occur if the user chooses to use the catalog in an offline mode, e.g., to buffer the content while in Wi-Fi range or while connected to a high-bandwidth source, such as to avoid paying data charges or waiting for slower networks. A user also may share or save a highlight reel, in which event the catalog is persisted at the time of sharing/saving; note that the transitions may or may not be persisted, e.g., some or all new transitions and/or effects may appear upon the replaying of a saved or shared reel, such as more current information if available.
Further, in one implementation the video player application components are arranged in a pipeline-type arrangement. This allows the video player application to be extended with one or more other components, or a component to be replaced, updated and so forth, without necessarily changing any of the other components. By way of example, consider a component that senses a user's emotional reaction to a viewed game or other clip; the scoring component may be updated to accept such reaction data in computing the scores.
Step 208 represents any further filtering, if all filtering is not performed during querying. For example, the querying at step 204 may be relatively general filtering to find as many clips as possible that may be relevant, with the further filtering at step 208 being a more fine-grained filtering, including to possibly remove any clips added because of popularity. Querying may be inclusive in nature, with subsequent filtering being exclusive, or vice-versa. In general, the querying and/or any subsequent filtering operations provide a more manageable set for scoring, e.g., instead of scoring many thousands of clips that may be found, querying and/or filtering may reduce the set to a subset of a few dozen, which then may be efficiently scored and ranked.
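The two-stage narrowing of steps 204 and 208 (an inclusive query followed by exclusive fine-grained filters, e.g., removing clips added only for popularity) may be sketched as follows; the predicates and fields are illustrative assumptions:

```python
def query_then_filter(available, broad_query, fine_filters):
    """Inclusive query gathers every possibly relevant clip; exclusive
    fine-grained filters then trim the result to a manageable set for
    efficient scoring and ranking."""
    candidates = [c for c in available if broad_query(c)]
    for keep in fine_filters:
        candidates = [c for c in candidates if keep(c)]
    return candidates
```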
Step 210 represents providing the metadata associated with each of the clips that remain after filtering. As can be readily appreciated, this metadata is used as factors in the scoring. Note that in this example, scoring and sorting are performed at the user device, and thus step 210 provides the metadata to the user device; however, as described above, the scoring may be performed remotely (in which case scoring and sorting generally occur where step 210 is performed).
Step 212 represents receiving a request from the user device for the content (clips and other content such as transitions) to play. Depending on the type of request, the clips and other content may be streamed as requested to a device buffer, or downloaded for offline use, for example, as represented by step 214. The dashed line from step 214 to step 212 represents ongoing communication as needed for additional content downloading/streaming.
Step 310 represents the selection of a highlight video clip to play, starting with the first clip on the list. Streaming or downloading may begin for the first clip and for any clip-dependent associated content once the clip is known; additional content may be streamed/downloaded in anticipation of being needed so as to avoid perceived playback delay. Step 312 thus determines and receives what other content (e.g., transition, advertisement and/or the like) is associated with that content; note that other content played before the clip may be downloaded/streamed before the clip itself. Step 314 plays the clip and the other content; note that the clip may not be played until after some or all of the other associated content, such as a transition has played, and that other content may be played along with the clip, and/or follow the clip. Step 316 repeats the process for each other clip. When the various clips and associated content have been played, end content (not necessarily clip-dependent) may be played (step 318), such as an advertisement, goodbye message and so forth.
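The playback loop of steps 310 through 316 can be sketched as below; the callbacks stand in for the streaming and rendering machinery and are assumptions, not the described mechanism:

```python
def play_narrative(catalog, associated, play, prefetch):
    """Play each clip with its associated other content: pre-roll items
    (e.g., a transition) play before the clip, post-roll items (e.g., an
    advertisement) after, and the next clip is prefetched while the
    current one plays to avoid perceived playback delay."""
    for i, clip_id in enumerate(catalog):
        if i + 1 < len(catalog):
            prefetch(catalog[i + 1])          # stream/download ahead
        pre, post = associated.get(clip_id, ([], []))
        for item in pre:
            play(item)                        # e.g., transition before the clip
        play(clip_id)
        for item in post:
            play(item)                        # e.g., trailing advertisement
```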
Although not explicitly shown in
It can be readily appreciated that the above-described implementation and its alternatives may be implemented on any suitable computing device, including a gaming system, personal computer, tablet, DVR, set-top box, smartphone and/or the like. Combinations of such devices are also feasible when multiple such devices are linked together. For purposes of description, a gaming (including media) system is described as one exemplary operating environment hereinafter.
The CPU 402, the memory controller 403, and various memory devices are interconnected via one or more buses (not shown). The details of the bus that is used in this implementation are not particularly relevant to understanding the subject matter of interest being discussed herein. However, it will be understood that such a bus may include one or more of serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus, using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, also known as a Mezzanine bus.
In one implementation, the CPU 402, the memory controller 403, the ROM 404, and the RAM 406 are integrated onto a common module 414. In this implementation, the ROM 404 is configured as a flash ROM that is connected to the memory controller 403 via a Peripheral Component Interconnect (PCI) bus or the like and a ROM bus or the like (neither of which is shown). The RAM 406 may be configured as multiple Double Data Rate Synchronous Dynamic RAM (DDR SDRAM) modules that are independently controlled by the memory controller 403 via separate buses (not shown). The hard disk drive 408 and the portable media drive 409 are shown connected to the memory controller 403 via the PCI bus and an AT Attachment (ATA) bus 416. However, in other implementations, dedicated data bus structures of different types can also be applied in the alternative.
A three-dimensional graphics processing unit 420 and a video encoder 422 form a video processing pipeline for high speed and high resolution (e.g., High Definition) graphics processing. Data are carried from the graphics processing unit 420 to the video encoder 422 via a digital video bus (not shown). An audio processing unit 424 and an audio codec (coder/decoder) 426 form a corresponding audio processing pipeline for multi-channel audio processing of various digital audio formats. Audio data are carried between the audio processing unit 424 and the audio codec 426 via a communication link (not shown). The video and audio processing pipelines output data to an A/V (audio/video) port 428 for transmission to a television or other display. In the illustrated implementation, the video and audio processing components 420, 422, 424, 426 and 428 are mounted on the module 414.
In the example implementation depicted in
Memory units (MUs) 450(1) and 450(2) are illustrated as being connectable to MU ports “A” 452(1) and “B” 452(2), respectively. Each MU 450 offers additional storage on which games, game parameters, and other data may be stored. In some implementations, the other data can include one or more of a digital game component, an executable gaming application, an instruction set for expanding a gaming application, and a media file. When inserted into the console 401, each MU 450 can be accessed by the memory controller 403.
A system power supply module 454 provides power to the components of the gaming system 400. A fan 456 cools the circuitry within the console 401.
An application 460 comprising machine instructions is typically stored on the hard disk drive 408. When the console 401 is powered on, various portions of the application 460 are loaded into the RAM 406, and/or the caches 410 and 412, for execution on the CPU 402. In general, the application 460 can include one or more program modules for performing various display functions, such as controlling dialog screens for presentation on a display (e.g., high definition monitor), controlling transactions based on user inputs and controlling data transmission and reception between the console 401 and externally connected devices.
The gaming system 400 may be operated as a standalone system by connecting the system to a high definition monitor, a television, a video projector, or other display device. In this standalone mode, the gaming system 400 enables one or more players to play games, or enjoy digital media, e.g., by watching movies or listening to music. However, with the integration of broadband connectivity made available through the network interface 432, the gaming system 400 may further be operated as a participating component in a larger network gaming community or system.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
Number | Date | Country | |
---|---|---|---|
20130160051 A1 | Jun 2013 | US |