Method and apparatus for delivering video and video-related content at sub-asset level

Information

  • Patent Grant
  • Patent Number
    11,832,024
  • Date Filed
    Thursday, November 20, 2008
  • Date Issued
    Tuesday, November 28, 2023
Abstract
A method and apparatus for delivering an ordered list of items of supplemental content to a consumer, comprising determining a context of a portion of media selected for consumption, determining consumer preference information corresponding to the consumer, and generating the ordered list of items of supplemental content as a function of the context and of the consumer preference information.
Description
FIELD OF THE INVENTION

The invention pertains to the delivery to consumers of personalized video and video-related content at the sub-asset level. More particularly, the invention pertains to search and discovery software that can, for example, be embodied within a set-top box, Internet browser, intelligent television, intelligent radio, or the like.


BACKGROUND OF THE INVENTION

The current paradigm for delivery of audio and video services, radio services and Internet services to consumers, be it over-the-air broadcasts, cable television service, Internet television service, telephone network television service, satellite television service, satellite radio service, websites, etc., delivers a relatively unpersonalized, generic experience to all viewers. That is, for example, all of the subscribers of a given television network system receive essentially the same content in essentially the same order.


Cable and satellite television services and websites permit some personalization in that many such television network systems permit each individual viewer to access and download Video-on-Demand content. The Video-on-Demand (VOD) feature may be considered to comprise personalization in some sense because it allows a viewer to select content for viewing at any time of his or her choice that is different from content being provided to other subscribers. However, the typical VOD feature provided by television network operators is generic in the sense that the VOD options (e.g., the programs available for viewing on demand) are the same for all subscribers and are presented in the same manner to all subscribers. Furthermore, the items available for consumption via the VOD feature are complete assets. For instance, a subscriber using the VOD feature is enabled to select, download, and view an entire asset, such as a television program, a music video, movie, instructional video, etc., but not a particular portion thereof.


United States Patent Application Publication No. 2008/0133504 discloses a method and apparatus for contextual search query refinement on consumer electronics devices. In the disclosed method and apparatus, a consumer electronics apparatus, such as a television, is enabled to search the Internet for content, the search being performed and refined based on contextual information, such as the consumer electronic device's current activity, e.g., playing a music CD or playing a DVD, and the actual content of the media. While the method and apparatus disclosed in that patent application provide additional content for potential viewing by a subscriber, there is no customization or personalization of the content in that each subscriber will receive the same search results for a search performed in a particular context (e.g., the particular song playing on a particular CD). The specific search results depend solely on the particular context of the media, and not on anything particular to the specific subscriber.


SUMMARY OF THE INVENTION

The invention pertains to a method and apparatus for delivering an ordered list of items of supplemental content to a consumer comprising determining a context of a portion of media selected for consumption, determining consumer preference information corresponding to the consumer, and generating the ordered list of items of supplemental content as a function of the context and of the consumer preference information.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of the components of an exemplary cable television network system that supports features in accordance with the present invention.



FIG. 2 is an exemplary interface screen that can be used in connection with the present invention.



FIG. 3 is a flow diagram illustrating one exemplary process for generating search results for presentation to a viewer in accordance with the principles of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

The present invention pertains to methods and apparatus for automatically presenting to a consumer of a media asset a customized set of optional media content, the optional media content being automatically selected and presented to the consumer as a function of the media asset currently being consumed and the predicted personal preferences of the consumer himself or herself.


In one particular embodiment in connection with which the invention will first be described for exemplary purposes, the invention is presented in the context of a television network system, such as cable television. However, it should be understood that this embodiment is merely exemplary and that the principles of the present invention can be applied in other information networks, including the Internet and satellite radio networks.


“Information network” refers to a collection of devices having a transport mechanism for exchanging information or content between the devices. Such networks may have any suitable architecture, including, for example, client-server, 3-tier architecture, N-tier architecture, distributed objects, loose coupling, or tight coupling.



FIG. 1 is a block diagram illustrating a set of components found in a cable television system 100 in which the present invention can be incorporated. The cable television system 100 includes a server, such as a headend 101, that receives content that is to be transmitted to the subscriber locations 102 of the cable network system.


The term “transmitted” or “transmits” refers broadly to sending a signal from a transmitting device to a receiving device. The signal may be transmitted wirelessly or over a solid medium such as wire or fiber. Furthermore, the signal may be broadcast, multicast/narrowcast, or unicast. Broadcasting refers to the transmission of content to an audience at large. The audience may be the general public, or a sub-audience. Switched digital video is a type of broadcast that is initiated in response to a client request and is terminated when no more clients are tuned to it. Multicasting refers to the simultaneous transmission of content to a plurality of specific and known destinations or addresses in a network or between networks. Multicast is often used for streaming media and Internet television applications, Ethernet multicast addressing, ATM point-to-multipoint VCs and Infiniband® multicast. Unicasting refers to the transmission of a signal to a single destination or address in a network. Unicast is used, for example, in Video-On-Demand applications.


The headend 101 receives the content to be broadcast from one or more sources, for example, such as a satellite 103 or a landline 105. The data is modulated at the headend 101 for distribution over the medium of the network 104, e.g., coaxial cable, optical fiber, wireless satellite communication, etc., to the subscriber locations 102 in their individual homes, businesses, cars, etc. One particular exemplary subscriber location 102 is shown in detail in FIG. 1. Typically, each subscriber will have a client device, such as a radio receiver, computer terminal, or set top box 106, in communication with the server through the network 104.


“Set top box” or STB refers to a device that connects to a monitor and an external source of signal, converting the signal into content for display/transmission over the monitor. The signal source might be an Ethernet cable, a satellite dish, a coaxial cable, a fiber optic cable, a telephone line (including DSL connections), Broadband over Power Line, or even an ordinary antenna. The STB may have several different embodiments. For example, it may be a special digital STB for delivering digital content on TV sets that do not have a built-in digital tuner. The STB may also descramble premium channels. An STB may be a cable converter box to receive digital cable TV channels and convert them to analog for non-digital TVs. In the case of direct broadcast satellite (mini-dish) systems such as SES Astra®, Dish Network®, or DirecTV®, the STB is an integrated receiver/decoder (or IRD). In Internet Protocol (IP) TV networks, the STB is a small computer providing two-way communications on an IP network and decoding the video streaming media, which eliminates the need for any coaxial cabling. The STB may be a discrete unit or its functionality may be incorporated into other components of the user's system such as the monitor, TV, DVR, residential gateway, or personal computer. For example, the STB may be a portable, modular unit (i.e., a personal STB) or it may be integrated into a stationary TV system. The STB may contain one or more digital processors or may use the processing capabilities of the other system components (e.g., TV, DVR, personal computer). Additionally, rather than having its own tuner, the STB may use the tuner of a television.


A set top box 106 commonly will be connected to provide its output to a monitor 109, such as a television set. Commonly, a handheld remote control unit 110 communicates wirelessly (e.g., infrared) with the set top box 106 to control functions and operations of the set top box 106.


The set top box 106 is capable of receiving the content signals, permitting the user to select a particular channel for viewing, and demodulating the content on that channel to a form that can be displayed on the subscriber's television or other monitor 109. The STB further may control access to various channels and other content, such as on demand, pay-per-view programs, premium channels, etc., based on permissions granted to each individual subscriber based on their subscription plan, parental controls, and other criteria.


The set top box 106 can not only receive data from the headend 101 through the network 104, but also transmit data upstream to the headend 101. For instance, set top boxes commonly transmit data upstream for purposes of ordering VOD or pay-per-view content.


The set top box 106 includes a processor 113 for running software to provide various functions. Preferably, it further includes a memory storage device, such as a hard disk drive 111, for recording television programs and/or other content. Set top boxes with this recording capability are commonly referred to as digital video recorder (DVR) set top boxes (STBs) or DVR-STBs. They provide the user the ability to search through upcoming television programming and selectively designate certain programs of the user's choosing to be recorded. The set top box 106 is programmed to provide various graphical user interfaces (GUIs), such as in the form of menus, permitting the user to interact (typically using the remote control unit 110) with the set top box 106 and/or the headend 101.


The set top box 106 may further include an input terminal 118 for connecting to a LAN or WAN 123 (preferably with connectivity to the Internet 124). Alternately or additionally, connectivity to the Internet 124 may be provided through the television network 104 itself via the headend 101.


The set top box 106 may be configured with Internet browser software and software for permitting users to interface with the Internet browser software, such as through a keyboard 125 and/or mouse 126.


In accordance with the present invention, a user of an information network (any system through which a user can access information) is offered supplemental content, the supplemental content being selected and/or organized as a function of both the user's personal preferences and the information currently being consumed by the user (i.e., the context). Furthermore, in a preferred embodiment of the invention, the supplemental content is offered on a sub-asset level. That is, the supplemental content is provided in units that may be smaller than the units in which that type of information conventionally is offered for consumption. Media items are typically offered by programmers and network operators in generally predefined portions herein termed assets. For instance, television programs such as dramas, soap operas, reality shows, and sitcoms are typically broadcast in asset level units known as episodes that commonly are a half hour or an hour in length (including advertisements). Sporting events are broadcast in asset units of a single game. Music videos are commonly offered in asset units corresponding to a complete song or a complete concert performance.


The definition of the term “asset” is well understood in the industry as well as among content consumers. For instance, a typical programming guide printed in a newspaper or the electronic program guides commonly provided by a subscriber-based television network are well known to virtually all television viewers and generally list multimedia content at the asset level.


In any event, a media asset typically can conceptually be broken down into a plurality of segments at the sub-asset level, each having a cohesive context or theme. “Context” or “contextual information” refers broadly to the subject matter or theme of the content and can be virtually anything within the realm of human knowledge, such as baseball, strike out, fast ball, stolen base, mountains, scary, happy, George Carlin, nighttime. The nature and duration of each segment will depend, of course, on the particular ontology used for purposes of segmentation as well as on the particular content of each program. “Content” refers broadly to the information contained in the signal transmitted, and includes, for example, entertainment, news, and commercials. For instance, most stage plays and motion pictures readily break down into two or three acts. Each such act can be a different segment. Television programs also can be segmented according to thematic elements. Certain programs, for instance, the television news magazine program 60 Minutes, can readily be segmented into different news stories. Other programs, however, can be segmented based on more subtle thematic elements. A baseball game can be segmented by inning, for instance. A typical James Bond movie can be segmented into a plurality of action segments, a plurality of dramatic segments, and a plurality of romantic segments. The possibilities for segmentation based on thematic elements are virtually limitless, and these are only the simplest of examples.


In accordance with the present invention as will be described in more detail below, information is offered in segments smaller than asset level units. For example, supplemental content may be offered in the form of individual coherent scenes from a television program or motion picture, a portion of a music video, a particular news report within a news program, a coherent portion of a Web page, the chorus portion of a song, etc.


The invention is perhaps best described initially by way of an example. Accordingly, an exemplary embodiment of the invention as implemented in connection with a television network (cable, satellite, Internet, telephone, fiber optic, etc.) will be described herein below in connection with FIGS. 2 and 3.


Let us consider an individual consumer who is watching a particular television program, in this example, a major league baseball game between the Philadelphia Phillies and the New York Mets. In accordance with this particular exemplary embodiment, the consumer is permitted at any time during the program to activate a supplemental content search feature (hereinafter F-Search). Activation of this feature may be provided through any reasonable means, such as a dedicated button on a remote control unit or portable consumer device (e.g., a smart phone, a media player, etc.) or a hyperlink in a Web browser. When the feature is thus selected, the local device, for instance, sends a signal upstream to a server requesting invocation of the F-Search feature. In response, the server performs a search for supplemental content that pertains to the context of the particular content being consumed by that consumer at that time. Furthermore, in a preferred embodiment, the supplemental content is organized for presentation to the consumer in a manner that is a function of user preferences. For instance, the results may be ordered from highest to lowest relevance or importance as a function of the user preferences. The content may not only be ordered at least partially as a function of user preferences, but may be actually selected for inclusion in the list of supplemental content as a function of user preferences.


Even further, the supplemental content offered to the user may comprise at least some media items at the sub-asset level. Supplemental content offered at the sub-asset level may include content from both the current context, i.e., the Philadelphia Phillies vs. New York Mets game currently being watched, as well as sub-asset level content from media items not currently being consumed by the user.


The exact form in which the items of supplemental content are presented to the user can be any reasonable form. FIG. 2 shows one particular possible interface in the form of a graphical user interface (GUI) 200 that the user can interact with very simply via the use of one or more navigation buttons (e.g., UP, DOWN, LEFT, and/or RIGHT buttons) and a SELECT button. In this particular example, the program that the consumer was originally watching is paused, but remains on the display in a small window 201 in the center of the display, surrounded by a plurality of icons, each representing either an individual item of supplemental media content or a list of items of supplemental content. In this example, each icon comprises a button 202a-202k that the user may select by navigating to it and selecting it. In other embodiments, the media originally being consumed may disappear from the screen and/or the items of supplemental content may be shown in a numbered list from which the user can select an item by entering its number on the remote-control unit.


In a preferred embodiment of the invention, only a limited number of media items deemed to be the most relevant items of supplemental content are presented on the display at any given time. In this example, the results are broken down into three categories. However, this is merely exemplary and the media items of supplemental content can be in a single list or any other number of lists. First, on the left-hand side of the main window 201, a list of four items 202a, 202b, 202c, 202d of supplemental content pertaining to one or both of the teams playing the game is provided. Second, on the right-hand side of the main window, a list of four items 202e, 202f, 202g, 202h of supplemental content relevant to the program and scene, but not specifically related to the teams, is presented. Third, above the main window 201, the subscriber is presented with three more items 202i, 202j, 202k of supplemental content pertaining to the purchase of merchandise and the like. In this case, the subscriber is presented with the opportunity to buy goods and/or services deemed relevant to the program being viewed. In this example, item 202i allows the subscriber to purchase Philadelphia Phillies baseball caps. Item 202j allows the subscriber to purchase Philadelphia Phillies jackets. Finally, item 202k allows the subscriber to order pizza delivery online.


The supplemental content on both the left-hand and right-hand sides of the picture is ordered according to the determined user preferences.


In addition, beneath the main window is a time bar 205 that shows the point at which the game was paused relative to the current real-time of the game broadcast. Beneath that are two more options. The first one 204a allows the consumer to exit out of the F-Search GUI and return to the program being viewed starting from the point at which the program was paused. The other option 204b exits the F-Search feature and returns the consumer to the program at the current real-time point in the game broadcast.


In addition, for each of the three categories into which the supplemental content items are organized in this particular embodiment, there is a MORE button 206a, 206b, 206c. The selection of one of the MORE buttons causes the next most relevant items of supplemental content in that particular category to appear on the screen in the place previously occupied by the preceding list of first most relevant items. In a preferred embodiment, the user is permitted to click on each MORE button multiple times to continue to view additional items of supplemental content of lesser and lesser relevance.


As previously noted, the search engine selects and organizes items of supplemental content as a function of both (1) the context in which the F-Search feature was invoked (i.e., the particular program and scene being viewed) and (2) predetermined user preferences. For instance, as previously mentioned, let us assume that the program being viewed is a baseball game between the Philadelphia Phillies and the New York Mets and the particular scene during which F-Search was invoked was immediately after Philadelphia Phillies shortstop Jimmie Rollins made a spectacular catch to rob New York Mets batter Marlon Anderson of a base hit. Accordingly, relevant supplemental content may include, for instance, a recorded interview with Jimmie Rollins, a recorded interview with Marlon Anderson, a highlight reel of great plays by Jimmie Rollins, a highlight reel of great hits by Marlon Anderson, New York Mets season statistics, Philadelphia Phillies season statistics, career statistics for Jimmie Rollins, career statistics for Marlon Anderson, the current team standings, results of other baseball games being played that day, etc.


The relevance of each of these items of supplemental content to the viewer may be highly dependent on the particular viewer. For instance, a fan of the Philadelphia Phillies probably would deem the highlight clips of Marlon Anderson or the career statistics of Marlon Anderson as possessing low relevance. Conversely, a Mets fan viewing the same game and scene would deem these same pieces of content as highly relevant.


Thus, the F-Search feature not only automatically detects the context in which the F-Search feature was invoked, and uses the contextual information to search for supplemental content, but also utilizes a prediction as to the particular viewer's preferences in selecting supplemental content and/or ordering the items of supplemental content. In one embodiment of the invention, the list or lists of items of supplemental content are determined as a function of the context without consideration of user preferences, and then those items are ordered for presentation in terms of relevance as a function of user preferences. However, in other embodiments, the selection of items for inclusion in the list itself can be a function of both context and user preferences.
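
For illustration only, the following minimal Python sketch shows the two-stage form just described, assuming a small tagged catalog and a simple additive preference score; the catalog, tags, and weights are invented and are not the implementation described herein.

```python
# A minimal, hypothetical sketch of the two-stage approach: candidate items
# are first selected purely by context, then ordered by consumer preference.
# The catalog, tags, and preference weights below are invented for illustration.

CATALOG = [
    {"title": "Jimmie Rollins highlight reel",  "tags": {"phillies", "rollins", "baseball"}},
    {"title": "Marlon Anderson highlight reel", "tags": {"mets", "anderson", "baseball"}},
    {"title": "Phillies season statistics",     "tags": {"phillies", "baseball", "statistics"}},
    {"title": "Mets season statistics",         "tags": {"mets", "baseball", "statistics"}},
]

def select_by_context(context_keywords: set[str]) -> list[dict]:
    """Stage 1: include every item sharing at least one keyword with the context."""
    return [item for item in CATALOG if item["tags"] & context_keywords]

def order_by_preference(items: list[dict], preferences: dict[str, float]) -> list[dict]:
    """Stage 2: order candidates by the sum of preference weights over their tags."""
    def score(item: dict) -> float:
        return sum(preferences.get(tag, 0.0) for tag in item["tags"])
    return sorted(items, key=score, reverse=True)

# Example: a scene from a Phillies vs. Mets game, viewed by a likely Phillies fan.
context = {"phillies", "mets", "baseball", "rollins"}
prefs = {"phillies": 0.9, "rollins": 0.8, "mets": 0.1}   # assumed profile
for item in order_by_preference(select_by_context(context), prefs):
    print(item["title"])
```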


Context can be determined in an automated fashion through the use of one or more of several technologies. Of course, it is possible to do this by human effort, i.e., a person watches media assets and manually takes note of coherent segments and their thematic elements and then enters the information in a database. However, with the sheer volume of media content available today, which is only likely to increase exponentially in the future, at least some automation of the process would be highly desirable.


Many software systems are available now that can be adapted for use in connection with this task. For instance, software is now available that can capture the closed caption stream within a media asset and convert it to written text, which could then be analyzed for context. Further, software is available that can analyze the audio portion of a media stream and convert speech detected in the audio to text (which can then further be analyzed for context). In fact, software is now available that can analyze the audio portion of a media stream to determine additional contextual information from sounds other than speech. For instance, such software can detect, recognize, and distinguish between, for instance, the sound of a crowd cheering or a crowd booing, sounds associated with being outdoors in a natural setting or being outdoors in an urban setting or being indoors in a factory or an office or a residence, etc. For example, U.S. Pat. No. 7,177,861 discloses suitable software for detecting semantic events in an audio stream.
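
As a minimal sketch, assuming the caption text has already been decoded from the stream, contextual keywords could be derived by simple term frequency after stopword removal; real systems would use richer language analysis, and the caption text and stopword list below are invented for illustration.

```python
# Hypothetical sketch of deriving context keywords from decoded caption text.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "to", "of", "in", "on", "is", "it", "that", "by"}

def context_keywords(caption_text: str, top_n: int = 5) -> list[str]:
    """Return the most frequent non-stopword terms as a rough context signal."""
    words = re.findall(r"[a-z']+", caption_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

captions = ("Rollins ranges deep in the hole... what a catch by Rollins! "
            "Anderson can't believe it. The Phillies escape the inning.")
print(context_keywords(captions))  # e.g. ['rollins', 'anderson', 'phillies', ...]
```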


Even further, optical character recognition software can be used to determine text that appears in a scene (as opposed to being audibly spoken). See, e.g., Li, Y. et al., “Reliable Video Clock Recognition,” 18th International Conference on Pattern Recognition (ICPR 2006). Such software can be used, for instance, to detect the clock in a timed sporting event. Specifically, knowledge of the game time could be useful in helping determine the nature of a scene. For instance, whether the clock is running or not could be informative as to whether the ball is in play or not during a football game. Furthermore, certain times during a sporting event are particularly important, such as two minutes before the end of a professional football game. Likewise, optical character recognition can be used to determine the names of the actors or other significant persons in a television program or the like simply by reading the credits at the beginning or end of the program.
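
Assuming the optical character recognition step has already extracted text from the scoreboard region of a frame (the OCR itself is omitted here), interpreting the game clock is straightforward; the two-minute-warning heuristic below is purely illustrative.

```python
# Hypothetical sketch: interpreting a game clock once OCR has extracted text
# from a frame. The OCR step itself is assumed to have produced `ocr_text`.
import re

def parse_game_clock(ocr_text: str) -> int | None:
    """Return remaining time in seconds, or None if no clock is visible."""
    match = re.search(r"\b(\d{1,2}):([0-5]\d)\b", ocr_text)
    if not match:
        return None
    minutes, seconds = int(match.group(1)), int(match.group(2))
    return minutes * 60 + seconds

def is_two_minute_warning(ocr_text: str, quarter: int) -> bool:
    """Football-specific heuristic: final two minutes of the 2nd or 4th quarter."""
    remaining = parse_game_clock(ocr_text)
    return remaining is not None and remaining <= 120 and quarter in (2, 4)

print(parse_game_clock("4th 1:58"))          # 118
print(is_two_minute_warning("4th 1:58", 4))  # True
```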


Furthermore, video analytics software is available that can analyze the visual content of a media stream to determine context, e.g., indoors or outdoors, presence or absence of cars and other vehicles, presence or absence of human beings, presence or absence of non-human animals, etc. In fact, software is available today that can be used to actually recognize specific individuals by analyzing their faces.


Even further, there may be significant metadata contained in a media stream. While some may consider the closed captioning stream to be metadata, we here refer to additional information. Particularly, the makers or distributors of television programs or third party providers sometimes insert metadata into the stream that might be useful in determining the context of an asset or of a segment within an asset. Such metadata may include almost any relevant information, such as actors in a scene, timestamps identifying the beginnings and ends of various segments within a program, the names of the teams in a sporting event, the date and time that the sports event actually occurred, the number of the game within a complete season, etc. Accordingly, the system may also include software for analyzing such metadata.
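
The exact metadata format is not specified here; assuming, purely for illustration, JSON-encoded segment markers carried with the stream, software analyzing such metadata might look up the segment covering the current playback position as follows.

```python
# Hypothetical sketch of using in-stream metadata. The JSON layout below is
# invented for illustration; actual formats vary by programmer or distributor.
import json

stream_metadata = json.loads("""
{
  "asset": "phillies-vs-mets",
  "teams": ["Philadelphia Phillies", "New York Mets"],
  "segments": [
    {"start": 0,    "end": 540,  "label": "top of 1st inning"},
    {"start": 540,  "end": 1100, "label": "bottom of 1st inning"},
    {"start": 1100, "end": 1680, "label": "top of 2nd inning"}
  ]
}
""")

def segment_at(position_seconds: int, metadata: dict) -> dict | None:
    """Return the metadata segment covering the current playback position."""
    for seg in metadata["segments"]:
        if seg["start"] <= position_seconds < seg["end"]:
            return seg
    return None

print(segment_at(600, stream_metadata))
# {'start': 540, 'end': 1100, 'label': 'bottom of 1st inning'}
```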


Even further, companies now exist that provide the services of generating and selling data about television programs and other media assets. For instance, Stats, Inc. of Northbrook, IL, USA sells such metadata about sporting events. Thus, taking a baseball game as an example, the data may include, for instance, the time that each half inning commenced and ended, data for each at-bat during the game, such as the identity of the batter, the times at which the at-bat commenced and ended, the statistics of each player in the game, the score of the game at any given instant, the teams playing the game, etc.


User preferences likewise can be determined from various sources of information readily available to website operators, radio network operators, television network operators, etc. For instance, this may include the geographic location of the user, and information about the user's household members (such as ages, professions, and personal interests) that may have been obtained from the user directly when the user subscribed to the service (or that can be obtained through third-party services that provide such data for a fee). Other sources of data include demographic data about the geographic area in which a user lives.


Perhaps most significantly, user preference data can be obtained from the user's media consumption habits (subject to the user's consent to collect such data). Particularly, a media service provider, such as a cable television network or website, may record and maintain records of (1) all linear programs consumed by a media consumer, (2) programs viewed via VOD, (3) the specific subscription plan purchased by the consumer (if it is a subscription-based service), (4) the programs the consumer recorded on his or her DVR-STB or computer, (5) how often particular programs have been consumed (either linearly or through use of a DVR-STB or other recording device or software), (6) how often particular scenes within a program are consumed by the consumer, and (7) the consumer's past consumption of supplemental content via usage of the F-Search feature (particularly, the specific items of supplemental content selected from the search results presented). The term “linear” or “linear consumption” refers to how a person consumes (e.g., watches) television programs in real time as they are being broadcast by a content provider.
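
As a minimal sketch of deriving a preference profile from such consumption history (with the consumer's consent), content tags can be weighted by consumption frequency and normalized; the records and tags below are invented for illustration.

```python
# Hypothetical sketch of building a preference profile from viewing records.
from collections import Counter

viewing_history = [
    {"program": "Phillies vs. Braves", "tags": ["baseball", "phillies"],         "count": 25},
    {"program": "Phillies vs. Mets",   "tags": ["baseball", "phillies", "mets"], "count": 8},
    {"program": "Eagles vs. Cowboys",  "tags": ["football", "eagles"],           "count": 12},
]

def preference_profile(history: list[dict]) -> dict[str, float]:
    """Weight each tag by how often tagged programs were consumed, normalized to [0, 1]."""
    counts: Counter = Counter()
    for record in history:
        for tag in record["tags"]:
            counts[tag] += record["count"]
    top = max(counts.values())
    return {tag: count / top for tag, count in counts.items()}

print(preference_profile(viewing_history))
# e.g. {'baseball': 1.0, 'phillies': 1.0, 'mets': 0.24, 'football': 0.36, 'eagles': 0.36}
```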


Merely as an example, a user living in Philadelphia that has watched every Philadelphia Phillies game broadcast since subscribing to the television service and that has ordered VOD programs pertaining to the Philadelphia Phillies, and for whom a large portion of his or her television consumption comprises sporting events involving Philadelphia area teams, and who has never watched a New York Mets game (except for when the New York Mets are playing the Philadelphia Phillies) can easily be predicted to be more interested in a highlight reel pertaining to Jimmie Rollins than a highlight reel pertaining to Marlon Anderson.


The prediction of relevance of any item of supplemental content as a function of user preferences can be performed using a multi-variable regression equation having as its input data the aforementioned variables, such as linear television consumption, VOD television consumption, geographic data, demographic data, etc. The particular variables, the weighting coefficient applied to each variable, and the specific equation (e.g., least mean squares) would all depend on the particular available information and on experimentation with different variables. The variables, the weighting factors, the equations, and other factors can be modified and updated periodically based on historical performance and even possibly user satisfaction surveys.
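
As a minimal, hypothetical sketch of such a model, weights can be fit from historical data and applied to score candidate items; the features, training data, and labels below are invented, and ordinary least squares stands in for whatever fitting method is actually chosen.

```python
# A minimal sketch of a multi-variable linear relevance model. Each feature
# measures how well an item matches one source of preference data; weights
# are fit by least squares over (invented) historical relevance observations.
import numpy as np

# Each row: [linear-viewing match, VOD match, geographic match, demographic match]
X_train = np.array([
    [1.0, 1.0, 1.0, 0.5],   # item the consumer engaged with heavily
    [0.9, 0.8, 1.0, 0.7],
    [0.1, 0.0, 0.2, 0.4],   # item the consumer ignored
    [0.0, 0.1, 0.1, 0.3],
])
y_train = np.array([0.95, 0.90, 0.10, 0.05])  # observed relevance (e.g., selections)

weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def predicted_relevance(features: np.ndarray) -> float:
    """Score = weighted sum of the item's preference-match features."""
    return float(features @ weights)

candidate = np.array([0.8, 0.9, 1.0, 0.6])  # a new item's feature vector
print(f"predicted relevance: {predicted_relevance(candidate):.2f}")
```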


The selection of the items of supplemental content based on context also may be performed using any reasonable multi-variable regression equation having as its inputs, for example, any one or more of the aforementioned variables, such as the closed-captioning stream, the video analytics output stream, the audio analytics output stream, the metadata associated with the program, etc.


The equipment for providing functionality in accordance with the invention may reside at any reasonable location in the network, such as at a headend or server, at a content management center, or even at the set top box, Web browser, radio receiver, or other device local to the consumer. The most likely embodiments of the invention will comprise software programs running on digital processing devices such as computers, microprocessors, digital processors, etc. However, at least parts of the functionalities set forth herein could also be implemented by other means such as ASICs (Application Specific Integrated Circuits), FPGAs (Field Programmable Gate Arrays), state machines, combinational logic circuits, analog circuits, human operators, and any combination of the above.


The software and/or other circuits may be distributed among different nodes of a network, such as a server and a client node. Also, the software may be embodied in any form of memory that can be associated with a digital processing apparatus, including, but not limited to RAM, ROM, PROM, EPROM, EEPROM, DRAM, Compact Disc, Tape, Floppy Disc, DVD, SD memory devices, Compact Flash memory devices, USB memory devices, etc.


In some embodiments, some or all of the items of supplemental content can be pre-determined. That is, for instance, even in a broadcast of a live event, such as a broadcast of a live baseball game, some items of supplemental content can be preset, such as the season records of the two teams that are known to be playing the game. Other items of supplemental content can be found (or even generated) in real time in response to the particular context. Generating and/or finding items of supplemental content in real time based on contextual information can be accomplished easily. Particularly, once the context is determined, the items of supplemental content can be generated via a search on the Internet similar to the searches performed by any Internet search engine. The context determination can be performed and formulated into a search query in a fraction of a second, and the search for items of supplemental content based on that query also can be performed in a fraction of a second. Finally, the list of items of supplemental content can be run through a regression analysis that will order the items in a selected order within a fraction of a second. The entire process can readily be performed in substantially less than a second.
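
As a rough sketch of this real-time path, the determined context can be formulated into a query, searched, and the results ordered, all in well under a second; the search call is stubbed out and the result titles and preference weights are invented for illustration.

```python
# Hypothetical sketch of the real-time path: context keywords are formulated
# into a search query, a search is issued (stubbed out here), and the results
# are ordered by a simple preference score. All names and data are invented.
import time

PREFS = {"rollins": 0.9, "phillies": 0.7}  # assumed consumer preference weights

def build_query(context_keywords: list[str]) -> str:
    return " ".join(context_keywords)

def search(query: str) -> list[str]:
    # Stand-in for an Internet or catalog search; a real system would query
    # an index of available supplemental content with the formulated query.
    return ["Marlon Anderson career statistics",
            "Jimmie Rollins highlight reel",
            "Phillies season statistics"]

def preference_score(result: str) -> float:
    return sum(w for term, w in PREFS.items() if term in result.lower())

start = time.perf_counter()
results = sorted(search(build_query(["rollins", "spectacular", "catch"])),
                 key=preference_score, reverse=True)
elapsed_ms = (time.perf_counter() - start) * 1000
print(results)                 # Rollins reel first, Anderson statistics last
print(f"{elapsed_ms:.3f} ms")  # the whole path runs in a fraction of a second
```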


In other systems, most or all of the items of supplemental content can be predetermined. For instance, when viewing a motion picture (as opposed to a live performance or event), the content of the motion picture may be analyzed ahead of time and some or all of the supplemental content to be made available at any given instant in the motion picture can be pre-determined. For instance, it is widely known that the Star Wars series of movies contains many light saber fight scenes and that many fans of the series are particularly interested in the light saber fight scenes. Accordingly, the individual light saber scenes from the various movies in the series may be pre-ordained as items of supplemental content when the F-Search feature is activated during the viewing of any particular light saber fight scene within a Star Wars movie. In one embodiment of the invention, each scene can be contained in the list of supplemental content items as a separate item. However, in another embodiment, one of the items of supplemental content may be, for instance, “See other light saber fight scenes.” If the user chooses that item, the user may be taken to a new menu from which the user can select from a list of light saber fight scenes.


As previously mentioned and as illustrated in the immediately preceding example, items of supplemental content can be provided at the sub-asset level, e.g., light saber fight scene, as opposed to the asset level, e.g., an entire Star Wars movie.


The same basic software used to determine the context of a scene being consumed can likewise be used in software that analyzes media assets and breaks them down into contextually coherent segments at the sub-asset level. More particularly, as an example, the same software that determines that a particular scene being consumed is a light saber fight scene (such as based on sound effects, motion progression in the scene, and metadata associated with the program and/or scene) can readily be applied in the process of identifying the light saber fight scenes within a Star Wars movie for purposes of segmenting a media asset into contextually coherent sub-asset level media segments, including determining the beginning and the end of each light saber fight scene in order to create a coherent sub-asset.
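
As a minimal sketch, assuming the context-detection software emits a label for each fixed-length interval of the asset (an invented intermediate representation), contiguous runs of identical labels can be grouped into coherent sub-asset segments with a beginning and an end.

```python
# Hypothetical sketch of segmenting an asset into contextually coherent
# sub-asset segments from per-interval scene labels (labels invented).
from itertools import groupby

def segment(labels: list[str], interval_seconds: int = 10) -> list[dict]:
    """Group contiguous runs of identical labels into (start, end, label) segments."""
    segments, position = [], 0
    for label, run in groupby(labels):
        length = sum(1 for _ in run) * interval_seconds
        segments.append({"start": position, "end": position + length, "label": label})
        position += length
    return segments

scene_labels = ["dialogue", "dialogue", "light saber fight", "light saber fight",
                "light saber fight", "dialogue"]
for seg in segment(scene_labels):
    print(seg)
# {'start': 0,  'end': 20, 'label': 'dialogue'}
# {'start': 20, 'end': 50, 'label': 'light saber fight'}
# {'start': 50, 'end': 60, 'label': 'dialogue'}
```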


In the particular above-described embodiment, the process of generating the ordered list of items of supplemental content is broken into two separate and distinct steps, namely, generation of the list of search results as a function of context and then ordering of the list as a function of predicted user preferences. However, as previously mentioned, this is merely exemplary. The generation and ordering of the list can be performed together as a function of both context and predicted user preferences.


In one preferred embodiment of the invention, the F-Search feature is available at any time during the consumption of any media asset. Search results can be generated at least partially in real time based on analyzed context and analyzed user preferences. However, the media provider also may pre-identify particular portions of particular media assets as being particularly suitable for supplemental content and, as previously mentioned, may have pre-generated supplemental content particularly relating to such scenes. Thus, in accordance with one preferred embodiment of the invention, a cable television network operator may insert an F-Search icon into the display during such scenes that, in essence, alerts the user to the availability of particularly interesting supplemental content and/or invites the user to activate the F-Search feature.


In some embodiments of the invention, the media being consumed continues to stream during use of the F-Search feature. However, the user can choose to pause the program while using the F-Search feature. In other embodiments, the media may be automatically paused upon activation of the feature. The user may be given the option of continuing to view the original asset.


The invention preferably is implemented primarily or exclusively by software (including appropriate databases) running on a server at a content center, head end, or any other content node of the network. However, portions or all of the functions in accordance with the principles of the invention can be implemented via software and/or hardware and may comprise any one or more of a microprocessor, a processor, combinational logic, a state machine, analog circuitry, digital circuitry, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Array (PLA), any other conventional computing apparatus and/or combinations of any of the above. In other embodiments, the invention may be implemented at the user nodes, such as a set top box or intelligent television.



FIG. 3 is a flow diagram illustrating process flow in accordance with one particular exemplary embodiment of the invention. At step 301, the process is commenced responsive to the user activating the F-Search feature. Next, in step 303, software performs an analysis to determine context, e.g., the nature of the scene currently being displayed. As previously mentioned, this may include a regression analysis of the available data about the scene. Such data may include metadata contained in the stream, analysis of the closed captioning stream, video analytics, and/or audio analytics. Next, in step 305, a search is performed for items of supplemental content as a function of the determined contextual data and a list is generated of items of supplemental content. Next, in step 307, user preferences are determined for the specific user. As previously mentioned, this also can be performed by a regression analysis based on any or all of the available data about the user, including, for instance, user-provided data such as address, income level, age, gender, demographic data, and viewing habit data collected by the network operator.


In step 309, the system performs an analysis of the importance of the items of supplemental content that were generated in step 305 based on these determined user preferences. In step 311, that list is ordered as a function of the determined relevance/importance. Next, in step 313, the system generates a display with a suitable GUI showing the ordered list of items of supplemental content.
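
For illustration, the steps of FIG. 3 can be traced in a short sketch, with stand-in helpers in place of the actual context analysis, search, and preference determination; all names and data are invented.

```python
# A sketch of the FIG. 3 process flow, with each step from the flow diagram
# marked in comments. The in-line stand-ins are not an actual implementation.

def f_search(asset_id: str, position: int, subscriber_id: str) -> list[str]:
    # Step 301: process commences when the user activates F-Search.
    # Step 303: determine context of the scene currently being displayed.
    context = {"baseball", "phillies"}                 # stand-in context analysis
    # Step 305: search for supplemental content matching the context.
    items = [f"item about {tag}" for tag in sorted(context)]
    # Step 307: determine preferences for this specific user.
    preferences = {"phillies": 0.9, "baseball": 0.5}   # stand-in profile
    # Steps 309/311: score each item against the preferences and order the list.
    def score(item: str) -> float:
        return sum(w for tag, w in preferences.items() if tag in item)
    ordered = sorted(items, key=score, reverse=True)
    # Step 313: in a real system, render the ordered list in a GUI.
    return ordered

print(f_search("phillies-vs-mets", position=3600, subscriber_id="sub-42"))
```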


At this point, the user can interact with the list of supplemental content to select items for viewing, pause the original program, etc. The flow diagram shows this interaction generally at step 315, the details of which can take any number of forms, many of which will be readily apparent to those persons of skill in the related arts and therefore are not shown in detail. Some of the items of supplemental content may be ordered in a menu tree. For instance, selection of a particular item in the GUI generated in step 313 may actually lead to a sub-list of items of supplemental content.


In any event, the user can interact with the GUI as much as desired, and when the user selects to exit the feature (step 317), flow proceeds to step 319 where the process is exited.


Having thus described a few particular embodiments of the invention, various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications and improvements as are made obvious by this disclosure are intended to be part of this description though not expressly stated herein, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and not limiting. The invention is limited only as defined in the following claims and equivalents thereto.

Claims
  • 1. A method comprising: determining a current portion of content being output for a user; determining a first content item associated with the current portion; determining, by one or more computing devices and based on the current portion: a first supplemental content asset having a first relevancy to the current portion, and a second supplemental content asset having a second relevancy, different from the first relevancy, to the current portion; and causing output of a listing of the first supplemental content asset and the second supplemental content asset, wherein the listing is ordered based on a determination of whether the user has previously viewed or recorded the first content item.
  • 2. The method of claim 1, wherein the first content item was recorded based on a request by the user, and the method further comprising: determining the order of the first supplemental content asset and the second supplemental content asset based on performing regression analysis on first data associated with the first content item recorded by a digital video recorder and additional data indicating that a second content item was output for display during a broadcast of the second content item, wherein the performing of the regression analysis comprises applying a different weight to each of the first data and the additional data.
  • 3. The method of claim 1, wherein each of the first supplemental content asset and the second supplemental content asset comprises a segment of a television program.
  • 4. The method of claim 1, wherein determining the first supplemental content asset and the second supplemental content asset is based on one or more of: video analytics data of the current portion, or audio analytics data of the current portion.
  • 5. The method of claim 1, further comprising: inserting, into a data stream that comprises the current portion of content, metadata associated with the current portion, wherein determining the first supplemental content asset and the second supplemental content asset is based on the metadata.
  • 6. The method of claim 1, wherein determining the first supplemental content asset and the second supplemental content asset are performed based on receipt of a request for supplemental content, and the method further comprising: causing output of a user interface that includes the listing of the first supplemental content asset and the second supplemental content asset after receipt of the request for supplemental content.
  • 7. The method of claim 1, wherein determining the current portion is performed based on receiving a request for supplemental content; and wherein determining the first supplemental content asset and the second supplemental content asset comprises: performing, using data associated with the current portion, a search for the first and second supplemental content assets.
  • 8. The method of claim 1, wherein determining the current portion is performed based on receiving a request for supplemental content; and wherein determining the first supplemental content asset and the second supplemental content asset comprises: performing, using data associated with the current portion, a search for the first and second supplemental content assets via the World Wide Web.
  • 9. The method of claim 1, further comprising: determining a person depicted in the current portion of the content; and performing, based on the person, a search that results in at least one additional supplemental content asset, wherein the at least one additional supplemental content asset comprises video depicting the person.
  • 10. The method of claim 7, wherein the request for supplemental content represents a user-activation of a search function, wherein the search function is for supplemental content pertaining to what is depicted in the current portion, and wherein a user interface is configured to provide results of the search function.
  • 11. The method of claim 1, further comprising: determining, based on the first relevancy of the first supplemental content asset to the current portion and the second relevancy of the second supplemental content asset to the current portion, a plurality of third supplemental content assets; determining, based on whether the user has viewed or recorded the first content item, an order of the plurality of third supplemental content assets; and causing output of a user interface that comprises an indication and a listing of the plurality of third supplemental content assets based on the order of the plurality of third supplemental content assets, and wherein the user interface is configured to, based on a user-selection of the indication, initiate a process to purchase merchandise associated with one of the listed plurality of third supplemental content assets.
  • 12. The method of claim 1, wherein the content comprises an event for a sport that is between at least two teams, and the first supplemental content asset comprises supplemental content associated with one of the at least two teams, and wherein the second supplemental content asset comprises supplemental content associated with the sport.
  • 13. The method of claim 1, wherein the listing of the first supplemental content asset and the second supplemental content asset is ordered from highest to lowest importance as determined based on whether the first content item: was recorded by a digital video recorder; and was output from the digital video recorder for display.
  • 14. The method of claim 1, wherein the listing is ordered based on: video-on-demand content that was output for display.
  • 15. The method of claim 1, wherein the listing is ordered based on whether linear content was output for display.
  • 16. The method of claim 1, wherein each of the first supplemental content asset and the second supplemental content asset is one of a television program, a motion picture, a music video, a news program, a Web page, or a song.
  • 17. The method of claim 1, the method further comprising: determining a topic based on the first content item and a plurality of additional content items the user has previously viewed or recorded, wherein a supplemental content asset of the first supplemental content asset and the second supplemental content asset is selected and output to the user based on the topic.
  • 18. The method of claim 1, the method further comprising: determining user preferences based on the first content item, wherein the listing is ordered based on a relevancy between the first supplemental content asset and the second supplemental content asset and the user preferences.
  • 19. A method comprising: determining a current portion of content being output for display; determining, by one or more computing devices and based on the current portion, a plurality of supplemental content assets; determining, based on a viewing history of the user or content recorded by a digital video recorder, an order of the plurality of supplemental content assets; and causing output of a user interface comprising a listing, based on the order, of the plurality of supplemental content assets.
  • 20. The method of claim 19, wherein determining the current portion of content being output for display is performed based on receiving an indication of a user-activation of a supplemental content search function.
  • 21. The method of claim 19, wherein determining the order of the plurality of supplemental content assets comprises performing a regression analysis based on whether the first content item was recorded by the digital video recorder, whether a portion of the first content was output for display at least once from the digital video recorder, and whether second content was output for display during a broadcast of the second content.
  • 22. The method of claim 21, wherein performing the regression analysis comprises applying a first weight to first data indicating whether the first content item was recorded by the digital video recorder, applying a second weight to second data indicating whether a portion of the first content was output for display at least once from the digital video recorder, and applying a third weight to third data indicating whether the second content was output for display during the broadcast of the second content; and wherein the method further comprises adjusting the first weight or the second weight based on a survey taken by a user associated with the digital video recorder.
  • 23. The method of claim 19, wherein determining the plurality of supplemental content assets comprises: searching, based on the current portion, for the plurality of supplemental content assets; and generating a list of the plurality of supplemental content assets by determining, based on data indicating whether the first content item was output for display at least once from the digital video recorder, that each of the plurality of supplemental content assets is to be included in the list.
  • 24. The method of claim 19, wherein each of the plurality of supplemental content assets comprises a segment of a television program.
  • 25. The method of claim 19, wherein determining the order of the plurality of supplemental content assets is based on one or more of: video analytics data of the current portion, or audio analytics data of the current portion.
  • 26. The method of claim 19, further comprising: inserting, into a data stream that comprises the content, metadata associated with the current portion; and determining, based on the metadata, data associated with the current portion, and wherein determining the plurality of supplemental content assets is based on the data.
  • 27. The method of claim 19, wherein determining the plurality of supplemental content assets is performed based on a request for supplemental content, and wherein the user interface indicates a response to the request for supplemental content.
  • 28. The method of claim 19, wherein determining the current portion is performed based on receiving a request for supplemental content; and wherein determining the plurality of supplemental content assets comprises: performing, using data associated with the current portion, a search for the plurality of supplemental content assets.
  • 29. The method of claim 19, wherein determining the current portion is performed based on receiving a request for supplemental content; and wherein determining the plurality of supplemental content assets comprises: performing, using data associated with the current portion, a search for the plurality of supplemental content assets via the World Wide Web.
  • 30. The method of claim 19, wherein determining the current portion is performed based on receiving a request for supplemental content, wherein determining the plurality of supplemental content assets comprises: performing, using data associated with the current portion, a search for the plurality of supplemental content assets, and wherein the request for supplemental content represents a user-activation of a search function, wherein the search function is for supplemental content pertaining to what is depicted in the current portion, and wherein the user interface is configured to provide results of the search function.
  • 31. The method of claim 19, further comprising: determining a person depicted in the current portion; and performing, based on the person, a search that results in at least one video depicting the person.
  • 32. The method of claim 19, wherein the content comprises an event for a sport that is between at least two entities, and wherein one or more of the plurality of supplemental content assets is associated with the sport or one of the at least two entities.
  • 33. The method of claim 19, wherein determining the order of the plurality of supplemental content assets is performed by ordering the plurality of supplemental content assets from highest to lowest importance as determined based on whether the first content item was recorded by a digital video recorder and based on whether the first content item was output for display at least once from the digital video recorder.
  • 34. The method of claim 19, wherein determining the order of the plurality of supplemental content assets is performed based on video-on-demand content that was output for display.
  • 35. The method of claim 19, wherein determining the order of the plurality of supplemental content assets is performed based on linear content that was output for display.
  • 36. A method comprising: determining, based on receiving an indication of a user-activation of a supplemental content search function during a current output of content, a current portion of content; determining, by one or more computing devices and based on the current portion, a plurality of supplemental content assets; determining, by the one or more computing devices: a first plurality of the supplemental content assets having a first relevancy to the current portion, and a second plurality of the supplemental content assets having a second relevancy to the current portion; ordering, by the one or more computing devices, based on whether the current portion was previously recorded by a digital video recorder or was previously output for display from the digital video recorder, the first plurality of the supplemental content assets; ordering, by the one or more computing devices, based on whether the current portion was previously recorded by the digital video recorder or was previously output for display from the digital video recorder, the second plurality of the supplemental content assets; and causing output of a user interface that comprises a first indication of the ordered first plurality of the supplemental content assets and a second indication of the ordered second plurality of the supplemental content assets.
  • 37. The method of claim 36, wherein ordering the first plurality of the supplemental content assets comprises performing a regression analysis of first data indicating whether the current portion was recorded by the digital video recorder, second data indicating whether a portion of the current portion was output for display from the digital video recorder, and third data indicating whether second content was output for display during a broadcast of the second content, wherein performing the regression analysis comprises applying a different weight to each of the first data, the second data, and the third data.
  • 38. The method of claim 36, wherein the supplemental content search function is for supplemental content pertaining to what is depicted in the current portion, and wherein the user interface is configured to provide results of the supplemental content search function.
  • 39. The method of claim 36, further comprising: generating a third plurality of the supplemental content assets; and ordering, based on first data indicating whether the current portion was recorded by the digital video recorder and second data indicating whether the current portion was output for display from the digital video recorder, the third plurality of the supplemental content assets, wherein the user interface comprises a third indication of the ordered third plurality of the supplemental content assets, wherein the user interface is configured to, based on a user-selection of the third indication, initiate a process to purchase merchandise associated with at least one of the ordered third plurality of the supplemental content assets.
  • 40. The method of claim 36, wherein the content comprises an event for a sport that is between at least two entities, wherein the first plurality of the supplemental content assets is associated with the at least two entities, and wherein the second plurality of the supplemental content assets is associated with the sport.
  • 41. The method of claim 36, wherein ordering the first plurality of the supplemental content assets is performed based on: video-on-demand content that was output for display; and wherein ordering the second plurality of the supplemental content assets is performed based on: the video-on-demand content that was output for display.
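To make the ordering logic recited in claims 33 and 37 concrete, the following is a minimal illustrative sketch in Python, not taken from the patent itself: each supplemental content asset receives an importance score computed as a weighted combination of viewing-history signals, and the assets are then sorted from highest to lowest score. The signal names, weight values, and scoring function are all hypothetical; the claims require only that a different weight be applied to each of the three kinds of data.

    # Illustrative only: the signal names and weight values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ViewingSignals:
        recorded_by_dvr: bool   # first data: the current portion was recorded by the DVR
        output_from_dvr: bool   # second data: part of the current portion was played back from the DVR
        watched_live: bool      # third data: related content was output during its broadcast

    # A different weight for each kind of data (claim 37); in practice these
    # might be produced by an offline regression fit, as the claim suggests.
    WEIGHTS = {"recorded_by_dvr": 0.5, "output_from_dvr": 0.3, "watched_live": 0.2}

    def importance(signals: ViewingSignals) -> float:
        # Weighted sum of the three signals; Python bools coerce to 0/1.
        return (WEIGHTS["recorded_by_dvr"] * signals.recorded_by_dvr
                + WEIGHTS["output_from_dvr"] * signals.output_from_dvr
                + WEIGHTS["watched_live"] * signals.watched_live)

    def order_assets(assets):
        # assets: list of (asset_name, ViewingSignals) pairs.
        # Sort from highest to lowest importance (claim 33).
        return [name for name, sig in
                sorted(assets, key=lambda pair: importance(pair[1]), reverse=True)]

Under these made-up weights, an asset whose signals are (True, True, False) scores 0.8 and ranks ahead of one whose signals are (False, False, True), which scores 0.2.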
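Likewise, one plausible reading of the overall flow of claim 36, sketched here under the assumption of hypothetical asset records carrying a relevancy tag and precomputed rank fields, is: split the candidate assets into the first and second relevancy groups, order each group using the same DVR-history test, and assemble a two-section user-interface payload. None of these field names appear in the claims.

    # Illustrative only: the "relevancy" tag and rank fields are invented here.
    def build_search_results(assets, portion_came_from_dvr):
        # assets: list of dicts such as
        #   {"title": ..., "relevancy": "first" | "second",
        #    "dvr_rank": int, "live_rank": int}
        first = [a for a in assets if a["relevancy"] == "first"]    # e.g. the two teams (claim 40)
        second = [a for a in assets if a["relevancy"] == "second"]  # e.g. the sport generally

        def order(group):
            # The same DVR-history test drives the ordering of both groups:
            # if the current portion was recorded on, or played back from,
            # the DVR, use the time-shifted ranking; otherwise the live one.
            key = "dvr_rank" if portion_came_from_dvr else "live_rank"
            return sorted(group, key=lambda a: a[key])

        # Two separate indications for the user interface, one per group.
        return {"first_indication": order(first), "second_indication": order(second)}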
Related Publications (1)
Number Date Country
20100125875 A1 May 2010 US