Method and system for associating video assets from multiple sources with customized metadata

Abstract
A system and method for presenting video asset information to a viewer, to assist the viewer in selecting a video asset for viewing, is described. The video assets can be available from a plurality of different video asset sources, such as VOD (video on demand), PVRs (personal video recorders), and broadcast (including over the air, cable, and satellite). Images from the video assets are displayed in a uniform manner, along with information about the video assets. The information includes data in a metadata category. The viewer can select one of the video assets for viewing, but can also navigate using metadata categories such as genre, actors, directors, etc. Moreover, the system and method include an on-screen remote control that can be utilized in conjunction with a physical input device for navigating and viewing one or more video assets. This allows a much easier and more natural navigation and selection process for viewers.
Description
FIELD OF THE INVENTION

This invention is directed towards multi-channel video environments, and more particularly towards systems and methods for navigating through video assets that are broadcast or available on a server for play-out.


BACKGROUND

With the introduction of multi-channel video, Electronic Program Guides (EPGs) were developed to assist consumers with navigating the ‘500 Channel’ universe. These EPGs allowed features such as grouping of similarly themed programming, look ahead (and often marking for recording), navigating by Favorite Channels, etc. EPGs typically give access to currently showing, and shortly upcoming linear television programming.


With the rise of Video-On-Demand (VOD), EPGs have needed to toggle between VOD offerings and linear offerings. This has been somewhat of a compromise because prerecorded material offered through a VOD service cannot be selected directly through the EPG listings for linear channels. In addition, the VOD selection mechanisms are often modeled as hierarchical menu selection structures. With the steady increase of content available through VOD servers, this makes it increasingly difficult for consumers to navigate all available content.


Personal Video Recorders (PVRs) have had a similar effect: programming available on a PVR is typically presented separate from the linear programming and even from the programming available on VOD. Thus, consumers effectively “toggle” between linear programming, VOD programming, and PVR programming to browse all available programming.


Accordingly, there is a need to be able to tie these technologies together to enable the consumer to browse and search available programming content using metadata values in a consistent manner, and to represent the metadata in an intuitive way so that it is easy to relate them to the programming content.


SUMMARY

Advantageously, technologies have been developed to enable topically linked searches across multiple databases, metadata descriptors have been developed to more fully capture characteristics of such content as well as sub-sections of such content, and technologies have been developed whereby video scenes can have parts of the screen with hot links to metadata objects.


Certain embodiments of the present invention relate to receiver devices for assisting a user to view one or more video assets. The receiver device includes a means for receiving the one or more video assets from a plurality of different video asset sources, and software for an on-screen remote application that can be displayed on a display device to allow the user to view the one or more video assets.


Certain embodiments of the present invention also relate to methods for assisting a user to view one or more video assets. The method includes providing an on-screen remote application that can be displayed on a display device to allow a user to view the one or more video assets. The on-screen remote application may reside in a receiver device capable of receiving the one or more video assets from a plurality of different video asset sources.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other features of the present invention will be better understood in view of the following detailed description taken in conjunction with the drawings, in which:



FIG. 1 is a block diagram illustrating components of a typical VOD system;



FIG. 2 illustrates a typical set of traversal steps through a VOD menu system to select a movie for viewing;



FIG. 3 illustrates a video viewing screen for an illustrative embodiment of the present invention;



FIG. 4 illustrates an interactive information banner for an illustrative embodiment;



FIG. 5 illustrates a metadata browsing screen for the illustrative embodiment;



FIG. 6 illustrates a preview/trailer screen for the illustrative embodiment;



FIG. 7 illustrates a second interactive information banner for an illustrative embodiment;



FIG. 8 illustrates a second preview/trailer screen for the illustrative embodiment;



FIG. 9 illustrates a third preview/trailer screen for the illustrative embodiment;



FIG. 10 illustrates a fourth preview/trailer screen for the illustrative embodiment;



FIG. 11 illustrates a flow chart according to an illustrative embodiment;



FIG. 12 illustrates a system diagram for an implementation of the illustrative embodiment;



FIG. 13 illustrates an implementation of a Clip/Still Store component;



FIG. 14 illustrates an implementation of a Search Metadata Database component;



FIG. 15 illustrates an implementation of an Asset Availability Database component;



FIG. 16 illustrates a possible mapping of user input commands to an existing remote control;



FIG. 17 illustrates an implementation of the Personalization Database component;



FIGS. 18A-D illustrate example screen views of an embodiment;



FIGS. 19A-B illustrate other example screen views for the embodiment of FIG. 18;



FIG. 20 illustrates another example screen view for the embodiment of FIG. 18; and



FIG. 21 illustrates another example screen view for the embodiment of FIG. 18.





DETAILED DESCRIPTION

A schematic overview of a prior art VOD system is shown in FIG. 1. The system consists of a VOD Back-End component 20 (residing in a cable head-end) and a Receiver Device 22 and Display Device 24 at the consumer's home. The Receiver Device 22 may be a digital set-top box, or any other receiving device, including computers or media processors. The Display Device 24 can be a TV set, or any other display or monitoring system. Further, the Receiver Device 22 and Display Device 24 may also be combined into one physical device, e.g. a “Digital Cable Ready” TV set, or a computer/media center. The back-end component 20 may comprise several modules, such as one or more VOD Storage servers 26 (used to store the programming that is available to the consumers), one or more VOD Pumps 28 (used to play out the programming as requested by the various consumers that are actually using the system at any point in time), a Subscriber Management & Billing module 30 (used to interface with the subscriber database, and for authentication and billing services), a Management & Control module 32 (used for overall management of the system, assets, and resources), and a Content Ingest module 34 (used to load new programming content onto the system).


In a typical usage scenario, the consumer 25 would “toggle” to VOD (e.g., by pressing a special button on their Receiver Device remote control). This causes the Receiver Device to send an initiation signal to the VOD Back-End over the Command & Control channel, and then typically to tune to a VOD channel, which gives the consumer a menu of available VOD assets from which to select. This menu is typically implemented as a hierarchical text-oriented menu system, where the user can select sub-menus and order VOD assets with key presses from their remote control. This is illustrated in the menu chain 36 of FIG. 2, where the consumer selects “Movies” from the main menu, then selects “Action Movies” from Sub Menu 1, then selects “Hannibal” from Sub Menu 2, then confirms the transaction to buy Hannibal at Sub Menu 3. Once all this is done, the VOD Back-End system 20 will locate Hannibal in the VOD Storage system 26, allocate an available VOD Pump 28, and instruct the VOD Pump 28 to start playing out Hannibal on an available bandwidth slot (frequency) in the network. The Receiver Device 22 will then tune itself to this slot and start to display the asset on the Display Device 24 so that the consumer 25 can view the asset. During the viewing process, the consumer 25 typically has the ability to Pause, Rewind, and Fast-Forward the movie by pressing buttons on his or her remote control. For example, when the consumer 25 presses the Pause button, the Receiver Device will send a Pause message (via Command & Control channel 27) to the VOD Back-End 20 to pause the movie. A VOD session can end because the movie viewing has ended, or because the consumer 25 decided to terminate the session by pressing one or more special buttons on the remote control; in both cases the system will go back to regular television viewing mode.
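The session flow described above can be sketched as a small state machine driven over the Command & Control channel. This is an illustrative sketch only; the message names (SETUP, PAUSE, RESUME, TEARDOWN) and the state names are hypothetical, not taken from the described system or any specific VOD protocol.

```python
from enum import Enum

class SessionState(Enum):
    BROWSING = "browsing"   # consumer is in the menu system
    PLAYING = "playing"     # a VOD Pump is streaming the asset
    PAUSED = "paused"       # consumer pressed Pause
    ENDED = "ended"         # back to regular television viewing

class VodSession:
    """Sketch of one VOD session as seen by the Receiver Device."""

    # (current state, command) -> next state
    TRANSITIONS = {
        (SessionState.BROWSING, "SETUP"): SessionState.PLAYING,
        (SessionState.PLAYING, "PAUSE"): SessionState.PAUSED,
        (SessionState.PAUSED, "RESUME"): SessionState.PLAYING,
        (SessionState.PLAYING, "TEARDOWN"): SessionState.ENDED,
        (SessionState.PAUSED, "TEARDOWN"): SessionState.ENDED,
    }

    def __init__(self, asset):
        self.asset = asset
        self.state = SessionState.BROWSING

    def send(self, command):
        """Apply one remote-control command; return the new session state."""
        key = (self.state, command)
        if key not in self.TRANSITIONS:
            raise ValueError(f"{command} is not valid while {self.state.value}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

In this sketch, ordering “Hannibal” would correspond to SETUP, pressing Pause to PAUSE, and terminating the session to TEARDOWN, after which the receiver returns to regular viewing.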


Current interfaces and systems for searching and browsing VOD assets are often problematic and not always effective. The systems are often implemented as hierarchical menu systems, are not very flexible, and are not very intuitive. As a result, it is not always possible for a consumer to find a VOD asset for viewing unless they know the exact title and properties of the asset they are looking for. This problem gets even worse as the number of available VOD assets on VOD systems increases.


The present invention provides a new paradigm for browsing and searching video assets available on VOD and from other sources. The present invention takes advantage of metadata for the assets (e.g. “lead actor”, “director”, “year of release”, etc.), and in one embodiment uses it to enable the consumer to search for certain assets (e.g. “find all assets starring or associated with Clint Eastwood”). It also provides powerful associative search capabilities (e.g. “I like movie X, so find me all assets that have the same lead actor”). Also, the present invention presents the consumer with an intuitive user interface (pictures instead of text) that can be easily navigated with traditional remote controls (no need for keyboards).


Further features of the present invention are described in U.S. patent application Ser. No. 11/080,389 filed on Mar. 15, 2005 and entitled METHOD AND SYSTEM FOR DISPLAY GUIDE FOR VIDEO SELECTION, which is incorporated herein by reference.


An illustrative implementation of the present invention in a digital cable system will now be described, first in terms of functionality to the consumer, then in terms of implementation in a cable system or environment.


Consider a consumer in a digital cable system, who has access to VOD service, and also has a digital receiver device that includes PVR (personal video recorder) service. To start off with, the consumer will be watching a movie, so his display may show full screen video as depicted in FIG. 3. At any point in time during the movie, the consumer can initiate (by pressing a specific button on his remote control) an interactive information banner 38 to be displayed on his display, as illustrated in FIG. 4. In this example, the banner 38 contains the channel logo 40 on the left, and some textual description 42 of the current movie to the right. The description contains a number of “linked fields” 44, which are marked by some visual effect (in this example they are underlined). The fields 44 represent associative searches for assets with the same attribute (so the “Will Smith” field represents all assets that feature Will Smith as an actor).


The consumer can navigate between the linked fields with buttons on the remote control (the current selection may be indicated by highlighting it), and then activate one of the links by pressing yet another button on the remote control. For this example, assume the consumer activates the “Will Smith” field. This will lead into a metadata browsing screen (in this case for “Will Smith”) as illustrated in FIG. 5. This screen provides the results of a search for all assets that share the same metadata (in this case “Starring Will Smith”). In this example, the screen holds nine assets, and each asset is shown as a combination of a still picture 46 (clipped from the asset or from an alternate source) and the title 48 of the asset, along with other information such as the release year 50 of the asset and a symbol 52 indicating where the asset is available. Possible values for symbol 52 are: VOD (available in the VOD archive) 52a, Showing (currently showing) 52b, PVR (available on PVR) 52c, and Guide (shows up in the Guide, so available in the future) 52d. Other possible values for symbol 52, as well as alternative sources of the assets, such as DVD jukeboxes, tape jukeboxes, and media delivered by IP networks (including Ethernet, fiber, carrier current, wireless, etc.), are also within the scope of the invention.
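The associative search behind a linked field can be sketched as a simple filter over asset metadata. The in-memory catalog, field names, and source labels below are hypothetical stand-ins for the back-end Search Metadata Database and Asset Availability Database:

```python
# Hypothetical in-memory catalog; in the described system this data lives
# in back-end databases and spans VOD, Showing, PVR, and Guide sources.
CATALOG = [
    {"title": "Wild Wild West", "year": 1999, "stars": ["Will Smith"], "source": "VOD"},
    {"title": "I Robot", "year": 2004, "stars": ["Will Smith"], "source": "Showing"},
    {"title": "Fargo", "year": 1996, "stars": ["Frances McDormand"], "source": "PVR"},
]

def associative_search(catalog, field, value):
    """Return every asset whose metadata field contains the activated link value,
    regardless of where the asset is available."""
    return [asset for asset in catalog if value in asset.get(field, [])]
```

Activating the “Will Smith” link would then correspond to associative_search(CATALOG, "stars", "Will Smith"), whose results would be rendered as the still-picture grid of FIG. 5.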


Typically, one of the assets is highlighted 54 (indicating the current selection, in this case the “Wild Wild West” asset). Other methods of drawing attention to the presently selected asset, including but not limited to blinking, ghosting, color changes, alternate borders, etc., are within the scope of the present invention. The consumer can change the current selection using keys on the remote control. In case there are more assets than fit on the screen, the consumer can similarly move to previous and next pages using remote control buttons. The consumer can activate the currently selected asset by pressing a specific button on the remote control. This will take the consumer to a preview/trailer session for the selected asset. For this example, assume the consumer has selected “I Robot”; the resulting preview/trailer screen is illustrated in FIG. 6. The preview can be a theatrical preview of any length. During the preview, the consumer has the ability to purchase the VOD asset for viewing by pressing a button on the remote control (in this case the “Select” button). The consumer also has the option of viewing the purchased asset immediately, or potentially selecting a later time to view the VOD asset, allowing, for example, a parent to make a purchase with a password-protected purchase option for the children to view later in the evening. Further, the VOD asset may be downloaded to a PVR, thereby allowing the consumer to then view the asset from the PVR. The consumer may also pause, fast-forward, or rewind the contents of the preview. Also, the consumer may press the remote control button for the interactive information banner, which will result in the interactive banner 42 as illustrated in FIG. 7. As discussed before, the consumer may now navigate the links in the banner, etc.


The preview/trailer may look slightly different for assets that are available through means other than VOD. FIG. 8 shows one embodiment of the preview screen when a currently showing asset is selected (in this example Ali); FIG. 9 shows one embodiment of the preview screen when an asset that is available on PVR is selected (in this example Enemy of the State); and FIG. 10 shows one embodiment of the preview screen when an asset that is available in the Guide is selected (in this example Men In Black). The application logic for this illustrative embodiment is further shown and summarized in the process flow 56 in FIG. 11. Depending on the type of asset, different actions are taken that are appropriate for that asset, as previously discussed with FIGS. 6 and 8-10.


An implementation of this illustrative embodiment in a cable head-end will now be discussed. This implementation is illustrated in FIG. 12. As shown, certain embodiments include a VOD Storage component 26, a VOD Pump component 28, a Subscriber Management & Billing component 30, a Management & Control component 32, a Content Ingest component 34, a Clip/Still Store component 58, a Search Metadata Database component 64, an Asset Availability Database component 70, and a Search Application Server component 78, which plays out video assets and receives commands and control from, and sends commands and control to, the Receiver Device 22. The Receiver Device 22, which includes a Search Application 76, interacts with a Display Device 24 to allow the Consumer 25 to view and/or select any desired video assets.


The Clip/Still Store component 58 is illustrated in greater detail in FIG. 13. The Clip/Still Store component 58 stores and manages previews, trailers, and still pictures that are associated with assets that are available to the consumer. The Clip/Still Store component 58 provides a unified database of the various trailers and stills that are associated with an asset. The Clip/Still Store component 58 gets its information from various sources. First, whenever new content enters the VOD system, the Content Ingest module 34 notifies the Clip/Still Store component 58. If the new content already has associated clips/stills for preview, the Clip/Still Store component 58 simply administers and stores them for later use. If no clips/stills are associated with it, the Clip/Still Store component 58 may automatically extract appropriate clips/stills from it. Information supplied with the asset or obtained separately may provide one or more appropriate time/frame references for clips or stills from that asset. Second, the Clip/Still Store 58 may be connected to a variety of internal and external sources of clips and stills 60. Examples of these sources are online Internet movie databases (www.imdb.com), or libraries of VOD and other content. Third, the Clip/Still Store 58 may have a user interface 62 that allows operators to manually extract clips and stills from an asset.
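The ingest decision described above (store supplied preview material, otherwise extract it from the content) can be sketched as follows. The dictionary layout and the extract_fn callable are hypothetical; extract_fn stands in for the automatic clip/still extractor:

```python
def ingest_asset(asset, extract_fn):
    """Register preview material for a newly ingested asset.

    If clips/stills were delivered with the asset, administer and store them
    as-is; otherwise extract stills from the content itself, optionally guided
    by time/frame references supplied with the asset."""
    if asset.get("stills") or asset.get("clips"):
        return {"title": asset["title"],
                "stills": asset.get("stills", []),
                "clips": asset.get("clips", [])}
    # No preview material supplied: fall back to automatic extraction.
    refs = asset.get("frame_references", [])
    return {"title": asset["title"],
            "stills": extract_fn(asset["content"], refs),
            "clips": []}
```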


Another system component is the Search Metadata Database (DB) 64, FIG. 12, as detailed in FIG. 14. This component 64 provides unified metadata for all assets that are available to the consumer. It also provides interfaces to search for assets based on metadata values. The Search Metadata Database 64 gets its information from various sources.


In one embodiment, new content entering the VOD system will typically come with metadata (for example, see the CableLabs Metadata Specification and the like). Such metadata that typically comes with the video asset will be referred to as “native metadata,” and all other metadata obtained in a different way will be referred to as “customized metadata.” The Content Ingest module 34 will notify the Search Metadata Database 64, which then administers and stores the native metadata. For example, new content may be the newly released movie Bad Boys II, starring Will Smith. The native metadata may contain the following information:


Title: Bad Boys II;


Director: Michael Bay;


Stars: Will Smith, Martin Lawrence, Jordi Molla;


Genre: Action/Comedy/Crime/Thriller;


Plot: Two narcotics cops investigate the ecstasy trafficking in Florida.


Alternatively, the Search Metadata Database 64 is connected to a variety of internal and external customized metadata sources 66. These can be public sources (such as the IMDB described above), or libraries of VOD or other content. For example, customized metadata for Bad Boys II can alternatively be downloaded from the IMDB, which may contain additional information beyond the native metadata shown above, as shown below.


Title: Bad Boys II;


Director: Michael Bay;


Stars: Will Smith, Martin Lawrence, Jordi Molla, Gabrielle Union, Peter Stormare;


Genre: Action/Comedy/Crime/Thriller/Sequel;


Plot: Narcotics cops Mike Lowrey (Smith) and Marcus Bennett (Lawrence) head up a task force investigating the flow of ecstasy into Miami. Their search leads to a dangerous kingpin, whose plan to control the city's drug traffic has touched off an underground war. Meanwhile, things get sexy between Mike and Syd (Union), Marcus's sister.


In yet another alternative, the Search Metadata Database 64 may have a system 68 for automatically extracting customized metadata from the content portion of the video asset. Some examples of this include inspecting closed captioning information, image analysis for finding words in the opening and/or closing credits, comparison and matching to databases of actors and directors, etc., and any combination thereof. In certain embodiments, the present invention may use scanning of closed captioning data, combined with pattern recognition software, to establish the genre of a movie. For example, the closed caption and pattern recognition software may establish that many exotic cars appear in the movie; hence, “exotic cars” may be added to the metadata. There may also be scene detection algorithms to locate the opening and closing credits of a movie, followed by character recognition algorithms to determine actors and directors automatically. For example, the opening/closing credits may be scanned to determine that actors Gabrielle Union, Peter Stormare, Theresa Randle, Joe Pantoliano, Michael Shannon, John Seda, and the like appear in the movie. Also, audio (music) may be analyzed to determine the genre of a movie, to recognize specific movies, or to determine the artist performing the soundtrack. For example, the display guide may already contain in the current library “I Love You” by Justin Timberlake as one available music source. System 68 may compare the music being played in the movie with the available music sources and determine that the soundtrack of Bad Boys II contains “I Love You” by Justin Timberlake. The display guide may be updated to reflect this fact. Furthermore, voice recognition systems may be used to determine actors.
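The caption-scanning stage described above can be sketched as a keyword-to-tag lookup. The keyword_map below is a hypothetical stand-in for the pattern-recognition software; real systems would use far richer models than substring matching:

```python
def tag_from_captions(captions, keyword_map):
    """Derive customized metadata tags from closed-caption text.

    keyword_map maps caption keywords to metadata tags, e.g. several exotic
    car model names all mapping to the single tag "exotic cars"."""
    text = " ".join(captions).lower()
    tags = {tag for keyword, tag in keyword_map.items() if keyword.lower() in text}
    return sorted(tags)
```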


The Search Metadata Database 64 may also receive customized metadata from a user through a user interface 62 whereby consumers can attach customized metadata to content. Examples of interface 62 include, but are not limited to, a general-purpose computer or a cable set-top box having software and hardware that can receive input from one or more input devices such as a remote control. In certain embodiments, an operator may be interested in exotic cars and car chase scenes of a movie. In that case, the operator may utilize the user interface 62 to attach customized metadata “exotic cars” and “car chase” to the video asset.


In certain embodiments, one or more of the information sources described above may be combined. For example, after an operator has attached the customized metadata “exotic cars” to Bad Boys II, the Search Metadata Database 64 may automatically perform a search to determine if the metadata “exotic cars” is associated as native metadata with other video assets. If not, the Search Metadata Database 64 may search any of the external or internal sources, such as the IMDB, for customized metadata or other textual descriptions having “exotic cars.” If the customized metadata “exotic cars” is not found, then system 68 may automatically search the content in the closed captions, images, credits, and the like to determine whether the customized metadata “exotic cars” can be attached to the particular content. The operator may at any time have the option of adding metadata, e.g., “exotic cars” (or removing it, if any of the above examples generated an incorrect “exotic cars” tag), using interface 62.
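The fallback order in this paragraph (native metadata, then external/internal sources, then automatic content analysis, with a manual operator override remaining) can be sketched as a cascade. The lookup structures and the content_scanner callable are hypothetical illustrations:

```python
def resolve_tag_source(title, tag, native_index, external_sources, content_scanner):
    """Return which stage of the cascade can attach `tag` to `title`, or None.

    native_index and each entry of external_sources map titles to known tags;
    content_scanner(title, tag) stands in for caption/image/credit analysis."""
    if tag in native_index.get(title, []):
        return "native"
    for source in external_sources:
        if tag in source.get(title, []):
            return "external"
    if content_scanner(title, tag):
        return "content-analysis"
    return None  # the operator can still attach or remove the tag manually
```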


Another component is the Asset Availability Database 70 in FIG. 12, as detailed in FIG. 15. This database 70 keeps track of which assets are available to the consumer at any point in time. It gets its information from a variety of sources. First, whenever new content enters the VOD system, the Content Ingest module 34 will notify the Asset Availability Database 70 to record and administer the presence of the asset (or delete it if the asset has been removed from the VOD system). Second, the Asset Availability Database 70 is connected to an electronic source of Program Information 72 (this information is typically supplied to cable operators to populate the Electronic Program Guides in the digital set-top boxes; an example of a supplier of electronic program information in the US is Tribune Data Services). The Asset Availability Database 70 uses this information to keep track of which assets/programs are available for viewing or recording on the various networks in the coming weeks. Third, the Asset Availability Database 70 periodically collects data from all digital receivers 74 that have PVR capability; this information specifies which assets each individual receiver currently has stored and available on its local hard disk drive or other storage medium. This information is typically collected in the background (e.g. at night), so as not to disrupt cable system behavior. The Asset Availability Database 70 normalizes all this data, and can generate a list of all assets that are available to a specific digital receiver 74 according to the following formula:

Assets_available_to_receiver =
    IF (receiver_has_PVR)
    THEN (assets_available_on_VOD + assets_present_in_program_information + assets_on_PVR)
    ELSE (assets_available_on_VOD + assets_present_in_program_information)
    END
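The formula translates directly into a small function. The set-based sketch below uses hypothetical parameter names and treats each source as a set of asset identifiers:

```python
def assets_available_to_receiver(receiver_has_pvr,
                                 assets_available_on_vod,
                                 assets_present_in_program_information,
                                 assets_on_pvr):
    """Union of all asset sources visible to one digital receiver,
    per the formula above; PVR assets count only if the receiver has a PVR."""
    available = set(assets_available_on_vod) | set(assets_present_in_program_information)
    if receiver_has_pvr:
        available |= set(assets_on_pvr)
    return available
```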


Another component of the system is the Search Application 76, FIG. 12. This application resides in the Receiver Device 22 at the consumer's premises. It can be an embedded application, a downloadable application, or a built-in feature of another Receiver Device application (such as the Electronic Program Guide). The Search Application 76 has two major functions. First, whenever the consumer initiates enhanced search mode, it will set up a connection with the Search Application Server 78 in the back-end and handle the user interface to the consumer (according to the flow chart in FIG. 11); it will request all metadata, stills, and video play-out functions from the Search Application Server 78. Second, in case the Receiver Device 22 includes a PVR, it will periodically send a list of assets available on the PVR back to the Asset Availability Database 70 in the back-end. A final component of the system is the Search Application Server 78. This server acts as the engine of the application: whenever a consumer initiates enhanced search mode, the Search Application Server 78 receives a request to open a search session, and inside that session it will continue to receive requests for metadata, stills, or video play-outs. The Search Application Server 78 in turn will interact with the Clip/Still Store 58 to retrieve clips or stills, with the Search Metadata Database 64 to retrieve metadata, with the Asset Availability Database 70 to find lists of available assets, and with the VOD Storage and/or VOD Pump components to play out trailers and/or VOD assets.


One of the advantages of the present invention is that the required user input from the consumer can easily be mapped onto an existing remote control device, thus avoiding the need for more complex input devices such as complex remote controls, remote keyboards, and/or remote pointing devices. In other words, it is straightforward to map all required user inputs onto existing keys on existing remote controls. A sample mapping onto physical remote control 80 keys is shown in FIG. 16 (note: this is only one of the possible mappings; also note that only the keys associated with this application are shown, and in reality there will be plenty of other keys as well). However, the present invention should not be construed as excluding the use of complex remote controls, remote keyboards, and/or remote pointing devices. Any such devices, for example, a wireless mouse, a wireless keyboard, and the like, may be utilized in conjunction with or in place of the physical remote control 80.


Another component of the system is the On-Screen Remote Application 88 shown in FIG. 12. In one embodiment, this application resides in the Receiver Device 22 at the consumer's premises. It can be, for example, an embedded application, a downloadable application, or a built-in feature of another Receiver Device application (such as, for example, the Electronic Program Guide). The On-Screen Remote Application 88 has the following functions. First, whenever the consumer initiates an enhanced remote control operation that is not supported by the physical remote control 80 shown in FIG. 16, it will cause an image of an on-screen remote 90 to be displayed on the display device 24 as shown in FIGS. 18-21. As shown, the on-screen remote 90 may include category indicia, which can generally be referred to as “buttons,” that can readily access any of the categories such as “My Previews,” “My Networks,” “My Favorites,” “My Ads,” and the like shown on the tabs. It should be noted that the terms indicia and “button,” as used herein in reference to the on-screen remote 90, are meant to include any design, icon, shape, indicator, or the like shown as a part of the on-screen remote 90 that may be selected by a user using a physical input device, and are not limited in appearance to the physical buttons of typical remote control devices. The on-screen remote 90 may further include number buttons that allow entry of, for example, channel numbers, scene/chapter numbers from DVD content, an audio track number in musical content, a specific time-referenced frame, and the like. The on-screen remote 90 may further include typical commands such as “play,” “pause,” “stop,” “record,” “fast forward,” “rewind,” and the like to control the content being viewed. Second, the On-Screen Remote Application 88 may provide the capability to add customized buttons to the on-screen remote 90.
For example, if a user has created a new category “Exotic Cars” as described above, the On-Screen Remote Application 88 may generate a new category button to reflect such a change. Moreover, if a consumer's favorite channel number is, for example, 29, the On-Screen Remote Application 88 may receive commands from the consumer to include the button “29” on the on-screen remote 90. Alternatively, the On-Screen Remote Application 88 may determine, based on the prior viewing history of a consumer, that the consumer's favorite channel number is 29 and provide an option to the consumer as to whether the “29” button should be added to the on-screen remote 90. Third, the On-Screen Remote Application 88 may be able to determine which user is currently utilizing the on-screen display guide in order to customize the available buttons on the on-screen remote 90. For example, user selection buttons on the on-screen remote 90 may allow the consumer to indicate which family member is currently utilizing the on-screen display guide. For example, if a consumer interested in “Exotic Cars” is utilizing the on-screen display guide, the on-screen remote 90 may include an “Exotic Cars” category button, whereas a default setting may not include the “Exotic Cars” category button on the on-screen remote 90.
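Inferring a favorite-channel button from viewing history, as described above, can be sketched as a frequency count. The data shape (a flat list of channel numbers, one per tuning event) is a hypothetical simplification:

```python
from collections import Counter

def suggest_channel_buttons(viewing_history, top_n=1):
    """Suggest channel-number buttons for the on-screen remote.

    viewing_history is a list of channel numbers, one entry per tuning event;
    the most frequently watched channels are proposed as customized buttons,
    which the consumer can then accept or decline."""
    return [channel for channel, _ in Counter(viewing_history).most_common(top_n)]
```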


The implementation described above represents only one possible embodiment of the present invention. It should be clear to anyone skilled in the art that the invention can also be implemented in alternative embodiments and implementations. Without attempting to be comprehensive, alternative embodiments will now be disclosed.


One enhancement to the previously described embodiment is to add personalization to the system. This would further refine the user interface to the personal preferences or history of the consumer. For example, if a consumer is presented with all Will Smith movies, the system may take into account that the consumer is interested in Sci-Fi movies, and it would present the Will Smith movies from the Sci-Fi category first. The stills and clips could also be personalized. For example, different aspects of the movie may be highlighted to appeal to different personal profiles (the movie “Pearl Harbor” may be presented as a love story to someone interested in romantic movies, and as a war movie to someone interested in war movies; this would result in different clips and stills being shown to represent the same movie). Moreover, any of the metadata found by the Search Metadata Database 64 may be utilized for further customization. For example, all content may be categorized by the metadata “exotic cars,” and all content having the metadata “exotic cars” may be presented to the user via a metadata browsing screen (similar to that shown in FIG. 5).
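The genre-first ordering described above (Sci-Fi movies shown first to a Sci-Fi fan) can be sketched as a stable sort on profile match. The asset and profile shapes are hypothetical:

```python
def personalize_order(assets, preferred_genres):
    """Order search results so assets matching the consumer's preferred
    genres appear first; ties keep their original order (Python's sort is stable)."""
    def rank(asset):
        # Rank 0 (sorts first) for any genre overlap with the profile.
        return 0 if any(g in preferred_genres for g in asset.get("genres", [])) else 1
    return sorted(assets, key=rank)
```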


Such a feature could be implemented by adding a Personalization Server 82 to the back-end 20 infrastructure. This Personalization Server 82 is illustrated in FIG. 17. The purpose of this server 82 is to maintain personal profile information for each potential user of the system (consumer). The Personalization Server 82 builds and maintains these profiles from various inputs. First, it may obtain subscriber information from the cable operator's subscriber database 84. This information may include some basic demographics (gender), past VOD buying behavior, etc. Second, it may obtain information from other (external) demographic databases 86 with more detailed demographics (income, etc.). Examples of such database providers in the US include Axiom and InfoUSA. Third, it may collect viewing behavior from the various client devices 74. For example, client device 74 may be a device that determines that a certain member of the family (e.g., a son who is interested in “exotic cars”) is using the display guide. Client devices 74 may also include information on which programs are watched most frequently by that particular family member. The Personalization Server 82 will normalize all this information, apply it to the available Clips/Stills collection 58 and metadata collection 64, and select the most appropriate clips and stills for a given consumer and/or customize the descriptive text or metadata for that specific consumer.
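The three-input profile-building step above can be sketched as a merge of the subscriber record, the external demographic record, and accumulated client viewing events. The field names (`gender`, `income`, `interest`) and the event format are illustrative assumptions; only the flow (normalize three sources into one profile) comes from the text.

```python
def build_profile(subscriber_db, demographics_db, client_events):
    """Merge the three inputs the Personalization Server draws on into
    one normalized profile dict (schema is a hypothetical example).

    Later sources override earlier ones only for fields they actually
    supply; viewing interests are tallied from client-device events.
    """
    profile = {}
    profile.update(subscriber_db)    # e.g. gender, past VOD buys
    profile.update(demographics_db)  # e.g. income band

    interests = {}
    for event in client_events:      # e.g. {"interest": "exotic cars"}
        tag = event["interest"]
        interests[tag] = interests.get(tag, 0) + 1
    profile["interests"] = interests
    return profile
```

The resulting profile could then drive the clip/still selection described above, e.g. preferring artwork whose metadata matches the highest-count interest.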



FIGS. 18A-18D show example screen shots according to an embodiment of the invention. The images of television shows in these figures (and the subsequent figures) are for exemplary purposes only, and no claim is made to any rights for the shows displayed. All trademark, trade name, publicity rights and copyrights for the exemplary shows are the property of their respective owners. FIG. 18A shows a display for video assets, which in this example are broadcast shows arranged by viewing time. Each broadcast show is displayed with a still or moving image of the show; a network logo is also included as part of the image, superimposed on or combined with it. A user can use a remote control to highlight a selected broadcast show for viewing or for interactively obtaining further information about it. The user is not required to deal with channels or other underlying details of video asset delivery, but can simply navigate by more familiar terms, in this case by network. Further, the user may selectively add or remove entries from the display (and arrange the order of the displayed networks) to personalize the display. FIGS. 18B-D show different displays based on selected time slots, as shown at the bottom of the image.



FIG. 19A shows another screen shot from this embodiment. In this case the user is viewing video assets for a particular show, where the assets are available from a source such as video on demand, a library, or another delivery service. The user can easily select a particular episode for viewing, or obtain further information about it, for example as shown in FIG. 19B. As previously described, the user can also search for other video assets based on the information and metadata categories displayed with the image.



FIG. 20 shows another screen shot from this embodiment, wherein a user may navigate using tabs positioned along the top of the display and select different categories of video assets. In the present example, the user has selected the “My Favorites” category and is shown a selection of video assets for viewing. As shown in this figure, the video assets are available from a wide variety of sources, including DVD, broadcast, and pay-per-view broadcast. The user is able to select a video asset (by highlighting it with a remote control, or otherwise) for viewing from a vast number of video asset sources. Further, the user can navigate from the video assets listed in this favorites category to other similar video assets (based on their metadata categories).



FIG. 21 shows another screen shot from this embodiment, which shows the ability to provide advertisements, interactive shopping experiences or special offers to users. As shown in the image, a selection of advertising assets is presented to the user, allowing the user to interact by selection and thereby view and/or receive special offers from such advertisers. A visual indication on an image can alert the user to a special offer or interactive opportunity for certain advertisements. The user can also use metadata categories to search for other advertisers or suppliers of goods and services, for example to search for other amusement parks based on a metadata category associated with one amusement park's image and advertisement.


Another implementation variation is to selectively use still pictures instead of video previews/trailers. This has a number of advantages: first, still pictures may be more readily available than previews/trailers, especially for content available through means other than VOD (e.g., content that shows up in the Guide two weeks from now); second, still pictures limit bandwidth consumption, since they take considerably less bandwidth and storage than moving video. Bandwidth use can be further limited by sending the still pictures in so-called broadcast carousels and having them stored at each client device 74 (as opposed to sending them to the client device on request when needed). Broadcast carousels are a well-known bandwidth-saving technique in the digital video industry (an example is the DSM-CC Data Carousel). It is within the scope of the invention to modify the system in such a way that it detects a shortage of bandwidth and switches over to more bandwidth-friendly techniques (stills), then switches back to motion video when bandwidth becomes available again.
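The switch between motion previews, on-demand stills, and carousel-cached stills can be sketched as a threshold decision on currently available bandwidth. The thresholds and return labels here are illustrative assumptions; the text only specifies that the system detects bandwidth shortage and falls back to less demanding representations.

```python
def choose_art_format(available_kbps, video_kbps=1500, still_kbps=50):
    """Pick the representation for browsing-screen artwork based on
    available bandwidth (thresholds are hypothetical examples).

    Returns 'video' when there is headroom for motion previews,
    'still' when only on-demand still pictures fit, and 'carousel'
    when the client should fall back to images already cached from
    the broadcast carousel rather than requesting anything on demand.
    """
    if available_kbps >= video_kbps:
        return "video"
    if available_kbps >= still_kbps:
        return "still"
    return "carousel"
```

A client device 74 evaluating this periodically would exhibit the described behavior: degrading to stills under congestion and restoring motion video when bandwidth recovers.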


Another implementation variation is to “auto-cue” additional previews/trailers after the consumer finishes watching a preview. In other words, if a user watches the “Ali” preview and does not decide to buy the movie, or exits the application, the system may automatically start playing the next relevant preview (instead of returning to the Browsing Screen). It is possible to enhance the system in such a way as to effectively create an interactive movie barker channel (continuously playing relevant trailers).


Another implementation variation is to load trailers onto the hard disks of PVR-enabled Receiver Devices. This would allow the trailers to be played out from the local hard disk (even if they refer to a movie asset that is available on VOD, or as linear programming). The trailers could be downloaded when bandwidth is available (e.g., at night), which would also make the system much more bandwidth-efficient.


Another implementation variation is to use the system to represent assets from additional sources (in addition to, or instead of, VOD and PVR and linear programming). Examples would include: assets that are available via Broadband IP networks, assets that are available on DVD or DVD-Recorder, assets that are available via Digital Terrestrial networks, assets that are available via Direct-To-Home (DTH) satellite, assets that are available on Near-Video-On-Demand (NVOD) channels, assets that are available via Subscription-Video-On-Demand (SVOD), etc. Further, assets can be downloaded from a network or path that does not provide enough bandwidth for real-time viewing. The asset may be downloaded to the PVR, and the consumer can be alerted when the asset is fully downloaded, or alternatively, when enough of the asset is downloaded to allow the consumer to begin viewing from the PVR while downloading continues in parallel (in effect using the PVR as a buffering system).
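The "begin viewing while downloading continues" case above admits a simple worked calculation: with a constant download rate slower than the playback rate, the buffer is tightest at the very end of playback, so viewing can start as soon as the remaining download finishes no later than playback would. The constant-rate assumption and function name are illustrative simplifications.

```python
def earliest_start_seconds(asset_seconds, asset_bytes, download_bps):
    """Earliest point (seconds after download begins) at which playback
    from the PVR buffer can start without ever stalling, assuming a
    constant download rate (a simplifying assumption).

    Playback consumes asset_bytes evenly over asset_seconds. If the
    download is at least as fast as playback, viewing can start at 0;
    otherwise start late enough that download and playback end at the
    same moment, which is the binding constraint for constant rates.
    """
    playback_bps = asset_bytes / asset_seconds
    if download_bps >= playback_bps:
        return 0.0
    download_seconds = asset_bytes / download_bps
    return download_seconds - asset_seconds
```

For example, a one-hour asset arriving at half its playback rate takes two hours to download, so playback can safely begin one hour in, halving the wait versus downloading the whole asset first.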


Another implementation variation is to change the User Interface Look & Feel to accommodate different flavors of interfaces. The system may easily be modified to provide different views or representations of the video (either as still picture or as moving video) in combination with a representation of metadata. Also different input devices can easily be supported (more advanced remote controls, keyboards, media control center consoles, etc.).


Another implementation variation is to give viewers more control/preview capabilities by presenting them with a screen that shows them the various parts of the movie that they are (about to) see. This screen can look very similar to the metadata browsing screen (or the scene selection screen typically used in many DVD titles today), and allow the viewer to get a better understanding of the flow of the movie, and give the viewer control to navigate the movie in a more user friendly manner.


Another implementation variation is to use moving video in the metadata browsing screen (instead of still pictures). The various assets can be shown as moving pictures, and only the audio of the currently selected asset would be rendered. In order to make implementation easier, the moving pictures can be low-quality, or even animated still pictures.


Although the invention has been shown and described with respect to illustrative embodiments thereof, various other changes, omissions and additions in the form and detail thereof may be made therein without departing from the spirit and scope of the invention.

Claims
  • 1. A method comprising: receiving, at a server, first customized metadata from a content portion of a video asset from a plurality of video assets from a plurality of video asset sources; receiving, at the server, second customized metadata that relates to the content portion of the video asset; in response to determining, by the server, that the second customized metadata is associated with the first customized metadata of the video asset from the plurality of video assets, storing in a database an association linking the second customized metadata to the video asset; storing in the database an indicator of a source of the video asset, the source being one of the plurality of video asset sources; and generating for display on a display device a display screen including: a selectable link including the association linking the first customized metadata, the second customized metadata, and the video asset; and a symbol corresponding to the indicator of the source of the video asset, wherein the display screen simultaneously presents the plurality of video asset sources comprising a video-on-demand source, a currently showing source, a personal video recorder source, and a future broadcast video asset source, and wherein the respective symbol is unique to each source.
  • 2. The method of claim 1, further comprising: automatically searching a match between the second customized metadata and native metadata of a second video asset of the plurality of video assets; and in response to the server determining the match between the second customized metadata and the native metadata of the second video asset, attaching the second customized metadata to the second video asset.
  • 3. The method of claim 1, further comprising: receiving, at the server, programming data related to the plurality of video assets over a network from at least one source of the plurality of video asset sources; causing display of a plurality of links associated with the plurality of video assets available from the plurality of video asset sources; and in response to a selection of one of the plurality of links, performing an action appropriate for the video asset source corresponding to the selected video asset.
  • 4. The method of claim 1, further comprising combining the first customized metadata and the second customized metadata and storing the combined metadata as associated with the video asset.
  • 5. The method of claim 1, wherein receiving, at the server, first customized metadata from a content portion of a video asset comprises: automatically extracting, by the server, metadata from the content portion of the video asset, wherein the automatic extraction of the first customized metadata is performed by inspecting closed captioning information.
  • 6. The method of claim 1, wherein receiving, at the server, first customized metadata from a content portion of a video asset comprises: automatically extracting, by the server, metadata from the content portion of the video asset, wherein the automatic extraction of the first customized metadata is performed by pattern recognition to determine a genre associated with the video asset.
  • 7. The method of claim 1, wherein receiving, at the server, first customized metadata from a content portion of a video asset comprises: automatically extracting, by the server, metadata from the content portion of the video asset, wherein the automatic extraction of the first customized metadata results in character recognition of cast within the video asset.
  • 8. The method of claim 1, wherein the second customized metadata is received from a user interface.
  • 9. The method of claim 8, wherein the user interface can be used to add or delete the second customized metadata to the video asset.
  • 10. The method of claim 1, wherein the second customized metadata relates to a topic of user interest.
  • 11. A system comprising: communication circuitry configured to access a database containing a plurality of video assets from a plurality of video asset sources; and control circuitry configured to: receive first customized metadata from a content portion of a video asset from the plurality of video assets; receive second customized metadata that relates to the content portion of the video asset; in response to determining that the second customized metadata is associated with the first customized metadata of the video asset from the plurality of video assets, store in a database an association linking the second customized metadata to the video asset; and store in the database an indicator of a source of the video asset, the source being one of the plurality of video asset sources; and generating for display on a display device a display screen including: a selectable link including the association linking the first customized metadata, the second customized metadata, and the video asset; and a symbol corresponding to the indicator of the source of the video asset, wherein the display screen simultaneously presents the plurality of video asset sources comprising a video-on-demand source, a currently showing source, a personal video recorder source, and a future broadcast video asset source, and wherein the respective symbol is unique to each source.
  • 12. The system of claim 11, further comprising: automatically searching a match between the second customized metadata and native metadata of a second video asset of the plurality of video assets; and in response to the server determining the match between the second customized metadata and the native metadata of the second video asset, attaching the second customized metadata to the second video asset.
  • 13. The system of claim 11, further comprising: receiving, at the server, programming data related to the plurality of video assets over a network from at least one source of the plurality of video asset sources; causing display of a plurality of links associated with the plurality of video assets available from the plurality of video asset sources; and in response to a selection of one of the plurality of links, performing an action appropriate for the video asset source corresponding to the selected video asset.
  • 14. The system of claim 11, further comprising combining the first customized metadata and the second customized metadata and storing the combined metadata as associated with the video asset.
  • 15. The system of claim 11, wherein receiving, at the server, first customized metadata from a content portion of a video asset comprises: automatically extracting, by the server, metadata from the content portion of the video asset, wherein the automatic extraction of the first customized metadata is performed by inspecting closed captioning information.
  • 16. The system of claim 11, wherein receiving, at the server, first customized metadata from a content portion of a video asset comprises: automatically extracting, by the server, metadata from the content portion of the video asset, wherein the automatic extraction of the first customized metadata is performed by pattern recognition to determine a genre associated with the video asset.
  • 17. The system of claim 11, wherein receiving, at the server, first customized metadata from a content portion of a video asset comprises: automatically extracting, by the server, metadata from the content portion of the video asset, wherein the automatic extraction of the first customized metadata results in character recognition of cast within the video asset.
  • 18. The system of claim 11, wherein the second customized metadata is received from a user interface.
  • 19. The system of claim 11, wherein the user interface can be used to add or delete the second customized metadata to the video asset.
  • 20. The system of claim 11, wherein the second customized metadata relates to a topic of user interest.
RELATED APPLICATIONS

This patent application is a continuation of U.S. application Ser. No. 14/798,988, filed Jul. 14, 2015, which is a continuation of U.S. application Ser. No. 11/503,476, filed Aug. 11, 2006, now U.S. Pat. No. 9,087,126. U.S. application Ser. No. 11/503,476 is a continuation-in-part of U.S. application Ser. No. 11/080,389, filed Mar. 15, 2005, and is a continuation-in-part of U.S. application Ser. No. 11/081,009, filed Mar. 15, 2005, now U.S. Pat. No. 9,396,212, both of which claim the benefit of U.S. Provisional Application No. 60/560,146, filed Apr. 7, 2004; all of which are hereby incorporated by reference.

Related Publications (1)
Number Date Country
20210152871 A1 May 2021 US
Provisional Applications (1)
Number Date Country
60560146 Apr 2004 US
Continuations (2)
Number Date Country
Parent 14798988 Jul 2015 US
Child 17125969 US
Parent 11503476 Aug 2006 US
Child 14798988 US
Continuation in Parts (2)
Number Date Country
Parent 11081009 Mar 2005 US
Child 11503476 US
Parent 11080389 Mar 2005 US
Child 11081009 US