Consumers have been benefiting from additional freedom and control over the consumption of digital media content. One example is the proliferation of personal video recorder systems (PVRs) that allow consumers to record television shows for later viewing. The adoption of PVRs has furthered interest in on-demand, consumer-driven experiences with content consumption. Examples of existing systems include on-demand digital cable, internet video streaming services, and peer-to-peer distribution networks. Other existing systems include music and video stores providing consumers with content that may be purchased and subsequently viewed on personal video or audio players.
Video catalog services list the programming available through existing video services. Such video catalog services are typically developed based on the music service or video blog service associated therewith. As a result, there are certain design limitations. For instance, the existing video catalogs are derived from a music or video blog catalog, which lacks support for concepts particular to the video space such as “channel” and “series”. In addition, there is no support for offline video catalog browsing; in existing systems, users must be online to browse the video catalog. Further, there is no support for ad-sponsored free video downloads. Existing systems also typically rely on a single source for catalog content.
Embodiments of the invention provide a catalog of media items to a user. In an embodiment, the invention aggregates catalog data received from a plurality of content providers. The catalog data is associated with a plurality of media items available from each of the content providers. The aggregation occurs based on rules to create a user catalog in a pre-defined catalog format for consumption by a user.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Other features will be in part apparent and in part pointed out hereinafter.
Corresponding reference characters indicate corresponding parts throughout the drawings.
In an embodiment, the invention includes a media content catalog service such as illustrated in
Further, although described primarily in the context of video media files, aspects of the invention may be applied to various forms of digital media, including video and multimedia files (e.g., movies, movie trailers, television shows, etc.), audio files (e.g., music tracks, news reports, audio web logs, audio books, speeches, comedy routines, etc.), media broadcasts (e.g., webcasts, podcasts, audiocasts, videocasts, video blogs, blogcasts, etc.), and images.
Aspects of the invention support both online and offline catalog browsing with a hybrid catalog request model. Alternatively or in addition to viewing the catalog online, the hybrid catalog request model enables catalog data to be downloaded by a client for offline catalog browsing, and enables catalog data to be requested on-demand.
In one embodiment, the invention interlinks, merges, aggregates, or otherwise combines catalog data from multiple content providers to create an integrated catalog with metadata from the content providers based on rules. The integrated catalog contributes to a consistent user experience. The rules are configurable and may be updated without recompiling the aggregation engine. Exemplary rules are shown in Appendix A. While the combination of the catalog data may be referred to as “interlinking” and/or “merging” in particular embodiments, aspects of the invention are operable with any process to combine the catalog data.
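Because the rules are data rather than compiled code, they can be stored externally and reloaded at runtime. The following is a minimal sketch of that idea in Python; the file name, rule fields, and matching logic are assumptions for illustration and are not prescribed by this description.

```python
import json

def load_interlinking_rules(path="interlinking_rules.json"):
    """Load rule definitions from an external file so they can be edited
    and reloaded without recompiling the aggregation engine."""
    with open(path) as f:
        return json.load(f)

def rule_matches(rule, source_item, destination_item):
    """Evaluate one configured rule against a pair of catalog entries."""
    source_value = str(source_item.get(rule["field"], "")).lower()
    destination_value = str(destination_item.get(rule["field"], "")).lower()
    return source_value == destination_value
```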
Referring again to
The content owners 106 or providers supply program content (e.g., video and/or audio files) with associated metadata. Exemplary video fundamentals provided by the content owners 106 are shown in Appendix B; Appendix B is merely one example, and other embodiments of the video fundamentals (e.g., markup language files) are contemplated. This metadata also includes the locations in the video at which ads can be inserted (e.g., ad breaks) and which ads provider is responsible for running the ad campaigns. One or more ads providers or advertisers 108 sell ads against the ad breaks. The ads providers supply ad content 110. The ads providers also run an ad engine and report collection service 112 for collecting reports of which ads have been played. Furthermore, the ads providers make available ad manifest files via an ad manifest service 114. The ad manifests may be distributed via database, stream, file, or the like. The ad manifests include information about the current ad campaigns, including which ads (or groups of ads) should be associated with which types of program content. The ad manifests also specify when the advertising may be shown and on what devices/formats. The ad manifests further include the definition of tracking events for reporting on the advertising playback (e.g., a video ad was played and can therefore be billed).
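The description does not fix a wire format for the ad manifest; the structure below is only an illustrative sketch of the information listed above, and every field name in it is an assumption.

```python
# Illustrative ad manifest contents (all field names are assumptions).
ad_manifest = {
    "campaign_id": "campaign-001",
    "ad_groups": [
        {
            "ad_ids": ["ad-123", "ad-456"],
            "applies_to": {"category": "Comedy", "video_type": "Series"},
            "schedule": {"start": "2006-08-01", "end": "2006-09-30"},
            "allowed_devices": ["personal computer", "portable player"],
        }
    ],
    "tracking_events": [
        {"event": "ad_played", "billable": True},
    ],
}
```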
Content ingestion servers 116 receive the program content supplied by the content owners 106 together with the location of the ad manifests. Content delivery networks 118 interface with the media service client 104 or other computing device associated with the user 102 to deliver the content items including program content and advertisements to the user 102.
The user 102 interfaces with the media service client 104, application, computing device, or the like that provides functionality such as browsing, searching, downloading, managing and consuming the content items. The media service client 104 downloads catalog data 142 from a catalog service 122 and allows the user 102 to browse it in search of content items. Once an item is selected for download, the corresponding ad manifest is retrieved by the media service client 104 and stored. The ad manifest for each item of program content includes the information for determining which ads should be downloaded together with the program content. The media service client 104 downloads the selected program content and associated ads. Downloading includes retrieving the program content and associated ads. Downloading may also include receiving the program content and associated ads pushed from another computing device (e.g., pushed from a server at regular intervals).
The catalog service 122 includes or has access to a memory area 130. The memory area 130 stores a plurality of interlinking rules 140. The interlinking rules 140 define the processing of input data. The processing may include interlinking, merging, or any other combination of the catalog data 142. The memory area 130 further stores the catalog data 142 from the content owners 106. The catalog data 142 is associated with a plurality of media items available from each of the content owners 106. The catalog data 142 includes metadata items describing the media items. Exemplary metadata items describe aspects of the media item such as category, genre, contributor, ratings, and roles (e.g., actors, actresses).
In one embodiment, one or more computer-readable media or other memory areas such as memory area 130 associated with the catalog service 122 have computer-executable components comprising a rules component 132, an interface component 134, an aggregation engine component 136, and a front end component 138. The rules component 132 enables configuration, by the user 102, of interlinking rules 140 for combining catalog data such as catalog data 142 from the content providers or owners 106. The interface component 134 receives, from the content owners 106, the catalog data 142 including a plurality of metadata items. Each of the plurality of metadata items includes channel metadata and group metadata. The channel metadata and group metadata describe a media item associated with the metadata item. The aggregation engine component 136 combines the catalog data 142 received by the interface component 134 at least by comparing the channel metadata and group metadata from the received catalog data 142 to identify similar media items. The front end component 138 provides the combined catalog data 142 to the user 102. In an embodiment, the rules component 132 updates the interlinking rules 140 based on input from the user 102 without recompiling the aggregation engine component 136.
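A minimal structural sketch of the four components follows, written in Python purely for illustration; the class names, method names, and the channel/group keying shown here are assumptions rather than an actual implementation.

```python
class RulesComponent:
    """Holds the interlinking rules; the rules are data and can be replaced
    at runtime without recompiling the aggregation engine."""
    def __init__(self, rules):
        self.rules = rules

    def update(self, new_rules):
        self.rules = new_rules


class InterfaceComponent:
    """Receives catalog data (metadata items) from each content provider."""
    def receive(self, provider, metadata_items):
        return [dict(item, provider=provider) for item in metadata_items]


class AggregationEngineComponent:
    """Combines catalog data by comparing channel and group metadata."""
    def __init__(self, rules_component):
        self.rules_component = rules_component

    def combine(self, catalog_data):
        combined = {}
        for item in catalog_data:
            # Items with matching channel and group metadata are treated as
            # the same media item and merged into a single entry.
            key = (item.get("channel"), item.get("group"))
            combined.setdefault(key, {}).update(item)
        return list(combined.values())


class FrontEndComponent:
    """Provides the combined catalog data to the user."""
    def provide(self, combined_catalog):
        return combined_catalog
```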
In an embodiment of the invention, a computer, computing device, or other general purpose computing device is suitable for use as the catalog service 122 in the figures illustrated and described herein. The computer has one or more processors or processing units and access to a memory area such as memory area 130.
The computer typically has at least some form of computer readable media. Computer readable media, which include both volatile and nonvolatile media, removable and non-removable media, may be any available medium that may be accessed by computer. By way of example and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. Those skilled in the art are familiar with the modulated data signal, which has one or more of its characteristics set or changed in such a manner as to encode information in the signal. Wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media, are examples of communication media. Combinations of any of the above are also included within the scope of computer readable media.
In operation, a computing device executes computer-executable instructions such as those illustrated in the figures to implement aspects of the invention.
Referring next to
The received catalog data is aggregated or merged at 208 based on the configured interlinking and/or merging rules. In an embodiment, interlinking comprises comparing the metadata items from the catalog data to identify similar media items. Aspects of the invention are operable with a plurality of techniques for identifying similar media items, or other forms of metadata matching, including, for example, fuzzy matching techniques. Merging and aggregating the catalog data includes parsing the catalog data received from the plurality of content providers and assigning the parsed catalog data to one or more fields of the multi-field schema.
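As a rough illustration of the parse-and-assign step and of one possible fuzzy comparison, the following sketch uses Python's difflib; the schema fields, the 0.85 threshold, and the merge policy are assumptions for the example, not the claimed method.

```python
from difflib import SequenceMatcher

SCHEMA_FIELDS = ("title", "channel", "group", "genre", "contributors")  # assumed fields

def parse_to_schema(raw_item):
    """Assign a provider's raw metadata to the fields of the multi-field schema."""
    return {field: raw_item.get(field, "") for field in SCHEMA_FIELDS}

def similar(a, b, threshold=0.85):
    """Fuzzy comparison of two metadata values (one of many possible techniques)."""
    return SequenceMatcher(None, str(a).lower(), str(b).lower()).ratio() >= threshold

def merge_catalogs(provider_catalogs):
    """Interlink items from several providers into one merged list."""
    merged = []
    for catalog in provider_catalogs:
        for raw_item in catalog:
            item = parse_to_schema(raw_item)
            match = next((m for m in merged
                          if similar(m["title"], item["title"])
                          and similar(m["channel"], item["channel"])), None)
            if match is None:
                merged.append(item)
            else:
                # Fill in fields the existing entry is missing.
                match.update({k: v for k, v in item.items() if v and not match.get(k)})
    return merged
```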
The user catalog is generated at 210 or formatted from the aggregated catalog data based on the pre-defined catalog format for consumption by the user. In an embodiment, generating the user catalog includes propagating the user catalog to a front end database for access by the network interface. The generated user catalog is provided to the user at 212, for example, by a network interface, on a scheduled basis or on demand.
Aspects of the invention further include transmitting a portion of the generated user catalog to the user. The portion of the generated user catalog may represent an incremental update to a previously transmitted user catalog. Transmission of the incremental update reduces download time and conserves bandwidth. The incremental updates reflect the changes made to the catalog since the last download. Such options may be based on user preferences or system-defined preferences. Exemplary preferences may direct, for example, the download of a full catalog if the user is operating from a personal computer or download incremental updates if the user is operating from a mobile device.
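A minimal sketch of that preference-driven choice, assuming a simple device-type check as in the example above:

```python
def select_catalog_payload(device_type, full_catalog, changes_since_last_download):
    """Return a full catalog for a personal computer and an incremental
    update for a mobile device (the example policy from the text)."""
    if device_type == "mobile":
        return {"type": "incremental", "items": changes_since_last_download}
    return {"type": "full", "items": full_catalog}
```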
In one embodiment, one or more computer-readable media have computer-executable instructions for performing the method illustrated in
Referring next to
Referring next to
Referring next to
Referring next to
Referring next to
where SS represents the step similarity, RS represents the rule similarity, RW represents the rule weight, and RC represents the rule count.
In one embodiment, the fuzzy matching algorithm works on two tables at a time (e.g., “source” and “destination” tables). Multiple columns of each table are considered for the fuzzy lookup. Each column may have a different significance to the rule; therefore, the columns are weighted. To control the quality of the interlinking, each column has a minimum similarity. If the similarity is less than the minimum, it is not considered. The summation of the weighted similarity of all columns defines the similarity of the rule. An exemplary summation is shown below in equation (2),
wherein RS represents the rule similarity, CS represents the column similarity, CW represents the column weight, and CC represents the column count.
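Equations (1) and (2) themselves are not reproduced in this text. The sketch below implements one plausible reading of them, in which the step similarity is the sum of weighted rule similarities divided by the rule count, and the rule similarity is the sum of weighted column similarities divided by the column count; that normalization by the counts is an assumption.

```python
def rule_similarity(column_similarities, column_weights, column_minimums):
    """One reading of equation (2): weighted column similarities over the
    column count, ignoring columns below their minimum similarity."""
    count = len(column_similarities)
    total = 0.0
    for cs, cw, cmin in zip(column_similarities, column_weights, column_minimums):
        if cs >= cmin:  # quality control: columns below the minimum are not considered
            total += cs * cw
    return total / count if count else 0.0

def step_similarity(rule_similarities, rule_weights):
    """One reading of equation (1): weighted rule similarities over the rule count."""
    count = len(rule_similarities)
    total = sum(rs * rw for rs, rw in zip(rule_similarities, rule_weights))
    return total / count if count else 0.0
```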
Data normalization is performed at the column level. In one embodiment, multiple normalization processes are defined for each column and executed in a predefined order.
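For example, a column-level normalization pipeline might look like the following minimal sketch; the specific processes shown are assumptions.

```python
def normalize_column(value, processes):
    """Apply the column's normalization processes in their predefined order."""
    for process in processes:
        value = process(value)
    return value

# Illustrative processes for a title column.
title_processes = [str.strip, str.lower, lambda s: s.replace("&", "and")]

normalize_column("  Law & Order ", title_processes)  # -> "law and order"
```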
In “row level interlinking,” the names, birth dates, birth places, etc. of two people may be compared. These attributes are stored in a single row because of their one-to-one relationship with the people. A result row is built from multiple source rows. In “collection level interlinking,” the people's works (e.g., songs they have sung, movies they have starred in, etc.) are compared. A result collection is built from multiple source collections. To perform collection level interlinking, each item of the collection is compared as with row level interlinking. When the items are treated as a collection, the collection level similarity may be calculated and compared to a predefined minimum collection similarity. There are many ways of defining collection similarity, as shown in equations (3), (4), and (5).
CS=MC/SCC (3)
CS=MC/DCC (4)
CS=MC/(SCC+DCC−MC) (5)
where SCC represents a source collection count, DCC represents a destination collection count, MC represents a match count, and CS represents a collection similarity. Alternatively or in addition to a calculated relevant number, collection similarity may also be defined as a fixed number.
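The three definitions translate directly into code; the sketch below assumes nonzero collection counts.

```python
def collection_similarity(match_count, source_count, destination_count, definition=5):
    """Collection similarity per equations (3), (4), or (5)."""
    if definition == 3:
        return match_count / source_count                                   # CS = MC / SCC
    if definition == 4:
        return match_count / destination_count                              # CS = MC / DCC
    return match_count / (source_count + destination_count - match_count)   # equation (5)
```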
In one embodiment, the collection level interlinking is performed after the row level interlinking. In an embodiment, the fuzzy matching algorithm uses a set of pre-defined working tables as the input and output at runtime.
To differentiate columns from different sources in a row merge, or to differentiate collections from different sources in a collection merge, a priority is assigned to each column or collection. There are two types of priority: static (e.g., predefined) and dynamic (e.g., content based). A static priority is a predefined, fixed value that does not change based on the content of the data. A dynamic priority is based on the content of the data. For example, for a row merge, the priority may be based on the value of the content or the string length of the content. For a collection merge, the priority may be based on the collection item count and a maximum and/or minimum value of a column.
In an embodiment, the priorities from different sources may be the same, creating a conflict. There are multiple ways of handling such a conflict, including selecting one of the sources, or concatenating or summing the content from the conflicting sources. When performing a row merge, selecting one of the sources includes selecting one of the source column contents as the result column content. Concatenating the content comprises concatenating the string type data from the source columns for use as the result column content. Summing the content comprises summing the numerical data in the source column contents as the result column content.
When performing a collection merge, selecting one of the sources includes selecting one of the collections as the result collection. Concatenating the content comprises adding the source collections items to the result collection.
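The conflict-handling strategies described for row and collection merges can be summarized in a short sketch; the strategy names used here are assumptions.

```python
def resolve_row_conflict(values, strategy):
    """Combine equal-priority column contents from different sources."""
    if strategy == "select":
        return values[0]                          # use one source's content
    if strategy == "concatenate":
        return "".join(str(v) for v in values)    # string type data
    if strategy == "sum":
        return sum(values)                        # numerical data
    raise ValueError(f"unknown strategy: {strategy}")

def resolve_collection_conflict(collections, strategy):
    """Combine equal-priority collections from different sources."""
    if strategy == "select":
        return list(collections[0])               # use one collection as the result
    # "concatenate": add all source collection items to the result collection.
    return [item for collection in collections for item in collection]
```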
Aspects of the invention may be implemented as a class of application programming interface routines. In an embodiment, MergeWorkerBase is the base abstract class, and different types of merges are implemented as different subclasses thereof. A RowMergeWorker class implements the logic of a row merge. A CollectionMergeWorker class implements the logic of a collection merge.
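The class names below come from the description; everything else, including the use of Python and the merge bodies, is a structural sketch only.

```python
from abc import ABC, abstractmethod

class MergeWorkerBase(ABC):
    """Base abstract class for the different types of merges."""
    @abstractmethod
    def merge(self, sources):
        """Combine content from the prioritized sources into a result."""

class RowMergeWorker(MergeWorkerBase):
    """Implements the logic of a row merge."""
    def merge(self, sources):
        # Use the column content from the highest-priority source row.
        return max(sources, key=lambda s: s["priority"])["content"]

class CollectionMergeWorker(MergeWorkerBase):
    """Implements the logic of a collection merge."""
    def merge(self, sources):
        # Build the result collection from all of the source collections.
        return [item for source in sources for item in source["items"]]
```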
Although described in connection with an exemplary computing system environment, embodiments of the invention are operational with numerous other general purpose or special purpose computing system environments or configurations. The computing system environment illustrated in
The following examples further illustrate embodiments of the invention. The figures, description, and examples herein as well as elements not specifically described herein but within the scope of aspects of the invention constitute means for aggregating the metadata items from the catalog data into the merged catalog data.
The order of execution or performance of the operations in embodiments of the invention illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the invention may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the invention.
Embodiments of the invention may be implemented with computer-executable instructions. The computer-executable instructions may be organized into one or more computer-executable components or modules. Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
When introducing elements of aspects of the invention or the embodiments thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
Having described aspects of the invention in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the invention as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
The following are exemplary business rules for aggregating content from multiple providers and from a single provider.
Video Aggregation among Multiple Providers
Partner A and Partner B have Movie A in their inventory, and they both provide the cover art and metadata for the movie. Even though the movie can be purchased/downloaded from the two content providers, aspects of the invention provide a consistent experience to users for browsing the movie in the catalog described herein. For example, the cover art and its movie metadata only show up once in the initial catalog browsing experience. When the user decides to purchase or download Movie A, the service presents the offers from both Partner A and Partner B and allows the user to choose from which partner to purchase.
An exemplary configuration for the business rules for aggregating the video catalog is shown below.
Interlinking the video from Partner A and Partner B
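The original exemplary configuration is not reproduced in this text. The sketch below only illustrates the kind of rule it describes, reusing the column weight and minimum-similarity concepts from the detailed description; the field names and numeric values are assumptions.

```python
# Illustrative only: an interlinking rule that lets Movie A from Partner A
# and Partner B appear once in the catalog when the weighted similarity of
# these columns meets the rule's minimum.
movie_interlink_rule = {
    "source": "PartnerA.Video",
    "destination": "PartnerB.Video",
    "minimum_similarity": 0.9,
    "columns": [
        {"name": "Title",       "weight": 0.6, "minimum": 0.8},
        {"name": "ReleaseYear", "weight": 0.2, "minimum": 1.0},
        {"name": "Contributor", "weight": 0.2, "minimum": 0.7},
    ],
}
```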
Partner A provides two similar law-related television shows. To help the user find the desired show, the service displays all episodes for both shows. The business rules to implement these actions are based on the following pattern: the video title starts with “Law” and the video type is either a series, mini-series, limited series, or movie.
Exemplary video fundamentals data are shown below as an extensible markup language (XML) file.
An exemplary video catalog schema is shown below.
Video Catalog Element
Video Element
The Video element defines a unique piece of video content, such as a Movie, a TV episode, a Music Video, or User Generated Content (UGC). Every unique piece of video content will be a unique Video element. If the same content is released in different formats (such as HD vs. SD, or in different languages), each format will be a unique “Video Instance” element. However, they will all belong to the same “Video” parent element.
VideoInstance Element
A Video Instance is a different format, version, or language of a Video (i.e., the content is the same).
VideoFile Element
The VideoFile element contains information about the physical video file (e.g., missionimpossible.wmv) for downloading/streaming. Generally, each VideoInstance will have only one VideoFile element; however, when the file size for a given VideoInstance is too big (e.g., a Titanic HD version), the content might be broken into multiple smaller files, resulting in multiple VideoFile elements. For details on the data relationships, please refer to the Video Catalog Data Relationship Diagram.
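The nesting described above might look roughly like the following fragment; the attribute names, values, and URLs are illustrative assumptions rather than the actual schema.

```xml
<Video id="v-1001" type="Movie" title="Mission Impossible">
  <VideoInstance id="vi-1" definitionFormat="SD" language="en">
    <VideoFile url="http://cdn.example.com/missionimpossible.wmv" sizeMB="700"/>
  </VideoInstance>
  <VideoInstance id="vi-2" definitionFormat="HD" language="en">
    <!-- A large HD instance might be broken into multiple VideoFile elements. -->
    <VideoFile url="http://cdn.example.com/missionimpossible_hd_1.wmv" sizeMB="2048"/>
    <VideoFile url="http://cdn.example.com/missionimpossible_hd_2.wmv" sizeMB="2048"/>
  </VideoInstance>
</Video>
```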
AudioFormat Element
AudioFormat element contains audio format information for the video.
VideoFormat Element
VideoFormat element contains video format information for the video.
Series Element
Series element contains information for a TV series such as season, episode information etc.
VideoGroup Element
The Video Group element is for grouping purposes. Below are a few scenarios using Video Group for grouping.
Trailer Element
The Trailer element enables a provider to associate video trailers with a Video (movie or TV episode), a TV series, a season of a TV series, a Video group, or a Video Instance.
Poster Element
The Poster element enables a provider to associate poster image files with a Video (movie or TV episode), a TV series, a season of a TV series, a Video group, or a Video Instance.
ImageFormat Element
The ImageFormat element contains image format information for a poster or supplementary image file.
Offer Element
The Offer element enables a provider to associate an offer with a Video (movie or TV episode), a TV series, a season of a TV series, a Video group, or a Video Instance.
SupplementaryFile Element
The SupplementaryFile element enables a provider to associate any supplementary files that are non-poster and non-trailer with the Video. Examples include screenshots for the video, licensing agreement files, attribution files, etc.
Category Element
Category contains category/genre information for Video content. You can associate multiple categories with a Video. For example, for a Video that is a “Romantic Comedy”, you can associate two Category IDs with it: “Romance” and “Comedy”.
Role Element
Role contains the cast and character information for the video.
Flag Element
Flag contains attribute information associated with the video.
Rating Element
Rating contains parental rating information associated with the video. Multiple ratings can be associated with a given video.
Contributor Element
Contributor contains the actor/actress/cast info.
VChannel Element
VChannel is the virtual video channel; it can be a traditional network channel or a virtual channel defined by the provider.
Marker Element
Marker contains position information on inserting ads. It is used in Ad-sponsored Video content.
Tag Element
Tag defines a list of “keywords” associated with the video and is used to facilitate search.
AdGroup Element
AdGroup contains the ad information for Ad-sponsored Video content.
Video Type
Audio Encoding Type
Definition Format Type
Aspect Ratio Type
Video Encoding Type
Download Type
Image Type
Associate Element Type
Supplementary Content Type
Supplementary File Type
Category ID
Role ID
Flag ID
Rating Type ID
This application is a continuation of U.S. patent application Ser. No. 11/461,589, filed Aug. 1, 2006, the entire contents of which are incorporated herein by reference.