This patent claims the benefit of U.S. patent application Ser. No. 12/101,738, filed on Apr. 11, 2008, which is hereby incorporated by reference in its entirety.
The present disclosure relates generally to media encoding and, more particularly, to methods and apparatus to generate and use content-aware watermarks.
Media-centric companies are often interested in tracking the number of times that audience members are exposed to media compositions (e.g., television programs, motion pictures, internet videos, radio programs, etc.). To track such exposures, companies often generate audio and/or video signatures (i.e., a representation of some, preferably unique, portion of the media composition or the signal used to transport the media composition) of media compositions that can be used to determine when those media compositions are presented to audience members. Additionally, companies embed identification codes into media compositions to monitor presentations of those media compositions to audience members by comparing identification codes retrieved from media compositions presented to audience members with reference identification codes stored in a reference database in association with information descriptive of the media compositions. These identification codes can also be referred to as watermarks.
Configurations of data collection systems to collect signatures and/or watermarks from media compositions typically vary depending on the equipment used to receive, process, and display media signals at each monitored consumption site (e.g., a household). For example, media consumption sites that receive cable television signals, satellite television signals, and/or Internet signals typically include set top boxes (STBs) and/or computers that receive media signals from a cable, a satellite, and/or an Internet service provider. Media delivery systems configured in this manner may be monitored using hardware, firmware, and/or software that interfaces with the STB to extract information (e.g., codes) or generate signal information (e.g., signatures) therefrom.
Although the following discloses example methods, apparatus, and systems including, among other components, software executed on hardware, it should be noted that such methods, apparatus, and systems are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of these hardware and software components could be embodied exclusively in hardware, exclusively in software, or in any combination of hardware and software. Accordingly, while the following describes example methods, apparatus, and systems, the examples provided are not the only way to implement such methods, apparatus, and systems.
The example methods and apparatus described herein can be used to generate content-aware watermarks. A watermark used in audio/video content is a piece of information (e.g., a code) that is embedded in the audio/video content. In some instances, a watermark may be used to establish ownership of audio/video media compositions by designing the watermark to be indicative of a particular entity or embedding information into the watermark to identify the entity. Additionally or alternatively, a watermark can include an identification code that can be used to correlate the watermark with an identity of the audio/video media compositions by comparing the code in the watermark to codes stored in a reference database in association with respective audio/video identifiers (e.g., titles, program names, etc.).
Unlike traditional watermarks, which do not by themselves include information indicative of the content of the audio and/or video media compositions in which they are embedded, the proposed example methods and apparatus can be used to generate content-aware watermarks that include descriptive information pertaining to the audio/video content of their respective media compositions. In some instances, the content-aware watermarks can also be generated to include content-descriptive information corresponding to the locations in the audio/video composition at which the content-aware watermarks are embedded. For example, a content-aware watermark embedded in a scene of a video may include information indicative of a product or service (e.g., a soft drink, a financial service, a retail establishment chain, etc.) appearing (e.g., advertised) in that scene.
To generate the content-aware watermarks, the example methods and apparatus can be configured to receive audio/video content, decode closed captioning information in the audio/video content, and encode select words or phrases of the closed captioning information into the watermark. Closed caption text represents words and phrases that are spoken or otherwise presented on an audio track of media to convey messages, ideas, etc. to audience members. Selected words or phrases can be used as keywords representative of the audio/video content and/or scenes presented or mentioned in the media composition. In some example implementations, a keyword can be indicative of media content that is presented at a point in a media composition where a content-aware watermark is embedded in the media composition. In example implementations in which information descriptive of particular scenes or points in a media presentation is not desired, words or phrases can be selected from the beginning (or any other portion) of a media composition and encoded for audio and/or video watermark insertion in one or more locations of the media composition. In example implementations in which closed captioning information is not available, the proposed example methods and apparatus can be configured to use a speech-to-text converter to convert audio-track speech to text that can then be used to encode select words or phrases of audio/video content into a watermark.
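By way of illustration only, and not as the claimed implementation, the following minimal Python sketch shows how select words or phrases might be picked out of decoded closed caption text. The term list, function name, and sample caption are hypothetical stand-ins; in the described system, such terms of interest could instead be held in a reference database.

```python
import re

# Hypothetical list of pre-identified terms of interest (brand names, product
# names, place names, etc.); a deployed system could store such terms in a
# reference database rather than in code.
TERMS_OF_INTEREST = ["bubblee", "soda", "new york"]

def select_keywords(closed_caption_text: str) -> list[str]:
    """Return the terms of interest that appear in a closed caption excerpt."""
    normalized = closed_caption_text.lower()
    tokens = set(re.findall(r"[a-z']+", normalized))
    selected = []
    for term in TERMS_OF_INTEREST:
        # Multi-word phrases are matched against the full text; single words
        # are matched against the tokenized text.
        if (" " in term and term in normalized) or term in tokens:
            selected.append(term)
    return selected

print(select_keywords("Enjoy an ice-cold Bubblee soda in New York this summer."))
# ['bubblee', 'soda', 'new york']
```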
In addition to or instead of using closed captioning information or speech-to-text conversion, the proposed example methods and apparatus may be configured to detect metadata (e.g., title, program name, international standard audiovisual number (ISAN), or any other identifier information), detect scene changes, detect blank frames or MPEG splice points, and/or detect logos and generate watermarks based on any one or more of the detected information. In the illustrated examples described herein, metadata refers to supplementary information describing specific instances of content in a media composition such as, for example, a creation date and time, a content ID of the media composition, creator information, blank frame information, decode information associated with watermarks, keyframe information, scene change information, and/or audio event information. For example, metadata may include temporal and/or spatial information defining events such as blank frames, scene changes, or audio events in the media composition. In some examples, the temporal information includes timestamps associated with specific times in the media composition at which events occur. Often, the timestamps include a start time and an end time that define the start and stop boundaries associated with an occurrence of an event. The spatial information includes location descriptions such as (x, y) locations on, for example, a video monitor on which an event appears. For example, if an event includes a blank frame, the (x, y) locations will define an entire video presentation screen. For data storage efficiency, the example methods and apparatus can be configured to generate coded versions of the detected information to, for example, compress the information.
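As a non-limiting sketch, the temporal and spatial event metadata described above might be represented along the following lines. The field names and the 1920x1080 screen size are assumptions made for this illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MediaEvent:
    """Illustrative container for event metadata (names assumed for this sketch)."""
    kind: str                  # e.g., "blank_frame", "scene_change", "audio_event"
    start_time: float          # timestamp (seconds) marking the start boundary of the event
    end_time: float            # timestamp (seconds) marking the stop boundary of the event
    region: Optional[Tuple[int, int, int, int]] = None  # (x, y, width, height) on the video monitor

# A blank frame occupies the entire presentation screen (here assumed to be 1920x1080).
blank_frame = MediaEvent("blank_frame", start_time=12.0, end_time=12.033,
                         region=(0, 0, 1920, 1080))
print(blank_frame)
```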
Keywords (e.g., the selected words or phrases) embedded in the content-aware watermarks can be used to detect when people or audience members are exposed to or consume particular media content. For example, a media meter installed in an audience member's home and configured to monitor advertisement exposure can extract keywords from watermarks associated with television and/or radio advertisements to determine the product advertisements or brand advertisements to which the household members were exposed. For example, the brand name of a soft drink may be a keyword embedded in the watermark that can be used to determine that a household member was exposed to an advertisement for that soft drink. In some example media measurement applications, the generated watermarks can be extracted at an audience member household from audio/video presented to an audience member of the household. The extracted watermarks can then be forwarded to a central facility via, for example, a network connection, and the central facility can decode the words or phrases encoded in each watermark for subsequent analysis. Example analyses may include identifying brand name and/or product name keywords to determine advertisement exposures to particular brands and/or products. Other example analyses can include comparing the words or phrases with those stored in a reference library in association with audio/video identifiers (e.g., movie titles, show titles, television programming names, etc.).
In addition, the keywords embedded in the content-aware watermarks can be used to enable searching for particular audio/video content stored in a database or throughout a network (e.g., an intranet, the Internet, etc.). For example, if a person is interested in finding video presentations mentioning a particular sports drink, the person can search the database or the network using the name of that sports drink. In some example implementations, internet search engine service providers or internet media providers could index the keywords in the content-aware watermarks to enable search engine users to find audio/video media of interest anywhere on the Internet or in particular data stores corresponding to the internet search engine service providers or internet media providers.
Turning to
In the illustrated example, a personal computer 110 may be coupled via an internetwork 112 (e.g., the Internet) to the internet video media server 102a, the internet audio content media server 102b, and/or the advertising media server 102c. The personal computer 110 may be used to decode and present media content received from any of those servers 102a-c. The personal computer 110 includes a content-aware watermark decoder 114 to extract content-aware watermarks from presented media content and to decode embedded keywords from the content-aware watermarks. In the illustrated example, the personal computer 110 communicates the extracted keywords to the central facility 108 for subsequent analysis. For example, an analysis server 116 in the central facility 108 can use the keywords to determine the number of times that users of the personal computer 110 were exposed to particular media content or to advertisements for particular products or brands. That is, if a keyword is the name of a financial service, the analysis server 116 can determine the number of times that users of the personal computer 110 were exposed to the name for that financial service (whether in an advertisement or elsewhere (e.g., news stories)) based on the number of times the personal computer 110 communicates the same financial service keyword to the central facility 108. In other example implementations, the analysis server 116 can compare received keywords to keywords stored in a reference database 118 in association with media identifiers, brand names, product names, etc. The reference database 118 may additionally or alternatively be used when generating content-aware watermarks by storing pre-determined or pre-identified terms of interest that are to be selected from media compositions for encoding into content-aware watermarks to be embedded in the media compositions. Additionally or alternatively, the reference database 118 may be configured to store code books having keywords stored in association with unique identifier codes corresponding to unique identifier codes encoded in content aware watermarks. When decoders extract the unique identifiers from content aware watermarks and communicate the unique identifiers to the central facility 108, the analysis server 116 can compare the received unique identifiers with unique identifiers in the reference database 118 to determine exposures to particular media content. The analysis server 116 can store exposure levels for keywords, advertisements and/or other audio/video media in an exposure database 120.
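To illustrate the code book idea only, a hypothetical mapping from unique identifier codes to keywords might be consulted at the central facility as sketched below; the specific codes and keywords are invented for this example.

```python
# Hypothetical code book: unique identifier codes stored in association with keywords.
CODE_BOOK = {
    0x0001: "bubblee",
    0x0002: "soda",
    0x0003: "new york",
}

def resolve_identifiers(identifier_codes: list[int]) -> list[str]:
    """Map identifier codes extracted from content-aware watermarks back to keywords."""
    return [CODE_BOOK[code] for code in identifier_codes if code in CODE_BOOK]

print(resolve_identifiers([0x0001, 0x0003]))  # ['bubblee', 'new york']
```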
In some example implementations, the personal computer 110 may be configured to execute analysis processes to perform at least some or all of the analyses described above as being performed by the analysis server 116. In such example implementations, the personal computer 110 communicates the results of its analyses to the central facility 108 for storage in the exposure database 120 and/or for further processing by the analysis server 116. In yet other example implementations, the personal computer 110 may not extract keywords from content-aware watermarks but may instead communicate the content-aware watermarks to the central facility 108. The analysis server 116 may then extract the keywords from the content-aware watermarks for subsequent analysis.
In the illustrated example, a television 122 receives media content from the advertising media server 102c, the television media server 102d, and/or the motion picture media server 102e via a mediacast network 124. The mediacast network 124 may be an analog and/or digital broadcast network, a multicast network, and/or a unicast network. In the illustrated example, the television 122 is coupled to a media meter 126 having a content-aware watermark decoder 114 to extract content-aware watermarks from presented media content and to decode embedded keywords from the content-aware watermarks. The decoder 114 of the media meter 126 is substantially similar or identical to the decoder 114 of the personal computer 110. In addition, the media meter 126 operates in substantially the same way as the personal computer 110 with respect to extracting, decoding, and/or processing content-aware watermarks. That is, the media meter 126 can be configured to extract keywords from content-aware watermarks and communicate the keywords to the central facility 108. Alternatively or additionally, the media meter 126 can communicate the content-aware watermarks to the central facility 108 so that the analysis server 116 at the central facility can extract the keywords. In some example implementations, the media meter 126 may be configured to analyze the keywords for determining media exposure and may communicate the analysis results to the central facility 108 for storage in the exposure database 120 and/or for further processing by the analysis server 116.
In the illustrated example, a search engine server 110 may be configured to index media compositions stored in the media servers 102a-c based on keywords in the content-aware watermarks embedded in those media compositions. In this manner, users accessing the search engine service via personal computers (e.g., the personal computer 110) connected to the internetwork 112 can use text searches to search for media compositions based on the keywords in the content-aware watermarks. This enables more comprehensive, content-based searching of media compositions (e.g., video files, audio files, etc.) than search processes that rely on file names, user-generated tags, or user-generated descriptive information about media files, since such file names, tags, and descriptions may not include keywords that are present in the content and of interest to a user searching for that content.
After the content-aware watermark encoder 104 generates the content-aware watermark 206, a watermark embedder 208 can embed the watermark 206 in one or more frames of the media excerpt 202 using any suitable watermark embedding technique. The watermark embedder 208 can be configured to embed the watermark 206 in a video portion of the media excerpt 202 and/or an audio portion of the media excerpt 202. In some example implementations, embedding a watermark in a video domain enables using relatively larger watermarks because of the relatively larger bandwidth available for video than is typically available for audio.
Turning to
Turning now to
Although the example implementations of
Some or all of the data interface 402, the closed caption text decoder 404, the speech-to-text converter 406, the metadata detector 408, the media features detector 410, the word selector 412, the data compressor 414, and/or the watermark encoder 416, or parts thereof, may be implemented using instructions, code, and/or other software and/or firmware, etc. stored on a machine accessible medium and executable by, for example, a processor system (e.g., the example processor system 1310 of
To transmit and receive data, the example content-aware watermark encoder 104 is provided with the data interface 402. In the illustrated example, the data interface 402 can be used to receive media composition data (e.g., audio data, video data, etc.), closed caption data, metadata, etc. from media sources (e.g., computer interfaces, cable boxes, televisions, media players, etc.), and communicate content-aware watermarks to, for example, the watermark embedder 208 (
To extract or decode closed caption text from media data received via the data interface 402, the example content-aware watermark encoder 104 is provided with the closed caption text decoder 404. In some example implementations, the closed caption text decoder 404 may be omitted from the example content-aware watermark encoder 104 and the content-aware watermark encoder 104 may be configured to receive decoded closed caption text from a closed caption text decoder of a media source coupled to the data interface 402.
To convert speech from media audio tracks to text, the example content-aware watermark encoder 104 is provided with the speech-to-text converter 406. In the illustrated example, the speech-to-text converter 406 is used to recognize words in media that does not have closed caption text associated therewith or in situations where closed caption text cannot be obtained (e.g., failure or omission of the closed caption text decoder 404). In example implementations in which speech-to-text conversion capabilities are not desired, the speech-to-text converter 406 can be omitted from the example content-aware watermark encoder 104.
To detect metadata in media, the example content-aware watermark encoder 104 is provided with the metadata detector 408. In the illustrated example, the example content-aware watermark encoder 104 includes the media features detector 410 configured to detect particular characteristics or features (e.g., scene changes, blank frames, MPEG splice points, logos, etc.) in media content and generate metadata descriptive of those characteristics or features.
To select words or phrases to form keywords, the example content-aware watermark encoder 104 is provided with the word selector 412. In the illustrated example, the word selector 412 is configured to select words or phrases in metadata, closed caption text, and/or audio tracks indicative or descriptive of respective media content. Additionally or alternatively, the word selector 412 may be configured to select words or phrases that might be of interest to a user searching for media content. To select the words or phrases, the word selector 412 may be configured to use weighted numeric factors or values assigned to pre-determined or pre-identified terms stored in the reference database 118 of
To compress data (e.g., keywords, unique identifiers, metadata, etc.) for insertion in content-aware watermarks, the example content-aware watermark encoder 104 is provided with the data compressor 414. In some example implementations, the amount of data space in media frames, packets, etc. may be limited and, thus, compressing keywords and/or other data used to form a content-aware watermark helps ensure that the watermark can be successfully embedded in the media. In some example implementations, the data compressor 414 may be configured to compress data using an encoding technique involving truncating a keyword to generate a partial keyword of a particular character length and encoding each character of the partial keyword using a predetermined number of bits (e.g., five bits) to form a character-bit compressed partial keyword. For example, the data compressor 414 may be configured to truncate keywords to their first five characters and encode each of the first five characters using five bits per character for a total of twenty-five bits per keyword. For the English language, each alphabetic character, which is typically represented in binary using ASCII binary code, can be assigned a relatively shorter unique bit combination such that when a particular alphabetic character appears in a keyword in ASCII binary code, the data compressor 414 can represent that alphabetic character using its associated, relatively shorter unique bit combination (e.g., ‘A’=00001 (binary), ‘B’=00010 (binary), ‘C’=00011 (binary), etc.). If a particular media composition allows 50 bits every two seconds for watermarking purposes, ten characters can be transmitted via one or more content-aware watermarks every two seconds (i.e., (25 bits/second)/(5 bits/character)=5 characters per second or 10 characters every two seconds).
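A minimal sketch of the character-bit compression described above follows. It assumes purely alphabetic keywords and uses the ‘A’=00001 assignment from the example; the function names and packing order are choices made only for this illustration.

```python
def compress_keyword(keyword: str, max_chars: int = 5) -> int:
    """Truncate a keyword and pack each character into 5 bits ('A'=00001, 'B'=00010, ...)."""
    partial = keyword.upper()[:max_chars]
    packed = 0
    for ch in partial:                       # assumes alphabetic characters only
        packed = (packed << 5) | (ord(ch) - ord('A') + 1)
    return packed

def decompress_keyword(packed: int, num_chars: int) -> str:
    """Reverse the 5-bit packing back into an upper-case partial keyword."""
    chars = []
    for _ in range(num_chars):
        chars.append(chr((packed & 0b11111) - 1 + ord('A')))
        packed >>= 5
    return "".join(reversed(chars))

packed = compress_keyword("BUBBLEE")         # 'BUBBL' -> 25 bits
print(f"{packed:025b}", decompress_keyword(packed, 5))
# At 50 watermark bits every two seconds, ten such characters fit in the budget
# (25 bits/second divided by 5 bits per character = 5 characters per second).
```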
In other example implementations, the data compressor 414 may be configured to encode keywords by discarding predetermined alphabetic characters. For example, for each keyword selected by the word selector 412, the data compressor can omit certain vowels or all vowels from the keyword to form a partial keyword before embedding the keyword in a content-aware watermark. Alternatively, the data compressor 414 can omit certain consonants or a mix of vowels and consonants from a keyword to generate a partial keyword.
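The vowel-dropping encoding, and the later reconstruction of such partial keywords by a decoder, might look roughly like the sketch below. Matching against a small reference vocabulary stands in here for the spell checker process mentioned elsewhere in this disclosure, and the vocabulary itself is invented for illustration.

```python
VOWELS = set("aeiou")

def drop_vowels(keyword: str) -> str:
    """Form a partial keyword by discarding vowels, as in one encoding described above."""
    return "".join(ch for ch in keyword.lower() if ch not in VOWELS)

# Hypothetical reference vocabulary; a deployed decoder could instead consult a
# spell checker or a reference database of pre-identified terms.
REFERENCE_VOCABULARY = ["bubblee", "soda", "new york", "summer"]

def reconstruct_partial_keyword(partial: str) -> list[str]:
    """Return reference terms whose vowel-dropped form matches a received partial keyword."""
    return [term for term in REFERENCE_VOCABULARY if drop_vowels(term) == partial.lower()]

print(drop_vowels("Bubblee"))                 # 'bbbl'
print(reconstruct_partial_keyword("bbbl"))    # ['bubblee']
```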
Additionally or alternatively, the data compressor 414 can be configured to perform Huffman or Arithmetic coding processes to encode keywords selected by the word selector 412 and/or partial keywords generated by the data compressor 414 as described above. In such an implementation, the data compressor 414 can be configured to assign fewer bits to encode characters that are more probable of being present in keywords (i.e., characters that have a higher frequency of occurrence among different keywords) and relatively more bits to encode characters that are less probable of being present in keywords.
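For completeness, a compact Huffman-style assignment of variable-length bit strings to characters, driven by character frequency across a set of keywords, could be sketched as follows. This is a generic textbook construction, not the specific coding process of the data compressor 414.

```python
import heapq
from collections import Counter
from itertools import count

def build_huffman_codes(keywords: list[str]) -> dict[str, str]:
    """Assign shorter bit strings to characters that occur more often across the keywords."""
    freq = Counter(ch for kw in keywords for ch in kw)
    ticket = count()  # tie-breaker so heap comparisons never reach the dict payload
    heap = [(f, next(ticket), {ch: ""}) for ch, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct character
        return {ch: "0" for ch in heap[0][2]}
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in codes1.items()}
        merged.update({ch: "1" + code for ch, code in codes2.items()})
        heapq.heappush(heap, (f1 + f2, next(ticket), merged))
    return heap[0][2]

codes = build_huffman_codes(["bubblee", "soda", "newyork"])
# Characters with higher frequency (here 'b' and 'e') tend to receive shorter codes.
print(sorted(codes.items(), key=lambda item: (len(item[1]), item[0])))
```

In practice, a code table of this kind would need to be shared between the encoder and the keyword decoder, for example via a code book of the sort that could be stored in the reference database 118.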
To generate and encode content-aware watermarks with the data (e.g., keywords, unique identifiers, metadata, etc.) selected by the metadata detector 408, the media features detector 410, and/or the word selector 412, the example content-aware watermark encoder 104 is provided with the watermark encoder 416. In the illustrated example, the watermark encoder 416 can encode or embed compressed and/or non-compressed keyword(s) into watermarks to generate content-aware watermarks.
Additionally or alternatively, the watermark encoder 416 may be configured to encode or embed unique identifiers (e.g., the unique identifiers 808 of
Some or all of the media interface 602, the watermark detector 604, the data extractor 606, the keyword decoder 607, the signature generator 608, the data interface 610, and/or the timestamp generator 612, or parts thereof, may be implemented using instructions, code, and/or other software and/or firmware, etc. stored on a machine accessible medium and executable by, for example, a processor system (e.g., the example processor system 1310 of
To receive audio and/or video media, the example content-aware watermark decoder 114 is provided with the media interface 602. To detect watermarks (e.g., the content-aware watermark 206 of
To extract keyword(s) and/or unique identifier(s) from the detected content-aware watermarks, the example content-aware watermark decoder 114 is provided with the data extractor 606. For example, the data extractor 606 may extract the keywords ‘New York,’ ‘Bubblee,’ and/or ‘soda’ from the content-aware watermark 206 described above in connection with
In the illustrated example, the content-aware watermark decoder 114 is also provided with the keyword decoder 607 to decode whole or partial keywords detected in content-aware watermarks. As discussed above in connection with
In the illustrated example, the content-aware watermark decoder 114 is also provided with a signature generator 608 to generate signatures of audio and/or video portions of the media received via the media interface 602. In the illustrated example, the signature generator 608 generates signatures of video or audio frames specified by metadata in metadata-based content-aware watermarks. For example, if a content-aware watermark indicates the presence of a blank frame at a certain location in a media composition, the signature generator 608 can generate one or more signatures of one or more audio or video frames following the blank frame. In some example implementations, the signatures can be compared to reference signatures stored in, for example, the reference database 118 of
To store the keyword(s), unique identifier(s), and/or signature(s) in a memory and/or communicate the same to the central facility 108 (
Flow diagrams depicted in
The example processes of
Turning to
If the example content-aware watermark encoder 104 determines that it should create audio track-based keyword(s) (block 506) (e.g., the content-aware watermark encoder is configured to create audio track-based keyword(s) and an audio track and/or closed caption text is present), the media features detector 410 (
If the media data portion does not include closed caption text (block 508), the speech-to-text converter 406 (
After the word selector 412 selects the keyword(s) at block 516, or if the content-aware watermark encoder 104 determined that it should not create audio track-based keywords (block 506), the example content-aware watermark encoder 104 then determines whether it should create metadata-based keyword(s) (block 518). For example, if a user sets a configuration option of the content-aware watermark encoder 104 to not generate metadata-based keyword(s) or if a user sets a configuration option to only generate audio track-based keyword(s), the content-aware watermark encoder 104 will determine that it should not create metadata-based keywords (block 518) and control will advance to block 530 (
If the example content-aware watermark encoder 104 determines that it should create metadata-based keyword(s) (block 518) (e.g., the content-aware watermark encoder is configured to create metadata-based keyword(s)), the metadata detector 408 (
If metadata is not present in the media data portion (block 520), the media features detector 410 (
After the media features detector 410 generates metadata based on the detected features (or characteristics) (block 524) or after the metadata detector 408 retrieves the metadata from the media data portion (block 522), the word selector 412 (
After the metadata keyword(s) are selected (or created) (block 528) or if the content-aware watermark encoder 104 determines that it should not create metadata keywords (block 518), the content-aware watermark encoder 104 determines whether keyword(s) have been selected (or created) (block 530) (
If keyword(s) have been selected (or created) (block 530), the content-aware watermark encoder 104 determines whether it should use unique identifier(s) (block 532). For example, the content-aware watermark encoder 104 may be configured to encode unique identifiers (e.g., the unique identifiers 808 of
The content-aware watermark encoder 104 determines whether it should compress the keyword(s) or the unique identifier(s) (block 536). For example, configuration settings of the content-aware watermark encoder 104 may indicate whether to compress keyword(s) or the unique identifier(s) and/or the content-aware watermark encoder 104 may be configured to compress the keyword(s) or the unique identifier(s) when they exceed a threshold value for the size and/or number of the keyword(s) or the unique identifier(s). If the content-aware watermark encoder 104 determines that it should compress the keyword(s) or the unique identifier(s) (block 536), the data compressor 414 (
After the data compressor 414 compresses the keyword(s) or the unique identifier(s) (block 538) or if the content-aware watermark encoder 104 determines that it should not compress the keyword(s) nor the unique identifier(s) (block 536), the watermark encoder 416 (
After the watermark embedder 208 embeds the content-aware watermark in the media composition or if keyword(s) have not been selected (or created) (block 530), the content-aware watermark encoder 104 determines whether it should select another media data portion (block 544) for which to generate a content-aware watermark. In the illustrated example, if the content-aware watermark encoder 104 has not processed all of the media composition received at block 502, the content-aware watermark encoder 104 is configured to select another media data portion, in which case, the data interface 402 selects another media data portion (block 546) and control returns to block 506 of
Turning to
Initially, the media interface 602 (
The data interface 610 (
In the illustrated example, the analysis server 116 at the central facility 108 can add the tally counts generated by the content-aware watermark decoder 114 to tally counts collected from other content-aware watermark decoders to develop media ratings based on audience sizes in combination with audience percentages exposed to particular media content represented by keywords associated with content-aware watermarks. For example, a rating metric may indicate that for an audience group or panel (e.g., a nationwide audience) of a particular size, 10% of that audience was exposed to media content featuring (e.g., mentioning or displaying) a ‘Bubblee’ brand product.
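The rating arithmetic in the preceding example reduces to a simple percentage; a brief illustration, with invented numbers, follows.

```python
def exposure_rating(exposed_panelists: int, panel_size: int) -> float:
    """Percentage of an audience panel exposed to media content tied to a given keyword."""
    return 100.0 * exposed_panelists / panel_size

# e.g., 1,000 of 10,000 panelists exposed to 'Bubblee' content -> 10.0 percent
print(exposure_rating(1_000, 10_000))
```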
The content-aware watermark decoder 114 determines whether it should generate any signatures (block 708). For example, the content-aware watermark decoder 114 may have configuration settings specifying that it should generate signatures when metadata-based content-aware watermarks are present such as blank frame content-aware watermarks indicative of blank frame locations. If the content-aware watermark decoder 114 determines that it should generate one or more signature(s) (block 708), the signature generator 608 (
After the signature generator 608 generates the signature(s) (block 710) or if the example content-aware watermark decoder 114 determines that it should not generate any signatures (block 708), the timestamp generator 612 generates one or more timestamp(s) (block 714) indicating the date and/or time at which the keyword(s) or unique identifier(s) were extracted and/or the signature(s) were generated. Typically, the timestamping is done at the monitored media site.
The data interface 610 stores the data (i.e., the keyword(s), unique identifier(s), timestamp(s), and/or signature(s)) in a memory (block 716) such as, for example, the memory 1324 or the memory 1325 of
Turning now to
The search engine server 110 selects a first media composition (block 904) in which to search for the provided search term(s). The watermark detector 604 (
The search engine server 110 then determines if there is any match (block 910) between any of the search terms provided at block 902 and any of the keywords retrieved at block 908. If a match is found (block 910), the search engine server 110 creates (or retrieves) a network link (block 912) for the media composition. For example, the search engine server 110 can generate (or retrieve) a hypertext link, a uniform resource locator (URL) link, or any other type of network link to retrieve the media composition from its storage location.
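A minimal sketch of the keyword-matching and link-retrieval steps just described is shown below. The index structure, URLs, and keywords are hypothetical, and a production search engine server would index watermark keywords far more elaborately.

```python
# Hypothetical index mapping a retrieval link for each media composition to the
# keywords extracted from its embedded content-aware watermarks.
MEDIA_INDEX = {
    "https://example.com/media/ad-42.mpg": {"bubblee", "soda"},
    "https://example.com/media/news-7.mpg": {"new york", "election"},
}

def search_media(search_terms: list[str]) -> list[str]:
    """Return links to media compositions whose watermark keywords match any search term."""
    wanted = {term.lower() for term in search_terms}
    return [link for link, keywords in MEDIA_INDEX.items() if wanted & keywords]

print(search_media(["Bubblee"]))  # ['https://example.com/media/ad-42.mpg']
```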
After the search engine server 110 creates the network link (block 912) or if no match is found (block 910), the search engine server 110 determines whether it should search another media composition (block 914). For example, if the search engine server 110 has not completed searching all of the media compositions that it is aware of, the search engine server 110 will search another media composition. If the search engine server 110 determines that it should search another media composition (block 914), control passes back to block 904 and the search engine server 110 selects another media composition. If the search engine server 110 determines that it has finished searching and there is no other media composition to search (block 914), the search engine server 110 presents one or more network link(s) to media composition(s) (block 916) identified as having one or more keyword(s) matching the search terms provided at block 902. For example, the search engine server 110 may present the network links via a web page for a user. The example process of
Turning to
If the data compressor 414 determines that it should form a partial keyword (block 1004), the data compressor 414 forms a partial keyword (block 1006) based on the keyword obtained at block 1002. In the illustrated example, the data compressor 414 can form the partial keyword using any technique discussed above in connection with
Turning to
After the keyword decoder 607 decodes the whole or partial keyword to ASCII format (block 1106) or if the keyword decoder 607 determines that the whole or partial keyword is not character-bit compressed (block 1104), the keyword decoder 607 determines if the keyword received at block 1102 and/or decoded at block 1106 is a partial keyword (block 1108). If the keyword is a partial keyword (block 1108), the keyword decoder 607 reconstructs the partial keyword (block 1110) to form a whole keyword. In the illustrated example, the keyword decoder 607 reconstructs the keyword using a spell checker process as discussed above in connection with
Turning now to
In the illustrated example, the media compositions may be stored on the television media server 102d, the motion picture media server 102e, or on any other server communicatively coupled to the television server 102d. In this manner, when the television media server 102d is broadcasting or otherwise distributing media, the television media server 102d can perform the example process of
Initially, the television media server 102d receives one or more keyword(s) (block 1202) provided by, for example, one or more advertisers. For example, advertisers may provide keywords that indicate media content that would be consumed by typical audiences that the advertiser would like to target for presenting advertisements associated with those keywords. The television media server 102d receives a media segment (block 1204) in which to search for the provided keyword(s). For example, prior to or during broadcasting of a media composition (made up of a plurality of media segments), the television media server 102d may analyze each media segment of the media composition to determine whether it includes content-aware watermarks having one or more of the keyword(s) provided at block 1202. The watermark detector 604 (
The television media server 102d then determines if there is any match (block 1210) between any of the keywords provided at block 1202 and any of the keywords retrieved at block 1208. If a match is found (block 1210), the television media server 102d selects an advertisement associated with the matched keyword(s) for presentation (block 1212) in association with the media segment or with some other media segment of the media composition.
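Correspondingly, the keyword matching (block 1210) and advertisement selection (block 1212) might be sketched as below; the advertisement inventory and keyword associations are invented for this illustration.

```python
# Hypothetical advertisement inventory keyed by the keywords an advertiser has
# associated with each advertisement.
AD_INVENTORY = {
    "bubblee": "Bubblee summer campaign spot",
    "soda": "Generic soft drink promotion",
    "new york": "New York tourism advertisement",
}

def select_advertisements(segment_keywords: list[str]) -> list[str]:
    """Select advertisements associated with keywords found in a media segment's watermarks."""
    return [AD_INVENTORY[kw] for kw in segment_keywords if kw in AD_INVENTORY]

print(select_advertisements(["bubblee", "election"]))  # ['Bubblee summer campaign spot']
```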
After the television media server 102d selects an advertisement for presentation (block 1212) or if no match is found (block 1210), the television media server 102d determines whether it should analyze another media segment (block 1214). For example, if the television media server 102d has not completed analyzing all of the segments of the media composition, the television media server 102d will analyze another media segment. If the television media server 102d determines that it should analyze another media segment (block 1214), control passes back to block 1204 and the television media server 102d receives another media segment. If the television media server 102d determines that it has finished analyzing all of the media segments (block 1214), the example process of
The processor 1312 of
The system memory 1324 may include any desired type of volatile and/or non-volatile memory such as, for example, static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, read-only memory (ROM), etc. The mass storage memory 1325 may include any desired type of mass storage device including hard disk drives, optical drives, tape storage devices, etc.
The I/O controller 1322 performs functions that enable the processor 1312 to communicate with peripheral input/output (I/O) devices 1326 and 1328 and a network interface 1330 via an I/O bus 1332. The I/O devices 1326 and 1328 may be any desired type of I/O device such as, for example, a keyboard, a video display or monitor, a mouse, etc. The network interface 1330 may be, for example, an Ethernet device, an asynchronous transfer mode (ATM) device, an 802.11 device, a DSL modem, a cable modem, a cellular modem, etc. that enables the processor system 1310 to communicate with another processor system.
While the memory controller 1320 and the I/O controller 1322 are depicted in
Although certain methods, apparatus, and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. To the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents.
Number | Name | Date | Kind |
---|---|---|---|
5572246 | Ellis et al. | Nov 1996 | A |
5574962 | Fardeau et al. | Nov 1996 | A |
5579124 | Aijala et al. | Nov 1996 | A |
5581800 | Fardeau et al. | Dec 1996 | A |
5612729 | Ellis et al. | Mar 1997 | A |
5621454 | Ellis et al. | Apr 1997 | A |
6122403 | Rhoads | Sep 2000 | A |
6226387 | Tewfik et al. | May 2001 | B1 |
6272176 | Srinivasan | Aug 2001 | B1 |
6314518 | Linnartz | Nov 2001 | B1 |
6374036 | Ryan et al. | Apr 2002 | B1 |
6377965 | Hachamovitch et al. | Apr 2002 | B1 |
6411725 | Rhoads | Jun 2002 | B1 |
6480825 | Sharma et al. | Nov 2002 | B1 |
6614844 | Proehl | Sep 2003 | B1 |
6647129 | Rhoads | Nov 2003 | B2 |
6690813 | Kimura et al. | Feb 2004 | B2 |
6724914 | Brundage et al. | Apr 2004 | B2 |
6738495 | Rhoads et al. | May 2004 | B2 |
6741684 | Kaars | May 2004 | B2 |
6892175 | Cheng et al. | May 2005 | B1 |
6901606 | Wright et al. | May 2005 | B2 |
6912294 | Wang et al. | Jun 2005 | B2 |
6973574 | Mihcak et al. | Dec 2005 | B2 |
6975746 | Davis et al. | Dec 2005 | B2 |
6983051 | Rhoads | Jan 2006 | B1 |
6988201 | Xu et al. | Jan 2006 | B1 |
6999598 | Foote et al. | Feb 2006 | B2 |
7007166 | Moskowitz et al. | Feb 2006 | B1 |
7017045 | Krishnamachari | Mar 2006 | B1 |
7095871 | Jones et al. | Aug 2006 | B2 |
7142691 | Levy | Nov 2006 | B2 |
7184571 | Wang et al. | Feb 2007 | B2 |
7224819 | Levy et al. | May 2007 | B2 |
7263203 | Rhoads et al. | Aug 2007 | B2 |
7269734 | Johnson et al. | Sep 2007 | B1 |
7289643 | Brunk et al. | Oct 2007 | B2 |
7315621 | Noridomi et al. | Jan 2008 | B2 |
7316025 | Aijala et al. | Jan 2008 | B1 |
7369675 | Pelly et al. | May 2008 | B2 |
7460991 | Jones et al. | Dec 2008 | B2 |
7983441 | Vestergaard et al. | Jul 2011 | B2 |
8015200 | Seiflein et al. | Sep 2011 | B2 |
8332478 | Levy et al. | Dec 2012 | B2 |
8528033 | McCarthy et al. | Sep 2013 | B2 |
8805689 | Ramaswamy et al. | Aug 2014 | B2 |
20020145622 | Kauffman et al. | Oct 2002 | A1 |
20020162118 | Levy et al. | Oct 2002 | A1 |
20020188841 | Jones et al. | Dec 2002 | A1 |
20020191810 | Fudge et al. | Dec 2002 | A1 |
20030110078 | Chang et al. | Jun 2003 | A1 |
20030133592 | Rhoads | Jul 2003 | A1 |
20040006469 | Kang | Jan 2004 | A1 |
20040028257 | Proehl | Feb 2004 | A1 |
20040073916 | Petrovic et al. | Apr 2004 | A1 |
20040078188 | Gibbon et al. | Apr 2004 | A1 |
20040125125 | Levy | Jul 2004 | A1 |
20040216173 | Horoszowski et al. | Oct 2004 | A1 |
20050053235 | Clark et al. | Mar 2005 | A1 |
20050141704 | Van Der Veen | Jun 2005 | A1 |
20050144006 | Oh | Jun 2005 | A1 |
20060018506 | Rodriguez et al. | Jan 2006 | A1 |
20060047517 | Skeaping | Mar 2006 | A1 |
20060059509 | Huang et al. | Mar 2006 | A1 |
20060159303 | Davis et al. | Jul 2006 | A1 |
20060212705 | Thommana et al. | Sep 2006 | A1 |
20060224452 | Ng | Oct 2006 | A1 |
20060285722 | Moskowitz et al. | Dec 2006 | A1 |
20070031000 | Rhoads et al. | Feb 2007 | A1 |
20070047763 | Levy | Mar 2007 | A1 |
20070199017 | Cozen et al. | Aug 2007 | A1 |
20070230739 | Johnson et al. | Oct 2007 | A1 |
20070266252 | Davis et al. | Nov 2007 | A1 |
20070274611 | Rodriguez et al. | Nov 2007 | A1 |
20070291848 | Aijala et al. | Dec 2007 | A1 |
20080052516 | Tachibana et al. | Feb 2008 | A1 |
20080066098 | Witteman et al. | Mar 2008 | A1 |
20090158318 | Levy | Jun 2009 | A1 |
20090164378 | West et al. | Jun 2009 | A1 |
Number | Date | Country |
---|---|---|
0064094 | Oct 2000 | WO |
0237498 | May 2002 | WO |
Entry |
---|
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/101,738, on Mar. 17, 2011 (11 pages). |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/101,738, on Oct. 20, 2011 (12 pages). |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/101,738, on Nov. 16, 2012 (13 pages). |
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 12/101,738, on May 28, 2013 (12 pages). |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/101,738, on Oct. 10, 2013 (7 pages). |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/101,738, Mar. 28, 2014 (11 pages). |
Stanescu et al., “Embedding Data in Video Stream using Steganography,” SACI 2007-4th International Symposium on Applied Computation Intelligence and Informatics, 2007 (4 pages). |
Ton Kalker, “Applications and Challenges for Audio Fingerprinting,” 111th AES Convention, 2001 (15 pages). |
Patent Cooperation Treaty, “International Search Report,” issued in connection with International Patent Application No. PCT/US2008/060101, on Feb. 27, 2009 (3 pages). |
Patent Cooperation Treaty, “Written Opinion of the International Search Authority,” issued in connection with International Patent Application No. PCT/US2008/060101, on Feb. 27, 2009 (4 pages). |
Digital Copyright Technologies, “Digital Copyright Protection for Multimedia Documents,” retrieved on Aug. 12, 2010, from http://vision.unige.ch/publications/postscript/99/HerrigelPun—telecom99.pdf (4 pages). |
Hartung et al., “Digital Watermarking of Raw and Compressed Video,” Telecommunications Institute, University of Erlangen-Nuremberg, Systems for Video Communication, Oct. 1996 (9 pages). |
Parviaien et al., “Large Scale Distributed Watermarking of Multicast Media through Encryption,” Department of Computer Science, Lulea University of Technology, 2001 (10 pages). |
Jian Zhao, “Applying Digital Watermarking Techniques to Online Multimedia Commerce,” Proc. of the International Conference on Imaging Science, Systems, and Applications (CISSA97), Jun. 30-Jul. 3, 1997, Las Vegas, USA (7 pages). |
International Bureau, “International Preliminary Report on Patentability,” issued in connection with International Patent Application No. PCT/US2008/060101, on Oct. 12, 2010 (4 pages). |
Number | Date | Country
---|---|---
20140321694 A1 | Oct 2014 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 12101738 | Apr 2008 | US
Child | 14324901 | | US