The subject matter of this patent document relates to the management of multimedia content and, more specifically, to facilitating access and delivery of metadata, programs and services associated with a multimedia content based on watermarking techniques.
The use and presentation of multimedia content on a variety of mobile and fixed platforms have rapidly proliferated. By taking advantage of storage paradigms, such as cloud-based storage infrastructures, reduced form factor of media players, and high-speed wireless network capabilities, users can readily access and consume multimedia content regardless of the physical location of the users or the multimedia content. A multimedia content, such as an audiovisual content, can include a series of related images, which, when shown in succession, impart an impression of motion, together with accompanying sounds, if any. Such a content can be accessed from various sources including local storage such as hard drives or optical disks, remote storage such as Internet sites or cable/satellite distribution servers, over-the-air broadcast channels, etc.
In some scenarios, such a multimedia content, or portions thereof, may contain only one type of content, including, but not limited to, a still image, a video sequence and an audio clip, while in other scenarios, the multimedia content, or portions thereof, may contain two or more types of content such as audiovisual content and a wide range of metadata. The metadata can, for example, include one or more of the following: channel identification, program identification, content and content segment identification, content size, the date at which the content was produced or edited, identification information regarding the owner and producer of the content, timecode identification, copyright information, closed captions, and locations, such as URLs, where advertising content, software applications, interactive services content, signaling that enables various services, and other relevant data can be accessed. In general, metadata is the information about the content essence (e.g., audio and/or video content) and associated services (e.g., interactive services, targeted advertising insertion).
Such metadata is often interleaved, prepended or appended to a multimedia content, which occupies additional bandwidth and can be lost when the content is transformed into a different format (such as through digital-to-analog conversion or conversion into a different file format), processed (such as through transcoding), and/or transmitted through a communication protocol/interface (such as HDMI or adaptive streaming). Notably, in some scenarios, an intervening device such as a set-top box issued by a multichannel video program distributor (MVPD) receives a multimedia content from a content source and provides the uncompressed multimedia content to a television set or another presentation device, which can result in the loss of various metadata and functionalities, such as interactive applications, that would otherwise accompany the multimedia content. Therefore, alternative techniques for content identification can complement or replace metadata multiplexing techniques.
The disclosed technology relates to methods, devices, systems and computer program products that utilize an enhanced watermark extractor to facilitate access, delivery and utilization of metadata, programs and services that are associated with a primary multimedia content.
One aspect of the disclosed embodiments relates to a device that includes a processor and a memory including processor executable code. The processor executable code, when executed by the processor, causes the device to configure a watermark extractor to process digital samples of a primary content to extract a plurality of watermark messages from the primary content and to produce an indication as to a state of the watermark extractor. The watermark messages can include information that identifies a resource on a remote server to retrieve metadata associated with a section of the primary content. The state of the watermark extractor includes one of the following states: (a) an unmarked content state indicating that at least a first section of the primary content that is processed by the watermark extractor does not include detected watermark messages, (b) a marked content state indicating that at least a second section of the primary content that is processed by the watermark extractor includes one or more embedded watermark messages or parts thereof, or (c) a gap state indicating that at least a third section of the primary content that is processed by the watermark extractor immediately subsequent to the second section of the primary content does not include watermark messages or parts thereof.
The watermark extractor transitions from one state to another state upon occurrence of an event that is based on one or more of: (1) failure to detect a watermark message, or part thereof, subsequent to detection of at least one watermark message, (2) detection of a watermark message, or part thereof, subsequent to a failure to detect at least one watermark message, (3) detection of a section of the primary content with low activity, or (4) failure to detect embedded watermark messages for a predetermined interval of time. Based on one or more events or patterns of events, retrieval of the metadata is enabled, or use of the metadata is modified. For example, a new secondary content associated with the primary content can be downloaded and presented, presentation of an existing secondary content can be modified, and the like.
The above noted device can be implemented as part of a variety of devices, such as a consumer electronic device that is coupled to a television set, or as part of a mobile device.
In the following description, for purposes of explanation and not limitation, details and descriptions are set forth in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to those skilled in the art that the present invention may be practiced in other embodiments that depart from these details and descriptions.
Additionally, in the subject description, the word “exemplary” is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word exemplary is intended to present concepts in a concrete manner.
One approach to mitigating the issues that can arise from the loss of content metadata carried in separate metadata channels is to embed watermarks into the content to enable automatic content recognition (ACR) and metadata recovery. Watermarks can be embedded in the audio and/or video portions of a content and are substantially imperceptible to a viewer (or listener) of the content. Properly designed watermarks can be immune to various content processing operations and channel impairments, such as compression and decompression, cropping, scaling, transcoding, format conversion, noise addition, acoustic propagation, optical (e.g., free space) transmission, digital-to-analog (D/A) and analog-to-digital (A/D) conversions and the like.
Once the embedded watermarks are detected by a watermark detector (also sometimes referred to as a watermark extractor), the payload of the watermark can be used to identify the content and recover the metadata associated with the identified content. In ACR applications, watermarks are often designed with a set of requirements that differ from requirements that are imposed on other watermark detectors, such as copy control watermark detectors. For example, in ACR applications it is critical to be able to recognize a content very quickly. After a content is recognized, the associated metadata can be recovered to enable various operations, such as receiving an additional content, performing dynamic advertising insertion, or participating in an interactive opportunity. Further, the viewing device (or an associated device) can be connected to the Internet (or more generally, to a remote database) for the retrieval of the additional content, for participating in the interactive opportunities or other services.
In
It should be noted that while in some implementations the Receiver is a separate component from the set-top box, in other implementations the Receiver may include, or be part of a larger device that includes, any one or combinations of additional components such as a set-top box, a display, keyboard or other user interface devices, or a watermark detector, as well as processors (e.g., microprocessors, digital signal processors (DSPs), etc.) and other circuitry that may be needed for implementation of such device, or devices.
The watermark structure in some exemplary embodiments includes the following fields: a Domain ID and a Sequence ID. Each Domain ID is assigned by a central authority to a Domain Registrant who controls assignment and use of the Sequence ID codes under that domain. Each Domain ID maps one-to-one to an Internet domain name which is used to retrieve metadata associated with Sequence IDs in that domain. The Domain Registrar in
Domain Lookup Server(s) maintain a copy of the Domain Registration database which maps each registered Domain ID to a domain name and keeps it current using the PUBLISH protocol with the Domain Registrar. Domain Lookup Server(s) also employ a standardized protocol (e.g., designated as LOOKUP in
Domain Servers can be Internet servers that are accessible at the domain name associated with a registered Domain ID and can provide metadata to Receivers in response to queries triggered by watermark detections. In some implementations, queries employ a standardized message protocol (e.g., designated as QUERY in
In one example implementation, a 50-bit payload can be embedded in every 1.5 seconds of the content. In this example, the watermark payload can be standardized with the following structure: [Payload Type:2] [Payload:48]. That is, the right-most 48 bits are designated to carry the payload and the 2 left-most bits are designated to carry the Payload Type. For example, the Payload Type values can be in the range 0 to 3, where a “0” designates a Reserved payload type, a “1” designates a Large Domain payload type, a “2” designates a Medium Domain payload type, and a “3” designates a Small Domain payload type. The payload type values can thus each describe the structure of the payload.
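The bit layout described above can be sketched in a few lines; the function names are illustrative only and not part of any specification.

```python
# Sketch of packing/unpacking the exemplary 50-bit payload:
# the 2 left-most bits carry the Payload Type, the right-most
# 48 bits carry the payload proper.

PAYLOAD_BITS = 48

def pack_payload(payload_type: int, payload: int) -> int:
    assert 0 <= payload_type <= 3               # 2-bit Payload Type field
    assert 0 <= payload < (1 << PAYLOAD_BITS)   # 48-bit payload field
    return (payload_type << PAYLOAD_BITS) | payload

def unpack_payload(word: int):
    payload_type = word >> PAYLOAD_BITS          # 2 left-most bits
    payload = word & ((1 << PAYLOAD_BITS) - 1)   # right-most 48 bits
    return payload_type, payload
```

For example, `unpack_payload(pack_payload(2, 0xABC))` recovers the Medium Domain payload type and the payload value.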
The Domain field from any structure can be mapped into a unique Domain ID by prepending the Payload Type value to the Domain field and zero-padding (on the right) to 32 bits. For ASCII encoding, the Domain ID can be represented as an 8-character hexadecimal value. A Domain field value of 0 can be reserved in all domains. The Sequence field from any structure can be mapped directly into a Sequence ID. For ASCII encoding, a hexadecimal representation of the Sequence field (leading zeroes optional) can be utilized. Sequence IDs with a decimal value of 1024 or less can be reserved for use as Control Codes. Control Codes are currently reserved.
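The Domain ID mapping described above can be sketched as follows. Because the Domain-field width varies with payload type and is not enumerated here, the width is passed in as a parameter; all names are illustrative assumptions.

```python
# Sketch of the Domain ID mapping: prepend the 2-bit Payload Type
# to the Domain field, then zero-pad on the right to 32 bits.
# `domain_bits` (the Domain-field width) is an assumed parameter.

def domain_id(payload_type: int, domain_field: int, domain_bits: int) -> int:
    assert 0 <= payload_type <= 3
    assert 0 < domain_field < (1 << domain_bits)   # value 0 is reserved
    value = (payload_type << domain_bits) | domain_field   # prepend type
    return value << (32 - 2 - domain_bits)                 # right zero-pad

def domain_id_ascii(did: int) -> str:
    # 8-character hexadecimal representation for ASCII encoding
    return format(did, "08X")
```

For instance, assuming a hypothetical 22-bit Domain field, `domain_id_ascii(domain_id(1, 1, 22))` yields `"40000100"`.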
The trigger bit, when set (e.g., to a value of “1”), can inform the Receiver of an event that may activate the Receiver to perform various operations such as requesting metadata from the domain server. It can indicate that further services or features, such as interactive content or advertising insertion associated with the Sequence ID is available to the Receiver from the domain server associated with the payload's Domain ID. In some implementations the trigger field can include multiple bits.
The watermark payload can undergo various coding, modulation and formatting operations before being embedded into a content. For example, the payload may be error correction code (ECC) encoded, scrambled, interleaved with other packets, appended with a synchronization or registration header, encrypted or channel coded to form a sequence of bits with particular characteristics. Once embedded into a host content, the embedded host content can be processed by a watermark extractor to recover the embedded watermark bits (or, more generally, symbols), and perform the reverse of the above coding, modulation or formatting schemes to recover the payload. In some instances, statistical techniques are used to recover the embedded symbols from the content using multiple instances of embedded watermarks.
Lookup Service/server and Domain Lookup Server can carry out analogous operations. The various components in
One or more Server Lookup Services are established. These services may be operated by ATSC, the Server Registrar, Content Owners, ATSC Receiver manufacturers, or a third party. Each Server Lookup Service maintains a database of all Server Code/Server Name associations published by the Server Registrar and responds to lookup requests from ATSC Receivers. The Server Lookup Services do not need to access or store any broadcast metadata; they simply provide ATSC Receivers with access to Server Names associated with Server Codes detected from broadcast watermarks.
A Content Source, acting either as a Server Registrant or in concert with a Server Registrant, associates a valid registered Server Code and one or more unique Interval Codes and maps them to intervals of broadcast content essence. The Content Source embeds those codes in the broadcast content using a Watermark Inserter prior to delivery of the broadcast content to an MVPD. The Server Code can be analogous to the Sequence ID described in the exemplary watermark payload above.
The Interval Codes and the metadata for those same intervals of broadcast essence (e.g. any interactive content, signaling, metadata, triggers, channel identifier, media timeline timecode, etc.) are associated together in a database which is provided to a Content, Signaling, and Metadata Server (“CSM Server”). Content Sources may associate and embed watermarks continuously throughout their program material using sequentially increasing Interval Codes (e.g., analogous to the Sequence ID described in the exemplary watermark payload above), may embed watermarks only in those intervals of content where interactive services are enabled, or may embed an Interval Code repeatedly through a program segment where an interactive service is available but does not require timing precision. Content Sources may register additional Code Domains in advance of depleting the Interval Code space associated with a given Server Code and may associate newly assigned Server Codes with the same Internet domain name to maintain infrastructure continuity.
The CSM Server responds to various requests from ATSC Receivers, including delivery of signaling and interactive content based on interactive service data received from a complete broadcast stream. The CSM Server also responds to code metadata queries, in which a query containing the watermark payload (e.g. in the ASCII representational format) is submitted by the WM Client in an ATSC Receiver, with a request for metadata associated with the interval of broadcast content. The metadata included in the CSM Server response may include channel identifiers, timecodes, content or segment identifiers, triggers, etc. It should be noted that while metadata services can be hosted in the same servers as the content and signaling services, they may alternatively be hosted on different servers from those used for content and signaling services.
To enable the architecture that is depicted in
PUBLISH is a protocol whereby the Server Registrar notifies interested ecosystem participants of a newly established or updated mapping between a Server Code and an Internet domain name and publishes the association to Server Lookup Services.
LOOKUP is a protocol whereby an ATSC Receiver can submit a Server Code to a Server Lookup Service and receive a response containing the associated Server Name which has been most recently published by the Server Registrar.
QUERY is a protocol whereby an ATSC Receiver can submit a Server Code and Interval Code to a CSM Server and receive ATSC metadata (e.g. channel, timecode, interactive services triggers, etc.) associated with the specified interval of broadcast content.
The systems of
One use case for such watermarks is to provide interactive applications that enhance the audio/video experience of viewers. In this scenario, the receiver uses information that it obtains from the extracted watermarks to access a web based server and to download secondary content, which can be used to enhance the primary content; such a secondary content is typically presented in synchronization with the primary content. The secondary content can also be created simultaneously with the primary content, and linking them through watermarks may be done by the content producers. The secondary content can include T-commerce, director's commentary, character background, alternate language tracks, statistics of athletes in a sport event, etc.
Another use case for the disclosed technology is the insertion or replacement of interstitial content, such as advertisements and promotions, which are not the same for all viewers. Such advertisements and promotions may be selected based on various factors such as known viewer preferences, viewer location (which may be determined based on the viewer's IP address), the time at which the content is being viewed, or other factors. These are generally referred to as “targeted ads.” Typically, targeted ad insertion is performed under the control of a content distributor that uses the embedded watermarks to carry information that is obtained by the client device to recover insertion instructions. Further use cases include audience measurement, rights administration, proof of performance, etc.
The detectors that are designed to detect such watermarks for ACR and the other above noted applications are often designed with a set of requirements that differ from the requirements imposed on other watermark detectors, such as copy control watermark detectors. For example, the time to the first watermark payload detection is more important for ACR watermarks than for copy control watermarks because of, for example, the importance of enabling synchronized presentation of a secondary content with a primary content. Also, for ACR detectors it is desirable to report the timing of watermark boundaries as precisely as possible. Finally, for ACR detectors it is desirable to detect changes in the content rendering timeline. For example, when a user decides to switch from one content to another, or chooses to skip forward or backward within a content, the ACR detector should recognize such an action as quickly as possible and report it to the entities or applications at the higher levels of the hierarchy. It should be noted that the term ACR detector is not used to limit the scope of the disclosure to automatic content recognition applications. Rather, ACR provides one example use of the disclosed technology and is used to illustrate the underlying concepts. The disclosed embodiments provide refinements to the watermark detection processes, systems and devices that enable the above requirements and features to be implemented in an improved manner.
One of the basic assumptions in describing some of the disclosed embodiments is that the watermark carries a string of digital symbols (which can be represented as a binary string). This string typically carries a synchronization portion (or a header portion), followed by a payload portion, and error correction and/or error detection strings. The watermark payload can also carry information about the primary content's timeline. Typically, this is achieved by including a field within the watermark payload (or a separate watermark) that constitutes a counter, which is incremented for each subsequent watermark. By detecting the watermark counter and knowing the watermark's extent (e.g., the duration or length of the primary content that each watermark occupies), the starting point within the primary content where watermark embedding started can be calculated. The watermark payload can further contain additional fields, such as a content ID, a channel ID, or a trigger flag. The trigger flag may signal to the device to perform predefined actions. For example, a trigger flag can signal to the receiver to halt any modification of audio and video in the primary content. Such a flag may be useful when the primary audiovisual content introduces an emergency alert that should not be disturbed.
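The timeline calculation described above amounts to a one-line computation; the following sketch assumes, for illustration, the 1.5-second watermark extent of the earlier example and a counter that starts at zero.

```python
# Minimal sketch: each watermark spans a fixed extent of the
# content and carries a counter that increments per watermark.
# Given the content time at which a watermark with a known
# counter value is detected, the point where embedding started
# can be recovered.

WATERMARK_EXTENT = 1.5  # seconds per watermark (assumed)

def embedding_start(detection_time: float, counter: int) -> float:
    """Content-timeline position where watermark embedding began."""
    return detection_time - counter * WATERMARK_EXTENT
```

For example, a watermark with counter value 4 whose boundary is detected at content time 10.0 seconds implies that embedding started at 4.0 seconds.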
Example Marked Segment End Detection Considerations: The end of a watermarked content segment can be established by the absence of watermark detection at the expected moment or location, or within a predefined time interval or distance from the time/location at which the presence of a watermark is expected. However, this approach may not produce reliable results in all instances because watermarks can be missing due to, for example, unfavorable content properties, or content processing operations, such as perceptual compression, that may have degraded or removed the embedded watermarks.
One way to improve the reliability of marked content end detection is to use watermark prediction. Predicted watermark symbols obtained from the content can be correlated with previously extracted watermark symbols and, if the correlation value is high, it may be concluded that the content includes embedded watermarks at the predicted locations. But if the correlation value is low, it may be concluded that the watermarks do not reside at the predicted locations. This technique enables the determination as to whether or not the content is watermarked even if the watermark cannot be detected on its own without the prediction information.
In some prediction techniques, possible changes in the watermark payload at the prediction location are taken into account. For example, watermark prediction can take into account an expected increase in the predicted watermark counter value. In scenarios where the changes in the watermark payload are unpredictable or uncertain, such as changes in the trigger flag, predictions can account for each possible payload status, and test for correlations between the extracted watermark payload from the predicted location and each of the possible predictions (or until a correlation value above a particular threshold is reached).
One way to determine a correlation between the predicted and extracted watermarks is to predict a watermark waveform and correlate it with the extracted watermark waveform. This approach may require very precise timing of embedded watermarks, as well as significant processing power, which may not be suitable for all applications. Therefore, it may be preferred that only correlations between the predicted watermark bit pattern and the extracted bit pattern be performed. In one example of this technique, the number of mismatches between the predicted and extracted bit patterns is computed and, if the number meets a predetermined threshold, the end of marked content can be signaled. When the watermark bit pattern is long, it is often advantageous to correlate the extracted bit patterns to the predicted bit patterns on strings that are shorter than the entire watermark string. This way, the end of a marked segment can be detected faster compared to the scenario in which the entire watermark is predicted.
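The mismatch-counting variant described above can be sketched as follows; the names and the optional prefix-length parameter (for correlating on a shorter string) are illustrative.

```python
# Sketch of bit-pattern correlation by mismatch counting. A high
# mismatch count at the predicted location signals the end of the
# marked segment. Comparing only a short prefix of the watermark
# string lets the end be detected sooner.

def count_mismatches(predicted: str, extracted: str) -> int:
    """Hamming distance between two equal-length bit strings."""
    return sum(p != e for p, e in zip(predicted, extracted))

def end_of_marked_content(predicted: str, extracted: str,
                          threshold: int, prefix_len=None) -> bool:
    if prefix_len is not None:  # correlate a shorter fragment for speed
        predicted, extracted = predicted[:prefix_len], extracted[:prefix_len]
    return count_mismatches(predicted, extracted) >= threshold
```

A low threshold relative to the compared length makes the end-of-segment decision more sensitive; the appropriate value depends on the watermark channel's expected bit-error rate.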
Furthermore, prediction of a fragment of the watermark bit string can be used to quickly confirm that an extracted payload is not a false detection. It is well known that when error correction code (ECC) decoding is used for detecting watermarks, a watermark can be falsely detected even in a content that does not include a watermark, or a watermark can be detected with an incorrect value from a marked content. Typically, error correction algorithms provide information on how many errors were corrected in a particular extraction event. If the number of corrected errors is too high, then the false positive probability may be unacceptably high. In such cases, declaration of successful payload extraction can be postponed by first confirming the correctness of the payload: predicting a subsequent bit string fragment and verifying that it indeed can be extracted. Only if the subsequent bit string fragment is found with a sufficiently low mismatch count (i.e., with a reliability that is higher than a predefined threshold) does the detector report the extracted payload. This way, the time to first watermark detection can be shorter compared to the scenario in which correlation with the entire watermark string is performed.
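The deferred-reporting logic described above can be sketched as follows; the thresholds and field names are assumptions for illustration, not values taken from any specification.

```python
# Hedged sketch: when the ECC decoder corrects too many errors,
# the payload is reported only after a predicted subsequent
# bit-string fragment is found with a sufficiently low mismatch
# count. Both thresholds below are illustrative.

MAX_CORRECTED = 5    # assumed ECC confidence threshold
MAX_MISMATCHES = 2   # assumed fragment-verification threshold

def report_payload(payload, corrected_errors,
                   predicted_fragment, extracted_fragment):
    if corrected_errors <= MAX_CORRECTED:
        return payload                      # confident detection
    mismatches = sum(p != e for p, e in
                     zip(predicted_fragment, extracted_fragment))
    if mismatches <= MAX_MISMATCHES:
        return payload                      # confirmed by prediction
    return None                             # likely false detection
```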
One exemplary method for detecting the end of a watermark includes predicting an expected watermark payload including predicting the state of a watermark counter, detecting a watermark, and correlating the predicted watermark with the detected watermark that includes comparing the predicted and detected state of a watermark counter, and concluding that a watermark is present when the correlation is above a predetermined threshold. Predicting the state of the watermark counter can include predicting a watermark waveform, and correlating the predicted watermark with the detected watermark can include comparing the predicted and detected watermark waveforms.
In some embodiments, predicting the state of the watermark counter can include predicting a watermark bit pattern, and correlating the predicted watermark with the detected watermark can include comparing the predicted and detected watermark bit patterns. For example, the correlating can include counting the number of mismatches between the predicted and detected bit patterns and, if the number of mismatches is above a threshold, signaling an end of the watermarked content. In some embodiments, the correlation is only performed on a subset of the entire bit string of the detected watermark. In another embodiment, the watermark includes error correction codes and the correlation includes counting the number of corrected errors; if the number of corrected errors exceeds a threshold, the correctness of the payload is confirmed by predicting subsequent bit string fragments and determining the number of mismatches between the predicted and detected bit string fragments, and the extracted payload is reported only if the number of mismatches is below a predetermined threshold.
Example Watermark Boundary Precision Considerations: In addition to the above noted correlation of predicted and extracted bit patterns, it is beneficial to have a good prediction of bit boundaries. Typically, a slight shift of bit boundaries also produces a bit pattern that is well correlated to the embedded pattern. In order to improve detector performance in the presence of uncertainty in the precise locations of bit boundaries, multiple bit patterns of the same watermark can be extracted, with time offsets that are only a fraction of a bit interval. Typically, bit patterns with different time offsets have different correlation values when matched to the predicted bit pattern, and the timing of the pattern associated with the best correlation value can be used as the best prediction of the watermark position in time.
The same technique can be used to improve watermark boundary detection. In particular, multiple watermarks are extracted with offsets in their position that are a fraction of the bit interval, are correlated with a predicted pattern (or subjected to error correction code), and the number of bit errors is counted in each of the trials. The position of the watermark with the smallest bit error count can be considered the most likely watermark position and used to calculate the content timeline.
Alternatively, the data indicative of how the bit errors change with the shift of the bit boundary position can be used in a mathematical model or function to identify the most likely location that results in a minimum bit-error count. For example, the change of bit-error counts can be modeled as a function of time by approximating it, in a least-squares sense, by a second order polynomial. The minimum location obtained from the second order polynomial can then be used to approximate the watermark boundary location with better accuracy.
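The polynomial refinement described above can be sketched as follows, using a least-squares quadratic fit; the offsets, error counts, and function names are illustrative.

```python
import numpy as np

# Sketch of refining the watermark boundary: bit-error counts
# observed at trials with fractional-bit offsets are fit, in a
# least-squares sense, with a second-order polynomial, and the
# polynomial's minimum approximates the true boundary location.

def refine_boundary(offsets, bit_errors):
    """offsets: trial shifts (fractions of a bit interval);
    bit_errors: mismatch count observed at each shift."""
    a, b, c = np.polyfit(offsets, bit_errors, 2)  # e(t) = a*t^2 + b*t + c
    return -b / (2 * a)                           # vertex of the parabola
```

For example, symmetric error counts of [5, 2, 1, 2, 5] at offsets [-0.50, -0.25, 0, 0.25, 0.50] place the refined boundary at offset 0, between the coarse trial positions.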
One aspect of the disclosed embodiments relates to a method of predicting watermark boundaries that includes extracting multiple bit patterns of the same watermark, where the multiple bit patterns have time offsets from each other that are a fraction of a bit interval. By correlating each extracted bit pattern with a predicted bit pattern, it can be determined which bit pattern produces the highest correlation with the predicted bit pattern, and the position of the bit pattern with the highest correlation can be selected as the predicted watermark position. The selected watermark position in time can also be used to determine the watermark boundary. Another exemplary method of predicting watermark boundaries includes extracting multiple bit patterns of the same watermark, where the multiple bit patterns have time offsets from each other that are a fraction of a bit interval. The method also includes determining bit errors as a function of a shift in bit boundary to determine the position with the least bit error count, and selecting the determined position with the least bit error count as the predicted watermark position in time.
Example Gap Detection Considerations: One of the benefits provided by the disclosed watermarks is the ability to enable a receiver with web access to download additional applications from the Internet to enhance the primary audiovisual content, or to insert or replace interstitial content such as advertisements and promotions which are not the same for all viewers (a.k.a. targeted ads). It is thus desirable to identify certain events that can occur at the receiver device, such as when a user switches to a different channel, pauses the content, or skips ahead or back within the content.
In some scenarios, the rendering device, such as a TV set, may not be aware of the above described, and other, user actions. For example, when a TV set is receiving content from a Set Top Box (STB) or a Personal Video Recorder (PVR) over an HDMI cable, those content sources may not be able to inform the TV set about content transitions. Furthermore, an STB or a PVR may intentionally mask content transitions by synthesizing content (e.g. blank frames and silent audio) and buffering the content from a new source in order to avoid resynchronization and HDCP re-authentication. Therefore, detection of an audio silence interval or a group of blank screens may provide a mechanism for recognizing content transitions.
Yet, audio silence and featureless frames of video may occur in the original content as well. For example, audio can be muted at the source due to noise gate or profanity filters, or in order to signal that it is time for advertisement insertion. Thus, it is desirable to discriminate between gaps that are produced due to user actions and gaps that pre-exist in the content (e.g., gaps that exist prior to embedding of watermarks). One way to achieve this is to use dither embedding. Dither embedding is sometimes used to insert watermarks into sections of the content devoid of significant activity (e.g., silent or quiet intervals, flat areas, etc.), which are not naturally suitable for insertion of watermarks. A dither signal is generally a low amplitude signal that resembles noise and can be shaped so that, when added to a content, it does not produce objectionable or even perceptible artifacts. By modulating the dither signal in a particular manner, different watermark symbol values can be inserted into the sections of the content that are devoid of significant activity, while maintaining imperceptibility of the embedded watermarks. Note that watermark embedding is typically done as part of the content distribution phase (in order to discriminate different distribution channels), and all gaps introduced at the source can be covered by dither embedding.
In the presence of dither embedding, the device can determine whether or not a detected gap is an integral part of the content by predicting the watermark bit string that is expected to be present at the location of the detected gap, and checking whether such a string matches the potential watermark symbols that are detected from the location of the gap (within a few mismatches). If the expected bit string is found, then the gap must have existed in the content prior to embedding. But if the expected string is not found, then the gap was likely created by a subsequent content transition, such as switching from one content to another, or skipping forward or backward within the content.
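The predict-and-compare step can be sketched as follows. This is a minimal illustration: the function names, the bit-string representation and the mismatch tolerance of four symbols are assumptions of the sketch, not parameters of any specific embodiment.

```python
# Illustrative gap classification via watermark prediction (names assumed).

def hamming_distance(a: str, b: str) -> int:
    """Count mismatching symbol positions between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def classify_gap(expected_bits: str, extracted_bits: str, tolerance: int = 4) -> str:
    """Return 'pre-existing' if the dither-embedded watermark predicted for the
    gap location is found within a few mismatches; else 'content-transition'."""
    if hamming_distance(expected_bits, extracted_bits) <= tolerance:
        return "pre-existing"       # gap existed in the content before embedding
    return "content-transition"     # gap likely caused by a user action

# Extracted symbols closely match the prediction -> gap is part of the content.
print(classify_gap("101100111000", "101100111001"))  # pre-existing
```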
Even in the absence of dither embedding and the above-described discrimination between gaps that exist prior to embedding and gaps that are created by content transitions, it is still possible to automatically detect gaps and use this information to, for example, avoid display of secondary content or targeted ads during such gaps. In this scenario, the timing of presentation of interactive content should not be scheduled during or immediately after gaps, if possible.
Example Detector State Change Considerations: A watermark detector output of the disclosed embodiments can be described as moving between three different states: an unmarked content state, a marked content state and a gap state. An unmarked content does not include an embedded watermark; a marked content includes embedded watermarks; and a gap state is indicative of a content that is assumed to have an embedded watermark which cannot be detected due to detection of a gap.
Gap Start, Gap End and Trigger events occur only between Watermark Segment Start and Watermark Segment End events (i.e., during a watermarked segment).
A Watermark Segment Start event is output from the watermark detector when a watermark code is detected in the input primary content which does not have continuity with a previously detected watermark code. Continuity exists when successive watermark codes conform to the watermark segment embedding specification. For example, those watermarks can have the same Server Code, successive Interval Codes, the same trigger bit status, and a watermark code spacing of 1.5 seconds. A Watermark Segment Start event can cause a transition from the Unmarked Content State to the Marked Content State, or a transition from the Marked Content State to the same state when caused by detection of a discontinuous watermark code.
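The continuity test between successive watermark codes can be sketched as below, using the example embedding rules just given (same Server Code, successive Interval Codes, same trigger bit status, 1.5-second spacing). The `WatermarkCode` fields and the timing jitter allowance are assumptions of this sketch.

```python
# Illustrative continuity check between successive watermark codes.
from dataclasses import dataclass

@dataclass
class WatermarkCode:
    server_code: int
    interval_code: int
    trigger: int
    timestamp: float  # seconds on the content timeline

def is_continuous(prev: WatermarkCode, cur: WatermarkCode,
                  spacing: float = 1.5, jitter: float = 0.1) -> bool:
    """True when cur conforms to the example watermark segment embedding
    specification relative to prev; False triggers a Watermark Segment Start."""
    return (cur.server_code == prev.server_code
            and cur.interval_code == prev.interval_code + 1
            and cur.trigger == prev.trigger
            and abs((cur.timestamp - prev.timestamp) - spacing) <= jitter)

a = WatermarkCode(0x1234, 100, 0, 10.0)
b = WatermarkCode(0x1234, 101, 0, 11.5)
print(is_continuous(a, b))  # True -> same watermark segment continues
```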
A Gap Start event is output from the watermark detector when a watermark code is not detected with continuity from the primary content following a previous watermark code. In some embodiments, the Gap Start event is accompanied by a low audio/video condition, a blank interval indication, or other indications of low content activity. A Gap Start event causes a transition from the Marked Content State to the Gap State.
A Gap End event is output from the watermark detector when, following a Gap Start event, a low audio/video condition, a blank interval indication, or other low content activity indications are no longer present, or when a watermark code is detected. A Gap End event causes a transition from the Gap State to the Marked Content State. Examples of content with low activity include an audio segment with audio characteristics below a predetermined threshold, an audio segment that is muted, a blank video frame, a video frame with a blank portion, or a video frame or a section of a video frame with low visual activity. Based on experiments conducted by the inventors, disturbances in the playback of a digital television broadcast, such as a channel change, skip forward or skip back, produce brief intervals of low or zero content activity, such as silence intervals. As noted earlier, in some embodiments dither embedding is used during, or prior to, content distribution to embed watermark messages even in low-activity content sections. In such embodiments, failure to detect watermarks from low-activity sections of a received content is a strong indication that a content interruption due to a user action (e.g., channel change, skip ahead, etc.) has taken place. In some scenarios, detection of such content interruptions causes the associated interactive secondary content to be suspended.
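The low-content-activity indications mentioned above can be approximated with simple signal statistics. The energy and luma-variance thresholds below are illustrative assumptions; real detectors may use different measures and tuned thresholds.

```python
# Illustrative low-activity tests for an audio segment and a video frame.

def is_low_audio(samples, energy_threshold=1e-4):
    """Mean-square energy below a threshold indicates silence or muted audio."""
    if not samples:
        return True
    energy = sum(s * s for s in samples) / len(samples)
    return energy < energy_threshold

def is_blank_frame(pixels, luma_variance_threshold=1.0):
    """A (near-)constant frame has very low luma variance."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return variance < luma_variance_threshold

print(is_low_audio([0.0] * 480))    # True: silent audio interval
print(is_blank_frame([16] * 1024))  # True: uniform (blank) frame
```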
A Watermark Segment End event is output when the watermark detector determines that a watermark code cannot be detected with continuity in the primary content following a previous Watermark Segment Start event and a low audio/video condition or a blank interval indication is not present. A Watermark Segment End event is only output based on a failure to detect a continuous watermark code; it is not output when a discontinuous watermark code is detected (in this case, a Watermark Segment Start event is output). A Watermark Segment End event causes a transition from the Marked Content State to an Unmarked Content State.
A Trigger event is output from the watermark detector when the value of the Trigger field of a watermark code is determined to have changed between consecutive watermark codes in a watermark segment. When a Trigger event occurs, the watermark detector outputs the watermark code, and the timing information associated with the detected watermark (e.g., content timeline at which the trigger event occurred, starting boundary of the watermark payload carrying an instance of a watermark code, etc.).
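The Trigger-field comparison itself reduces to a one-field check between consecutive codes; representing a decoded watermark code as a dictionary with a `trigger` key is an assumption of this sketch.

```python
def trigger_event(prev_code: dict, cur_code: dict) -> bool:
    """Signal a Trigger event when the Trigger field value changes between
    consecutive watermark codes in a watermark segment (field name assumed)."""
    return cur_code["trigger"] != prev_code["trigger"]

print(trigger_event({"trigger": 0}, {"trigger": 1}))  # True -> Trigger event
```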
Referring again to
Further, when low content activity (e.g., low audio) is detected and predicted packet fragments have too many errors, a Gap Start event is signaled, signifying a likely content transition, and the detector moves to the Gap State. Finally, in some embodiments, when no watermarks are found over a predefined time interval of T seconds (including continuously failing prediction attempts), the Watermark Segment End event is signaled, signifying that the content has been switched and all interactivity should be canceled.
From the Gap State, a transition to the Marked Content State is possible when a watermark is detected or watermark prediction succeeds (e.g., the mismatch between predicted and extracted bit patterns is below a threshold). Also, when watermark detection fails but high-activity content (e.g., increased audio energy) is found before a predefined time interval of T seconds expires, the detector exits the Gap State and transitions to the Marked Content State, as part of a Gap End event. When watermark detection fails over the predefined time interval, the detector signals a Watermark Segment End event, signifying that all interactivity should be canceled upon transitioning from the Gap State to the Unmarked Content State.
The above-described state machine architecture enables systematic access to metadata at a remote server based on the state of the watermark detector and the particular events that cause transitions to different states. In one embodiment, a watermark detector state machine includes a watermark detector having three states: an unmarked content state, a marked content state and a gap state. In the unmarked content state, the detection of a watermark triggers a watermark segment start event that causes the watermark detector to query a web server to access metadata and to transition to the marked content state. In the marked content state, the detection of a trigger flag causes no change in the state but causes a query to a web server to access metadata. The detection of a discontinuous watermark code causes the detector to remain in the marked content state, but causes a query to a web server to access metadata. The detection of low content activity signals a gap start event and a detector transition to the gap state, while the detection of no watermark over a predefined time interval signals a watermark segment end event and a detector transition to the unmarked content state. In the gap state, the detection of a watermark triggers a gap end event and a transition to the marked content state.
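The three-state machine described above can be sketched as an event-driven transition table. State and event names follow the text; the table encoding, the timeout-driven exit from the gap state, and the set of events that trigger a server query are modeling assumptions of this sketch.

```python
# Illustrative three-state watermark detector state machine.

UNMARKED, MARKED, GAP = "unmarked", "marked", "gap"

TRANSITIONS = {
    (UNMARKED, "watermark_segment_start"): MARKED,
    (MARKED,   "watermark_segment_start"): MARKED,    # discontinuous code
    (MARKED,   "trigger"):                 MARKED,    # stay, query server
    (MARKED,   "gap_start"):               GAP,
    (MARKED,   "watermark_segment_end"):   UNMARKED,
    (GAP,      "gap_end"):                 MARKED,
    (GAP,      "watermark_segment_end"):   UNMARKED,  # timeout T expired
}

# Events that also cause a metadata query to the remote server.
QUERY_EVENTS = {"watermark_segment_start", "trigger"}

def step(state: str, event: str):
    """Return (new_state, query_server) for a detector event; unknown
    state/event pairs leave the state unchanged."""
    new_state = TRANSITIONS.get((state, event), state)
    return new_state, event in QUERY_EVENTS

state = UNMARKED
state, query = step(state, "watermark_segment_start")
print(state, query)  # marked True
```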
Before turning to detailed discussions of
The following scenarios illustrate how the secondary content presentation can be affected based on the type of content interruption. When a user decides to access the program guide while viewing a particular content, part of the main video program can be obstructed by the program guide information that is displayed as an overlay, while the audio content remains uninterrupted. In some implementations, the program guide and the main video program are presented in a picture-in-picture (PIP) format, in two separate windows, which typically causes the main window to shrink. In another scenario, an interruption in the audio playback can occur when a user mutes the audio while viewing the main program uninterrupted. When a secondary content is also being presented in association with the main content, such interruptions may necessitate different changes to the presentation of the secondary content, as well. For example, when a program guide is being viewed, presentation of the secondary content may need to be paused, or presented as a semi-transparent overlay, in order to allow proper viewing of the program guide. On the other hand, when a program is muted, the presentation of a secondary content may continue uninterrupted.
As an initial step in enabling the proper presentation of the secondary content, various transitions from one state to another caused by gaps or by detection of a change in watermark values must be quickly recognized. As illustrated by the above examples, identification of the type of gaps (or of the actions that caused such gaps) can further trigger acquisition of metadata and/or new or additional secondary content, modify the use of the metadata, or cause the presentation of an existing secondary content to be modified.
Referring now to
It is evident from the examples of
One aspect of the disclosed technology relates to a method of distinguishing between pre-existing gaps and gaps caused by user actions in content received by a receiver. The method includes embedding watermarks into the content using a dither signal such that gaps in the content will include the dither signal, and detecting a watermark in a portion of the content having a gap, the content being in a state subsequent to a possible gap-generating user action. This method also includes predicting an expected watermark bit stream in the gap portion, comparing the detected bit stream with the predicted bit stream to determine if the detected bit stream matches the predicted bit stream, and, if a match within a predetermined error tolerance exists, indicating that the gap existed prior to any user action.
In the above method, the watermark extractor transitions from one state to another state upon occurrence of an event that is based on one or more of: (1) failure to detect a watermark message, or part thereof, subsequent to detection of at least one watermark message, (2) detection of a watermark message, or part thereof, subsequent to a failure to detect at least one watermark message, (3) detection of a section of the primary content with low activity, or (4) failure to detect embedded watermark messages for a predetermined interval of time. Referring back to
In one exemplary embodiment, modifying the use of the metadata includes modifying presentation of a secondary content that is associated with the primary content. In some embodiments, the information included in the one or more of the plurality of watermark messages includes a server code and an interval code. In one exemplary embodiment, upon initialization of the device or the watermark extractor, the watermark extractor enters the unmarked content state. In another exemplary embodiment, the watermark extractor remains in the marked content state upon occurrence of an event that is based on detection of a change in value of a watermark message extracted from the primary content in comparison with a value of a previously detected watermark message from the primary content. In yet another exemplary embodiment, the third section of the primary content has low content activity. For example, such a content section with low activity can have amplitude or energy values that are zero or are below corresponding predefined threshold values. In still another exemplary embodiment, while the watermark extractor is in the unmarked content state, detection of a watermark message signals a watermark start segment event that causes the watermark extractor to transition to the marked content state.
In another exemplary embodiment, at least one of the plurality of watermark messages extracted by the watermark extractor includes a trigger field that indicates a change in the metadata associated with the primary content, and, while the watermark extractor is in the marked content state, detection of a watermark message having the trigger field with a particular value or status signals a trigger event that causes the watermark extractor to remain in the marked content state. According to one embodiment, while the watermark extractor is in the marked content state, detection of a watermark message with a value that does not conform to an expected value signals a watermark start segment event that causes the watermark extractor to remain in the marked content state. For example, the expected value can be determined based on a value of a previously detected watermark message. In some embodiments, the value of the watermark message that does not conform to the expected value includes an interval code value that does not conform to an expected change in an interval code value of a previously detected watermark message. In some embodiments, the value of the watermark message that does not conform to the expected value includes a server code value that is different from a previously detected server code value.
According to one exemplary embodiment, while the watermark extractor is in the marked content state, failure to detect a watermark message that is accompanied by an indication of low content activity signals a gap start event that causes the watermark extractor to transition to the gap state. In some embodiments, the indication of low content activity is obtained by processing the primary content to detect amplitude or energy values associated with the primary content that are below corresponding predefined threshold values. In another exemplary embodiment, while the watermark extractor is in the marked content state, failure to detect a watermark message that is not accompanied by an indication of low content activity signals a watermark segment end event that causes the watermark extractor to transition to the unmarked content state.
In another exemplary embodiment, while the watermark extractor is in the gap state, failure to detect a watermark message that is not accompanied by an indication of low content activity signals a gap end event that causes the watermark extractor to transition to the marked content state. In yet another exemplary embodiment, while the watermark extractor is in the gap state, detection of a watermark message signals a gap end event that causes the watermark extractor to transition to the marked content state. According to another exemplary embodiment, while the watermark extractor is in the gap state, failure to detect a watermark message for a predetermined period of time signals a watermark segment end event that causes the watermark extractor to transition to the unmarked content state.
In some embodiments, the above-noted method for enabling access to metadata also includes identifying a particular interruption in presentation of the primary content based on occurrence of one or more events and one or more states of the watermark extractor. In one exemplary embodiment, the particular interruption is a change in a source of the primary content that is identified by a first transition of the watermark extractor from the marked content state to the gap state, followed by a second transition from the gap state to the marked content state and a third transition from the marked content state to the unmarked content state. In another exemplary embodiment, the particular interruption is a pause in presentation of the primary content that is identified by a first transition of the watermark extractor from the marked content state to the gap state, followed by a second transition from the gap state to the marked content state, a third transition from the marked content state to the unmarked content state and a fourth transition from the unmarked content state to the marked content state. In this embodiment, the fourth transition is based on detection of a watermark value that is expected to occur immediately prior to the first transition.
According to another exemplary embodiment, the particular interruption is a skip forward to another section of the primary content that is identified by a watermark value, while the watermark extractor is in the marked content state, that is expected to occur at a future location within the primary content. In yet another exemplary embodiment, the particular interruption is a skip back to another section of the primary content that is identified by a watermark value, while the watermark extractor is in the marked content state, that is expected to occur at a previous location within the primary content. In still another exemplary embodiment, while the watermark extractor is in the marked content state, detection of a watermark message based on a watermark prediction technique that is accompanied by detection of the section of the primary content with low activity causes the watermark extractor to remain in the marked content state.
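The transition-pattern matching used to identify particular interruptions can be sketched as below for two of the cases discussed above (source change and pause). Encoding transitions as (from-state, to-state) pairs is an assumption of the sketch; a full implementation would also verify the resumed watermark value for the pause case.

```python
# Illustrative interruption classification from recent state transitions.

def classify_interruption(transitions):
    """transitions: list of (from_state, to_state) pairs, oldest first."""
    source_change = [("marked", "gap"), ("gap", "marked"), ("marked", "unmarked")]
    # Pause adds a fourth transition back to the marked content state, where
    # the watermark value expected just before the gap is detected again.
    pause = source_change + [("unmarked", "marked")]
    if transitions[-4:] == pause:
        return "pause"
    if transitions[-3:] == source_change:
        return "source-change"
    return "unknown"

print(classify_interruption(
    [("marked", "gap"), ("gap", "marked"), ("marked", "unmarked")]))  # source-change
```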
Another aspect of the disclosed embodiments relates to a computer program product, embodied on one or more non-transitory computer readable media, that includes program code for processing digital samples of a primary content using a watermark extractor that is implemented at least partially in electronic circuits to extract a plurality of watermark messages from the primary content and to produce an indication as to a state of the watermark extractor. The one or more of the plurality of watermark messages include information that identifies a resource on a remote server to retrieve metadata associated with a section of the primary content, and the state of the watermark extractor includes one of the following states: (a) an unmarked content state indicating that at least a first section of the primary content that is processed by the watermark extractor does not include detected watermark messages, (b) a marked content state indicating that at least a second section of the primary content that is processed by the watermark extractor includes one or more embedded watermark messages or parts thereof, or (c) a gap state indicating that at least a third section of the primary content that is processed by the watermark extractor immediately subsequent to the second section of the primary content does not include watermark messages or parts thereof. The watermark extractor transitions from one state to another state upon occurrence of an event that is based on one or more of: (1) failure to detect a watermark message, or part thereof, subsequent to detection of at least one watermark message, (2) detection of a watermark message, or part thereof, subsequent to a failure to detect at least one watermark message, (3) detection of a section of the primary content with low activity, or (4) failure to detect embedded watermark messages for a predetermined interval of time.
The above noted computer program product further includes program code for using the one or more events or pattern of events to retrieve the metadata or modify a use of the metadata.
The components or modules that are described in connection with the disclosed embodiments can be implemented as hardware, software, or combinations thereof. For example, a hardware implementation can include discrete analog and/or digital components that are, for example, integrated as part of a printed circuit board. Alternatively, or additionally, the disclosed components or modules can be implemented as an Application Specific Integrated Circuit (ASIC) and/or as a Field Programmable Gate Array (FPGA) device. Some implementations may additionally or alternatively include a digital signal processor (DSP) that is a specialized microprocessor with an architecture optimized for the operational needs of digital signal processing associated with the disclosed functionalities of this application.
Various embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), Blu-ray Discs, etc. Therefore, the computer-readable media described in the present application include non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
For example, one aspect of the disclosed embodiments relates to a computer program product that is embodied on a non-transitory computer readable medium. The computer program product includes program code for carrying out any one and/or all of the operations of the disclosed embodiments.
The foregoing description of embodiments has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit embodiments of the present invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. The embodiments discussed herein were chosen and described in order to explain the principles and the nature of various embodiments and their practical application to enable one skilled in the art to utilize the present invention in various embodiments and with various modifications as are suited to the particular use contemplated. The features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products, as well as in different sequential orders. Any embodiment may further be combined with any other embodiment.
This application claims the benefit of priority of U.S. Provisional Patent Application No. 62/084,465, filed Nov. 25, 2014, and U.S. Provisional Patent Application No. 62/093,996, filed Dec. 18, 2014, the entire contents of which are incorporated by reference as part of the disclosure of this document.
Number | Name | Date | Kind |
---|---|---|---|
6122610 | Isabelle | Sep 2000 | A |
6145081 | Winograd et al. | Nov 2000 | A |
6175627 | Petrovic et al. | Jan 2001 | B1 |
6411725 | Rhoads et al. | Jun 2002 | B1 |
6427012 | Petrovic | Jul 2002 | B1 |
6430301 | Petrovic | Aug 2002 | B1 |
6490579 | Gao et al. | Dec 2002 | B1 |
6577747 | Kalker et al. | Jun 2003 | B1 |
6683958 | Petrovic | Jan 2004 | B2 |
6721439 | Levy et al. | Apr 2004 | B1 |
6792542 | Lee et al. | Sep 2004 | B1 |
6839673 | Choi et al. | Jan 2005 | B1 |
6888943 | Lam et al. | May 2005 | B1 |
6931536 | Hollar | Aug 2005 | B2 |
7024018 | Petrovic | Apr 2006 | B2 |
7140043 | Choi et al. | Nov 2006 | B2 |
7159118 | Petrovic | Jan 2007 | B2 |
7224819 | Levy et al. | May 2007 | B2 |
7343397 | Kochanski | Mar 2008 | B2 |
7460667 | Lee et al. | Dec 2008 | B2 |
7533266 | Bruekers et al. | May 2009 | B2 |
7548565 | Sull et al. | Jun 2009 | B2 |
7707422 | Shin et al. | Apr 2010 | B2 |
7774834 | Chauhan et al. | Aug 2010 | B1 |
7779271 | Langelaar | Aug 2010 | B2 |
7983922 | Neusinger et al. | Jul 2011 | B2 |
7986806 | Rhoads | Jul 2011 | B2 |
7991995 | Rabin et al. | Aug 2011 | B2 |
8005258 | Petrovic et al. | Aug 2011 | B2 |
8015410 | Pelly et al. | Sep 2011 | B2 |
8055013 | Levy et al. | Nov 2011 | B2 |
8059815 | Lofgren et al. | Nov 2011 | B2 |
8059858 | Brundage et al. | Nov 2011 | B2 |
8081757 | Voessing et al. | Dec 2011 | B2 |
8085935 | Petrovic | Dec 2011 | B2 |
8103049 | Petrovic et al. | Jan 2012 | B2 |
8138930 | Heath | Mar 2012 | B1 |
8151113 | Rhoads | Apr 2012 | B2 |
8181262 | Cooper et al. | May 2012 | B2 |
8189861 | Rucklidge | May 2012 | B1 |
8194803 | Baum et al. | Jun 2012 | B2 |
8249992 | Harkness et al. | Aug 2012 | B2 |
8259873 | Baum et al. | Sep 2012 | B2 |
8280103 | Petrovic et al. | Oct 2012 | B2 |
8301893 | Brundage | Oct 2012 | B2 |
8315835 | Tian et al. | Nov 2012 | B2 |
8321679 | Petrovic et al. | Nov 2012 | B2 |
8340348 | Petrovic et al. | Dec 2012 | B2 |
8346532 | Chakra et al. | Jan 2013 | B2 |
8346567 | Petrovic et al. | Jan 2013 | B2 |
8467717 | Croy et al. | Jun 2013 | B2 |
8479225 | Covell et al. | Jul 2013 | B2 |
8483136 | Yuk et al. | Jul 2013 | B2 |
8533481 | Petrovic et al. | Sep 2013 | B2 |
8538066 | Petrovic et al. | Sep 2013 | B2 |
8560604 | Shribman et al. | Oct 2013 | B2 |
8588459 | Bloom et al. | Nov 2013 | B2 |
8589969 | Falcon | Nov 2013 | B2 |
8601504 | Stone et al. | Dec 2013 | B2 |
8615104 | Petrovic et al. | Dec 2013 | B2 |
8666528 | Harkness et al. | Mar 2014 | B2 |
8682026 | Petrovic et al. | Mar 2014 | B2 |
8726304 | Petrovic et al. | May 2014 | B2 |
8745403 | Petrovic | Jun 2014 | B2 |
8768714 | Blesser | Jul 2014 | B1 |
8781967 | Tehranchi et al. | Jul 2014 | B2 |
8791789 | Petrovic et al. | Jul 2014 | B2 |
8806517 | Petrovic et al. | Aug 2014 | B2 |
8811655 | Petrovic et al. | Aug 2014 | B2 |
8825518 | Levy | Sep 2014 | B2 |
8838977 | Winograd et al. | Sep 2014 | B2 |
8838978 | Winograd et al. | Sep 2014 | B2 |
8869222 | Winograd et al. | Oct 2014 | B2 |
8898720 | Eyer | Nov 2014 | B2 |
8923548 | Petrovic et al. | Dec 2014 | B2 |
8959202 | Haitsma et al. | Feb 2015 | B2 |
8990663 | Liu et al. | Mar 2015 | B2 |
9009482 | Winograd | Apr 2015 | B2 |
9042598 | Ramaswamy et al. | May 2015 | B2 |
9055239 | Tehranchi et al. | Jun 2015 | B2 |
9106964 | Zhao | Aug 2015 | B2 |
9117270 | Wong et al. | Aug 2015 | B2 |
9147402 | Chen et al. | Sep 2015 | B2 |
9277183 | Eyer | Mar 2016 | B2 |
9596521 | Winograd et al. | Mar 2017 | B2 |
9602891 | Winograd et al. | Mar 2017 | B2 |
9607131 | Winograd et al. | Mar 2017 | B2 |
20020032864 | Rhoads et al. | Mar 2002 | A1 |
20020059622 | Grove et al. | May 2002 | A1 |
20020078233 | Biliris et al. | Jun 2002 | A1 |
20020138695 | Beardsley et al. | Sep 2002 | A1 |
20030012403 | Rhoads et al. | Jan 2003 | A1 |
20030055979 | Cooley | Mar 2003 | A1 |
20030084294 | Aoshima et al. | May 2003 | A1 |
20030193616 | Baker et al. | Oct 2003 | A1 |
20030228030 | Wendt | Dec 2003 | A1 |
20040039914 | Barr et al. | Feb 2004 | A1 |
20040101160 | Kunisa | May 2004 | A1 |
20040250080 | Levy et al. | Dec 2004 | A1 |
20050182792 | Israel et al. | Aug 2005 | A1 |
20060047704 | Gopalakrishnan | Mar 2006 | A1 |
20060053292 | Langelaar | Mar 2006 | A1 |
20060083242 | Pulkkinen | Apr 2006 | A1 |
20060115108 | Rodriguez et al. | Jun 2006 | A1 |
20060239501 | Petrovic et al. | Oct 2006 | A1 |
20070003103 | Lemma et al. | Jan 2007 | A1 |
20070039018 | Saslow et al. | Feb 2007 | A1 |
20070071037 | Abraham et al. | Mar 2007 | A1 |
20070135084 | Ido et al. | Jun 2007 | A1 |
20070250560 | Wein et al. | Oct 2007 | A1 |
20080037825 | Lofgren et al. | Feb 2008 | A1 |
20080263612 | Cooper | Oct 2008 | A1 |
20080297654 | Verberkt et al. | Dec 2008 | A1 |
20080301304 | Chitsaz et al. | Dec 2008 | A1 |
20090060055 | Blanchard et al. | Mar 2009 | A1 |
20090089078 | Bursey | Apr 2009 | A1 |
20090158318 | Levy | Jun 2009 | A1 |
20090319639 | Gao et al. | Dec 2009 | A1 |
20100023489 | Miyata et al. | Jan 2010 | A1 |
20100054531 | Kogure et al. | Mar 2010 | A1 |
20100063978 | Lee et al. | Mar 2010 | A1 |
20100097494 | Gum et al. | Apr 2010 | A1 |
20100111355 | Petrovic et al. | May 2010 | A1 |
20100131461 | Prahlad et al. | May 2010 | A1 |
20100172540 | Davis et al. | Jul 2010 | A1 |
20100174608 | Harkness et al. | Jul 2010 | A1 |
20100281142 | Stoyanov | Nov 2010 | A1 |
20110058188 | Guo et al. | Mar 2011 | A1 |
20110088075 | Eyer | Apr 2011 | A1 |
20110103444 | Baum et al. | May 2011 | A1 |
20110161086 | Rodriguez | Jun 2011 | A1 |
20110164784 | Grill et al. | Jul 2011 | A1 |
20110188700 | Kim et al. | Aug 2011 | A1 |
20110261667 | Ren et al. | Oct 2011 | A1 |
20110281574 | Patel et al. | Nov 2011 | A1 |
20110286625 | Petrovic et al. | Nov 2011 | A1 |
20110293090 | Ayaki et al. | Dec 2011 | A1 |
20110307545 | Bouazizi | Dec 2011 | A1 |
20110320627 | Landow et al. | Dec 2011 | A1 |
20120023595 | Speare et al. | Jan 2012 | A1 |
20120063635 | Matsushita et al. | Mar 2012 | A1 |
20120072731 | Winograd et al. | Mar 2012 | A1 |
20120102304 | Brave | Apr 2012 | A1 |
20120113230 | Jin | May 2012 | A1 |
20120117031 | Cha et al. | May 2012 | A1 |
20120122429 | Wood et al. | May 2012 | A1 |
20120129547 | Andrews, III et al. | May 2012 | A1 |
20120203556 | Villette et al. | Aug 2012 | A1 |
20120203734 | Spivack et al. | Aug 2012 | A1 |
20120216236 | Robinson et al. | Aug 2012 | A1 |
20120265735 | McMillan et al. | Oct 2012 | A1 |
20120272012 | Aronovich et al. | Oct 2012 | A1 |
20120272327 | Shin et al. | Oct 2012 | A1 |
20120300975 | Chalamala et al. | Nov 2012 | A1 |
20120304206 | Roberts et al. | Nov 2012 | A1 |
20120308071 | Ramsdell et al. | Dec 2012 | A1 |
20130007462 | Petrovic et al. | Jan 2013 | A1 |
20130024894 | Eyer | Jan 2013 | A1 |
20130031579 | Klappert | Jan 2013 | A1 |
20130060837 | Chakraborty et al. | Mar 2013 | A1 |
20130073065 | Chen et al. | Mar 2013 | A1 |
20130114848 | Petrovic et al. | May 2013 | A1 |
20130117571 | Petrovic et al. | May 2013 | A1 |
20130129303 | Lee et al. | May 2013 | A1 |
20130151855 | Petrovic et al. | Jun 2013 | A1 |
20130151856 | Petrovic et al. | Jun 2013 | A1 |
20130152210 | Petrovic et al. | Jun 2013 | A1 |
20130171926 | Perret et al. | Jul 2013 | A1 |
20130188923 | Hartley et al. | Jul 2013 | A1 |
20130227293 | Leddy et al. | Aug 2013 | A1 |
20130271657 | Park et al. | Oct 2013 | A1 |
20140037132 | Heen et al. | Feb 2014 | A1 |
20140047475 | Oh et al. | Feb 2014 | A1 |
20140059116 | Oh et al. | Feb 2014 | A1 |
20140059591 | Terpstra et al. | Feb 2014 | A1 |
20140067950 | Winograd | Mar 2014 | A1 |
20140068686 | Oh et al. | Mar 2014 | A1 |
20140074855 | Zhao et al. | Mar 2014 | A1 |
20140075465 | Petrovic et al. | Mar 2014 | A1 |
20140075469 | Zhao | Mar 2014 | A1 |
20140114456 | Stavropoulos et al. | Apr 2014 | A1 |
20140115644 | Kim et al. | Apr 2014 | A1 |
20140130087 | Cho et al. | May 2014 | A1 |
20140142958 | Sharma et al. | May 2014 | A1 |
20140149395 | Nakamura et al. | May 2014 | A1 |
20140196071 | Terpstra et al. | Jul 2014 | A1 |
20140219495 | Hua | Aug 2014 | A1 |
20140267907 | Downes et al. | Sep 2014 | A1 |
20140270337 | Zhao et al. | Sep 2014 | A1 |
20140279549 | Petrovic et al. | Sep 2014 | A1 |
20140325550 | Winograd et al. | Oct 2014 | A1 |
20140325673 | Petrovic | Oct 2014 | A1 |
20150030200 | Petrovic et al. | Jan 2015 | A1 |
20150043728 | Kim et al. | Feb 2015 | A1 |
20150043768 | Breebaart | Feb 2015 | A1 |
20150063659 | Poder et al. | Mar 2015 | A1 |
20150093016 | Jiang et al. | Apr 2015 | A1 |
20150121534 | Zhao et al. | Apr 2015 | A1 |
20150170661 | Srinivasan | Jun 2015 | A1 |
20150229979 | Wood et al. | Aug 2015 | A1 |
20150261753 | Winograd et al. | Sep 2015 | A1 |
20150264429 | Winograd et al. | Sep 2015 | A1 |
20150324947 | Winograd et al. | Nov 2015 | A1 |
20150340045 | Hardwick et al. | Nov 2015 | A1 |
20160037189 | Holden et al. | Feb 2016 | A1 |
20160055606 | Petrovic et al. | Feb 2016 | A1 |
20160055607 | Petrovic et al. | Feb 2016 | A1 |
20160057317 | Zhao et al. | Feb 2016 | A1 |
20160150297 | Petrovic et al. | May 2016 | A1 |
20160182973 | Winograd et al. | Jun 2016 | A1 |
20160241932 | Winograd et al. | Aug 2016 | A1 |
20170272839 | Winograd et al. | Sep 2017 | A1 |
20170280205 | Winograd et al. | Sep 2017 | A1 |
Number | Date | Country |
---|---|---|
1474924 | Nov 2004 | EP |
2439735 | Apr 2012 | EP |
2489181 | Aug 2012 | EP |
2899720 | Jul 2015 | EP |
2004163855 | Jun 2004 | JP |
2004173237 | Jun 2004 | JP |
2004193843 | Jul 2004 | JP |
2004194233 | Jul 2004 | JP |
2004328747 | Nov 2004 | JP |
2005051733 | Feb 2005 | JP |
2005094107 | Apr 2005 | JP |
2005525600 | Aug 2005 | JP |
2010272920 | Dec 2010 | JP
1020080087047 | Sep 2008 | KR |
20100009384 | Jan 2010 | KR |
10201016712 | Feb 2011 | KR |
20120083903 | Jul 2012 | KR |
1020120128149 | Nov 2012 | KR |
20130074922 | Jul 2013 | KR |
20130078663 | Jul 2013 | KR |
101352917 | Jan 2014 | KR |
10201424049 | Jul 2014 | KR |
00059148 | Oct 2000 | WO |
2005017827 | Feb 2005 | WO |
2005038778 | Apr 2005 | WO |
2006051043 | May 2006 | WO |
2008045880 | Apr 2008 | WO |
2009031082 | Mar 2009 | WO |
2010073236 | Jul 2010 | WO |
2010135687 | Nov 2010 | WO |
2011046590 | Apr 2011 | WO |
2011116309 | Sep 2011 | WO |
2012177126 | Dec 2012 | WO |
2012177874 | Dec 2012 | WO |
2013025035 | Feb 2013 | WO |
2013163921 | Nov 2013 | WO |
2014014252 | Jan 2014 | WO |
2015138798 | Sep 2015 | WO |
2015168697 | Nov 2015 | WO |
2015174086 | Nov 2015 | WO |
2016028934 | Feb 2016 | WO |
2016028936 | Feb 2016 | WO |
2016029055 | Feb 2016 | WO |
2016086047 | Jun 2016 | WO |
Entry |
---|
International Search Report and Written Opinion dated Jan. 21, 2016 for International Application No. PCT/US2015/046166, filed Aug. 20, 2015 (8 pages). |
International Search Report and Written Opinion dated Apr. 12, 2016 for International Application No. PCT/US2015/066872, filed Dec. 18, 2015 (7 pages). |
Office Action dated Jun. 10, 2016 for Korean Patent Application No. 10-2016-7002291 (19 pages). |
Office Action dated Jul. 28, 2016 for Korean Patent Application No. 10-2016-7002289 (11 pages). |
Office Action dated Nov. 30, 2016 for Korean Patent Application No. 10-2016-7002289 (4 pages). |
“ATSC-3.0 Automatic Content Recognition Watermarking Solutions,” ATSC Technology Group, Advanced Television Systems Committee, Inc., Jan. 2014 (6 pages). |
Aris Technologies, Inc. “Audio Watermarking System to Screen Digital Audio Content for LCM Acceptance,” May 1999 (17 pages). |
Bangaleea, R., et al., “Performance improvement of spread spectrum spatial-domain watermarking scheme through diversity and attack characterisation,” IEEE Africon, pp. 293-298, 2002. |
Hartung, F., et al., “Watermarking of MPEG-2 encoded video without decoding and re-coding,” Proc. SPIE Multimedia Computing and Networking 97, 3020:264-274, Feb. 1997. |
Hartung, F., et al., “Watermarking of uncompressed and compressed video,” Signal Processing, 3(66):283-301, May 1998. |
International Search Report and Written Opinion dated Aug. 13, 2015 for International Application No. PCT/US2015/029097, filed May 4, 2015 (14 pages). |
International Search Report and Written Opinion dated Dec. 7, 2015 for International Application No. PCT/US2015/045960, filed Aug. 19, 2015 (14 pages). |
International Search Report and Written Opinion dated Jan. 28, 2016 for International Application No. PCT/US2015/045964, filed Aug. 19, 2015 (8 pages). |
International Search Report and Written Opinion dated May 28, 2015 for International Application No. PCT/US2015/020282, filed Mar. 12, 2015 (7 pages). |
Kalker, T., et al., “System issues in digital image and video watermarking for copy protection,” Proc. IEEE Int. Conf. on Multimedia Computing and Systems, pp. 562-567, Jun. 1999. |
Kirovski, D., et al., “Multimedia content screening using a dual watermarking and fingerprinting system,” Multimedia '02 Proceedings of the tenth ACM international conference on Multimedia, 2002 (11 pages). |
Verance Corporation, “Confirmedia,” PowerPoint presentation made to National Association of Broadcasters, Apr. 24, 2001 (40 pages). |
Zhao, J., “A WWW service to embed and prove digital copyright watermarks,” Proc. European Conf. on Multimedia Applications, Services and Techniques (ECMAST'96), May 1996 (15 pages). |
Zhao, J., “Applying digital watermarking techniques to online multimedia commerce,” Proc. Int. Conf. on Imaging Science, Systems and Applications (CISSA'97), Jun./Jul. 1997 (7 pages). |
International Search Report and Written Opinion dated Mar. 15, 2016 for International Application No. PCT/US2015/062514, filed Nov. 24, 2015 (10 pages). |
Extended European Search Report dated Sep. 21, 2017 for European Application No. 15762332.3 (9 pages). |
Furon, T., “A constructive and unifying framework for zero-bit watermarking,” cs.MM, Jan. 12, 2007. |
Extended European Search Report dated Nov. 21, 2017 for European Application No. 15785628.7 (7 pages). |
Number | Date | Country | |
---|---|---|---|
20160148334 A1 | May 2016 | US |
Number | Date | Country | |
---|---|---|---|
62084465 | Nov 2014 | US | |
62093996 | Dec 2014 | US |