This disclosure relates generally to audio watermarking and, more particularly, to methods and apparatus for audio watermarking a substantially silent media content presentation.
Audio watermarking is a common technique used to identify media content, such as television broadcasts, radio broadcasts, downloaded media content, streaming media content, prepackaged media content, etc., presented to a media consumer. Existing audio watermarking techniques identify media content by embedding an audio watermark, such as identifying information or a code signal, into an audible audio component having a signal level sufficient to hide the audio watermark. However, many media content presentations of interest do not include an audio component into which an audio watermark can be embedded, or may be presented with their audio muted or attenuated to a level near or below that perceivable by an average person and, thus, insufficient to hide an audio watermark.
Methods and apparatus for audio watermarking a substantially silent media content presentation are disclosed herein. Although the following discloses example methods and apparatus including, among other components, software executed on hardware, it should be noted that such methods and apparatus are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of these hardware and software components could be implemented exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, while the following describes example methods and apparatus, persons having ordinary skill in the art will readily appreciate that the examples provided are not the only way to implement such methods and apparatus.
As described herein, a media content presentation, including single and multimedia content presentations, includes one or more content components (also referred to more succinctly as components) that, when combined, form the resulting media content presentation. For example, a media content presentation can include a video content component and an audio content component. Additionally, each of the video content component and the audio content component can include multiple content components. For example, a media content presentation in the form of a graphical user interface (GUI) includes multiple video content components (and possibly one or more audio content components), with each video content component corresponding to a respective GUI widget (e.g., such as a window/screen, menu, text box, embedded advertisement, etc.) capable of being presented by the GUI. As another example, a video game can include multiple video content components, such as background graphic components, foreground graphic components, characters/sprites, notification overlays, etc., as well as multiple audio content components, such as multiple special effects and/or music tracks, that are selectably presented based on the current game play context.
As described herein, a media content presentation, or a content component of a media content presentation, is considered substantially silent if, for example, it does not include an audio component, or it includes one or more audio components that have been muted or attenuated to a level near or below the auditory threshold of the average person, or near or below the ambient or background audio noise level of the environment in which the media content is being presented. For example, a GUI presented by a media presenting device can present different GUI widgets, and possibly embedded advertisements, that do not have audio components and, thus, are substantially silent. As another example, in the context of a video game presentation, a game console may present game content that is silent (or substantially silent) depending on the context of the game as it is played by a user.
As described in greater detail below, an example disclosed technique to audio watermark a media content presentation involves obtaining a watermarked noise signal containing a watermark and a noise signal having energy substantially concentrated in an audible frequency band. Unlike conventional audio watermarking techniques, in the example disclosed technique the watermarked noise signal is attenuated to be substantially inaudible without being embedded (e.g., hidden) in a separate audio signal making up the media content presentation. Additionally, the example disclosed technique involves associating the watermarked noise signal with a substantially silent content component of the media content presentation. As discussed above, a media content presentation typically includes one or more media content components, and the example technique associates the watermarked noise signal with a content component that is substantially silent. Furthermore, the example technique involves outputting the watermarked noise signal during presentation of the substantially silent content component to thereby watermark the substantially silent content component making up the media content presentation.
In at least some example implementations, the noise signal used to form the watermarked noise signal is generated by filtering a white noise signal or a pseudorandom noise signal with a bandpass filter having a passband corresponding to a desired audible frequency band. The result is a filtered noise signal, also referred to as a pink noise signal. Additionally, in at least some example implementations, the watermark is an amplitude and/or frequency modulated signal whose modulation conveys digital information identifying the substantially silent content component that is to be watermarked.
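The noise-filtering step described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the sampling rate, passband, and frequency-domain (brick-wall) filtering are assumptions chosen for clarity; a practical implementation might instead use an IIR or FIR bandpass filter.

```python
import numpy as np

def make_filtered_noise(fs=48_000, duration=1.0, band=(300.0, 3_000.0), seed=0):
    """Generate pseudorandom white noise, then bandpass it so its energy is
    concentrated in an audible frequency band (illustrative parameters)."""
    rng = np.random.default_rng(seed)                 # pseudorandom noise source
    white = rng.standard_normal(int(fs * duration))   # white noise signal
    spec = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(white.size, d=1 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0   # brick-wall passband
    return np.fft.irfft(spec, n=white.size)

noise = make_filtered_noise()
# Nearly all of the energy should now lie inside the passband.
spectrum = np.abs(np.fft.rfft(noise)) ** 2
freqs = np.fft.rfftfreq(noise.size, d=1 / 48_000)
fraction_in_band = spectrum[(freqs >= 300) & (freqs <= 3_000)].sum() / spectrum.sum()
```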
As mentioned above, to identify media content, conventional audio watermarking techniques rely on an audio component of the media content having sufficient signal strength (e.g., audio level) to hide an embedded watermark such that the watermark is inaudible to a person perceiving the media content, but is detectable by a watermark detector. Unlike such conventional techniques, at least some of the example audio watermarking techniques disclosed herein do not rely on any existing audio component of the media content to hide a watermark used to identify the media content (or a particular media content component). Instead, the example disclosed audio watermarking techniques embed the watermark in a filtered (e.g., pink) noise signal residing in the audible frequency band but that is attenuated such that the signal is inaudible to a person even when no other audio signal is present. In other words, the resulting watermarked noise signal is imperceptible relative to other ambient or background noise in the environment in which the media content is being presented. By not relying on an audio signal to embed the watermark information, at least some of the example disclosed audio watermarking techniques are able to watermark media content (or a particular media content component) that is substantially silent. In contrast, many conventional audio watermarking techniques are unable to watermark substantially silent media content. In this way, the example disclosed audio watermarking techniques can be used to mark and identify media content having substantially silent content components, such as GUIs and video games, which may not be able to be marked and identified by conventional audio watermarking techniques.
Turning to the figures, a block diagram of an example environment of use 100 for implementing and using audio watermarking according to the methods and/or apparatus described herein is illustrated in
The television 108 may be any type of television or, more generally, any type of media presenting device. For example, the television 108 may be a television and/or display device that supports the National Television Standards Committee (NTSC) standard, the Phase Alternating Line (PAL) standard, the Système Électronique pour Couleur avec Mémoire (SECAM) standard, a standard developed by the Advanced Television Systems Committee (ATSC), such as high definition television (HDTV), a standard developed by the Digital Video Broadcasting (DVB) Project, or may be a multimedia computer system, a PDA, a cellular/mobile phone, etc.
In the illustrated example, a video signal 112 and an audio signal 116 output from the game console 104 are coupled to the television 108. The example environment 100 also includes an example splitter 120 to split the audio signal 116 into a presented audio signal 124 to be coupled to an audio input of the television 108, and a monitored audio signal 128 to be coupled to an example monitor 132. As described in greater detail below, the monitor 132 operates to detect audio watermarks included in media content presentations (or particular content components of the media content presentations) output by the game console 104 and/or television 108. Furthermore, as described in greater detail below, an example watermark creator 136 creates audio watermarks according to the example techniques described herein for inclusion in game or other media content (or content component(s)) and/or to be provided to the game console 104 (and/or television 108 or other STB (not shown)) for storage and subsequent presentation by the game console 104 for detection by the monitor 132.
The splitter 120 can be, for example, an analog splitter in the case of an analog audio output signal 116, a digital splitter (e.g., such as a High-Definition Multimedia Interface (HDMI) splitter) in the case of a digital audio output signal 116, an optical splitter in the case of an optical audio output, etc. Additionally or alternatively, such as in an example in which the game console 104 and the television 108 are integrated into a single unit, the monitored audio signal 128 can be provided by an analog or digital audio line output of the game console 104, the television 108, the integrated unit, etc. As such, the monitored signal 128 provided to the monitor 132 is typically a line quality audio signal.
As illustrated in
Similarly, an example remote control device 144 capable of sending (and possibly receiving) control information is included in the environment 100 to allow the user to interact with the television 108. The remote control device 144 can send (and possibly receive) the control information using a variety of techniques, including, but not limited to, infrared (IR) transmission, radio frequency (RF) transmission, wired/cabled connection, etc. Like the game controller 140, the remote control device 144 allows the user to interact with one or more GUIs presented by the television 108. For example, the television 108 (or game console 104 or other STB (not shown) coupled to the television 108, etc.) may present one or more GUIs to enable the user to configure the television 108, access an electronic program guide (EPG), access a video-on-demand (VOD) program guide and/or select VOD programming for presentation, etc. In examples in which the game console 104 and the television 108 are integrated into a single unit, the game controller 140 and the remote control device 144 may correspond to the same device or different devices.
In the illustrated example, the game console 104 includes an example network connection 148 to allow the game console 104 to access an example network 152. The network connection 148 may be, for example, a Universal Serial Bus (USB) cable, an Ethernet connection, a wireless (e.g., 802.11, Bluetooth, etc.) connection, a phone line connection, a coaxial cable connection, etc. The network 152 may be, for example, the Internet, a local area network (LAN), a proprietary network provided by a gaming or other service provider, etc.
Using the network connection 148, the game console 104 is able to access the network 152 and connect with one or more example game content (or other service) providers 156. An example of such a game content provider is the Xbox LIVE™ service, which allows game content and other digital media to be downloaded to the game console 104, and also supports online multiplayer gaming. In such an example, the game console 104 implements one or more GUIs each presenting one or more GUI widgets that enable a user to access and interact with the Xbox LIVE service via the game controller 140.
To monitor media content and/or particular content components output by the game console 104 and/or television 108, the monitor 132 is configured to detect audio watermarks included in the monitored audio signal 128 and/or one or more monitored audio signals obtained by one or more example audio sensors 160 (e.g., such as one or more microphones, acoustic transducers, etc.) positionable to detect audio emissions from one or more speakers (not shown) of the television 108. As discussed in greater detail below, the monitor 132 is able to decode audio watermarks used to identify substantially silent media content and/or one or more substantially silent media content components included in a media content presentation output by the game console 104 and/or television 108. Additionally, the monitor 132 may be configured to detect conventional audio watermarks embedded in audible audio signals output by the game console 104 and/or television 108.
The monitor 132 includes an example network connection 164, which may be similar to the network connection 148, to allow the monitor 132 to access an example network 168, which may be the same as, or different from, the network 152. Using the network connection 164, the monitor 132 is able to access the network 168 to report detected audio watermarks and/or decoded watermark information (as well as any tuning information and/or other collected information) to an example central facility 172 for further processing and analysis. For example, the central facility 172 may process the detected audio watermarks and/or decoded watermark information reported by the monitor 132 to determine what media content or particular content components are being presented by the game console 104 and/or television 108 to thereby infer content consumption and interaction by a user in the environment 100.
As mentioned above, the watermark creator 136 creates audio watermarks according to the example techniques described herein for inclusion in game or other media content (or content component(s)) and/or to be provided to the game console 104 (and/or television 108 or other STB (not shown)) for storage and subsequent presentation for detection by the monitor 132. As discussed in greater detail below, the watermark creator 136 creates watermarked noise signals that can be associated with respective media content and/or respective individual content components that are themselves substantially silent and, thus, do not support conventional audio watermarking techniques. As such, a watermarked noise signal can be used to mark and identify (possibly uniquely) particular media content or a particular content component. As illustrated in
Additionally or alternatively, the game console 104 can be pre-configured (e.g., pre-loaded) with one or more watermarked noise signals (e.g., such as watermarked noise signals associated with respective pre-configured GUI widgets presented by a console configuration GUI). Such pre-configuration is represented by a dotted line 176 in
Although the example environment 100 of
A block diagram of an example implementation of the watermark creator 136 of
To audio watermark the filtered noise signal from the noise filter 208, the watermark creator 136 of
In example implementations in which the watermark generator 212 generates a separate (e.g., amplitude and/or frequency modulated) watermark signal to be combined with the filtered noise signal, the watermark creator 136 of
Additionally, the watermark creator 136 of
To associate a generated watermarked noise signal with specific media content or a specific content component, the watermark creator 136 of
While an example manner of implementing the watermark creator 136 of
A block diagram of an example implementation of the console 104 of
The console 104 of
The console 104 of
The console 104 of
A user interface 320 is included in the console 104 to support user interaction via an input device, such as the game controller 140 and/or the remote control device 144 of
The content processor 324 is configured to select and prepare video and/or audio content for inclusion in a media content presentation to be output by the console 104. In an example implementation, the content processor 324 is to select and obtain video and/or audio content and/or content components from the content storage 308 based on user input(s) received via the user interface 320. Additionally or alternatively, the content processor 324 can obtain the selected video and/or audio content and/or content components by direct downloading and/or streaming from an external source, such as the content provider(s) 156. Additionally or alternatively, the content processor 324 can generate (e.g., render) video and/or audio content and/or content components on-the-fly based on, for example, stored machine-readable program instructions. The content processor 324 of the illustrated example is also configured to process the obtained video and/or audio content and/or content components for inclusion in a media content presentation. Such processing can include, but is not limited to, determining which content and content components to present when (e.g., content component sequencing), content component synchronization (e.g., such as synchronizing video and audio components), integration (e.g., overlay) with other media content and content components (e.g., such as advertisements provided by the advertisement processor 328, GUIs provided by the GUI processor 332, etc.), post-processing (e.g., such as image quality enhancement, special effects, volume control, etc.), etc.
The advertisement processor 328 is configured to select and prepare advertisements for inclusion in a media content presentation to be output by the console 104. In an example implementation, the advertisement processor 328 is to select and obtain advertisements or advertisement components from the advertisement storage 312 based on user input(s) received via the user interface 320 and/or other selection criteria (e.g., such as a random selection, selection tied to selected audio/video content, etc.). Additionally or alternatively, the advertisement processor 328 can obtain the advertisements by direct downloading and/or streaming from an external source, such as the content provider(s) 156. Additionally or alternatively, the advertisement processor 328 can generate (e.g., render) advertisements on-the-fly based on, for example, stored machine-readable program instructions (e.g., such as in the case of logos and/or still image advertisements). The advertisement processor 328 of the illustrated example is also configured to process the advertisement for inclusion in a media content presentation. Such processing can include, but is not limited to, scaling, cropping, volume control, etc.
The GUI processor 332 is configured to select and prepare a GUI for inclusion in a media content presentation to be output by the console 104. In an example implementation, the GUI processor 332 is to select and obtain a GUI and/or one or more GUI content components (e.g., GUI widgets) from the content storage 308 based on user input(s) received via the user interface 320 and/or other selection criteria (e.g., such as automatic, or pop-up, presentation of GUIs or GUI widgets). Additionally or alternatively, the GUI processor 332 can obtain the selected GUI and/or GUI content components by direct downloading and/or streaming from an external source, such as the content provider(s) 156. Additionally or alternatively, the GUI processor 332 can generate (e.g., render) GUIs and/or GUI content components on-the-fly based on, for example, stored machine-readable program instructions. The GUI processor 332 of the illustrated example is also configured to process the obtained GUIs and/or GUI content components for inclusion in a media content presentation. Such processing can include, but is not limited to, determining which GUI components (e.g., widgets) to present and when to present them, integration (e.g., overlay) with other media content and content components (e.g., such as insertion of advertisements into a window of a GUI, insertion of video content in a window of a GUI, etc.), post-processing (e.g., such as highlighting of windows, text, menus, buttons and/or other special effects), etc.
To enable substantially silent media content and/or content components to be audio watermarked, the console 104 of
Assuming an examined content component is determined to be associated with a watermarked noise signal, the watermark processor 336 then obtains the respective watermarked noise signal associated with the examined content component from the watermarked noise signal storage 316. Additionally, the watermark processor 336 can perform post-processing on the obtained watermarked noise signal, such as audio attenuation or amplification, synchronization with the presentation of the associated content component, etc., to prepare the watermarked noise signal to be output by the console 104. For example, if the obtained watermarked noise signal has not already been scaled to be substantially inaudible without needing to be combined with (e.g., hidden in) a separate audio signal, the watermark processor 336 can perform such scaling. Additionally or alternatively, the watermark processor 336 can scale the obtained watermarked noise signal based on a configuration input and/or, if present, an audio sensor (not shown), to account for the ambient or background audio in the vicinity of the console 104. For example, in a loud environment, the audio level of the watermarked noise signal can be increased, whereas in a quiet environment, the audio level of the watermarked noise signal may need to be decreased.
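The ambient-level scaling performed by the watermark processor 336 might look like the following sketch. The RMS-based policy and the −10 dB margin are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np

def scale_to_ambient(watermarked, ambient_rms, margin_db=-10.0):
    """Scale a watermarked noise signal so its RMS level sits margin_db
    below the measured ambient noise level (hypothetical policy): louder
    environments allow a higher output level, quieter ones require less."""
    target_rms = ambient_rms * 10 ** (margin_db / 20)
    current_rms = np.sqrt(np.mean(watermarked ** 2))
    return watermarked * (target_rms / current_rms)

# Arbitrary test signal spanning ten full sine cycles.
signal = np.sin(2 * np.pi * np.arange(480) / 48)
loud = scale_to_ambient(signal, ambient_rms=0.5)    # loud room: level rises
quiet = scale_to_ambient(signal, ambient_rms=0.01)  # quiet room: level falls
```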
In at least some example implementations, the watermark processor 336 may also select and obtain a watermarked noise signal from the watermarked noise signal storage 316 (or create the watermarked noise signal on-the-fly by implementing some or all of the functionality of the watermark creator 136 described above) based on an operating state of the console 104 instead of, or in addition to, being based on whether a particular (e.g., substantially silent) content component is to be included in the media content presentation. For example, if the watermark processor 336 determines that the console 104 is operating in a substantially silent state, such as a mute state in which output audio has been muted or a low-volume state in which the output audio is below an auditory threshold, the watermark processor 336 may obtain a watermarked noise signal associated with and identifying the particular operating state (e.g., the mute state) for output while the console 104 is operating in that state. The watermarked noise signal may also identify one or more activities (e.g., such as applications, operations, etc.) being executed by the console 104 while the console is in the particular operating state (e.g., the mute state) causing the watermarked noise signal to be output. Additionally or alternatively, the watermark processor 336 may be configured to implement some or all of the functionality of the watermark creator 136 of
To output a media content presentation (e.g., such as including any, some or all of a video game presentation, a GUI, an embedded advertisement, etc.), the console 104 of
Although the example of
While an example manner of implementing the console 104 of
A block diagram of an example implementation of the monitor 132 of
The monitor 132 of
Additionally, in at least some example implementations, the watermark detector 408 is able to detect conventional audio watermarks embedded (e.g., hidden) in the media content presented by, for example, the console 104. Furthermore, in at least some example implementations, the watermark detector 408 is configured to decode detected audio watermarks to determine the marking and/or other identifying information represented by the watermark. Examples of watermark detection techniques that can be implemented by the watermark detector 408 include, but are not limited to, the examples disclosed in the above-referenced U.S. Pat. No. 6,272,176, U.S. Pat. No. 6,504,870, U.S. Pat. No. 6,621,881, U.S. Pat. No. 6,968,564, U.S. Pat. No. 7,006,555, and/or U.S. Patent Publication No. 2009/0259325.
The monitor 132 of
While an example manner of implementing the monitor 132 of
Flowcharts representative of example processes that may be executed to implement the example environment 100, the example console 104, the example monitor 132, the example watermark creator 136, the example noise generator 204, the example noise filter 208, the example watermark generator 212, the example combiner 220, the example scaler 224, the example content associator 228, the example watermarked noise signal output unit 232, the example receiving unit 304, the example content storage 308, the example advertisement storage 312, the example watermarked noise signal storage 316, the example user interface 320, the example content processor 324, the example advertisement processor 328, the example GUI processor 332, the example watermark processor 336, the example video processor 340, the example audio processor 344, the example audio interface 404, the example watermark detector 408 and/or the example reporting unit 412 are shown in
For example, any or all of the example environment 100, the example console 104, the example monitor 132, the example watermark creator 136, the example noise generator 204, the example noise filter 208, the example watermark generator 212, the example combiner 220, the example scaler 224, the example content associator 228, the example watermarked noise signal output unit 232, the example receiving unit 304, the example content storage 308, the example advertisement storage 312, the example watermarked noise signal storage 316, the example user interface 320, the example content processor 324, the example advertisement processor 328, the example GUI processor 332, the example watermark processor 336, the example video processor 340, the example audio processor 344, the example audio interface 404, the example watermark detector 408 and/or the example reporting unit 412 could be implemented by any combination of software, hardware, and/or firmware. Also, some or all of the processes represented by the flowcharts of
An example process 500 that may be executed to implement the example watermark creator 136 of
At block 525, the watermark creator 136 obtains identification or other marking information for each content component via the information input 216. Next, at block 530 the watermark generator 212 included in the watermark creator 136 generates an audio watermark for each content component representative of the information obtained at block 525. For example, at block 530 the watermark generator 212 can generate an amplitude and/or frequency modulated signal having one or more frequencies that are modulated to convey the information obtained at block 525. As another example, at block 530 the watermark generator 212 can modulate the filtered noise signal determined at block 520 directly to convey the identification information obtained at block 525.
At block 535, the combiner 220 included in the watermark creator 136 combines the filtered noise signal with the separate watermark signal to form a watermarked noise signal (e.g., if the filtered noise signal was not modulated directly by the watermark generator 212 to determine the watermarked noise signal). Additionally, at block 535 the scaler 224 included in the watermark creator 136 scales the watermarked noise signal to be substantially inaudible without needing to be embedded (e.g., hidden) in a separate audio signal making up the media content presentation. Then, if all identified components have not been watermarked (block 540), processing returns to block 510 and blocks subsequent thereto to audio watermark the next substantially silent content component. However, if all components have been watermarked (block 540), then at block 545 the content associator 228 (possibly in conjunction with the watermarked noise signal output unit 232) included in the watermark creator 136 stores the content association information (e.g., corresponding to the information obtained at block 515), along with the watermarked noise signals in, for example, the console 104 to allow each watermarked noise signal to be associated with its respective media content component. Execution of the example process 500 then ends.
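The watermark generation, combining, and scaling of blocks 530 and 535 can be sketched as follows. The binary FSK scheme, the 1 kHz/2 kHz tone pair, the 50 ms symbol length, and the fixed inaudible gain are purely illustrative assumptions; the disclosure covers amplitude and/or frequency modulation generally.

```python
import numpy as np

FS = 48_000                # assumed sampling rate
F0, F1 = 1_000.0, 2_000.0  # hypothetical tone pair: one tone per bit value
SYMBOL = int(0.05 * FS)    # assumed 50 ms per bit

def fsk_watermark(bits):
    """Block 530: frequency-modulated watermark signal conveying the
    identification information as a sequence of bits."""
    t = np.arange(SYMBOL) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def watermarked_noise(bits, filtered_noise, gain=1e-3):
    """Block 535: combine the watermark with the filtered noise signal,
    then scale the result to be substantially inaudible without needing
    to be embedded in a separate audio signal."""
    wm = fsk_watermark(bits)
    return gain * (wm + filtered_noise[: wm.size])

rng = np.random.default_rng(0)
signal = watermarked_noise([1, 0, 1], rng.standard_normal(3 * SYMBOL))
```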
An example process 600 that may be executed to implement the example console 104 of
At block 620, the watermark processor 336 examines each content component to be included in the media content presentation. In particular, at block 625 the watermark processor 336 determines whether each content component is associated with a respective watermarked noise signal stored in the watermarked noise signal storage 316 and/or that is to be generated on-the-fly by the watermark processor 336. For example, the watermark processor 336 may examine content association information stored in the watermarked noise signal storage 316 to determine whether a particular (substantially silent) content component is associated with a respective watermarked noise signal. If a particular content component is determined to be associated with a respective watermarked noise signal (block 625), then at block 630 the watermark processor 336 obtains the respective watermarked noise signal (e.g., from the watermarked noise signal storage 316 or by on-the-fly generation). Then, at block 635 the audio processor 344 combines the watermarked noise signal obtained at block 630 with the overall audio signal to be output from the console 104.
Then, if there are still content components remaining to be examined (block 640), processing returns to block 620 at which the next content component is examined by the watermark processor 336. Otherwise, if all content components have been examined (block 640), processing proceeds to block 645 at which the audio processor 344 outputs a combination of all the watermarked noise signals for all the respective substantially silent content components as combined via the processing at block 635. As such, multiple, overlapping watermarked noise signals associated with multiple substantially silent content components can be output by the console 104 at substantially the same time. Then, at block 615 the audio processor 344 combines the combined watermarked noise signals with any audible audio content to be output with the media content presentation. The processing at block 615 is optional, especially in example implementations in which the decision at block 610 is included and, as such, watermarked noise signals will be output only if the media content presentation is substantially silent.
Next, if the console 104 determines that media content presentation is to continue (block 650), processing returns to block 605 and blocks subsequent thereto. Otherwise, execution of the example process 600 ends.
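The per-component lookup and mixing performed at blocks 620 through 645 can be sketched as a content-association table of pre-stored watermarked noise signals keyed by component name. The component names, buffer length, and signal contents below are hypothetical placeholders.

```python
import numpy as np

BUF = 4_800  # hypothetical output buffer length

# Hypothetical content association table (block 625): component name ->
# pre-stored watermarked noise signal from the watermarked noise signal
# storage 316 (here just attenuated random signals as stand-ins).
store = {
    "epg_widget": 1e-3 * np.random.default_rng(1).standard_normal(BUF),
    "vod_menu":   1e-3 * np.random.default_rng(2).standard_normal(BUF),
}

def output_mix(active_components):
    """Blocks 620-645: for each presented component associated with a
    watermarked noise signal, add that signal into a single output buffer,
    so that multiple overlapping watermarked noise signals are output at
    substantially the same time."""
    out = np.zeros(BUF)
    for name in active_components:
        wm = store.get(name)   # block 625: is it associated with a watermark?
        if wm is not None:     # block 630: obtain the respective signal
            out += wm          # block 635: combine into the overall output
    return out

mixed = output_mix(["epg_widget", "vod_menu", "unmarked_sprite"])
```

A component without an entry in the table (here `"unmarked_sprite"`) simply contributes nothing to the mix, mirroring the decision at block 625.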
An example process 700 that may be executed to implement the example monitor 132 of
Next, at block 710 the watermark detector 408 included in the monitor 132 detects any watermarks included in the monitored audio signal(s) obtained at block 705. For example, at block 710 the watermark detector 408 may detect watermark(s) included in watermarked noise signal(s) output from the console 104 or other media presenting device being monitored. Additionally or alternatively, at block 710 the watermark detector 408 may detect audio watermarks embedded (e.g., hidden) in audible audio content being presented by the console 104 or other media presenting device (as described above). For example, because audible audio content may overpower any watermarked noise signals, conventional audio watermarks embedded (e.g., hidden) in audible audio content may be detectable by the watermark detector 408 even if any watermarked noise signals are present. If any watermarks are detected (block 715), then at block 720 the reporting unit 412 included in the monitor 132 reports the detected watermarks and/or decoded watermark information to, for example, the central facility 172 (as described above). Then, if monitoring is to continue (block 725), processing returns to block 705 and blocks subsequent thereto. Otherwise, execution of the example process 700 ends.
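The detection at block 710 can be sketched assuming, purely for illustration, that the watermark is a binary FSK signal with a 1 kHz/2 kHz tone pair and 50 ms symbols: the detector compares spectral magnitude at the two tone frequencies within each symbol of the monitored signal. The encoder here exists only to produce a test input; the disclosure itself points to the referenced patents for actual detection techniques.

```python
import numpy as np

FS, SYM = 48_000, 2_400   # assumed sampling rate and 50 ms symbol length
F0, F1 = 1_000, 2_000     # hypothetical FSK tone pair

def encode(bits, gain=1e-3, seed=0):
    """Build an attenuated watermarked noise signal to monitor (test input)."""
    t = np.arange(SYM) / FS
    tones = np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])
    noise = 0.5 * np.random.default_rng(seed).standard_normal(tones.size)
    return gain * (tones + noise)

def decode(signal):
    """Block 710 (sketch): recover the bits by comparing spectral magnitude
    at the two tone frequencies within each symbol."""
    k0, k1 = round(F0 * SYM / FS), round(F1 * SYM / FS)  # exact FFT bins
    bits = []
    for i in range(0, signal.size, SYM):
        spec = np.abs(np.fft.rfft(signal[i:i + SYM]))
        bits.append(1 if spec[k1] > spec[k0] else 0)
    return bits

decoded = decode(encode([1, 0, 1, 1, 0]))  # → [1, 0, 1, 1, 0]
```

Note that the overall gain cancels out of the per-bin comparison, so the decoder works even on a heavily attenuated, substantially inaudible signal.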
The system 800 of the instant example includes a processor 812 such as a general purpose programmable processor. The processor 812 includes a local memory 814, and executes coded instructions 816 present in the local memory 814 and/or in another memory device. The processor 812 may execute, among other things, machine readable instructions to implement the processes represented in
The processor 812 is in communication with a main memory including a volatile memory 818 and a non-volatile memory 820 via a bus 822. The volatile memory 818 may be implemented by Static Random Access Memory (SRAM), Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 820 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 818, 820 is typically controlled by a memory controller (not shown).
The processing system 800 also includes an interface circuit 824. The interface circuit 824 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a third generation input/output (3GIO) interface.
One or more input devices 826 are connected to the interface circuit 824. The input device(s) 826 permit a user to enter data and commands into the processor 812. The input device(s) can be implemented by, for example, a keyboard, a mouse, a touchscreen, a track-pad, a trackball, an isopoint and/or a voice recognition system.
One or more output devices 828 are also connected to the interface circuit 824. The output devices 828 can be implemented, for example, by display devices (e.g., a liquid crystal display, a cathode ray tube (CRT) display), by a printer and/or by speakers. The interface circuit 824, thus, typically includes a graphics driver card.
The interface circuit 824 also includes a communication device such as a modem or network interface card to facilitate exchange of data with external computers via a network (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processing system 800 also includes one or more mass storage devices 830 for storing software and data. Examples of such mass storage devices 830 include floppy disk drives, hard disk drives, compact disk drives and digital versatile disk (DVD) drives. The mass storage device 830 may implement the example content storage 308, the example advertisement storage 312 and/or the example watermarked noise signal storage 316. Alternatively, the volatile memory 818 may implement the example content storage 308, the example advertisement storage 312 and/or the example watermarked noise signal storage 316.
As an alternative to implementing the methods and/or apparatus described herein in a system such as the processing system of
Finally, although certain example methods, apparatus and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents.
This patent arises from a continuation of U.S. application Ser. No. 12/750,359, entitled “METHODS AND APPARATUS FOR AUDIO WATERMARKING A SUBSTANTIALLY SILENT MEDIA CONTENT PRESENTATION” and filed on Mar. 30, 2010, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3703684 | McVoy | Nov 1972 | A |
5113437 | Best et al. | May 1992 | A |
5161210 | Druyvesteyn et al. | Nov 1992 | A |
5162905 | Itoh et al. | Nov 1992 | A |
5787334 | Fardeau et al. | Jul 1998 | A |
5822360 | Lee et al. | Oct 1998 | A |
5872588 | Aras et al. | Feb 1999 | A |
5940429 | Lam et al. | Aug 1999 | A |
6272176 | Srinivasan | Aug 2001 | B1 |
6363159 | Rhoads | Mar 2002 | B1 |
6504870 | Srinivasan | Jan 2003 | B2 |
6512796 | Sherwood | Jan 2003 | B1 |
6560349 | Rhoads | May 2003 | B1 |
6621881 | Srinivasan | Sep 2003 | B2 |
6674861 | Xu et al. | Jan 2004 | B1 |
6737957 | Petrovic et al. | May 2004 | B1 |
6845360 | Jensen et al. | Jan 2005 | B2 |
6968564 | Srinivasan | Nov 2005 | B1 |
7006555 | Srinivasan | Feb 2006 | B1 |
7184570 | Rhoads | Feb 2007 | B2 |
20070130580 | Covell et al. | Jun 2007 | A1 |
20070294716 | Jeong et al. | Dec 2007 | A1 |
20080168493 | Allen et al. | Jul 2008 | A1 |
20090077578 | Steuer et al. | Mar 2009 | A1 |
20090259325 | Topchy et al. | Oct 2009 | A1 |
20110246202 | McMillan et al. | Oct 2011 | A1 |
Number | Date | Country |
---|---|---|
2011201212 | Oct 2011 | AU |
2734666 | Sep 2011 | CA |
102208187 | Oct 2011 | CN |
2375411 | Oct 2011 | EP |
2000068970 | Mar 2000 | JP |
2001506098 | May 2001 | JP |
2001188549 | Jul 2001 | JP |
2001312299 | Nov 2001 | JP |
2006504986 | Feb 2006 | JP |
2011209723 | Oct 2011 | JP |
9827504 | Jun 1998 | WO |
2004036352 | Apr 2004 | WO |
2007012987 | Feb 2007 | WO |
Entry |
---|
IP Australia, “Exam Report,” issued in connection with Australian Patent Application No. 2011201212, on May 2, 2012, 2 pages. |
IP Australia, “Notice of Acceptance,” issued in connection with Australian Patent Application No. 2011201212, on May 24, 2013, 3 pages. |
Canadian Intellectual Property Office, “Office Action,” issued in connection with Canadian Patent Application No. 2,734,666, on Jun. 21, 2013, 4 pages. |
State Intellectual Property Office of China, “Notice of Decision of Grant,” issued in connection with Chinese Patent Application No. 201110077492.0, on Nov. 27, 2013, 5 pages. |
State Intellectual Property Office of China, “First Office Action,” issued in connection with Chinese Patent Application No. 201110077492.0, on Apr. 6, 2012, 6 pages. |
State Intellectual Property Office of China, “Second Office Action,” issued in connection with Chinese Patent Application No. 201110077492.0, on Nov. 2, 2012, 13 pages. |
European Patent Office, “Extended European Search Report,” issued in connection with European Patent Application No. 11002591.3, on Jul. 12, 2011, 7 pages. |
Japanese Patent Office, “Final Notice of Grounds for Rejection,” issued in connection with Japanese Patent Application No. 2011-062768, on Mar. 12, 2013, 5 pages. |
Japanese Patent Office, “Notice of Grounds for Rejection,” issued in connection with Japanese Patent Application No. 2011-062768, on Sep. 18, 2012, 7 pages. |
Tachibana, Ryuki, “Audio Watermarking for Live Performance,” Proc. of Security and Watermarking of Multimedia Contents V, Santa Clara, USA, vol. 5020, Jan. 2003, pp. 32-43. |
Tachibana et al., “An Audio Watermarking Method Using a Two-Dimensional Pseudo-Random Array,” Elsevier Signal Processing, vol. 82, 2002, 15 pages. |
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 12/750,359, on Feb. 14, 2012, 22 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 12/750,359, on Sep. 24, 2012, 12 pages. |
United States Patent and Trademark Office, “Notice of Allowability,” issued in connection with U.S. Appl. No. 12/750,359, on Dec. 14, 2012, 2 pages. |
IP Australia, “Patent Examination Report No. 1,” issued in connection with Australian Patent Application No. 2013203336, dated Oct. 10, 2014 (2 pages). |
Number | Date | Country | |
---|---|---|---|
20130103172 A1 | Apr 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12750359 | Mar 2010 | US |
Child | 13708266 | US |