This disclosure relates generally to audio watermarking and, more particularly, to audio watermarking for people monitoring.
Audience measurement systems typically include one or more site meters to monitor the media presented by one or more media devices located at a monitored site. Many such audience measurement systems also include one or more people meters to obtain information characterizing the composition(s) of the audience(s) in the vicinity of the media device(s) being monitored. In prior audience measurement systems, the people meters typically are separate from the site meters, or employ different signal processing technology than that employed by the site meters. For example, the site meters may be configured to process media signals captured from the monitored media devices to detect watermarks embedded in the media signals, whereas the people meters may be configured to capture and process images of an audience, and/or process input commands entered by members of the audience.
Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts, elements, etc.
Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to utilize audio watermarking for people monitoring are disclosed herein. Some example people monitoring methods disclosed herein include determining, at a user device, whether a first trigger condition for emitting an audio watermark identifying at least one of the user device or a user of the user device is satisfied. Such disclosed example methods also include, in response to determining that the first trigger condition is satisfied, providing a first audio signal including the audio watermark to an audio circuit that is to output an acoustic signal from the user device.
In some such examples, the first trigger condition is satisfied when an input audio signal sensed at the user device satisfies an audio threshold.
Some such disclosed example methods further include downloading a software application to the user device. In such examples, the software application determines whether the first trigger condition is satisfied and provides the first audio signal to the audio circuit.
In some such disclosed example methods, the first audio signal includes only the audio watermark, whereas in other disclosed example methods, the first audio signal includes the audio watermark combined with a second signal.
Some such disclosed example methods further include determining a level of an input audio signal, and adjusting a level of the first audio signal based on the level of the input audio signal. For example, adjusting the level of the first audio signal may include adjusting the level of the first audio signal to cause the first audio signal to be substantially masked by a source of the input audio signal when the acoustic signal is output from the user device.
In some such disclosed example methods, the audio watermark is a first audio watermark conveyed in a first range of frequencies different from a second range of frequencies used to convey a second audio watermark included in an input audio signal sensed by the user device.
Some such disclosed example methods further include determining, at the user device, whether a second trigger condition is satisfied, and in response to determining that the first trigger condition and the second trigger condition are satisfied, but not if either the first trigger condition or the second trigger condition is not satisfied, providing the first audio signal including the audio watermark to the audio circuit. In some such examples, the second trigger condition is satisfied when a location of the user device is determined to correspond to a first geographical area including a monitored media device. In some such examples, the second trigger condition is satisfied when a current time at the user device corresponds to a first time period. In some such examples, the second trigger condition is satisfied when a second audio signal is being provided to the audio circuit.
Some example people monitoring methods disclosed herein include detecting, with a processor (e.g., a processor of a site meter), a first watermark in a first audio signal obtained from an acoustic sensor. In such examples, the first watermark identifies media presented by a monitored media device, and the acoustic sensor is to sense audio in a vicinity of the monitored media device. Such disclosed example methods also include processing, with the processor, the first audio signal obtained from the acoustic sensor to determine whether a second watermark, different from the first watermark, is embedded in the first audio signal. In such examples, the second watermark identifies at least one of a user device or a user of the user device. Such disclosed example methods further include, when the second watermark is determined to be embedded in the first audio signal, reporting at least one of the second watermark or information decoded from the second watermark to identify at least one of the user device or the user of the user device as being exposed to the media presented by the monitored media device.
In some such disclosed example methods, the first watermark is conveyed in a first range of frequencies different from a second range of frequencies used to convey the second watermark.
In some such disclosed example methods, the first watermark is substantially inaudible to the user of the user device and the second watermark is substantially inaudible to the user of the user device, whereas in other such disclosed example methods, the first watermark is substantially inaudible to the user of the user device and the second watermark is substantially audible to the user of the user device.
In some such disclosed example methods, the first watermark is included in a media signal output from the monitored media device, and the second watermark is output from the user device.
These and other example methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to utilize audio watermarking for people monitoring are disclosed in further detail below.
As mentioned above, in prior audience measurement systems, the people meters used to obtain information characterizing audience composition typically are separate from the site meters used to monitor the media presented by one or more media devices located at a monitored site, or employ signal processing technology different than that employed by the site meters. Unlike such prior systems, example audience measurement systems implementing audio watermarking for people monitoring as disclosed herein are able to reuse the processing technology and capabilities of the site meters to also perform people monitoring. For example, some example audience measurement systems disclosed herein utilize people monitoring watermarks embedded in an acoustic signal output from a user device, such as the user's mobile phone, to identify the user device and/or the user as being in the vicinity of a monitored media device. In such examples, the site meter that is detecting media watermarks embedded in the media presented by the media device is also able to detect the people monitoring watermarks output from the user device.
In some disclosed examples, the people monitoring watermarks output from the user device are caused to be output by a software application downloaded to the user device, and/or are embedded in ringtones and/or other audio signals to be output by the user device during normal operation. In examples in which the people monitoring watermarks are caused to be output by a software application, the software application may evaluate one or more trigger conditions to optimize when to output the people monitoring watermarks, as disclosed in further detail below. In such examples, the site meter can correlate detection of the people monitoring watermarks with one or more of those trigger conditions. In examples in which the people monitoring watermarks are embedded in ringtones and/or other audio signals to be output by the user device during normal operation, the site meter may rely on opportunistic detection of the people monitoring watermarks to identify the user device and/or the user as being exposed to the media presented by the monitored media device.
In the context of media monitoring, watermarks may be transmitted within media signals. For example, watermarks can be used to transmit data (e.g., such as identification codes, ancillary codes, etc.) with media (e.g., inserted into the audio, video, or metadata stream of media) to uniquely identify broadcasters and/or media (e.g., content or advertisements), and/or to convey other information. Watermarks are typically extracted using a decoding operation.
In contrast, signatures are representations of some characteristic of the media signal (e.g., a characteristic of the frequency spectrum of the signal). Signatures can be thought of as fingerprints. Signatures are typically not dependent upon insertion of identification codes (e.g., watermarks) in the media, but instead preferably reflect an inherent characteristic of the media and/or the signal transporting the media. Systems to utilize codes (e.g., watermarks) and/or signatures for media monitoring have long been known. See, for example, Thomas, U.S. Pat. No. 5,481,294, which is hereby incorporated by reference in its entirety.
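By way of illustration only, the following Python sketch (not part of the disclosed examples; the function name and parameters are assumptions chosen for the sketch) computes a toy signature from the frequency spectrum of a sensed audio block:

```python
import numpy as np

def coarse_signature(audio_block, num_bands=16):
    """Summarize an audio block by the relative energy in a few frequency
    bands -- a toy 'fingerprint' of the signal itself, requiring no code
    to have been inserted into the media."""
    spectrum = np.abs(np.fft.rfft(audio_block))
    bands = np.array_split(spectrum, num_bands)
    energies = np.array([band.sum() for band in bands])
    # Quantize each band's energy relative to the mean, one bit per band.
    return (energies > energies.mean()).astype(np.uint8)

# Two captures of the same media should yield similar bit patterns, which
# can then be matched against a database of reference signatures.
block = np.random.randn(4096)  # stand-in for a block of sensed audio
print(coarse_signature(block))
```

Because the signature is derived from the signal itself, no code need be inserted into the media, which is the key distinction from watermarking.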
Turning to the figures, a block diagram of an example audience metering system 100 employing audio watermarking for people monitoring as disclosed herein is illustrated in FIG. 1.
The audience measurement system 100 of the illustrated example includes an example site meter 115, also referred to as a site unit 115, a home unit 115, an audience measurement meter 115, etc., to monitor media presented by the media presentation device 110. In the illustrated example, the site meter 115 includes an example acoustic sensor 120, such as, but not limited to, a microphone, to sense acoustic signals 125 output (e.g., emitted) by the media presentation device 110. The site meter 115 of the illustrated example processes the resulting audio signals obtained from the acoustic sensor 120 to monitor the media presented by the media presentation device 110.
Additionally, the example site meter 115 of FIG. 1 is in communication with an example data processing facility 155 via an example network 150.
In some examples, the audience measurement system 100 further includes an example people meter 145 to capture information about the audience exposed to media presented by the media presentation device 110. For example, the people meter 145 may be configured to receive information via an input device having a set of input keys, each assigned to represent a single audience member. In such examples, the people meter 145 prompts the audience members to indicate their presence by pressing the appropriate input key on the input device of the people meter 145. The people meter 145 of the illustrated example may also receive information from the site meter 115 to determine times at which to prompt the audience members to enter information on the people meter 145.
In the illustrated example of FIG. 1, an example user device 140 carried or otherwise operated by an example audience member 130 emits example acoustic signals 135 conveying one or more people monitoring watermarks identifying the user device 140 and/or the audience member 130, which the site meter 115 is able to detect as disclosed herein.
In the illustrated example, the media presentation device 110 monitored by the site meter 115 can correspond to any type of audio, video and/or multimedia presentation device capable of presenting media content audibly and/or visually. For example, the media presentation device 110 can correspond to a television and/or display device that supports the National Television Standards Committee (NTSC) standard, the Phase Alternating Line (PAL) standard, the Systeme Electronique pour Couleur avec Mémoire (SECAM) standard, a standard developed by the Advanced Television Systems Committee (ATSC), such as high definition television (HDTV), a standard developed by the Digital Video Broadcasting (DVB) Project, etc. As another example, the media presentation device 110 can correspond to a multimedia computer system, a personal digital assistant, a cellular/mobile smartphone, a radio, etc.
In the illustrated example, the user device 140 can correspond to any type of user device capable of emitting audio/acoustic signals. In some examples, the user device 140 is implemented by a portable device of the user, such as, but not limited to, a mobile phone or smartphone, a tablet (e.g., an iPad™), a personal digital assistant (PDA), a portable gaming device, etc., adapted to support audio watermarking for people monitoring in addition to its native functionality. In some examples, the user device 140 is implemented by a portable device dedicated to people monitoring, such as a portable people meter (PPM) to be carried by the audience member 130. Also, although only one user device 140 is depicted in the example illustrated in FIG. 1, the audience measurement system 100 can support any number and/or type(s) of user devices.
The site meter 115 included in the audience measurement system 100 of the illustrated example can correspond to any type of metering device capable of monitoring media presented by the media presentation device 110. In the illustrated example, the site meter 115 employs non-invasive monitoring not involving any physical connection to the media presentation device 110. For example, the site meter 115 processes audio signals obtained from the media presentation device 110 via the acoustic sensor 120 (e.g., a microphone) to detect media and/or source identifying audio watermarks embedded in audio portion(s) of the media presented by the media presentation device 110, to detect people monitoring audio watermarks embedded in the audio signals (e.g., acoustic signals) emitted by user devices, such as the acoustic signals 135 emitted by the user device 140, etc. In some examples, the site meter 115 may additionally utilize invasive monitoring involving one or more physical connections to the media presentation device 110. In such examples, the site meter 115 may additionally process audio signals obtained from the media presentation device 110 via a direct cable connection to detect media and/or source identifying audio watermarks embedded in such audio signals. In some examples, the site meter 115 may process video signals obtained from the media presentation device 110 via a camera and/or a direct cable connection to detect media and/or source identifying video watermarks embedded in video portion(s) of the media presented by the media presentation device 110. In some examples, the site meter 115 may process the aforementioned audio signals and/or video signals to generate respective audio and/or video signatures from the media presented by the media presentation device 110, which can be compared to reference signatures to perform source and/or content identification. Any other type(s) and/or number of media monitoring techniques can be supported by the site meter 115.
As disclosed in further detail below, the people monitoring watermarker 210 causes the user device 140A to emit acoustic signals, such as the acoustic signal 135, which include one or more people monitoring audio watermarks. As described above, the people monitoring audio watermark(s) identify the user device 140A and/or a user (e.g., the user 130) operating or otherwise associated with the user device 140A. In some examples, the people monitoring watermarker 210 evaluates one or more trigger conditions that control when the people monitoring watermarker 210 is to cause the people monitoring audio watermarks to be output by the user device 140A. For example, and as disclosed in further detail below, such trigger conditions can be based on an input audio level measured by the people monitoring watermarker 210, a time of day, a geographic location, an operating state of the user device 140A, etc. In such examples, detection of a people monitoring audio watermark (e.g., by the site meter 115) can be correlated to the trigger condition(s) that would trigger the people monitoring watermarker 210 to cause the people monitoring audio watermarks to be output by the user device 140A. An example implementation of the people monitoring watermarker 210 is illustrated in FIG. 3 and described in further detail below.
In the illustrated example of FIG. 2, the example user device 140B downloads watermarked audio data (e.g., from an example watermarked audio downloader 215), which includes one or more people monitoring watermarks embedded in audio data to be played by the user device 140B during normal operation.
In some examples, the user device 140B outputs (e.g., emits) people monitoring watermark(s) whenever the user device 140B presents (e.g., outputs, plays, etc.) the downloaded audio data containing the people monitoring watermark(s). For example, when the watermarked audio data downloaded from the watermarked audio downloader 215 corresponds to a ringtone or audible alert, the user device 140B outputs (e.g., emits) people monitoring watermark(s) whenever the user device 140B plays the ringtone, outputs the audible alert, etc. Similarly, when the watermarked audio data downloaded from the watermarked audio downloader 215 corresponds to an audio track, movie, etc., the user device 140B outputs (e.g., emits) people monitoring watermark(s) whenever the user device 140B presents the audio track, movie, etc.
A block diagram of an example implementation of the people monitoring watermarker 210 of FIG. 2 is illustrated in FIG. 3.
In the illustrated example of FIG. 3, the people monitoring watermarker 210 includes an example audio watermarker 305 to generate or otherwise obtain the people monitoring watermark(s) to be conveyed by the acoustic signal 135 emitted by the user device.
In some examples, the audio watermarker 305 generates the watermark signal to be a time domain watermark capable of conveying digital information in time domain components of an acoustic signal, such as the acoustic signal 135. In such examples, the audio watermarker 305 may generate a watermark signal that is to modulate the amplitude and/or phase of an audio signal in the time domain. Example watermark generation techniques that can be implemented by the audio watermarker 305 to generate such time domain watermarks include, but are not limited to, generating a spread spectrum time domain signal modulated by the digital information, which is then to be embedded in (e.g., added to) the audio signal used to generate the acoustic signal 135.
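A minimal Python sketch of such an approach is shown below, assuming a simple BPSK-style scheme in which each payload bit is spread over a pseudo-noise chip sequence; the names, parameters and payload are illustrative assumptions, not the disclosed encoder:

```python
import numpy as np

def spread_spectrum_watermark(bits, chips_per_bit=1024, seed=42, amplitude=0.01):
    """Spread each payload bit over a pseudo-noise (PN) chip sequence:
    a '1' bit transmits the PN sequence, a '0' bit its inverse."""
    rng = np.random.default_rng(seed)  # PN seed assumed shared with the detector
    pn = rng.choice([-1.0, 1.0], size=chips_per_bit)
    return amplitude * np.concatenate([pn if b else -pn for b in bits])

def embed(host_audio, watermark):
    """Add the low-level watermark to the host audio in the time domain."""
    out = host_audio.copy()
    out[:len(watermark)] += watermark
    return out

# Example: embed an 8-bit identifier into one second of audio at 48 kHz.
payload = [1, 0, 1, 1, 0, 0, 1, 0]
host = 0.1 * np.random.randn(48000)  # stand-in for the host audio signal
marked = embed(host, spread_spectrum_watermark(payload))
```

A detector knowing the PN sequence could recover each bit by correlating the sensed audio against the sequence, which is what makes such low-amplitude time domain watermarks detectable beneath the host audio.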
In some examples, the people monitoring watermark(s) generated or otherwise obtained by the audio watermarker 305 reside in the audible frequency range (e.g., the range of frequencies generally within the range of human hearing). In some examples, the people monitoring watermark(s) generated or otherwise obtained by the audio watermarker 305 reside outside (e.g., above and/or below) the audible frequency range. In some examples, the people monitoring watermark(s) generated or otherwise obtained by the audio watermarker 305 have one or more characteristics that differentiate the people monitoring watermark(s) from other types of audio watermarks, such as audio watermarks embedded in the media presented by the media presentation device 110. For example, if the audio watermarks used for monitoring media (e.g., which are embedded in the media presented by the media presentation device 110) reside in a first range of frequencies (e.g., a first frequency band or set of bands), then the people monitoring watermark(s) may reside in a second range of frequencies (e.g., a second frequency band or set of bands) different from the first range of frequencies.
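For instance, under the assumption that the media watermarks occupy a lower band, a people monitoring watermark could be confined to a separate, higher band by filtering; the following sketch (the band edges and names are arbitrary assumptions) illustrates the idea using SciPy:

```python
import numpy as np
from scipy import signal

def confine_to_band(watermark, fs=48000, band=(9000.0, 11000.0)):
    """Band-limit a watermark signal so its energy occupies a frequency
    range separate from the band(s) assumed to carry media watermarks."""
    sos = signal.butter(8, band, btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, watermark)

wm = 0.01 * np.random.randn(48000)  # stand-in people monitoring watermark
banded = confine_to_band(wm)        # energy now concentrated in 9-11 kHz
```

Keeping the two watermark types in disjoint bands lets a single detector distinguish them simply by where their energy resides.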
In some examples, the audio watermarker 305 embeds the people monitoring watermark(s) in another audio signal generated by the audio watermarker 305 or obtained from an example audio data store 310. For example, this other audio signal may be a pseudo-noise audio signal generated by the audio watermarker 305, or obtained from the audio data store 310, to mask the people monitoring watermark(s). In other examples, the other audio signal in which the people monitoring watermark(s) is/are to be embedded may be a tone or melodic audio signal generated by the audio watermarker 305 or obtained from the audio data store 310. In either of these examples, the audio watermarker 305 applies the audio signal embedded with the people monitoring watermark(s) to example audio circuitry 315 of the user device. The audio circuitry 315 of the illustrated example processes the watermarked audio signal to generate and emit an acoustic signal, such as the acoustic signal 135, via one or more example speakers 320. The example audio circuitry 315 can be implemented by any existing and/or novel audio circuit technology capable of receiving an audio signal and emitting an appropriate acoustic signal 135 (e.g., such as one that meets one or more design specifications, etc.).
In some examples, the audio watermarker 305 provides the people monitoring watermark(s) to the audio circuitry 315 without embedding the watermark(s) in another audio signal. In such examples, the acoustic signal 135 output from the audio circuitry 315 and speaker(s) 320 may correspond to just the people monitoring watermark(s). In some examples, the audio circuitry 315 may combine the people monitoring watermark(s) provided by the audio watermarker 305 with other audio signals already being output by the user device, such as a ringtone, an audible alert, an audio track, a movie, etc. In some examples, the audio watermarker 305 obtains one or more of the people monitoring watermark(s) from the example audio data store 310 in addition to, or as an alternative to, generating the people monitoring watermark(s). The audio data store 310 can correspond to any type of memory, storage, data structure, database, etc., capable of storing audio data for subsequent retrieval. The audio data store 310 can be the same as, or different from, the audio data store 220.
The example people monitoring watermarker 210 of FIG. 3 also includes an example trigger condition evaluator 325 to evaluate one or more trigger conditions that control when the audio watermarker 305 is to provide the audio signal including the people monitoring watermark(s) to the audio circuitry 315.
In some examples, the trigger condition evaluator 325 determines whether multiple trigger conditions for emitting the people monitoring watermark(s) have been satisfied. In some such examples, the trigger condition evaluator 325 causes the audio watermarker 305 to provide the audio signal including the people monitoring watermark(s) to the audio circuitry 315 in response to determining that all trigger conditions have been satisfied, but not otherwise. In some examples, the trigger condition evaluator 325 causes the audio watermarker 305 to provide the audio signal including the people monitoring watermark(s) to the audio circuitry 315 in response to determining that at least one trigger condition has been satisfied. In some examples, the trigger condition evaluator 325 causes the audio watermarker 305 to provide the audio signal including the people monitoring watermark(s) to the audio circuitry 315 in response to determining that a combination (e.g., a majority) of the trigger conditions have been satisfied, but not otherwise.
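A hedged sketch of these combination policies, with hypothetical names, might look as follows:

```python
def should_emit(conditions, mode="all"):
    """Combine individual trigger condition results into one decision.
    `conditions` is an iterable of booleans, one per evaluated trigger."""
    results = list(conditions)
    if mode == "all":
        return all(results)
    if mode == "any":
        return any(results)
    if mode == "majority":
        return sum(results) > len(results) / 2
    raise ValueError(f"unknown combination mode: {mode}")

print(should_emit([True, True, False], mode="all"))       # False
print(should_emit([True, True, False], mode="majority"))  # True
```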
The example people monitoring watermarker 210 of FIG. 3 further includes an example input audio evaluator 330 to process an input audio signal sensed by an example acoustic sensor 350 of the user device and to determine a level of the input audio signal.
In some examples, the audio watermarker 305 employs psychoacoustic masking to increase the likelihood that the source of the input audio signal processed by the input audio evaluator 330 (e.g., the audio in the vicinity of the user device, which may correspond to the media presented by the media presentation device 110) will be able to mask the people monitoring watermark(s) emitted by the user device. In some such examples, the audio watermarker 305 uses the input audio level determined by the input audio evaluator 330 to adjust a level of the audio signal, which includes the people monitoring watermark(s), that the audio watermarker 305 is to provide to the audio circuitry 315. For example, the audio watermarker 305 may adjust a level of the audio signal including the people monitoring watermark(s) by applying a gain factor or attenuation factor that causes the level of the audio signal including the people monitoring watermark(s) to be less than or equal to (or a fraction of, etc.) the input audio level determined by the input audio evaluator 330 for the input audio signal. In this way, the people monitoring watermark(s) may reside in the audible frequency range, but may be masked by (e.g., inaudible over) the ambient audio in the vicinity of the media presentation device 110.
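One simple way to realize such a level adjustment is sketched below, under the assumption that signal levels are compared on a root-mean-square (RMS) basis; the function names and the one-half fraction are illustrative assumptions, not the disclosed masking model:

```python
import numpy as np

def rms_level(x):
    """Root-mean-square level of an audio signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def adjust_for_masking(watermarked_signal, input_audio, fraction=0.5):
    """Scale the watermarked audio signal so that its level is a fraction
    of the sensed ambient level, helping the ambient audio mask it."""
    current = rms_level(watermarked_signal)
    if current == 0.0:
        return watermarked_signal
    # Gain or attenuation factor driving the output toward the target level.
    gain = fraction * rms_level(input_audio) / current
    return gain * watermarked_signal

ambient = 0.2 * np.random.randn(48000)  # stand-in for the sensed input audio
wm_sig = 0.05 * np.random.randn(48000)  # stand-in watermarked audio signal
emitted = adjust_for_masking(wm_sig, ambient)  # emitted at half the ambient RMS
```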
The clock 335 of the illustrated example provides clock information (e.g., day and time information) to the trigger condition evaluator 325. The trigger condition evaluator 325 uses the clock information provided by the clock 335 to evaluate one or more clock trigger conditions. For example, the trigger condition evaluator 325 can determine that a clock trigger condition is satisfied when the clock information provided by the clock 335 indicates that the current time (e.g., as determined by the clock 335) is within a specified time period or set of time periods. For example, the trigger condition evaluator 325 may be configured with one or more time periods during which the output (e.g., emission) of people monitoring watermark(s) is or is not permitted. The trigger condition evaluator 325 can then limit emission of people monitoring watermark(s) to the permitted time period(s). For example, the trigger condition evaluator 325 can use the clock information provided by the clock 335 to limit emission of people monitoring watermark(s) to daytime hours when people are not expected to be at work, and not permit people monitoring watermark(s) to be emitted at nighttime (e.g., when people are expected to be asleep), during normal business hours (e.g., when people are expected to be at work), etc.
The location determiner 340 of the illustrated example provides location information (e.g., global positioning system (GPS) data and/or other location data, etc.) to the trigger condition evaluator 325. The trigger condition evaluator 325 uses the location information provided by the location determiner 340 to evaluate one or more location trigger conditions. For example, the trigger condition evaluator 325 can determine that a location trigger condition is satisfied when the location information provided by the location determiner 340 indicates that the current location of the user device (e.g., as specified by the location information) is within a specified geographic area or set of geographic areas. For example, the trigger condition evaluator 325 may be configured with one or more geographic areas within which the output (e.g., emission) of people monitoring watermark(s) is or is not permitted. The trigger condition evaluator 325 can then limit emission of people monitoring watermark(s) to occur when the user device is located within the permitted geographic area(s). For example, the trigger condition evaluator 325 can use the location information provided by the location determiner 340 to limit emission of people monitoring watermark(s) to occur when the user device is located at the monitored site 105, and not permit people monitoring watermark(s) to be emitted when the user device is not located at the monitored site 105.
The device state evaluator 345 of the illustrated example provides device state information to the trigger condition evaluator 325. The trigger condition evaluator 325 uses the device state information provided by the device state evaluator 345 to evaluate one or more device state trigger conditions. For example, the trigger condition evaluator 325 can determine that a device state trigger condition is satisfied when the device state information provided by the device state evaluator 345 indicates that the user device currently has a given operating state. For example, the trigger condition evaluator 325 may be configured with one or more user device operating states during which the output (e.g., emission) of people monitoring watermark(s) is or is not permitted. The trigger condition evaluator 325 can then limit emission of people monitoring watermark(s) to occur when the user device is operating in one or more of the permitted operating states. For example, the trigger condition evaluator 325 can use the device state information provided by the device state evaluator 345 to limit emission of people monitoring watermark(s) to occur when the user device is already outputting another audio signal (e.g., to permit the audio circuitry 315 to combine the watermark(s) with this audio signal), and not permit people monitoring watermark(s) to be emitted when the user device is not already outputting another audio signal. As another example, the trigger condition evaluator 325 can use the device state information provided by the device state evaluator 345 to limit emission of people monitoring watermark(s) to occur when the user device is in an idle operating state, and not permit people monitoring watermark(s) to be emitted when the user device is performing a native operation, such as making a phone call, etc.
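The clock, location and device state trigger conditions described above might be evaluated along the following lines; this sketch is illustrative only, with hour ranges, a bounding-box geofence and state names chosen as assumptions rather than taken from the disclosed examples:

```python
import datetime

def clock_condition_met(now, permitted_hours):
    """True when the current hour falls in any permitted (start, end) range."""
    return any(start <= now.hour < end for start, end in permitted_hours)

def location_condition_met(location, permitted_areas):
    """True when the device location (lat, lon) falls inside any permitted
    bounding box given as (lat_min, lat_max, lon_min, lon_max)."""
    lat, lon = location
    return any(a[0] <= lat <= a[1] and a[2] <= lon <= a[3]
               for a in permitted_areas)

def device_state_condition_met(state, permitted_states):
    """True when the device's operating state permits emission."""
    return state in permitted_states

now = datetime.datetime(2014, 7, 15, 19, 30)
print(clock_condition_met(now, [(8, 22)]))                     # daytime only
print(location_condition_met((41.88, -87.63),
                             [(41.8, 42.0, -87.7, -87.6)]))    # site geofence
print(device_state_condition_met("playing_audio",
                                 {"idle", "playing_audio"}))   # state check
```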
A block diagram of an example implementation of the site meter 115 of FIG. 1 is illustrated in FIG. 4.
The example site meter 115 of FIG. 4 includes an example sensor interface 405 to obtain the audio signals sensed by the example acoustic sensor 120 in the vicinity of the monitored media presentation device 110.
The example site meter 115 of FIG. 4 also includes an example watermark detector 410 to detect the media watermark(s) and/or the people monitoring watermark(s) conveyed in the audio signals obtained via the sensor interface 405, and an example watermark classifier 415 to classify a detected watermark as a media watermark or a people monitoring watermark (e.g., based on the range of frequencies in which the watermark resides and/or other differentiating characteristics).
The example site meter 115 of FIG. 4 further includes an example data reporter 420 to report a detected people monitoring watermark, and/or information decoded therefrom, to identify the user device and/or the user of the user device as being exposed to the media presented by the monitored media presentation device 110, as described above.
While example manners of implementing the audience metering system 100 are illustrated in FIGS. 1-4, one or more of the elements, processes and/or devices illustrated in FIGS. 1-4 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way.
Flowcharts representative of example machine readable instructions for implementing the example audience metering system 100, the example site meter 115, the example acoustic sensor 120, the example user devices 140, 140A and/or 140B, the example people meter 145, the example network 150, the example data processing facility 155, the example people monitor downloader 205, the example people monitoring watermarker 210, the example watermarked audio downloader 215, the example audio data store 220, the example audio watermarker 305, the example audio data store 310, the example audio circuitry 315, the example speaker(s) 320, the example trigger condition evaluator 325, the example input audio evaluator 330, the example clock 335, the example location determiner 340, the example device state evaluator 345, the example acoustic sensor 350, the example sensor interface 405, the example watermark detector 410, the example watermark classifier 415 and/or the example data reporter 420 are shown in FIGS. 5-12.
As mentioned above, the example processes of FIGS. 5-12 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium.
An example program 500 that may be executed by the example user devices 140 and/or 140A of FIGS. 1 and/or 2 is illustrated in FIG. 5.
An example program 600 that may be executed by the example data processing facility 155 of FIG. 1 is illustrated in FIG. 6.
An example program 700 that may be executed by the example user devices 140 and/or 140B of FIGS. 1 and/or 2 is illustrated in FIG. 7.
An example program 800 that may be executed by the example data processing facility 155 of FIG. 1 is illustrated in FIG. 8.
A first example program 900 that may be executed to implement the example people monitoring watermarker 210 of FIGS. 2 and/or 3 is illustrated in FIG. 9. The program 900 begins at block 905, at which the example trigger condition evaluator 325 of the people monitoring watermarker 210 evaluates one or more trigger conditions for emitting the people monitoring watermark(s), as described above.
At block 910, the trigger condition evaluator 325 determines whether the trigger condition(s) evaluated at block 905 have been satisfied. If the trigger condition(s) have been satisfied (block 910), then at block 915 the trigger condition evaluator 325 causes the example audio watermarker 305 of the people monitoring watermarker 210 to provide an audio signal including the people monitoring watermark(s) to the example audio circuitry 315, as described above. As also described above, the audio circuitry 315 is to process the audio signal provided by the audio watermarker 305 to generate and output (e.g., emit), from the user device 140/140A, a corresponding acoustic signal conveying the people monitoring watermark(s).
At block 920, the people monitoring watermarker 210 determines whether people monitoring is complete. If people monitoring is not complete (block 920), processing returns to block 905 and blocks subsequent thereto to enable the people monitoring watermarker 210 to cause people monitoring watermark(s) to continue to be output by (e.g., emitted from) the user device 140/140A. Otherwise, execution of the example program 900 ends.
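The overall flow of the example program 900 can be sketched as a simple control loop; the callables below are stand-ins for the trigger condition evaluator 325, the audio watermarker 305 and the audio circuitry 315, and all names are hypothetical:

```python
import time

def run_people_monitoring(evaluate_triggers, get_watermarked_audio,
                          audio_circuit, poll_seconds=5.0,
                          monitoring_complete=lambda: False):
    """Toy control loop mirroring the flow of the example program 900:
    evaluate trigger(s) (blocks 905/910); if satisfied, provide the
    watermarked audio signal to the audio circuit (block 915); repeat
    until monitoring is complete (block 920)."""
    while not monitoring_complete():
        if evaluate_triggers():
            audio_circuit(get_watermarked_audio())
        time.sleep(poll_seconds)

# Stub usage with three loop iterations:
ticks = iter(range(3))
run_people_monitoring(
    evaluate_triggers=lambda: True,
    get_watermarked_audio=lambda: b"watermarked-audio",
    audio_circuit=lambda sig: print("emit", sig),
    poll_seconds=0.0,
    monitoring_complete=lambda: next(ticks, None) is None,
)
```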
A second example program 1000 that may be executed to implement the example people monitoring watermarker 210 of FIGS. 2 and/or 3 is illustrated in FIG. 10. The program 1000 begins at block 905, at which the example trigger condition evaluator 325 of the people monitoring watermarker 210 evaluates one or more trigger conditions for emitting the people monitoring watermark(s), as described above.
At block 910, the trigger condition evaluator 325 determines whether the trigger condition(s) evaluated at block 905 have been satisfied. If the trigger condition(s) have been satisfied (block 910), then at block 1005 the trigger condition evaluator 325 causes the example audio watermarker 305 of the people monitoring watermarker 210 to generate or retrieve an audio signal including people monitoring watermark(s), as described above. At block 1010, the audio watermarker 305 adjusts, as described above, the level of the watermarked audio signal obtained at block 1005 based on an input audio signal level determined by the input audio evaluator 330. For example, and as described in detail above, at block 1010 the audio watermarker 305 may apply a gain factor or attenuation factor that causes the level of the audio signal obtained at block 1005, which includes the people monitoring watermark(s), to be less than or equal to (or a fraction of, etc.) the input audio level determined by the input audio evaluator 330 for the input audio signal. Such adjustments can increase the likelihood that the people monitoring watermark(s) is/are masked by the ambient audio. At block 1015, the audio watermarker 305 provides the adjusted audio signal, which includes the people monitoring watermark(s), to the example audio circuitry 315, as described above.
At block 920, the people monitoring watermarker 210 determines whether people monitoring is complete. If people monitoring is not complete (block 920), processing returns to block 905 and blocks subsequent thereto to enable the people monitoring watermarker 210 to cause people monitoring watermark(s) to continue to be output by (e.g., emitted from) the user device 140/140A. Otherwise, execution of the example program 1000 ends.
An example program 905P that may be executed to implement the example trigger condition evaluator 325 of the example people monitoring watermarker 210 of FIG. 3, and/or to perform the processing at block 905 of FIGS. 9 and/or 10, is illustrated in FIG. 11.
An example program 1200 that may be executed to implement the example site meter 115 of FIGS. 1 and/or 4 is illustrated in FIG. 12.
At block 1225, the site meter 115 determines whether monitoring is complete. If monitoring is not complete (block 1225), processing returns to block 1205 and blocks subsequent thereto to enable the site meter 115 to continue monitoring. Otherwise, execution of the example program 1200 ends.
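A toy sketch of such a monitoring pass is given below. It only checks for signal energy in two assumed, disjoint frequency bands (a real detector would decode the watermark payloads); all bands, thresholds and names are assumptions:

```python
import numpy as np

def band_energy(audio, fs, band):
    """Energy of `audio` within a frequency band, via the magnitude spectrum."""
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(audio))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(spectrum[mask] ** 2))

def meter_pass(audio, fs=48000, media_band=(1000, 3000),
               people_band=(9000, 11000), threshold=1e6):
    """One monitoring pass: check for media and people monitoring
    watermark energy in their assumed, disjoint bands and report what
    is found. The threshold is arbitrary for this sketch."""
    report = {}
    if band_energy(audio, fs, media_band) > threshold:
        report["media_watermark"] = "detected"
    if band_energy(audio, fs, people_band) > threshold:
        report["people_watermark"] = "detected"
    return report

# Captured audio with a strong 2 kHz component in the media band only:
fs = 48000
t = np.arange(fs) / fs
captured = 0.05 * np.random.randn(fs) + 0.5 * np.sin(2 * np.pi * 2000 * t)
print(meter_pass(captured, fs=fs))  # {'media_watermark': 'detected'}
```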
The processor platform 1300 of the illustrated example includes a processor 1312. The processor 1312 of the illustrated example is hardware. For example, the processor 1312 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer.
The processor 1312 of the illustrated example includes a local memory 1313 (e.g., a cache). The processor 1312 of the illustrated example is in communication with a main memory including a volatile memory 1314 and a non-volatile memory 1316 via a link 1318. The link 1318 may be implemented by a bus, one or more point-to-point connections, etc., or a combination thereof. The volatile memory 1314 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 1316 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1314, 1316 is controlled by a memory controller.
The processor platform 1300 of the illustrated example also includes an interface circuit 1320. The interface circuit 1320 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.
In the illustrated example, one or more input devices 1322 are connected to the interface circuit 1320. The input device(s) 1322 permit(s) a user to enter data and commands into the processor 1312. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, a trackbar (such as an isopoint), a voice recognition system and/or any other human-machine interface. Also, many systems, such as the processor platform 1300, can allow the user to control the computer system and provide data to the computer using physical gestures, such as, but not limited to, hand or body movements, facial expressions, and face recognition.
One or more output devices 1324 are also connected to the interface circuit 1320 of the illustrated example. The output devices 1324 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 1320 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.
The interface circuit 1320 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1326 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processor platform 1300 of the illustrated example also includes one or more mass storage devices 1328 for storing software and/or data. Examples of such mass storage devices 1328 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID (redundant array of independent disks) systems, and digital versatile disk (DVD) drives.
Coded instructions 1332 corresponding to the instructions of FIGS. 5-12 may be stored in the mass storage device 1328, in the volatile memory 1314, in the non-volatile memory 1316, and/or on a removable tangible computer readable storage medium, such as a CD or DVD.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
This patent arises from a continuation of U.S. patent application Ser. No. 16/426,979 (now U.S. Pat. No. 11,250,865), which is titled “AUDIO WATERMARKING FOR PEOPLE MONITORING,” and which was filed on May 30, 2019, which is a continuation of U.S. patent application Ser. No. 14/332,055 (now U.S. Pat. No. 10,410,643), which is titled “AUDIO WATERMARKING FOR PEOPLE MONITORING,” and which was filed on Jul. 15, 2014. Priority to U.S. patent application Ser. No. 16/426,979 and U.S. patent application Ser. No. 14/332,055 is hereby expressly claimed. U.S. patent application Ser. No. 16/426,979 and U.S. patent application Ser. No. 14/332,055 are hereby incorporated by reference in their respective entireties.
Number | Name | Date | Kind |
---|---|---|---|
5450490 | Jensen et al. | Sep 1995 | A |
5764763 | Jensen et al. | Jun 1998 | A |
6252522 | Hampton et al. | Jun 2001 | B1 |
6737957 | Petrovic et al. | May 2004 | B1 |
6775557 | Tsai | Aug 2004 | B2 |
6952774 | Kirovski et al. | Oct 2005 | B1 |
7006555 | Srinivasan | Feb 2006 | B1 |
7069238 | I'Anson et al. | Jun 2006 | B2 |
7181159 | Breen | Feb 2007 | B2 |
7239981 | Kolessar et al. | Jul 2007 | B2 |
7471987 | Crystal et al. | Dec 2008 | B2 |
7587067 | Schiller | Sep 2009 | B1 |
7823772 | Vawter | Nov 2010 | B2 |
7986231 | Bentley et al. | Jul 2011 | B1 |
8229458 | Busch | Jul 2012 | B2 |
8245043 | Cutler | Aug 2012 | B2 |
8300849 | Smirnov et al. | Oct 2012 | B2 |
8355910 | McMillan et al. | Jan 2013 | B2 |
8539527 | Wright et al. | Sep 2013 | B2 |
8543061 | Suhami | Sep 2013 | B2 |
8655011 | Hannigan et al. | Feb 2014 | B2 |
9635404 | McMillan | Apr 2017 | B2 |
9767823 | Villette et al. | Sep 2017 | B2 |
10410643 | Topchy et al. | Sep 2019 | B2 |
11250865 | Topchy et al. | Feb 2022 | B2 |
20020174219 | Mei et al. | Nov 2002 | A1 |
20030018784 | Lette et al. | Jan 2003 | A1 |
20030154073 | Ota et al. | Aug 2003 | A1 |
20040169581 | Petrovic et al. | Sep 2004 | A1 |
20050055214 | Kirovski et al. | Mar 2005 | A1 |
20050144006 | Oh | Jun 2005 | A1 |
20070003057 | Lemma et al. | Jan 2007 | A1 |
20070220263 | Ziener et al. | Sep 2007 | A1 |
20080002854 | Tehranchi et al. | Jan 2008 | A1 |
20080165958 | Matsushita | Jul 2008 | A1 |
20080313713 | Cutler | Dec 2008 | A1 |
20090037575 | Crystal et al. | Feb 2009 | A1 |
20090055854 | Wright et al. | Feb 2009 | A1 |
20090136081 | Mamidwar et al. | May 2009 | A1 |
20090186639 | Tsai | Jul 2009 | A1 |
20090222848 | Ramaswamy | Sep 2009 | A1 |
20090256972 | Ramaswamy et al. | Oct 2009 | A1 |
20090326690 | Turchetta et al. | Dec 2009 | A1 |
20100042843 | Brunk et al. | Feb 2010 | A1 |
20100268540 | Arshi et al. | Oct 2010 | A1 |
20100268573 | Jain et al. | Oct 2010 | A1 |
20100280641 | Harkness et al. | Nov 2010 | A1 |
20100322035 | Rhoads et al. | Dec 2010 | A1 |
20110033061 | Sakurada | Feb 2011 | A1 |
20110066437 | Luff | Mar 2011 | A1 |
20110068898 | Petrovic et al. | Mar 2011 | A1 |
20110126222 | Wright et al. | May 2011 | A1 |
20110144998 | Grill et al. | Jun 2011 | A1 |
20110164784 | Grill et al. | Jul 2011 | A1 |
20110230161 | Newman | Sep 2011 | A1 |
20110246202 | McMillan et al. | Oct 2011 | A1 |
20120116559 | Davis et al. | May 2012 | A1 |
20120203561 | Villette et al. | Aug 2012 | A1 |
20120214544 | Shivappa et al. | Aug 2012 | A1 |
20120239407 | Lynch et al. | Sep 2012 | A1 |
20120277893 | Davis et al. | Nov 2012 | A1 |
20120308071 | Ramsdell | Dec 2012 | A1 |
20120311620 | Conklin | Dec 2012 | A1 |
20130007790 | McMillan | Jan 2013 | A1 |
20130103172 | McMillan et al. | Apr 2013 | A1 |
20130119133 | Michael et al. | May 2013 | A1 |
20130152139 | Davis et al. | Jun 2013 | A1 |
20130160042 | Stokes et al. | Jun 2013 | A1 |
20130171926 | Perret et al. | Jul 2013 | A1 |
20130205311 | Ramaswamy et al. | Aug 2013 | A1 |
20130227595 | Nielsen et al. | Aug 2013 | A1 |
20130253918 | Jacobs | Sep 2013 | A1 |
20130262687 | Avery et al. | Oct 2013 | A1 |
20140026159 | Cuttner | Jan 2014 | A1 |
20140059587 | Davis et al. | Feb 2014 | A1 |
20140088742 | Srinivasan et al. | Mar 2014 | A1 |
20140108020 | Sharma et al. | Apr 2014 | A1 |
20140142958 | Sharma et al. | May 2014 | A1 |
20140150001 | McMillan | May 2014 | A1 |
20140156285 | Jax | Jun 2014 | A1 |
20140172435 | Thiergart et al. | Jun 2014 | A1 |
20140250449 | Ramaswamy | Sep 2014 | A1 |
20140253326 | Cho et al. | Sep 2014 | A1 |
20140254801 | Srinivasan et al. | Sep 2014 | A1 |
20140278933 | McMillan | Sep 2014 | A1 |
20140282664 | Lee | Sep 2014 | A1 |
20140282669 | McMillan | Sep 2014 | A1 |
20140282693 | Soundararajan et al. | Sep 2014 | A1 |
20140344033 | Driscoll | Nov 2014 | A1 |
20150016661 | Lord | Jan 2015 | A1 |
20150023546 | Strein | Jan 2015 | A1 |
20150092106 | Savare et al. | Apr 2015 | A1 |
20150149297 | Mahadevan et al. | May 2015 | A1 |
20150341890 | Corbellini et al. | Nov 2015 | A1 |
20160019901 | Topchy et al. | Jan 2016 | A1 |
20160066032 | Grant | Mar 2016 | A1 |
20170050108 | Johnson et al. | Feb 2017 | A1 |
Number | Date | Country |
---|---|---|
2014400791 | May 2018 | AU |
2018222996 | Sep 2018 | AU |
1638479 | Jul 2005 | CN |
101371472 | Feb 2009 | CN |
101536512 | Sep 2009 | CN |
102265536 | Nov 2011 | CN |
102385862 | Mar 2012 | CN |
102981418 | Mar 2013 | CN |
106537495 | Feb 2020 | CN |
2835917 | Feb 2015 | EP |
3170175 | May 2017 | EP |
3913627 | Nov 2021 | EP |
0106755 | Jan 2001 | WO |
2014137384 | Sep 2014 | WO |
2016010574 | Jan 2016 | WO |
Entry |
---|
IP Australia, “Notice of Acceptance for Patent Application,” mailed in connection with Australian Patent Application No. 2020217384, dated Mar. 17, 2022, 3 pages. |
Korean Intellectual Property Office, “Notice of Final Rejection,” mailed in connection with Korean Patent Application No. 10-2017-7001444, dated Mar. 19, 2019, 6 pages. |
European Patent Office, “Communication Pursuant to Article 94(3) EPC,” mailed in connection with European Patent Application No. 14897835.6, dated Mar. 14, 2019, 6 pages. |
State Intellectual Property Office of China, “Office Action,” mailed in connection with Chinese Patent Application No. 201480080654.8, dated Mar. 26, 2019, 21 pages. |
Korean Intellectual Property Office, “Notice of Allowance,” mailed in connection with Korean Patent Application No. 10-2017-7001444, dated May 1, 2019, 3 pages. |
IP Australia, “Notice of Acceptance for Patent Application,” mailed in connection with Australian Patent Application No. 2014400791, dated May 18, 2018, 3 pages. |
Korean Intellectual Property Office, “Notice of Preliminary Rejection,” mailed in connection with Korean Patent Application No. 10-2017-7001444, dated Sep. 10, 2018, 11 pages. |
Canadian Intellectual Property Office, “Examination Report,” mailed in connection with Canadian Patent Application No. 2,954,865, dated Sep. 14, 2018, 5 pages. |
European Patent Office, “European Search Report,” mailed in connection with European Patent Application No. 14897835.6, dated Nov. 28, 2017, 10 pages. |
Wang et al., “A Novel Security Mobile Payment System Based on Watermarked Voice Cheque,” 2nd International Conference on Guangzhou, China, Nov. 2005, 4 pages. |
Canadian Intellectual Property Office, “Examination Report,” mailed in connection with Canadian Patent Application No. 2,954,865, dated Oct. 26, 2017, 4 pages. |
IP Australia, “Examination Report,” mailed in connection with Australian Patent Application No. 2014400791, dated Aug. 18, 2017, 4 pages. |
International Searching Authority, “International Search Report,” mailed in connection with International Patent Application No. PCT/US2014/068176, dated Mar. 27, 2015, 3 pages. |
International Searching Authority, “Written Opinion,” mailed in connection with International Patent Application No. PCT/US2014/068176, dated Mar. 27, 2015, 7 pages. |
Ewing, “Digital Watermarking,” Forging the Frontier of Content Identification, Dec. 2006, 5 pages. Retrieved from musicalwinfo.com. |
Musictrace Gmbh, “Watermark Embedding for Audio Signals,” May 29, 2014, 3 pages. Retrieved from musictrace.com. |
United States Patent and Trademark Office, “Notice of Allowance,” mailed in connection with U.S. Appl. No. 14/332,055, dated Mar. 11, 2019, 9 pages. |
United States Patent and Trademark Office, “Non-final Office Action,” mailed in connection with U.S. Appl. No. 14/332,055, dated Jul. 6, 2018, 16 pages. |
United States Patent and Trademark Office, “Final Office Action,” mailed in connection with U.S. Appl. No. 14/332,055, dated Jan. 11, 2018, 15 pages. |
United States Patent and Trademark Office, “Non-final Office Action,” mailed in connection with U.S. Appl. No. 14/332,055, dated Jul. 14, 2017, 15 pages. |
United States Patent and Trademark Office, “Final Office Action,” mailed in connection with U.S. Appl. No. 14/332,055, dated Jan. 5, 2017, 13 pages. |
United States Patent and Trademark Office, “Non-final Office Action,” mailed in connection with U.S. Appl. No. 14/332,055, dated May 19, 2016, 12 pages. |
Canadian Intellectual Property Office, “Office Action,” mailed in connection with Canadian Patent Application No. 2,954,865, dated Jul. 11, 2019, 4 pages. |
IP Australia, “Examination Report No. 1,” mailed in connection with Australian Patent Application No. 2018222996, dated Oct. 21, 2019, 2 pages. |
European Patent Office, “Communication Pursuant to Article 94(3) EPC,” mailed in connection with European Patent Application No. 14897835.6, dated Sep. 24, 2019, 4 pages. |
China National Intellectual Property Administration, “Notification to Grant the Patent Right for Invention,” mailed in connection with Chinese Patent Application No. 201480080654.8, dated Oct. 21, 2019, 4 pages. |
Intellectual Property Office of the United Kingdom, “Examination Report,” mailed in connection with Patent Application No. GB1700906.9, dated Jul. 1, 2020, 2 pages. |
Mexican Institute of Industrial Property, “Office Action,” mailed in connection with Mexican Patent Application No. MX/a/2017/000333, dated Nov. 11, 2020, 8 pages. |
European Patent Office, “Communication under Rule 71(3) EPC,” mailed in connection with European Patent Application No. 14897835.6, dated Feb. 2, 2021, 52 pages. |
Intellectual Property Office of the United Kingdom, “Combined Search and Examination Report,” mailed in connection with GB Patent Application No. 2102386.6, dated Mar. 5, 2021, 4 pages. |
Intellectual Property Office of Great Britain, “Examination Report,” mailed in connection with GB Patent Application No. GB2102386.6, dated Jul. 22, 2021, 3 pages. |
IP Australia, “Examination Report,” mailed in connection with Australian Patent Application No. 2020217384, dated Jul. 19, 2021, 3 pages. |
European Patent Office, “Extended European Search Report,” mailed in connection with European Patent Application No. 21185539.0, dated Aug. 6, 2021, 10 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” mailed in connection with U.S. Appl. No. 16/426,979, dated Dec. 16, 2021, 5 pages. |
United States Patent and Trademark Office, “Notice of Allowance,” mailed in connection with U.S. Appl. No. 16/426,979, dated Aug. 25, 2021, 9 pages. |
United States Patent and Trademark Office, “Final Rejection,” mailed in connection with U.S. Appl. No. 16/426,979, dated Nov. 13, 2020, 13 pages. |
United States Patent and Trademark Office, “Non-Final Rejection,” mailed in connection with U.S. Appl. No. 16/426,979, dated Apr. 6, 2020, 14 pages. |
Canadian Intellectual Property Office, “Office Action,” mailed in connection with Canadian Patent Application No. 3,100,159, dated Jan. 11, 2022, 4 pages. |
State Intellectual Property Office of China, “First Office Action,” dated Feb. 27, 2023, in connection with Chinese Patent Application No. 202010009161.2, 20 pages. (English Translation Included). |
International Searching Authority, “International Preliminary Report on Patentability”, issued in connection with PCT No. PCT/US2014/068176, dated Jan. 17, 2017, 8 pages. |
Australian Government, IP Australia,“Notice of Acceptance,” issued in connection with AU Application No. 2018222996, dated May 1, 2020, 3 pages. |
United Kingdom Intellectual Property Office, “Examination Report and Notification under Section 18 (3),” issued in connection with United Kingdom Patent Application No. 1700906.9, dated Sep. 1, 2020, 2 pages. |
United Kingdom Intellectual Property Office, “Examination Report and Notification of Intention to Grant under Section 18(4),” issued in connection with United Kingdom Patent Application No. 1700906.9, dated Jan. 20, 2021, 2 pages. |
United Kingdom Intellectual Property Office, “Notification of Grant,” issued in connection with United Kingdom Patent Application No. 1700906.9 dated Mar. 9, 2021, 2 pages. |
European Patent Office, “Decision to grant a European patent,” issued in connection with European patent application No. 14897835.6, dated Jun. 24, 2021, 2 pages. |
Number | Date | Country | |
---|---|---|---|
20220122618 A1 | Apr 2022 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16426979 | May 2019 | US |
Child | 17565167 | US | |
Parent | 14332055 | Jul 2014 | US |
Child | 16426979 | US |