The present disclosure relates to encoding and decoding broadcast or recorded segments, such as broadcasts transmitted over the air, via cable, satellite or otherwise, and video, music or other works distributed on previously recorded media, and more specifically to processing media data within a set-top box (STB), including encoding/decoding, for subsequent use in media and/or market research.
There is considerable interest in monitoring and measuring the usage of media data accessed by an audience via a network or other source. In order to determine audience interest and what audiences are being presented with, a user's system may be monitored for discrete time periods while connected to a network, such as the Internet.
There is also considerable interest in providing market information to advertisers, media distributors and the like that reveals the demographic characteristics of such audiences, along with information concerning the size of the audience. Further, advertisers and media distributors would like the ability to produce custom reports tailored to reveal market information within specific parameters, such as type of media, user demographics, purchasing habits and so on. In addition, there is substantial interest in the ability to monitor media audiences on a continuous, real-time basis. This is particularly important for measuring streaming media data accurately, because a snapshot or one-time event capture fails to reflect the ongoing and continuous nature of streaming media data usage.
Based upon the receipt and identification of media data, the rating or popularity of various web sites, channels and specific media data may be estimated. It would be advantageous to determine the popularity of various web sites, channels and specific media data according to the demographics of their audiences in a way which enables precise matching of data representing media data usage with user demographic data.
As disclosed in U.S. Pat. No. 7,460,827 to Schuster, et al. and U.S. Pat. No. 7,222,071 to Neuhauser, et al., which are hereby incorporated by reference in their entirety herein, specialized technology exists in which small, pager-sized, specially designed receiving stations called Portable People Meters (PPMs) allow for the tracking of media exposure for users/panelists. In these applications, the embedded audio signal or ID code is picked up by one or more PPMs, which capture the encoded identifying signal and store the information along with a time stamp in memory for retrieval at a later time. A microphone contained within the PPM receives the audio signal, which carries the ID code within it.
One of the goals of audience measurement is to identify the audience for specific channel viewing. With the HDTV and digital age upon us, nearly every household has an STB attached to its TV, which provides access to viewing habits across a broad base of households. It would therefore be advantageous to integrate audio encoding technology with one or more STBs for monitoring purposes. Furthermore, due to the STB's advanced design, performance and scalability, the STB not only supplies high real-time performance affordably, but can also be easily reprogrammed remotely for new configurations, updates, upgrades and applications. The integration of audio encoding technology with STB devices would eliminate unnecessary equipment and reduce associated costs.
Under an exemplary embodiment, a detection and identification system is integrated with a Set-top box (STB), where a system for audio encoding is implemented within a STB. The encoding automatically identifies, at a minimum, the source of a particular piece of material by embedding an inaudible code within the content. This code contains information about the content that can be decoded by a machine, but is not detectable by human hearing.
An STB, for the purposes of this disclosure, may be simply defined as a computerized device that processes digital information. The STB may come in many forms and can have a variety of functions. Digital Media Adapters, Digital Media Receivers, Windows Media Extender and most video game consoles are also examples of set-top boxes. Currently the type of TV set-top box most widely used is one which receives encoded/compressed digital signals from the signal source (e.g., the content provider's headend) and decodes/decompresses those signals, converting them into analog signals that an analog (SDTV) television can understand. The STB accepts commands from the user (often via the use of remote devices such as a remote control) and transmits these commands back to the network operator through some sort of return path. The STB preferably has a return path capability for two-way communication.
STBs can make it possible to receive and display TV signals, connect to networks, play games via a game console, surf the Internet, interact with Interactive Program Guides (IPGs), virtual channels, electronic storefronts and walled gardens, send e-mail, and videoconference. Many STBs are able to communicate in real time with devices such as camcorders, DVD and CD players, portable media devices and music keyboards. Some have large dedicated hard drives and smart card slots for purchases and identification.
For this application the following terms and definitions shall apply:
The term “data” as used herein means any indicia, signals, marks, symbols, domains, symbol sets, representations, and any other physical form or forms representing information, whether permanent or temporary, whether visible, audible, acoustic, electric, magnetic, electromagnetic or otherwise manifested. The term “data” as used to represent predetermined information in one physical form shall be deemed to encompass any and all representations of corresponding information in a different physical form or forms.
The terms “media data” and “media” as used herein mean data which is widely accessible, whether over-the-air, or via cable, satellite, network, internetwork (including the Internet), print, displayed, distributed on storage media, or by any other means or technique that is humanly perceptible, without regard to the form or content of such data, and including but not limited to audio, video, audio/video, text, images, animations, databases, broadcasts, displays (including but not limited to video displays, posters and billboards), signs, signals, web pages, print media and streaming media data.
The term “research data” as used herein means data comprising (1) data concerning usage of media data, (2) data concerning exposure to media data, and/or (3) market research data.
The term “presentation data” as used herein means media data or content other than media data to be presented to a user.
The term “ancillary code” as used herein means data encoded in, added to, combined with or embedded in media data to provide information identifying, describing and/or characterizing the media data, and/or other information useful as research data.
The terms “reading” and “read” as used herein mean a process or processes that serve to recover research data that has been added to, encoded in, combined with or embedded in, media data.
The term “database” as used herein means an organized body of related data, regardless of the manner in which the data or the organized body thereof is represented. For example, the organized body of related data may be in the form of one or more of a table, a map, a grid, a packet, a datagram, a frame, a file, an e-mail, a message, a document, a report, a list or in any other form.
The term “network” as used herein includes both networks and internetworks of all kinds, including the Internet, and is not limited to any particular network or inter-network.
The terms “first”, “second”, “primary” and “secondary” are used to distinguish one element, set, data, object, step, process, function, activity or thing from another, and are not used to designate relative position, or arrangement in time or relative importance, unless otherwise stated explicitly.
The terms “coupled”, “coupled to”, and “coupled with” as used herein each mean a relationship between or among two or more devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, and/or means, constituting any one or more of (a) a connection, whether direct or through one or more other devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, or means, (b) a communications relationship, whether direct or through one or more other devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, or means, and/or (c) a functional relationship in which the operation of any one or more devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, or means depends, in whole or in part, on the operation of any one or more others thereof.
The terms “communicate” and “communicating” as used herein include both conveying data from a source to a destination, and delivering data to a communications medium, system, channel, network, device, wire, cable, fiber, circuit and/or link to be conveyed to a destination, and the term “communication” as used herein means data so conveyed or delivered. The term “communications” as used herein includes one or more of a communications medium, system, channel, network, device, wire, cable, fiber, circuit and link.
The term “processor” as used herein means processing devices, apparatus, programs, circuits, components, systems and subsystems, whether implemented in hardware, tangibly-embodied software or both, and whether or not programmable. The term “processor” as used herein includes, but is not limited to one or more computers, hardwired circuits, signal modifying devices and systems, devices and machines for controlling systems, central processing units, programmable devices and systems, field programmable gate arrays, application specific integrated circuits, systems on a chip, systems comprised of discrete elements and/or circuits, state machines, virtual machines, data processors, processing facilities and combinations of any of the foregoing.
The terms “storage” and “data storage” as used herein mean one or more data storage devices, apparatus, programs, circuits, components, systems, subsystems, locations and storage media serving to retain data, whether on a temporary or permanent basis, and to provide such retained data.
The present disclosure illustrates systems and methods for implementing audio encoding technology within a STB. Under various disclosed embodiments, one or more STBs are equipped with hardware and/or software to monitor an audience member's viewing and/or listening habits. The STBs are connected between a media device (e.g., television) and an external signal source. In addition to converting a signal into content which can be displayed on the television screen, the STB uses audio encoding technology to encode/decode the ancillary code within the source signal, which can assist in producing research data.
By monitoring an audience member's media habits, research data is produced from which the media habits of one or more audience members can be reliably obtained, providing market information to advertisers, media distributors and the like which reveals the demographic characteristics of such audiences, along with information concerning the size of the audience. In certain embodiments, the technology may be used to simultaneously return applicable advertisements on a media device.
Various embodiments of the present invention will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail since they would obscure the invention in unnecessary detail.
Under an exemplary embodiment, a system is implemented in a Set Top Box (STB) for gathering research data using encoding technology (e.g., CBET) concerning exposure of a user of the STB to audio and/or visual media. The present invention also relates to encoding and decoding broadcast or recorded segments such as broadcasts transmitted over the air, via cable, satellite or otherwise, and video, music or other works distributed on previously recorded media within the STB, as well as monitoring audience exposure to any of the foregoing. An exemplary process for gathering research data comprises transducing acoustic energy to audio data, receiving media data in non-acoustic form in a STB and producing research data based on the audio data and based on the media data and/or metadata of the media data.
The STB in the present disclosure relates to any consumer electronic device capable of receiving media/video content, including content delivered under digital video broadcast (DVB) standards, and presenting the content to a user. In the case of video content, the development of IP networks and broadband/ADSL allows video content of good quality to be delivered as Internet Protocol television (IPTV) to set-top boxes. Digital television may be delivered under a variety of DVB (Digital Video Broadcast) standards, such as DVB, DVB-S, DVB-S2, DVB-C, DVB-T and DVB-T2. The STBs may accept content from terrestrial, satellite, cable and/or streaming media via an IP network.
An exemplary STB comprises a frontend which includes a tuner and a DVB demodulator. The frontend receives a raw signal from an antenna or cable, and the signal is converted by the frontend into an MPEG transport stream. Satellite equipment control (SEC) may also be provided in the case of a satellite antenna setup. Additionally, a conditional access (CA) module or smartcard slot is provided to perform real-time decryption of an encrypted transport stream. A demultiplexer filters the incoming DVB stream and splits the transport stream into video and audio parts. The transport stream can also contain special streams such as teletext or subtitles. The separated video and audio streams are preferably forwarded to respective video and audio decoders for further processing.
Numerous types of research operations are possible utilizing the STB technology, including, without limitation, television and radio program audience measurement wherein the broadcast signal is embedded with metadata. Because the STB is capable of monitoring any nearby encoded media, the STB may also be used to determine characteristics of received media and monitor exposure to advertising in various media, such as television, radio, internet audio, and even print advertising. For the desired type of media and/or market research operation to be conducted, particular activity of individuals is monitored, or data concerning their attitudes, awareness and/or preferences is gathered. In certain embodiments, research data relating to two or more of the foregoing are gathered, while in others only one kind of such data is gathered.
Turning to
In the embodiment of
Once the content of the message is known, a sequence of symbols is assigned to represent the specific message. The symbols are selected from a predefined alphabet of code symbols. In certain embodiments the symbol sequences are preassigned to corresponding predefined messages. When a message to be encoded is fixed, as in a station ID message, encoding operations may be combined to define a single invariant message symbol sequence. Subsequently, a plurality of substantially single-frequency code components are assigned to each of the message symbols.
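For illustration only, the following Python sketch shows one way such a symbol assignment might be organized in software; the alphabet size, the two-bits-per-symbol mapping and the component frequencies are hypothetical assumptions and are not taken from the incorporated patents.

```python
# Illustrative sketch only: a hypothetical mapping from a message to code
# symbols, and from each symbol to its plurality of substantially
# single-frequency components. All values below are assumptions.

SYMBOL_ALPHABET = ["S0", "S1", "S2", "S3"]

# Each symbol is assigned several narrow-band component frequencies (Hz).
SYMBOL_TO_FREQS = {
    "S0": [1062.5, 1593.75, 2125.0],
    "S1": [1093.75, 1625.0, 2156.25],
    "S2": [1125.0, 1656.25, 2187.5],
    "S3": [1156.25, 1687.5, 2218.75],
}

def message_to_symbols(message_bits: str) -> list[str]:
    """Map a bit string to a symbol sequence, two bits per symbol."""
    symbols = []
    for i in range(0, len(message_bits), 2):
        index = int(message_bits[i:i + 2].ljust(2, "0"), 2)
        symbols.append(SYMBOL_ALPHABET[index])
    return symbols

def symbols_to_components(symbols: list[str]) -> list[list[float]]:
    """Expand each message symbol into its assigned code-component frequencies."""
    return [SYMBOL_TO_FREQS[s] for s in symbols]

if __name__ == "__main__":
    station_id_bits = "0110"              # hypothetical fixed station ID message
    sequence = message_to_symbols(station_id_bits)
    print(sequence, symbols_to_components(sequence))
```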
When the message is encoded, each symbol of the message is represented in the audio data by its corresponding plurality of substantially single-frequency code components. Each of such code components occupies only a narrow frequency band so that it may be distinguished from other such components as well as noise with a sufficiently low probability of error. It is recognized that the ability of an encoder or decoder to establish or resolve data in the frequency domain is limited, so that the substantially single-frequency components are represented by data within some finite or narrow frequency band. Moreover, there are circumstances in which it is advantageous to regard data within a plurality of frequency bands as corresponding to a substantially single-frequency component. This technique is useful where, for example, the component may be found in any of several adjacent bands due to frequency drift, variations in the speed of a tape or disk drive, or even as the result of an incidental or intentional frequency variation inherent in the design of a system.
In addition, digitized audio signals are supplied to encoder 110 for masking evaluation, pursuant to which the digitized audio signal is separated into frequency components, for example, by Fast Fourier Transform (FFT), wavelet transform, or other time-to-frequency domain transformation, or else by digital filtering. Thereafter, the masking abilities of audio signal frequency components within frequency bins of interest are evaluated for their tonal masking ability, narrow band masking ability and broadband masking ability (and, if necessary or appropriate, for non-simultaneous masking ability). Alternatively, the masking abilities of audio signal frequency components within frequency bins of interest are evaluated with a sliding tonal analysis.
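The following sketch illustrates the general shape of such a masking evaluation using a simple energy-based proxy; the Hann window, bin neighborhood and margin are assumptions and merely stand in for the tonal, narrow-band and broadband analyses described above.

```python
# Illustrative sketch only: a crude, energy-based stand-in for the masking
# evaluation described in the text. The window, neighborhood width and
# safety margin are assumptions.
import numpy as np

FS = 48000      # assumed sample rate (Hz)
BLOCK = 1024    # assumed analysis block length in samples

def masking_allowance(audio_block: np.ndarray, code_freqs_hz: list[float]) -> dict:
    """Estimate, per code frequency, an amplitude the host audio could mask."""
    windowed = audio_block * np.hanning(len(audio_block))
    spectrum = np.abs(np.fft.rfft(windowed))
    bin_hz = FS / len(audio_block)
    allowance = {}
    for f in code_freqs_hz:
        k = int(round(f / bin_hz))
        neighborhood = spectrum[max(k - 3, 0):k + 4]        # local host energy
        allowance[f] = 0.1 * float(np.mean(neighborhood))   # assumed margin
    return allowance

if __name__ == "__main__":
    t = np.arange(BLOCK) / FS
    host = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # toy host audio block
    print(masking_allowance(host, [1062.5, 1593.75]))
```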
More specific information regarding the encoding process described above, along with several advantageous and suitable techniques for encoding audience measurement data in audio data are disclosed in U.S. Pat. No. 7,640,141 to Ronald S. Kolessar and U.S. Pat. No. 5,764,763 to James M. Jensen, et al., which are assigned to the assignee of the present application, and which are incorporated by reference in their entirety herein. Other appropriate encoding techniques are disclosed in U.S. Pat. No. 5,579,124 to Aijala, et al., U.S. Pat. Nos. 5,574,962, 5,581,800 and 5,787,334 to Fardeau, et al., U.S. Pat. No. 5,450,490 to Jensen, et al., and U.S. Pat. No. 6,871,180, in the names of Neuhauser, et al., each of which is assigned to the assignee of the present application and all of which are incorporated herein by reference in their entirety.
Data to be encoded is received and, for each data state corresponding to a given signal interval, its respective group of code components is produced, and subjected to level adjustment and relevant masking evaluations. Signal generation may be implemented, for example, by means of a look-up table storing each of the code components as time domain data or by interpolation of stored data. The code components can either be permanently stored or generated upon initialization of the STB 100 and then stored in memory, such as in RAM, to be output as appropriate in response to the data received. The values of the components may also be computed at the time they are generated.
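As a rough illustration of the look-up-table approach, the sketch below precomputes one block of each unit-amplitude code component at initialization (e.g., held in RAM) and sums the stored components for a symbol on demand; the sample rate, block length and frequencies are assumed values.

```python
# Illustrative sketch only: time-domain code components precomputed in a
# look-up table at initialization and summed per symbol as data arrives.
# Sample rate, block length and frequencies are assumptions.
import numpy as np

FS = 48000      # assumed sample rate (Hz)
BLOCK = 1024    # assumed signal-interval length in samples

def build_component_table(freqs_hz: list[float]) -> dict:
    """Store one block of each unit-amplitude, substantially single-frequency component."""
    t = np.arange(BLOCK) / FS
    return {f: np.sin(2 * np.pi * f * t) for f in freqs_hz}

# Built once at initialization and retained in memory.
COMPONENT_TABLE = build_component_table([1062.5, 1093.75, 1593.75, 1625.0])

def components_for_symbol(freqs_hz: list[float]) -> np.ndarray:
    """Sum the stored components that together represent one message symbol."""
    return np.sum([COMPONENT_TABLE[f] for f in freqs_hz], axis=0)
```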
Level adjustment is carried out for each of the code components based upon the relevant masking evaluations as discussed above, and the code components whose amplitude has been adjusted to ensure inaudibility are added to the digitized audio signal. Depending on the amount of time necessary to carry out the foregoing processes, it may be desirable to delay the digitized audio signal by temporary storage in memory. If the audio signal is not delayed, after an FFT and masking evaluation have been carried out for a first interval of the audio signal, the amplitude adjusted code components are added to a second interval of the audio signal following the first interval. If the audio signal is delayed, however, the amplitude adjusted code components can instead be added to the first interval and a simultaneous masking evaluation may thus be used. Moreover, if the portion of the audio signal during the first interval provides a greater masking capability for a code component added during the second interval than the portion of the audio signal during the second interval would provide to the code component during the same interval, an amplitude may be assigned to the code component based on the non-simultaneous masking abilities of the portion of audio signal within the first interval. In this fashion both simultaneous and non-simultaneous masking capabilities may be evaluated and an optimal amplitude can be assigned to each code component based on the more advantageous evaluation.
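A minimal sketch of the level-adjustment step follows, assuming the audio interval has been delayed so that the same interval's masking evaluation (simultaneous masking) applies; in the undelayed case the scaled components would instead be added to the following interval. The function and argument names are placeholders.

```python
# Illustrative sketch only: scale each code component to the allowance
# produced by the masking evaluation and add it to the (delayed) audio
# interval. component_table maps frequency -> one block of the unit-amplitude
# component; allowance maps frequency -> permitted amplitude. Assumed names.
import numpy as np

def embed_block(audio_block: np.ndarray,
                component_table: dict,
                allowance: dict) -> np.ndarray:
    """Return the audio interval with amplitude-adjusted code components added."""
    encoded = audio_block.astype(float).copy()
    for freq, unit_component in component_table.items():
        gain = allowance.get(freq, 0.0)   # inaudible level from masking analysis
        encoded += gain * unit_component
    return encoded
```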
In certain applications, such as in broadcasting, or analog recording (as on a conventional tape cassette), the encoded audio signal in digital form is converted to analog form by a digital-to-analog converter (DAC) discussed below in connection with
Still other suitable encoding techniques are the subject of PCT Publication WO 00/04662 to Srinivasan, U.S. Pat. No. 5,319,735 to Preuss, et al., U.S. Pat. No. 6,175,627 to Petrovich, et al., U.S. Pat. No. 5,828,325 to Wolosewicz, et al., U.S. Pat. No. 6,154,484 to Lee, et al., U.S. Pat. No. 5,945,932 to Smith, et al., PCT Publication WO 99/59275 to Lu, et al., PCT Publication WO 98/26529 to Lu, et al., and PCT Publication WO 96/27264 to Lu, et al, all of which are incorporated herein by reference.
In certain embodiments, the encoder 110 forms a data set of frequency-domain data from the audio data and the encoder processes the frequency-domain data in the data set to embed the encoded data therein. Where the codes have been formed as in the Jensen, et al. U.S. Pat. No. 5,764,763 or U.S. Pat. No. 5,450,490, the frequency-domain data is processed by the encoder to embed the encoded data in the form of frequency components with predetermined frequencies. Where the codes have been formed as in the Srinivasan PCT Publication WO 00/04662, in certain embodiments the encoder processes the frequency-domain data to embed code components distributed according to a frequency-hopping pattern. In certain embodiments, the code components comprise pairs of frequency components modified in amplitude to encode information. In certain other embodiments, the code components comprise pairs of frequency components modified in phase to encode information. Where the codes have been formed as spread spectrum codes, as in the Aijala, et al. U.S. Pat. No. 5,579,124 or the Preuss, et al. U.S. Pat. No. 5,319,735, the encoder comprises an appropriate spread spectrum encoder.
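By way of example, the sketch below shows only the amplitude-modified frequency-pair variant mentioned above, carrying one bit per pair; the bin indices and modification depth are arbitrary assumptions, and the phase-modified and spread-spectrum variants are not shown.

```python
# Illustrative sketch only: one bit carried by a pair of frequency components
# modified in amplitude. Bin indices and modification depth are assumptions.
import numpy as np

def embed_bit_amplitude_pair(block: np.ndarray, bit: int,
                             bin_a: int, bin_b: int,
                             depth: float = 0.05) -> np.ndarray:
    """Raise one component of the pair and lower the other, depending on the bit."""
    spectrum = np.fft.rfft(block)
    raised, lowered = (bin_a, bin_b) if bit else (bin_b, bin_a)
    spectrum[raised] *= (1.0 + depth)
    spectrum[lowered] *= (1.0 - depth)
    return np.fft.irfft(spectrum, n=len(block))
```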
The media measurement arrangements in
Another advantage of integrating encoding in a STB is that encoding may be performed directly at the source in real time, reducing or eliminating the need to encode at the station or broadcaster. This allows cable providers, satellite TV networks and STB manufacturers to deliver the encoding application and encoding engine to a user's STB as an over-the-air download. In such an embodiment, the STB would have access to a look-up table in which a unique code is assigned to each TV channel. During broadcast, the encoder, operating at the video decoder output level, encodes the incoming broadcast signal for that channel. It is also possible to determine which channel was being viewed by embedding a different code for each channel. Further, by embedding both the encoder and the decoder, the STB allows for real-time encoding, and the output signal to the TV may be simultaneously decoded in real time. In this embodiment, data is saved in a dedicated memory/storage and communicated from the STB to the central media monitoring server for analysis.
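A minimal sketch of the channel look-up table idea follows; the channel numbers, code values and the encode() hook are hypothetical placeholders.

```python
# Illustrative sketch only: a hypothetical per-channel code look-up table
# consulted by the STB so that the audio at the decoder output is encoded
# in real time with the code assigned to the tuned channel.

CHANNEL_CODE_TABLE = {
    101: "CODE-CH-101",
    102: "CODE-CH-102",
    205: "CODE-CH-205",
}

def encode_tuned_channel(channel: int, audio_block, encode):
    """Embed the tuned channel's assigned code into the outgoing audio block."""
    code = CHANNEL_CODE_TABLE.get(channel)
    if code is None:
        return audio_block                 # unknown channel: pass audio through
    return encode(audio_block, code)       # encode() embeds the channel's code
```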
Since many STBs are “on” even when the audio-visual device is “off”, encoding the audio signal allows media monitoring organizations to determine whether the media device (e.g., television) is on by decoding the room audio. This can be accomplished by using a personal people meter (PPM™) worn by a panelist, an embedded decoder in the STB, or a decoder and microphone connected to the STB via USB. In an alternative embodiment, the encoder and decoder are housed in a dedicated box that is connected between the STB and the audio-visual device (e.g., a TV). The ultimate results are the same, except that in this case the encoder/decoder reside in their own box rather than being integrated with a STB. This may be advantageous in applications where STBs are not necessary for the audience member's media viewing, such as over-the-air TV broadcast. In all embodiments, if the source signal has been previously encoded, the decoder will identify the source and program content to complement the STB's channel identification.
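The following sketch illustrates, under stated assumptions, how an STB-side decoder might poll room audio to infer whether the media device is on; capture_room_audio() and decode() are hypothetical interfaces supplied by the microphone path and the decoder, and the polling interval is an assumption.

```python
# Illustrative sketch only: infer whether the media device is on by
# periodically attempting to decode room audio captured by a microphone
# coupled to the STB. Both callables are hypothetical interfaces.
import time

def monitor_tv_state(capture_room_audio, decode, poll_seconds: float = 30.0):
    """Yield (timestamp, code); a recovered code implies the media device is on."""
    while True:
        audio = capture_room_audio()
        code = decode(audio)               # ancillary code if present, else None
        yield time.time(), code
        time.sleep(poll_seconds)
```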
Accordingly, an encoder running on a STB has a number of advantages: the configuration can determine whether or not the TV is “on”, identify person-level demographics for those wearing a portable device (e.g., a PPM), provide STB manufacturers or service providers the capability to target specific channels or programs to be encoded or decoded, perform real-time encoding of program segments, operate transparently to the audience member, and allow for the creation of a “mega panel” due to the number of existing STBs in use; in addition, the STB offers many existing hardware and software advantages for gathering data (e.g., many STBs are Wi-Fi/Bluetooth enabled).
In an alternate embodiment illustrated in
Turning to the exemplary embodiment in
As a source signal is received 400, tuner 404 down-converts the incoming carrier to an intermediate frequency (IF). The IF signal is demodulated into in-phase (“I”) and quadrature phase (“Q”) carrier components which are then A-D converted into a plurality of multi-bit data streams (e.g., 6-bit) for digital demodulation 406 and subsequent processing such as forward-error correction (FEC) in which the Reed-Solomon check/correction, de-interleaving and Viterbi decoding are carried out. A resulting transport stream is then forwarded to demultiplexer 408 which has responsibility for transmitting signals to respective video and audio (MPEG) decoders (410).
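For orientation only, the stage order described above can be written as a simple dataflow skeleton; the stage callables below are stubs, since demodulation, FEC and demultiplexing are normally performed in dedicated hardware or firmware.

```python
# Illustrative sketch only: the front-end stage order described in the text,
# expressed as a dataflow skeleton with stubbed stages.

def frontend_pipeline(rf_samples, tuner, demodulator, fec, demux):
    """Carrier -> IF -> I/Q demodulation -> FEC -> transport stream -> A/V streams."""
    intermediate = tuner(rf_samples)           # down-convert the carrier to IF
    iq_bitstreams = demodulator(intermediate)  # recover I/Q components, A-D convert
    transport = fec(iq_bitstreams)             # Reed-Solomon, de-interleaving, Viterbi
    video_es, audio_es = demux(transport)      # split transport stream into A/V
    return video_es, audio_es

if __name__ == "__main__":
    # Stub stages that pass data through, to show the call order only.
    passthrough = lambda x: x
    split = lambda ts: (ts, ts)
    print(frontend_pipeline(b"\x00" * 188, passthrough, passthrough, passthrough, split))
```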
Decoder 410 is responsible for composing a continuous moving picture from the frames received from demultiplexer 408. Additionally, decoder 410 performs the necessary data expansion, inverse DCT, interpolation and error correction. The reconstituted frames may be built up inside the decoder's DRAM (not shown), or may also use memory 422. Decoder 410 outputs a pulse train containing the necessary A/V data (e.g., Y, Cr and Cb values for the pixels in the picture), which is communicated to video DAC 412 for conversion (and possible PAL encoding, if necessary).
In addition, decoder 410 forwards audio to encoder 418, which encodes audio data prior to converting audio in Audio DAC 424 and presenting the audio (L-R) and/or video to media device 402. In certain embodiments, encoder 418 embeds audience measurement data in the audio data, and may be embodied as software running on the STB, including embodiments in which the encoding software is integrated or coupled with another player running on the system of
In certain embodiments, the encoder 418 encodes audience measurement data as a further encoded layer in already-encoded audio data, so that two or more layers of embedded data are simultaneously present in the audio data. The layers should be arranged with sufficiently diverse frequency characteristics so that they may be separately detected. In certain of these embodiments the code is superimposed on the audio data asynchronously. In other embodiments, the code is added synchronously with the preexisting audio data. In certain ones of such synchronous encoding embodiments data is encoded in portions of the audio data which have not previously been encoded. At times the user system receives both audio data (such as streaming media) and audience measurement data (such as source identification data) which, as received, is not encoded in the audio data but is separate therefrom. In certain embodiments, the STB may supply such audience measurement data to the encoder 418 which serves to encode the audio data therewith.
Under one embodiment, the audience measurement data comprises one or more of source identification data, a content identification code, data that provides information about the received audio data, demographic data regarding the user, and/or data describing the user system or some aspect thereof, such as the user agent (e.g., player or browser type), operating system, sound card, etc. The audience measurement data can also include an identification code. In certain embodiments for measuring exposure of an audience member to audio data obtained from the Internet, such as streaming media, the audience measurement data comprises data indicating that the audio data was obtained from the Internet, the type of player and/or source identification data.
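One possible shape for such an audience measurement record is sketched below; the field names are illustrative assumptions rather than a defined or standardized format.

```python
# Illustrative sketch only: one possible record shape for the kinds of
# audience measurement data listed above. Field names are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AudienceMeasurementRecord:
    source_id: str                            # e.g., station/channel identifier
    content_id: Optional[str] = None          # content identification code
    obtained_from_internet: bool = False      # e.g., streaming media flag
    player: Optional[str] = None              # user agent (player/browser type)
    operating_system: Optional[str] = None
    sound_card: Optional[str] = None
    demographics: dict = field(default_factory=dict)  # user demographic data
    timestamp: Optional[float] = None
```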
CA block 512 is communicatively coupled with main CPU 520, which in turn processes controller data provided by tuner controller 522, CA controller 524 and media controller 526. Additionally, main CPU 520 also may receive inputs from watch dog timer 530 and time stamp 532. After down-conversion from tuner 510, the incoming carrier for source signal 502 is demodulated and A-D converted into a plurality of multi-bit data streams for digital demodulation and subsequent processing. A resulting transport stream is then forwarded to demultiplexer 514 which has responsibility for transmitting signals to media decoder 518, which, in the embodiment of
Media decoder 518 processes the stream from demultiplexer 514 and is responsible for composing a continuous moving picture from the received frames. Additionally, decoder 518 performs the necessary data expansion, inverse DCT, interpolation and error correction. The reconstituted frames may be built up inside the decoder's DRAM 508 or other suitable memory. Decoder 518 outputs a pulse train containing the necessary A/V data, which is communicated to video DAC 536 for conversion and output 542 to media device 544.
Decoder 518 forwards audio to encoder 528, which encodes audio data prior to converting audio in Audio DAC 534 and presenting the audio (L-R) to media device 544. Just as described above in connection with
There are several possible embodiments of decoding techniques that can be implemented for use in the present invention. Several advantageous techniques for detecting encoded audience measurement data are disclosed in U.S. Pat. No. 5,764,763 to James M. Jensen, et al., which is assigned to the assignee of the present application, and which is incorporated by reference herein. Other appropriate decoding techniques are disclosed in U.S. Pat. No. 5,579,124 to Aijala, et al., U.S. Pat. Nos. 5,574,962, 5,581,800 and 5,787,334 to Fardeau, et al., U.S. Pat. No. 5,450,490 to Jensen, et al., and U.S. patent application Ser. No. 09/318,045, in the names of Neuhauser, et al., each of which is assigned to the assignee of the present application and all of which are incorporated herein by reference.
Still other suitable decoding techniques are the subject of PCT Publication WO 00/04662 to Srinivasan, U.S. Pat. No. 5,319,735 to Preuss, et al., U.S. Pat. No. 6,175,627 to Petrovich, et al., U.S. Pat. No. 5,828,325 to Wolosewicz, et al., U.S. Pat. No. 6,154,484 to Lee, et al., U.S. Pat. No. 5,945,932 to Smith, et al., PCT Publication WO 99/59275 to Lu, et al., PCT Publication WO 98/26529 to Lu, et al., and PCT Publication WO 96/27264 to Lu, et al., all of which are incorporated herein by reference.
In certain embodiments, decoding is carried out by forming a data set from the audio data collected by the portable monitor 100 and processing the data set to extract the audience measurement data encoded therein. Where the encoded data has been formed as in U.S. Pat. No. 5,764,763 or U.S. Pat. No. 5,450,490, the data set is processed to transform the audio data to the frequency domain. The frequency domain data is processed to extract code components with predetermined frequencies. Where the encoded data has been formed as in the Srinivasan PCT Publication WO 00/04662, in certain embodiments the remote processor 160 processes the frequency domain data to detect code components distributed according to a frequency-hopping pattern. In certain embodiments, the code components comprise pairs of frequency components modified in amplitude to encode information which are processed to detect such amplitude modifications. In certain other embodiments, the code components comprise pairs of frequency components modified in phase to encode information and are processed to detect such phase modifications. Where the codes have been formed as spread spectrum codes, as in the Aijala, et al. U.S. Pat. No. 5,579,124 or the Preuss, et al. U.S. Pat. No. 5,319,735, an appropriate spread spectrum decoder is employed to decode the audience measurement data.
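A minimal decoding sketch along these lines appears below; it transforms a block of collected audio to the frequency domain and scores energy at the predetermined component frequencies, tolerating drift into adjacent bins as noted earlier. The symbol table, sample rate, window and detection threshold are assumptions, and the frequency-hopping, phase-modified and spread-spectrum variants are not shown.

```python
# Illustrative sketch only: recover a code symbol by transforming collected
# audio to the frequency domain and testing for energy at the predetermined
# component frequencies, allowing for drift into adjacent bins.
from typing import Optional
import numpy as np

FS = 48000  # assumed sample rate (Hz)

# Hypothetical symbol-to-frequency table (mirrors the encoding sketch above).
SYMBOL_TO_FREQS = {
    "S0": [1062.5, 1593.75, 2125.0],
    "S1": [1093.75, 1625.0, 2156.25],
}

def component_energy(spectrum: np.ndarray, bin_hz: float, freq: float) -> float:
    """Peak magnitude in the target bin or its immediate neighbors (drift tolerance)."""
    k = int(round(freq / bin_hz))
    return float(np.max(spectrum[max(k - 1, 0):k + 2]))

def detect_symbol(audio_block: np.ndarray) -> Optional[str]:
    """Return the symbol whose components carry the most energy, if clearly present."""
    window = np.hanning(len(audio_block))
    spectrum = np.abs(np.fft.rfft(audio_block * window))
    bin_hz = FS / len(audio_block)
    scores = {sym: sum(component_energy(spectrum, bin_hz, f) for f in freqs)
              for sym, freqs in SYMBOL_TO_FREQS.items()}
    best = max(scores, key=scores.get)
    # Require the winning score to stand well above the median spectral level.
    noise_floor = float(np.median(spectrum)) * len(SYMBOL_TO_FREQS[best])
    return best if scores[best] > 6.0 * noise_floor else None
```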
Turning to
Although various embodiments of the present invention have been described with reference to a particular arrangement of parts, features and the like, these are not intended to exhaust all possible arrangements or features, and indeed many other embodiments, modifications and variations will be ascertainable to those of skill in the art.
The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Number | Name | Date | Kind |
---|---|---|---|
4230990 | Lert, Jr. et al. | Oct 1980 | A |
5019899 | Boles et al. | May 1991 | A |
5379345 | Greenberg | Jan 1995 | A |
5425100 | Thomas et al. | Jun 1995 | A |
5450490 | Jensen et al. | Sep 1995 | A |
5481294 | Thomas et al. | Jan 1996 | A |
5512933 | Wheatley et al. | Apr 1996 | A |
5543856 | Rosser et al. | Aug 1996 | A |
5553193 | Akagiri | Sep 1996 | A |
5574962 | Fardeau et al. | Nov 1996 | A |
5579124 | Aijala et al. | Nov 1996 | A |
5581800 | Fardeau et al. | Dec 1996 | A |
5629739 | Dougherty | May 1997 | A |
5764763 | Jensen et al. | Jun 1998 | A |
5815671 | Morrison | Sep 1998 | A |
5835030 | Tsutsui et al. | Nov 1998 | A |
5850249 | Massetti et al. | Dec 1998 | A |
5872588 | Aras et al. | Feb 1999 | A |
5907366 | Farmer et al. | May 1999 | A |
5966120 | Arazi et al. | Oct 1999 | A |
6029045 | Picco et al. | Feb 2000 | A |
6286140 | Ivanyi | Sep 2001 | B1 |
6308327 | Liu et al. | Oct 2001 | B1 |
6311161 | Anderson et al. | Oct 2001 | B1 |
6675383 | Wheeler et al. | Jan 2004 | B1 |
6735775 | Massetti | May 2004 | B1 |
6845360 | Jensen et al. | Jan 2005 | B2 |
6850619 | Hirai | Feb 2005 | B1 |
6871180 | Neuhauser et al. | Mar 2005 | B1 |
7006555 | Srinivasan | Feb 2006 | B1 |
7039932 | Eldering | May 2006 | B2 |
7239981 | Kolessar et al. | Jul 2007 | B2 |
7440674 | Plotnick et al. | Oct 2008 | B2 |
7908133 | Neuhauser | Mar 2011 | B2 |
7961881 | Jensen et al. | Jun 2011 | B2 |
7962934 | Eldering et al. | Jun 2011 | B1 |
RE42627 | Neuhauser et al. | Aug 2011 | E |
20010056573 | Kovac et al. | Dec 2001 | A1 |
20020124246 | Kaminsky et al. | Sep 2002 | A1 |
20020144262 | Plotnick et al. | Oct 2002 | A1 |
20020194592 | Tsuchida et al. | Dec 2002 | A1 |
20030039465 | Bjorgan et al. | Feb 2003 | A1 |
20030081781 | Jensen et al. | May 2003 | A1 |
20040073916 | Petrovic et al. | Apr 2004 | A1 |
20040102961 | Jensen et al. | May 2004 | A1 |
20040137929 | Jones et al. | Jul 2004 | A1 |
20050028189 | Heine et al. | Feb 2005 | A1 |
20050033758 | Baxter | Feb 2005 | A1 |
20050125820 | Nelson et al. | Jun 2005 | A1 |
20050216509 | Kolessar et al. | Sep 2005 | A1 |
20060110005 | Tapson | May 2006 | A1 |
20060222179 | Jensen et al. | Oct 2006 | A1 |
20070006275 | Wright et al. | Jan 2007 | A1 |
20070100483 | Kentish et al. | May 2007 | A1 |
20070124757 | Breen | May 2007 | A1 |
20070162927 | Ramaswamy et al. | Jul 2007 | A1 |
20080002854 | Tehranchi et al. | Jan 2008 | A1 |
20080056675 | Wright et al. | Mar 2008 | A1 |
20080086304 | Neuhauser | Apr 2008 | A1 |
20090037575 | Crystal et al. | Feb 2009 | A1 |
20090222848 | Ramaswamy | Sep 2009 | A1 |
20100037251 | Lindhult | Feb 2010 | A1 |
20100131970 | Falcon | May 2010 | A1 |
20100226494 | Lynch et al. | Sep 2010 | A1 |
20100268540 | Arshi et al. | Oct 2010 | A1 |
20100268573 | Jain et al. | Oct 2010 | A1 |
20110106587 | Lynch et al. | May 2011 | A1 |
20110246202 | McMillan et al. | Oct 2011 | A1 |
20120022879 | Srinivasan | Jan 2012 | A1 |
Number | Date | Country |
---|---|---|
03094499 | Nov 2003 | WO |
2005025217 | Mar 2005 | WO |
Entry |
---|
Critical Band Encoding Technology Audio Encoding System from Arbitron, Technical Overview, Document 1050-1054, Revision E, Feb. 2008. |
Intark Han, Hong-Shik Park, Youn-Kwae Jeong, and Kwang-Roh Park, "An integrated home server for communication, broadcast reception, and home automation," IEEE Transactions on Consumer Electronics, vol. 52, no. 1, pp. 104-109, Feb. 2006. |
Canadian Intellectual Property Office, Notice of Allowance, issued in connection with CA Application No. 2,574,998, dated Aug. 10, 2010, 1 page. |
Canadian Intellectual Property Office, Official Action issued in connection with CA Application No. 2,574,998, mailed Nov. 13, 2009, 10 pages. |
Canadian Intellectual Property Office, Official Action issued in connection with CA Application No. 2,574,998, mailed Aug. 26, 2008, 4 pages. |
Canadian Intellectual Property Office, Official Action issued in connection with CA Application No. 2,574,998, mailed Mar. 23, 2009, 5 pages. |
USPTO, “Office Action,” issued in connection with U.S. Appl. No. 11/618,245, dated Jul. 3, 2013, 24 pages. |
USPTO, “Office Action,” issued in connection with U.S. Appl. No. 11/618,245, dated Jul. 21, 2009, 26 pages. |
USPTO, “Advisory Action,” issued in connection with U.S. Appl. No. 11/618,245, dated Sep. 30, 2009, 3 pages. |
USPTO, “Office Action,” issued in connection with U.S. Appl. No. 11/618,245, dated Oct. 26, 2011, 33 pages. |
USPTO, “Office Action,” issued in connection with U.S. Appl. No. 11/618,245, dated Apr. 28, 2011, 37 pages. |
USPTO, “Office Action,” issued in connection with U.S. Appl. No. 11/618,245, dated Feb. 5, 2009, 35 pages. |
International Preliminary Report on Patentability, issued in PCT Application No. PCT/US2005/026426, mailed Feb. 1, 2007, 9 pages. |
Patent Cooperation Treaty, “International Search Report and the Written Opinion of the International Searching Authority,” issued in PCT Application No. PCT/US11/28440, mailed May 12, 2011, 8 pages. |
Patent Cooperation Treaty, “International Search Report and the Written Opinion of the International Searching Authority,” issued in PCT Application No. PCT/US05/26426, mailed Aug. 18, 2006, 10 pages. |
Sun, Yuanyuan, “Forensic Audio Watermarking for Digital Video Broadcasting,” Eindhoven University of Technology, Department of Mathematics and Computer Science, Aug. 2009, 57 pages. |
Whittemore, Rick, “Watermarking Video in STBs for forensic tracking with Dolby's Cinea Running Marks,” Cinea Inc. (Division of Dolby Laboratories), Sep. 19, 2008, 5 pages. |
USPTO, “Office Action,” issued in connection with U.S. Appl. No. 11/618,245, dated Dec. 31, 2013 (27 pages). |