This is an application to reissue U.S. Pat. No. 6,871,180 issued Mar. 22, 2005 from application Ser. No. 09/318,045 filed May 25, 1999.
The present invention relates to methods and apparatus for extracting an information signal from an encoded audio signal.
There are various motivations for permanently or indelibly incorporating information signals into audio signals, a practice referred to as “watermarking.” Such an audio watermark may provide, for example, an indication of authorship, content, lineage, existence of copyright, or the like, for the audio signals so marked. Alternatively, other information may be incorporated into audio signals, whether concerning the signal itself or unrelated to it. The information may be incorporated in an audio signal for various purposes, such as identification or as an address or command, whether or not related to the signal itself.
There is considerable interest in encoding audio signals with information to produce encoded audio signals having substantially the same perceptible characteristics as the original unencoded audio signals. Recent successful techniques exploit the psychoacoustic masking effect of the human auditory system whereby certain sounds are humanly imperceptible when received along with other sounds.
One particularly successful utilization of the psychoacoustic masking effect is described in U.S. Pat. No. 5,450,490 and U.S. Pat. No. 5,764,763 (Jensen et al.), in which information is represented by a multiple-frequency code signal which is incorporated into an audio signal based upon the masking ability of the audio signal. The encoded audio signal is suitable for broadcast transmission and reception as well as for recording and reproduction. When received, the audio signal is processed to detect the presence of the multiple-frequency code signal. Sometimes, only a portion of the multiple-frequency code signal, e.g., a number of the single-frequency code components inserted into the original audio signal, is detected in the received audio signal. If a sufficient quantity of code components is detected, the information signal itself may be recovered.
Generally, an acoustic signal having low amplitude levels will have only minimal capacity, if any at all, to acoustically mask an information signal. For example, such low amplitude levels can occur during a pause in a conversation, during an interlude between segments of music, or even within certain types of music. During a lengthy period of low amplitude levels, it may be difficult to incorporate a code signal in an audio signal without causing the encoded audio signal to differ from the original in an acoustically perceptible manner.
A further problem is the occurrence of burst errors during the transmission or reproduction of encoded audio signals. Burst errors may appear as temporally contiguous segments of signal error. Such errors generally are unpredictable and substantially affect the content of an encoded audio signal. Burst errors typically arise from failure in a transmission channel or reproduction device due to severe external interferences, such as an overlapping of signals from different transmission channels, an occurrence of system power spikes, an interruption in normal operations, an introduction of noise contamination (intentionally or otherwise), and the like. In a transmission system, such circumstances may cause a portion of the transmitted encoded audio signals to be entirely unreceivable or significantly altered. Absent retransmission of the encoded audio signal, the affected portion of the encoded audio may be wholly unrecoverable, while in other instances alterations to the encoded audio signal may render the embedded information signal undetectable. In many applications, such as radio and television broadcasting, real-time retransmission of encoded audio signals is simply unfeasible.
In systems for acoustically reproducing audio signals recorded on media, a variety of factors may cause burst errors in the reproduced acoustic signal. Commonly, an irregularity in the recording media, caused by damage, obstruction, or wear, results in certain portions of recorded audio signals being unreproducible or significantly altered upon reproduction. Also, misalignment of or interference with the recording or reproducing mechanism relative to the recording medium can cause burst-type errors during an acoustic reproduction of recorded audio signals. Further, the acoustic limitations of a speaker as well as the acoustic characteristics of the listening environment may result in spatial irregularities in the distribution of acoustic energy. Such irregularities may cause burst errors to occur in received acoustic signals, interfering with code recovery.
Therefore, an object of the present invention is to provide systems and methods for detecting code symbols in audio signals which alleviate the problems caused by periods of low signal levels and burst errors.
It is another object of the invention to provide such systems and methods which afford reliable operation under adverse conditions.
It is a further object of the invention to provide such systems and methods which are robust.
In accordance with an aspect of the present invention, systems and methods are provided for decoding at least one message symbol represented by a plurality of code symbols in an audio signal. The systems and methods comprise the means for and the steps of, respectively, receiving first and second code symbols representing a common message symbol, the first and second code symbols being displaced in time in the audio signal, accumulating a first signal value representing the first code symbol and a second signal value representing the second code symbol, and examining the accumulated first and second signal values to detect the common message symbol.
In accordance with another aspect of the present invention, a system is provided for decoding at least one message symbol represented by a plurality of code symbols in an audio signal. The system comprises an input device for receiving first and second code symbols representing a common message symbol, the first and second code symbols being displaced in time in the audio signal; and a digital processor in communication with the input device to receive data therefrom representing the first and second code symbols, the digital processor being programmed to accumulate a first signal value representing the first code symbol and a second signal value representing the second code symbol, the digital processor being further programmed to examine the accumulated first and second signal values to detect the common message symbol.
In certain embodiments, the first and second signal values are accumulated by storing the values separately and the common message symbol is detected by examining both of the separately stored values. The first and second signal values may represent signal values derived from multiple other signal values, such as values of individual code frequency components, or a single signal value, such as a measure of the magnitude of a single code frequency component. Moreover, a derived value may be obtained as a linear combination of multiple signal values, such as a summation of weighted or unweighted values, or as a non-linear function thereof.
In further embodiments, the first and second signal values are accumulated by producing a third signal value derived from the first and second values. The third signal value in some embodiments is derived through a linear combination of the first and second signal values, such as a weighted or unweighted summation thereof, or as a nonlinear function thereof.
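As a minimal illustration of these two accumulation strategies, the following sketch keeps the two signal values either separately or as a single derived value formed by a weighted summation; all names and the particular weights are assumptions made for illustration and are not details taken from the disclosure.

```python
# Illustrative sketch only; names and weights are assumptions.

def accumulate_separately(store, first_value, second_value):
    """Keep both signal values so the detector can examine each of them."""
    store.setdefault("first", []).append(first_value)
    store.setdefault("second", []).append(second_value)
    return store

def accumulate_combined(first_value, second_value, w1=1.0, w2=1.0):
    """Derive a third value as a (possibly weighted) linear combination."""
    return w1 * first_value + w2 * second_value

store = {}
accumulate_separately(store, 0.8, 1.1)
third_value = accumulate_combined(0.8, 1.1)   # unweighted summation in this example
```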
Other objects, features, and advantages according to the present invention will become apparent from the following detailed description of certain advantageous embodiments when read in conjunction with the accompanying drawings in which the same components are identified by the same reference numerals.
The present invention relates to the use of especially robust encoding which converts information into redundant sequences of code symbols. In certain embodiments, each code symbol is represented by a set of different, predetermined single-frequency code signals; however, in other embodiments different code symbols may optionally share certain single-frequency code signals or may be provided by a methodology which does not assign predetermined frequency components to a given symbol. The redundant sequence of symbols is incorporated into the audio signals to produce encoded audio signals in which the embedded codes go unnoticed by the listener but nevertheless remain recoverable.
The redundant code symbol sequence is especially suited for incorporation into audio signals having low masking capacity, such as audio signals having many low amplitude portions or the like. Additionally, when incorporated into audio signals, the redundant sequence of code symbols resists degradation by burst errors which affect temporally contiguous audio signals. As described hereinabove, such errors may be the result of imperfect audio signal recording, reproduction, and/or storage processes, transmission of the audio signals through a lossy and/or noisy channel, irregularities in an acoustic environment, or the like.
To recover the encoded information in certain advantageous embodiments, the encoded audio signals are examined in an attempt to detect the presence of predetermined single-frequency code components. During the encoding process, some single-frequency code components may not have been incorporated into the audio signals in certain signal intervals due to insufficient masking capacity in the audio signals in these intervals. Burst errors which have corrupted portions of the encoded audio signals can result in the deletion of certain code signals from the encoded audio signals or in the insertion of erroneous signals, such as noise, into the encoded audio signals. Thus, examination of the encoded audio signals is likely to reveal a much distorted version of the original sequence of sets of single-frequency code signals that represented the information.
The single-frequency code components that are recovered, along with the erroneous additional signals that are mistakenly detected as code signals, are processed to discern the original sequence of code symbols, if possible. The code signal detection and processing operations are specifically adapted to exploit the strengths of the encoding methodology. As a result, the detection and processing methodology of the present invention provides improved error tolerance.
The symbol generation function 12, when employed, translates an information signal into a set of code symbols. This function may be carried out with the use of a memory device, such as a semiconductor EPROM of the computer system, which is prestored with a table of code symbols suitable for indexing with respect to an information signal. An example of a table for translating an information signal into a code symbol for certain applications is shown in
The symbol sequence generating function 14 formats the symbols produced by the symbol generating function (or input directly to the encoder 10) into a redundant sequence of code or information symbols. As part of the formatting process, in certain embodiments marker and/or synchronization symbols are added to the sequence of code symbols. The redundant sequence of code symbols is designed to be especially resistant to burst errors and audio signal encoding processes. Further explanation of redundant sequences of code symbols in accordance with certain embodiments will be provided in connection with the discussion of
As noted above, the symbol sequence generating function 14 is optional. For example, the encoding process may be carried out such that the information signal is translated directly into a predetermined symbol sequence, without implementing separate symbol generating and symbol sequence generating functions.
Each symbol of the sequence of symbols thus produced is converted by the symbol encoding function 16 into a plurality of single-frequency code signals. In certain advantageous embodiments the symbol encoding function is performed by means of a memory device of the computer system, such as a semiconductor EPROM, which is prestored with sets of single-frequency code signals that correspond to each symbol. An example of a table of symbols and corresponding sets of single-frequency code signals is shown in
Alternatively, the sets of code signals may be stored on a hard drive or other suitable storage device of the computer system. The encoding function may also be implemented by one or more discrete components, such as an EPROM and associated control devices, by a logic array, by an application specific integrated circuit or any other suitable device or combination of devices. The encoding function may also be carried out by one or more devices which also implement one or more of the remaining functions illustrated in
In the alternative, the encoded sequence may be generated directly from the information signal, without implementing the separate functions 12, 14, and 16.
The acoustic masking effect evaluation/adjustment function 18 determines the capacity of an input audio signal to mask single-frequency code signals produced by the symbol encoding function 16. Based upon a determination of the masking ability of the audio signal, the function 18 generates adjustment parameters to adjust the relative magnitudes of the single-frequency code signals so that such code signals will be rendered inaudible to a human listener when incorporated into the audio signal. Where the audio signal is determined to have low masking capacity, due to low signal amplitude or other signal characteristics, the adjustment parameters may reduce the magnitudes of certain code signals to extremely low levels or may nullify such signals entirely. Conversely, where the audio signal is determined to have a greater masking capacity, such capacity may be utilized through the generation of adjustment parameters that increase the magnitudes of particular code signals. Code signals having increased magnitudes are generally more likely to be distinguishable from noise and thus detectable by a decoding device. Further details of certain advantageous embodiments of such evaluation/adjustment function are set forth in U.S. Pat. No. 5,764,763 and U.S. Pat. No. 5,450,490 to Jensen, et al., each entitled Apparatus and Methods for Including Codes in Audio Signals and Decoding, which are incorporated herein by reference in their entirety.
In certain embodiments, the function 18 applies the adjustment parameters to the single-frequency code signals to produce adjusted single-frequency code signals. The adjusted code signals are included in the audio signal by the function 20. Alternatively, the function 18 supplies the adjustment parameters along with the single-frequency code signals for adjustment and inclusion in the audio signal by the function 20. In still other embodiments, the function 18 is combined with one or more of the functions 12, 14, and 16 to produce magnitude-adjusted single-frequency code signals directly.
In certain embodiments, the acoustic masking effect evaluation/adjustment function 18 is implemented in a processing device, such as a microprocessor system which may also implement one or more of the additional functions illustrated in
The code inclusion function 20 combines the single-frequency code components with the audio signal to produce an encoded audio signal. In a straightforward implementation, the function 20 simply adds the single-frequency code signals directly to the audio signal. However, the function 20 may overlay the code signals upon the audio signal. Alternatively, modulator 20 may modify the amplitudes of frequencies within the audio signal according to an input from acoustic masking effect evaluation function 18 to produce an encoded audio signal that includes the adjusted code signals. Moreover, the code inclusion function may be carried out either in the time domain or in the frequency domain. The code inclusion function 20 may be implemented by means of an adding circuit, or by means of a processor. This function may also be implemented by one or more devices described above which also implement one or more of the remaining functions illustrated in
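As a rough sketch of the straightforward time-domain implementation mentioned above, the following adds magnitude-adjusted single-frequency code components directly to a block of audio samples; the function name, the NumPy-based synthesis, and the example frequencies and amplitudes are illustrative assumptions rather than details taken from the disclosure.

```python
import numpy as np

def include_code(audio, code_freqs, code_amplitudes, sample_rate):
    """Add magnitude-adjusted single-frequency code components to an audio block.

    audio           : 1-D array of time-domain audio samples
    code_freqs      : frequencies (Hz) of the code components for one symbol
    code_amplitudes : per-component amplitudes from the masking evaluation/adjustment
    """
    t = np.arange(len(audio)) / sample_rate
    code = np.zeros_like(audio)
    for f, a in zip(code_freqs, code_amplitudes):
        code += a * np.sin(2.0 * np.pi * f * t)   # one single-frequency code signal
    return audio + code                            # straightforward additive inclusion

# Example: embed three hypothetical code components in 0.5 s of placeholder audio.
fs = 48_000
audio = 0.1 * np.random.randn(fs // 2)
encoded = include_code(audio, [1000.0, 1200.0, 1400.0], [0.01, 0.01, 0.01], fs)
```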
One or more of the functions 12 through 20 may be implemented by a single device. In certain advantageous embodiments, the functions 12, 14, 16 and 18 are implemented by a single processor, and in still others a single processor carries out all of the functions illustrated in
Generalizing from this example, an input set of N symbols, S1, S2, S3, . . . , SN−1, SN, is represented by the redundant symbol sequence comprising SA, S1, S2, S3, . . . SN−1, SN, followed by (P−1) repeating segments comprising SB, S1, S2, S3, . . . SN−1, SN. As in the example, this core unit may itself be repeated to increase survivability. In addition, the sequence of symbols in the message segments may be varied from segment to segment so long as the decoder is arranged to recognize corresponding symbols in the various segments. Moreover, different sequence or marker symbols and combinations thereof may be employed, and the positions of the markers with respect to the data symbols may be arranged differently. For example, the sequence can take the form, S1, . . . , S2, . . . , SA, . . . , SN or the form, S1, S2, . . . , SN, SA.
Generalizing from this example, an input set of N symbols, S1, S2, S3, . . . SN−1, SN, is represented by the redundant symbol sequence comprising SA, S1, S2, S3, . . . SN−1, SN, SB, S(1+β) mod M, S(2+β) mod M, S(3+β) mod M, . . . S(N−1+β) mod M, S(N+β) mod M. That is, the same information is represented by two or more different symbols in the same core unit and recognized according to their order therein. In addition, these core units may themselves be repeated to increase survivability. Since the same information is represented by multiple different symbols, the coding is made substantially more robust. For example, the structure of an audio signal can mimic the frequency component of one of the data symbols SN, but the likelihood that the audio signal will also mimic its corresponding offset S(N+β) mod M at its predetermined occurrence is very much lower. Also, since the offset is the same for all symbols within a given segment, this information provides a further check on the validity of the detected symbols within that segment. Consequently, the encoding format of
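The following sketch shows one way such a core unit could be assembled, with a first segment led by marker SA and a second segment led by marker SB in which each data symbol is replaced by its offset counterpart modulo the alphabet size M; the alphabet size, the offset value, and all names are assumptions for illustration only.

```python
def build_core_unit(data_symbols, beta, m, marker_a="SA", marker_b="SB"):
    """Assemble one core unit of the redundant sequence.

    data_symbols : integer symbol indices S1..SN, each in the range 0..m-1
    beta         : offset applied to every data symbol of the second segment
    m            : size of the data symbol alphabet (offsets are taken modulo m)
    """
    first_segment = [marker_a] + [f"S{s}" for s in data_symbols]
    second_segment = [marker_b] + [f"S{(s + beta) % m}" for s in data_symbols]
    return first_segment + second_segment

# Example: four data symbols from a hypothetical 10-symbol alphabet, offset of 3.
core_unit = build_core_unit([5, 1, 9, 2], beta=3, m=10)
transmitted = core_unit * 2   # the core unit itself may be repeated for survivability
```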
A particular strength of the redundant sequence exemplified in
The table of
Recording facility 54 includes apparatus for receiving and encoding audio signals and recording encoded audio signals upon a storage medium. Specifically, facility 54 includes audio signal encoder 58 and audio signal recorder 62. Audio signal encoder 58 receives an audio signal feed 52 and a recording information signal 56 and encodes audio signal 52 with information signal 56 to produce an encoded audio signal 60. Audio signal feed 52 may be produced by any conventional source of audio signals such as a microphone, an apparatus for reproducing recorded audio signals, or the like. Recording information signal 56 preferably comprises information regarding audio signal feed 52, such as its authorship, content, or lineage, or the existence of copyright, or the like. Alternatively, recording information signal 56 may comprise any type of data.
Recorder 62 is a conventional device for recording encoded audio signals 60 upon a storage medium which is suitable for distribution to one or more broadcasters 66. Alternatively, audio signal recorder 62 may be omitted entirely. Encoded audio signals 60 may be distributed via distribution of the recorded storage media or via a communication link 64. Communication link 64 extends between recording facility 54 and broadcaster 66 and may comprise a broadcast channel, a microwave link, a wire or fiber optic connection, or the like.
Broadcaster 66 is a broadcasting station that receives encoded audio signals 60, further encodes such signals 60 with a broadcaster information signal 68 to produce a twice-encoded audio signal 72, and broadcasts the twice-encoded audio signal 72 along a transmission path 74. Broadcaster 66 includes an audio signal encoder 70 which receives encoded audio signal 60 from recording facility 54 and a broadcaster information signal 68. Broadcaster information signal 68 may comprise information regarding broadcaster 66, such as an identification code, or regarding the broadcasting process, such as the time, date or characteristics of the broadcast, the intended recipient(s) of the broadcast signal, or the like. Encoder 70 encodes encoded audio signal 60 with information signal 68 to produce twice-encoded audio signal 72. Transmission path 74 extends between broadcaster 66 and relay station 76 and may comprise a broadcast channel, a microwave link, a wire or fiber optic connection, or the like.
Relay station 76 receives a twice-encoded audio signal 72 from broadcaster 66, further encodes that signal with a relay station information signal 78, and transmits the thrice-encoded audio signal 82 to a listener facility 86 via a transmission path 84. Relay station 76 includes an audio signal encoder 80 which receives twice-encoded audio signal 72 from broadcaster 66 and a relay station information signal 78. Relay station information signal 78 preferably comprises information regarding relay station 76, such as an identification code, or regarding the process of relaying the broadcast signal, such as the time, date or characteristics of the relay, the intended recipient(s) of the relayed signal, or the like. Encoder 80 encodes twice-encoded audio signal 72 with relay station information signal 78 to produce thrice-encoded audio signal 82. Transmission path 84 extends between relay station 76 and listener facility 86 and may comprise a broadcast channel, a microwave link, a wire or fiber optic connection, or the like. Optionally, transmission path 84 may be an acoustic transmission path.
Listener facility 86 receives thrice-encoded audio signal 82 from relay station 76. In audience estimate applications, listener facility 86 is located where a human listener may perceive an acoustic reproduction of audio signal 82. If audio signal 82 is transmitted as an electromagnetic signal, listener facility 86 preferably includes a device for acoustically reproducing that signal for the human listener. However, if audio signal 82 is stored upon a storage medium, listener facility 86 preferably includes a device for reproducing signal 82 from the storage medium.
In other applications, such as music identification and commercial monitoring, a monitoring facility is employed rather than listener facility 86. In such a monitoring facility, the audio signal 82 preferably is processed to recover the encoded message without acoustic reproduction.
Audio signal decoder 88 may receive thrice-encoded audio signal 82 as an audio signal or, optionally, as an acoustic signal. Decoder 88 decodes audio signal 82 to recover one or more of the information signals encoded therein. Preferably, the recovered information signal(s) are processed at listener facility 86 or recorded on a storage medium for later processing.
Alternatively, the recovered information signal(s) may be converted into images for visual display to the listener.
In an alternate embodiment, recording facility 54 is omitted from system 50. Audio signal feed 52, representing, for example, a live audio performance, is provided directly to broadcaster 66 for encoding and broadcast. Accordingly, broadcaster information signal 68 may further comprise information regarding audio signal feed 52, such as its authorship, content, or lineage, or the existence of copyright, or the like.
In another alternate embodiment, relay station 76 is omitted from system 50. Broadcaster 66 provides twice-encoded audio signal 72 directly to listener 86 via transmission path 74 which is modified to extend therebetween. As a further alternative, both recording facility 54 and relay station 76 may be omitted from system 50.
In another alternate embodiment, broadcaster 66 and relay station 76 are omitted from system 50. Optionally, communication link 64 is modified to extend between recording facility 54 and listener facility 86 and to carry encoded audio signal 60 therebetween. Preferably, audio signal recorder 62 records encoded audio signal 60 upon a storage medium which is thereafter conveyed to listener facility 86. An optional reproduction device at listener facility 86 reproduces the encoded audio signal from the storage medium for decoding and/or acoustic reproduction.
A microphone 93 is within the housing 92 and serves as an acoustic transducer to transduce received acoustic energy, including encoded audio signals, to analog electrical signals. The analog signals are converted to digital form by an analog-to-digital converter, and the digital signals are then supplied to a digital signal processor (DSP) 95. The DSP 95 implements a decoder in accordance with the present invention in order to detect the presence of predetermined codes in the audio energy received by the microphone 93 indicating that the person carrying the personal portable meter 90 has been exposed to a broadcast of a certain station or channel. If so, the DSP 95 stores a signal representing such detection in its internal memory along with an associated time signal.
The meter 90 also includes a data transmitter/receiver, such as an infrared transmitter/receiver 97 coupled with the DSP 95. The transmitter/receiver 97 enables the DSP 95 to provide its data to a facility for processing such data from multiple meters 90 to produce audience estimates, as well as to receive instructions and data, for example, to set up the meter 90 for carrying out a new audience survey.
Decoders in accordance with certain advantageous embodiments of the present invention are illustrated by the functional block diagram of
For received audio signals in the time domain, the decoder 100 transforms such signals to the frequency domain by means of a function 106. The function 106 preferably is performed by a digital processor implementing a fast Fourier transform (FFT) although a discrete cosine transform, a chirp transform or a Winograd transform algorithm (WFTA) may be employed in the alternative. Any other time-to-frequency-domain transformation function providing the necessary resolution may be employed in place of these. It will be appreciated that in certain implementations, the function 106 may also be carried out by analog or digital filters, by an application specific integrated circuit, or any other suitable device or combination of devices. The function 106 may also be implemented by one or more devices which also implement one or more of the remaining functions illustrated in
The frequency domain-converted audio signals are processed in a symbol values derivation function 110, to produce a stream of symbol values for each code symbol included in the received audio signal. The produced symbol values may represent, for example, signal energy, power, sound pressure level, amplitude, etc., measured instantaneously or over a period of time, on an absolute or relative scale, and may be expressed as a single value or as multiple values. Where the symbols are encoded as groups of single frequency components each having a predetermined frequency, the symbol values preferably represent either single frequency component values or one or more values based on single frequency component values.
The function 110 may be carried out by a digital processor, such as a digital signal processor (DSP) which advantageously carries out some or all of the other functions of decoder 100. However, the function 110 may also be carried out by an application specific integrated circuit, or by any other suitable device or combination of devices, and may be implemented by apparatus apart from the means which implement the remaining functions of the decoder 100.
The stream of symbol values produced by the function 110 is accumulated over time in an appropriate storage device on a symbol-by-symbol basis, as indicated by the function 116. In particular, the function 116 is advantageous for use in decoding encoded symbols which repeat periodically, by periodically accumulating symbol values for the various possible symbols. For example, if a given symbol is expected to recur every X seconds, the function 116 may serve to store a stream of symbol values for a period of nX seconds (n>1), and to add one or more further streams of symbol values of nX seconds duration to the stored values, so that peak symbol values accumulate over time, improving the signal-to-noise ratio of the stored values.
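A minimal sketch of this periodic accumulation, under the assumption that each incoming stream of symbol values is arranged as an array with one row per possible symbol and one column per time slot of the nX-second period; the class and parameter names are illustrative.

```python
import numpy as np

class SymbolAccumulator:
    """Accumulate streams of symbol values for symbols that repeat every X seconds."""

    def __init__(self, num_symbols, slots_per_period):
        # running totals: one row per possible symbol, one column per time slot
        self.totals = np.zeros((num_symbols, slots_per_period))

    def add_stream(self, stream):
        """Add one nX-second stream of symbol values to the stored values."""
        self.totals += stream   # peaks from genuine code symbols grow faster than noise
        return self.totals

# Example: six repetitions of a message occupying 25 time slots, 12 possible symbols.
acc = SymbolAccumulator(num_symbols=12, slots_per_period=25)
for _ in range(6):
    acc.add_stream(np.random.rand(12, 25))   # placeholder for derived symbol values
```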
The function 116 may be carried out by a digital processor, such as a DSP, which advantageously carries out some or all of the other functions of decoder 100. However, the function 116 may also be carried out using a memory device separate from such a processor, or by an application specific integrated circuit, or by any other suitable device or combination of devices, and may be implemented by apparatus apart from the means which implements the remaining functions of the decoder 100.
The accumulated symbol values stored by the function 116 are then examined by the function 120 to detect the presence of an encoded message and output the detected message at an output 126. The function 120 can be carried out by matching the stored accumulated values, or a processed version of such values, against stored patterns, whether by correlation or by another pattern matching technique. However, the function 120 advantageously is carried out by examining peak accumulated symbol values and their relative timing, to reconstruct the encoded message. This function may be carried out after the first stream of symbol values has been stored by the function 116 and/or after each subsequent stream has been added thereto, so that the message is detected once the signal-to-noise ratios of the stored, accumulated streams of symbol values reveal a valid message pattern.
The decoder of
In order to separate the various components, the DSP repeatedly carries out FFT's on audio signal samples falling within successive, predetermined intervals. The intervals may overlap, although this is not required. In an exemplary embodiment, ten overlapping FFT's are carried out during each second of decoder operation. Accordingly, the energy of each symbol period falls within five FFT periods. The FFT's may be windowed, although windowing may be omitted in order to simplify the decoder. The samples are stored and, when a sufficient number are thus available, a new FFT is performed, as indicated by steps 134 and 138.
In this embodiment, the frequency component values are produced on a relative basis. That is, each component value is represented as a signal-to-noise ratio (SNR), produced as follows. The energy within each frequency bin of the FFT in which a frequency component of any symbol can fall provides the numerator of each corresponding SNR. Its denominator is determined as an average of adjacent bin values. For example, the average of seven of the eight surrounding bin energy values may be used, the largest value of the eight being ignored in order to avoid the influence of a possible large bin energy value which could result, for example, from an audio signal component in the neighborhood of the code frequency component. Also, given that a large energy value could also appear in the code component bin, for example, due to noise or an audio signal component, the SNR is appropriately limited. In this embodiment, if SNR≥6.0, then SNR is limited to 6.0, although a different maximum value may be selected.
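A sketch of this relative-value computation for a single code-frequency bin, under the assumptions that the eight surrounding bins are the four on either side, that the largest of the eight is discarded before averaging, and that the bin index is far enough from the edges of the spectrum; the function name and the use of NumPy are illustrative.

```python
import numpy as np

def component_snr(bin_energies, k, max_snr=6.0):
    """SNR of the FFT bin at index k in which a code component may fall.

    Numerator: the energy of bin k. Denominator: the average of seven of the
    eight surrounding bin energies, the largest being ignored. The result is
    limited to max_snr. Assumes 4 <= k <= len(bin_energies) - 5.
    """
    neighbors = np.concatenate((bin_energies[k - 4:k], bin_energies[k + 1:k + 5]))
    noise = np.sort(neighbors)[:-1].mean()      # drop the largest of the eight
    snr = bin_energies[k] / noise if noise > 0.0 else max_snr
    return min(snr, max_snr)

# Example: a 1 kHz component in a 1024-point block sampled at 8 kHz.
fs, n = 8000, 1024
t = np.arange(n) / fs
block = 0.05 * np.sin(2.0 * np.pi * 1000.0 * t) + 0.01 * np.random.randn(n)
energies = np.abs(np.fft.rfft(block)) ** 2
print(component_snr(energies, round(1000.0 * n / fs)))
```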
The ten SNR's of each FFT corresponding to each symbol which may be present are combined to form symbol SNR's, which are stored in a circular symbol SNR buffer, as indicated in step 142 and illustrated schematically in
As indicated by
When the symbol SNR buffer is filled, this is detected in a step 146. In certain advantageous embodiments, the stored SNR's are adjusted to reduce the influence of noise in a step 152, although this step is optional in many applications. In this optional step, a noise value is obtained for each symbol (row) in the buffer by obtaining the average of all stored symbol SNR's in the respective row each time the buffer is filled. Then, to compensate for the effects of noise, this average or “noise” value is subtracted from each of the stored symbol SNR values in the corresponding row. In this manner, a “symbol” appearing only briefly, and thus not a valid detection, is averaged out over time. Referring also to
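A sketch of this optional noise adjustment, assuming the filled buffer is held as a two-dimensional array with one row of stored symbol SNR's per possible symbol and one column per FFT period; the array shape and names are illustrative.

```python
import numpy as np

def subtract_row_noise(symbol_snr_buffer):
    """Subtract each symbol's average ("noise") value from its stored SNR's.

    symbol_snr_buffer : array of shape (num_symbols, num_fft_periods)
    """
    noise = symbol_snr_buffer.mean(axis=1, keepdims=True)   # per-row noise value
    return symbol_snr_buffer - noise    # briefly-appearing false symbols average out

# Example: 12 possible symbols, 50 FFT periods per row.
adjusted = subtract_row_noise(np.random.rand(12, 50))
```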
After the symbol SNR's have been adjusted by subtracting the noise level, the decoder attempts to recover the message by examining the pattern of maximum SNR values in the buffer in a step 156. In certain embodiments, the maximum SNR values for each symbol are located in a process of successively combining groups of five adjacent SNR's, by weighting the values in the sequence in proportion to the sequential weighting (6, 10, 10, 10, 6) and then adding the weighted SNR's to produce a comparison SNR centered in the time period of the third SNR in the sequence. This process is carried out progressively throughout the fifty FFT periods of each symbol. For example, a first group of five SNR's for the “A” symbol in FFT periods 1 through 5 is weighted and added to produce a comparison SNR for FFT period 3. Then a further comparison SNR is produced using the SNR's from FFT periods 2-6, and so on until comparison values have been obtained centered on FFT periods 3 through 48. However, other means may be employed for recovering the message. For example, either more or fewer than five SNR's may be combined, they may be combined without weighting, or they may be combined in a non-linear fashion.
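A sketch of this progressive weighted combination for one symbol's row of fifty stored SNR's, producing comparison SNR's centered on FFT periods 3 through 48; indexing here is zero-based and the names are illustrative.

```python
import numpy as np

WEIGHTS = np.array([6.0, 10.0, 10.0, 10.0, 6.0])

def comparison_snrs(row):
    """Weight and add successive groups of five adjacent SNR's.

    row : the fifty stored SNR's of one symbol; returns 46 comparison SNR's,
    each centered on the time period of the third SNR in its group.
    """
    return np.array([WEIGHTS @ row[i:i + 5] for i in range(len(row) - 4)])

# Example: comparison values for one symbol row of the (noise-adjusted) buffer.
row = np.random.rand(50)
comp = comparison_snrs(row)   # comp[0] is centered on FFT period 3, comp[45] on period 48
```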
After the comparison SNR values have been obtained, the decoder examines the comparison SNR values for a message pattern. First, the marker code symbols SA and SB are located. Once this information is obtained, the decoder attempts to detect the peaks of the data symbols. The use of a predetermined offset between each data symbol in the first segment and the corresponding data symbol in the second segment provides a check on the validity of the detected message. That is, if both markers are detected and the same offset is observed between each data symbol in the first segment and its corresponding data symbol in the second segment, it is highly likely that a valid message has been received.
With reference both to
However, if the message is not thus found, a further fifty overlapping FFT's are carried out on the following portions of the audio signal and the symbol SNR's so produced are added to those already in the circular buffer. The noise adjustment process is carried out as before and the decoder attempts to detect the message pattern again. This process is repeated continuously until a message is detected. In the alternative, the process may be carried out a limited number of times.
It will be apparent from the foregoing that the operation of the decoder may be modified depending on the structure of the message, its timing, its signal path, the mode of its detection, etc., without departing from the scope of the present invention. For example, in place of storing SNR's, FFT results may be stored directly for detecting a message.
Steps employed in the decoding process illustrated in
As indicated in step 174, once the circular buffer is full, its contents are examined in a step 178 to detect the presence of the message pattern. Once full, the buffer remains full continuously, so that the pattern search of step 178 may be carried out after every FFT.
Since each five-symbol message repeats every 2½ seconds, each symbol repeats at intervals of 2½ seconds or every 25 FFT's. In order to compensate for the effects of burst errors and the like, the SNR's R1 through R150 are combined by adding corresponding values of the repeating messages to obtain 25 combined SNR values SNRn, n=1, 2 . . . 25, as follows: SNRn = Rn + R(n+25) + R(n+50) + R(n+75) + R(n+100) + R(n+125).
Accordingly, if a burst error should result in the loss of a signal interval i, only one of the six message intervals will have been lost, and the essential characteristics of the combined SNR values are likely to be unaffected by this event.
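A sketch of this combination, under the assumption that the 150 symbol SNR's for one symbol are held in a single array with index 0 corresponding to R1; the names are illustrative.

```python
import numpy as np

def combine_repetitions(r, period=25):
    """Add corresponding SNR's of the repeating messages.

    r : array of 150 symbol SNR's R1..R150 (index 0 holds R1); returns the 25
        combined values SNR1..SNR25. A burst error corrupting one message
        interval affects only one of the six terms of each combined value.
    """
    reps = len(r) // period                          # six message intervals here
    return r[:reps * period].reshape(reps, period).sum(axis=0)

combined = combine_repetitions(np.random.rand(150))  # shape (25,)
```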
Once the combined SNR values have been determined, the decoder detects the position of the marker symbol's peak as indicated by the combined SNR values and derives the data symbol sequence based on the marker's position and the peak values of the data symbols.
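One possible realization of this step, sketched under the assumptions that the combined SNR values are held as a (symbols × 25) array, that one known row corresponds to the marker symbol, that the message consists of the marker followed by four data symbols, and that successive symbols of the message are spaced five FFT periods apart; all of these assumptions and the names are illustrative.

```python
import numpy as np

def decode_message(combined, marker_row, symbol_spacing=5, num_data=4):
    """Locate the marker peak, then read data symbols at their expected offsets.

    combined   : array of shape (num_symbols, 25) of combined SNR values
    marker_row : row index corresponding to the marker symbol
    """
    marker_pos = int(np.argmax(combined[marker_row]))         # marker's peak position
    message = []
    for i in range(1, num_data + 1):
        pos = (marker_pos + i * symbol_spacing) % combined.shape[1]
        message.append(int(np.argmax(combined[:, pos])))      # peak symbol at that position
    return marker_pos, message

# Example with 12 possible symbols; row 0 plays the role of the marker.
marker_pos, message = decode_message(np.random.rand(12, 25), marker_row=0)
```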
Once the message has thus been formed, as indicated in steps 182 and 183, the message is logged. However, unlike the embodiment of
As in the decoder of
In a further variation which is especially useful in audience measurement applications, a relatively large number of message intervals are separately stored to permit a retrospective analysis of their contents to detect a channel change. In another embodiment, multiple buffers are employed, each accumulating data for a different number of intervals for use in the decoding method of
Although illustrative embodiments of the present invention and modifications thereof have been described in detail herein, it is to be understood that this invention is not limited to these precise embodiments and modifications, and that other modifications and variations may be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
2470240 | Crosby | May 1949 | A |
2573279 | Scherbatskoy | Oct 1951 | A |
2630525 | Tomberlin et al. | Mar 1953 | A |
2660511 | Scherbatskoy et al. | Nov 1953 | A |
2660662 | Scherbatskoy | Nov 1953 | A |
2662168 | Scherbatskoy | Dec 1953 | A |
2766374 | Hoffmann | Oct 1956 | A |
3004104 | Hembrooke | Oct 1961 | A |
3397402 | Schneider | Aug 1968 | A |
3492577 | Reiter et al. | Jan 1970 | A |
3760275 | Ohsawa et al. | Sep 1973 | A |
3803349 | Watanabe | Apr 1974 | A |
3845391 | Crosby | Oct 1974 | A |
3919479 | Moon et al. | Nov 1975 | A |
4025851 | Haselwood et al. | May 1977 | A |
4225967 | Miwa et al. | Sep 1980 | A |
4230990 | Lert, Jr. et al. | Oct 1980 | A |
4238849 | Gassmann | Dec 1980 | A |
4306308 | Nossen | Dec 1981 | A |
4425642 | Moses et al. | Jan 1984 | A |
4450531 | Kenyon et al. | May 1984 | A |
4547804 | Greenberg | Oct 1985 | A |
4554669 | Deman et al. | Nov 1985 | A |
4599732 | LeFever | Jul 1986 | A |
4613904 | Lurie | Sep 1986 | A |
4618995 | Kemp | Oct 1986 | A |
4626904 | Lurie | Dec 1986 | A |
4639779 | Greenberg | Jan 1987 | A |
4677466 | Lert, Jr. et al. | Jun 1987 | A |
4697209 | Kiewit et al. | Sep 1987 | A |
4703476 | Howard | Oct 1987 | A |
4718106 | Weinblatt | Jan 1988 | A |
4739398 | Thomas et al. | Apr 1988 | A |
4771455 | Hareyama et al. | Sep 1988 | A |
4805020 | Greenberg | Feb 1989 | A |
4843562 | Kenyon et al. | Jun 1989 | A |
4876617 | Best et al. | Oct 1989 | A |
4918730 | Schulze | Apr 1990 | A |
4930011 | Kiewit | May 1990 | A |
4942607 | Schroder et al. | Jul 1990 | A |
4943973 | Werner | Jul 1990 | A |
4945412 | Kramer | Jul 1990 | A |
4955070 | Welsh et al. | Sep 1990 | A |
4967273 | Greenberg | Oct 1990 | A |
4972471 | Gross et al. | Nov 1990 | A |
5023929 | Call | Jun 1991 | A |
5113437 | Best et al. | May 1992 | A |
5191593 | McDonald et al. | Mar 1993 | A |
5213337 | Sherman | May 1993 | A |
5214788 | Delaperriere et al. | May 1993 | A |
5214793 | Conway et al. | May 1993 | A |
5311541 | Sanderford, Jr. | May 1994 | A |
5319735 | Preuss et al. | Jun 1994 | A |
5379345 | Greenberg | Jan 1995 | A |
5394274 | Kahn | Feb 1995 | A |
5404377 | Moses | Apr 1995 | A |
5408496 | Ritz et al. | Apr 1995 | A |
5425100 | Thomas et al. | Jun 1995 | A |
5450490 | Jensen et al. | Sep 1995 | A |
5461390 | Hoshen | Oct 1995 | A |
5481294 | Thomas et al. | Jan 1996 | A |
5483276 | Brooks et al. | Jan 1996 | A |
5510828 | Lutterbach et al. | Apr 1996 | A |
5512933 | Wheatley et al. | Apr 1996 | A |
5526427 | Thomas et al. | Jun 1996 | A |
5541585 | Duhame et al. | Jul 1996 | A |
5574962 | Fardeau et al. | Nov 1996 | A |
5579124 | Aijala et al. | Nov 1996 | A |
5581800 | Fardeau et al. | Dec 1996 | A |
5594934 | Lu et al. | Jan 1997 | A |
5612729 | Ellis et al. | Mar 1997 | A |
5612741 | Loban et al. | Mar 1997 | A |
5687191 | Lee et al. | Nov 1997 | A |
5737025 | Dougherty et al. | Apr 1998 | A |
5758315 | Mori | May 1998 | A |
5761240 | Croucher, Jr. | Jun 1998 | A |
5764763 | Jensen et al. | Jun 1998 | A |
5768680 | Thomas | Jun 1998 | A |
5787334 | Fardeau et al. | Jul 1998 | A |
5796785 | Spiero | Aug 1998 | A |
5809013 | Kackman | Sep 1998 | A |
5828325 | Wolosewicz et al. | Oct 1998 | A |
5848129 | Baker | Dec 1998 | A |
5848391 | Bosi et al. | Dec 1998 | A |
5923252 | Sizer et al. | Jul 1999 | A |
5945932 | Smith et al. | Aug 1999 | A |
5960048 | Haartsen | Sep 1999 | A |
5966696 | Giraud | Oct 1999 | A |
6005598 | Jeong | Dec 1999 | A |
6148020 | Emi | Nov 2000 | A |
6154484 | Lee et al. | Nov 2000 | A |
6175627 | Petrovic et al. | Jan 2001 | B1 |
6252522 | Hampton et al. | Jun 2001 | B1 |
6266442 | Laumeyer et al. | Jul 2001 | B1 |
6286005 | Cannon | Sep 2001 | B1 |
6330293 | Klank et al. | Dec 2001 | B1 |
6360167 | Millington et al. | Mar 2002 | B1 |
6396413 | Hines et al. | May 2002 | B2 |
6421445 | Jensen et al. | Jul 2002 | B1 |
6424939 | Herre et al. | Jul 2002 | B1 |
6484148 | Boyd | Nov 2002 | B1 |
6507802 | Payton et al. | Jan 2003 | B1 |
6519769 | Hopple et al. | Feb 2003 | B1 |
6546257 | Stewart | Apr 2003 | B1 |
6571279 | Herz et al. | May 2003 | B1 |
6580916 | Weisshaar et al. | Jun 2003 | B1 |
6597405 | Iggulden | Jul 2003 | B1 |
6647269 | Hendrey | Nov 2003 | B2 |
6647548 | Lu et al. | Nov 2003 | B1 |
6675383 | Wheeler et al. | Jan 2004 | B1 |
6720876 | Burgess | Apr 2004 | B1 |
6735775 | Massetti | May 2004 | B1 |
6845360 | Jensen et al. | Jan 2005 | B2 |
6862355 | Kolessar et al. | Mar 2005 | B2 |
6871180 | Neuhauser et al. | Mar 2005 | B1 |
6934508 | Ceresoli et al. | Aug 2005 | B2 |
6958710 | Zhang et al. | Oct 2005 | B2 |
6996237 | Jensen et al. | Feb 2006 | B2 |
7006982 | Sorensen | Feb 2006 | B2 |
7015817 | Copley | Mar 2006 | B2 |
7222071 | Neuhauser et al. | May 2007 | B2 |
20010053190 | Srinivasan | Dec 2001 | A1 |
20020097193 | Powers | Jul 2002 | A1 |
20020107027 | O'Neil | Aug 2002 | A1 |
20030005430 | Kolessar | Jan 2003 | A1 |
20030055707 | Busche et al. | Mar 2003 | A1 |
20030097302 | Overhultz et al. | May 2003 | A1 |
20030110485 | Lu et al. | Jun 2003 | A1 |
20030122708 | Percy et al. | Jul 2003 | A1 |
20030170001 | Breen | Sep 2003 | A1 |
20030171833 | Crystal et al. | Sep 2003 | A1 |
20030171975 | Kirshenbaum | Sep 2003 | A1 |
20040019675 | Hebeler, Jr. et al. | Jan 2004 | A1 |
20040102961 | Jensen et al. | May 2004 | A1 |
20040122727 | Zhang et al. | Jun 2004 | A1 |
20040127192 | Ceresoli et al. | Jul 2004 | A1 |
20040170381 | Srinivasan | Sep 2004 | A1 |
20050035857 | Zhang et al. | Feb 2005 | A1 |
20050159863 | Howard | Jul 2005 | A1 |
20050201826 | Zhang et al. | Sep 2005 | A1 |
20060222179 | Jensen et al. | Oct 2006 | A1 |
Number | Date | Country |
---|---|---|
1 208 761 | Jul 1986 | CA |
2036205 | Dec 1991 | CA |
3806411 | Sep 1989 | DE |
0372601 | Jun 1990 | EP |
2559002 | Aug 1985 | FR |
9111062 | Jul 1991 | WO |
9307689 | Apr 1993 | WO |
9512278 | May 1995 | WO |
9627264 | Sep 1996 | WO |
9810539 | Mar 1998 | WO |
9826529 | Jun 1998 | WO |
9832251 | Jul 1998 | WO |
9959275 | Nov 1999 | WO |
0004662 | Jan 2000 | WO |
0072309 | Nov 2000 | WO |
200614362 | Feb 2006 | WO |
Relation | Number | Date | Country
---|---|---|---
Parent | 09318045 | May 1999 | US
Child | 11726762 | | US