The present invention relates, in general, to communication systems and, more particularly, to coding speech and audio signals in such communication systems.
Compression of digital speech and audio signals is well known. Compression is generally required to efficiently transmit signals over a communications channel, or to store compressed signals on a digital media device, such as a solid-state memory device or computer hard disk. Although there are many compression (or “coding”) techniques, one method that has remained very popular for digital speech coding is known as Code Excited Linear Prediction (CELP), which is one of a family of “analysis-by-synthesis” coding algorithms. Analysis-by-synthesis generally refers to a coding process by which multiple parameters of a digital model are used to synthesize a set of candidate signals that are compared to an input signal and analyzed for distortion. A set of parameters that yield the lowest distortion is then either transmitted or stored, and eventually used to reconstruct an estimate of the original input signal. CELP is a particular analysis-by-synthesis method that uses one or more codebooks that each essentially comprises sets of code-vectors that are retrieved from the codebook in response to a codebook index.
In modern CELP coders, there is a problem with maintaining high quality speech and audio reproduction at reasonably low data rates. This is especially true for music or other generic audio signals that do not fit the CELP speech model very well. In this case, the model mismatch can cause severely degraded audio quality that can be unacceptable to an end user of the equipment that employs such methods. Therefore, there remains a need for improving performance of CELP type speech coders at low bit rates, especially for music and other non-speech type inputs.
In order to address the above-mentioned need, a method and apparatus for generating an enhancement layer within an audio coding system is described herein. During operation, an input signal to be coded is received and coded to produce a coded audio signal. The coded audio signal is then scaled with a plurality of gain values to produce a plurality of scaled coded audio signals, each having an associated gain value, and a plurality of error values is determined, one between the input signal and each of the plurality of scaled coded audio signals. A gain value is then chosen that is associated with the scaled coded audio signal yielding a low error value between the input signal and that scaled coded audio signal. Finally, the low error value is transmitted along with the gain value as part of an enhancement layer to the coded audio signal.
A prior art embedded speech/audio compression system is shown in FIG. 1.
The primary advantage of such an embedded coding system is that a particular channel 110 may not be capable of consistently supporting the bandwidth requirement associated with high quality audio coding algorithms. An embedded coder, however, allows a partial bit-stream to be received (e.g., only the core layer bit-stream) from the channel 110 to produce, for example, only the core output audio when the enhancement layer bit-stream is lost or corrupted. However, there are tradeoffs in quality between embedded vs. non-embedded coders, and also between different embedded coding optimization objectives. That is, higher quality enhancement layer coding can help achieve a better balance between core and enhancement layers, and also reduce overall data rate for better transmission characteristics (e.g., reduced congestion), which may result in lower packet error rates for the enhancement layers.
A more detailed example of a prior art enhancement layer encoder 106 is given in FIG. 2.
E=MDCT{W(s−sc)}, (1)
where W is a perceptual weighting matrix based on the LP (Linear Prediction) filter coefficients A(z) from the core layer decoder 104, s is a vector (i.e., a frame) of samples from the input audio signal s(n), and sc is the corresponding vector of samples from the core layer decoder 104. An example MDCT process is described in ITU-T Recommendation G.729.1. The error signal E is then processed by the error signal encoder 204 to produce codeword iE, which is subsequently transmitted to channel 110. For this example, it is important to note that error signal encoder 204 is presented with only one error signal E and outputs one associated codeword iE. The reason for this will become apparent later.
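As an illustration of equation (1), the following is a minimal numpy sketch that uses a direct (unwindowed, no overlap-add) MDCT and approximates the weighting matrix W as a diagonal of per-sample weights; both simplifications, and all names, are assumptions of this sketch rather than the described system:

```python
import numpy as np

def mdct(x):
    # Direct O(N^2) MDCT of a 2N-sample frame; the windowing and
    # overlap-add of a practical codec (e.g., G.729.1) are omitted here.
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N)[:, None]
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2.0) * (k + 0.5))
    return basis @ x

def enhancement_error(s, sc, w):
    # Eq. (1): E = MDCT{W(s - sc)}, with the perceptual weighting W
    # approximated as a diagonal matrix (vector w of per-sample weights).
    return mdct(w * (s - sc))
```

With w = np.ones(len(s)), this reduces to the plain MDCT of the signal difference.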
The enhancement layer decoder 116 then receives the encoded bit-stream from channel 110 and appropriately de-multiplexes the bit-stream to produce codeword iE. The error signal decoder 212 uses codeword iE to reconstruct the enhancement layer error signal Ê, which is then combined with the core layer output audio signal ŝc(n) as follows, to produce the enhanced audio output signal ŝ(n):
ŝ = ŝc + W⁻¹MDCT⁻¹{Ê}, (2)

where MDCT⁻¹ is the inverse MDCT (including overlap-add), and W⁻¹ is the inverse perceptual weighting matrix.
Another example of an enhancement layer encoder is shown in FIG. 3.
Additionally, enhancement layer encoder 106 shows the input audio signal s(n) and transformed core layer output audio Sc being input to error signal encoder 304. These signals are used to construct a psychoacoustic model for improved coding of the enhancement layer error signal E. Codewords is and iE are then multiplexed by MUX 308 and sent to channel 110 for subsequent decoding by enhancement layer decoder 116. The coded bit-stream is received by demux 310, which separates the bit-stream into components is and iE. Codeword iE is then used by error signal decoder 312 to reconstruct the enhancement layer error signal Ê. Signal combiner 314 scales signal ŝc(n) in some manner using scaling bits is, and then combines the result with the enhancement layer error signal Ê to produce the enhanced audio output signal ŝ(n).
A first embodiment of the present invention is given in FIG. 4.
Sj = Gj×MDCT{Wsc}; 0 ≤ j < M, (3)
where W may be some perceptual weighting matrix, sc is a vector of samples from the core layer decoder 104, the MDCT is an operation well known in the art, Gj may be a gain matrix formed by utilizing a gain vector candidate gj, and M is the number of gain vector candidates. In the first embodiment, Gj uses vector gj as the diagonal and zeros everywhere else (i.e., a diagonal matrix), although many possibilities exist. For example, Gj may be a band matrix, or may even be a simple scalar quantity multiplied by the identity matrix I. Alternatively, there may be some advantage to leaving the signal Sj in the time domain, or there may be cases where it is advantageous to transform the audio to a different domain, such as the Discrete Fourier Transform (DFT) domain. Many such transforms are well known in the art. In these cases, the scaling unit may output the appropriate Sj based on the respective vector domain.
But in any case, the primary reason to scale the core layer output audio is to compensate for model mismatch (or some other coding deficiency) that may cause significant differences between the input signal and the core layer codec. For example, if the input audio signal is primarily a music signal and the core layer codec is based on a speech model, then the core layer output may contain severely distorted signal characteristics, in which case, it is beneficial from a sound quality perspective to selectively reduce the energy of this signal component prior to applying supplemental coding of the signal by way of one or more enhancement layers.
The gain scaled core layer audio candidate vector Sj and input audio s(n) may then be used as input to error signal generator 402. In the preferred embodiment of the present invention, the input audio signal s(n) is converted to vector S such that S and Sj are correspondingly aligned. That is, the vector s representing s(n) is time (phase) aligned with sc, and the corresponding operations may be applied so that in the preferred embodiment:
Ej = MDCT{Ws} − Sj; 0 ≤ j < M. (4)
This expression yields a plurality of error signal vectors Ej that represent the weighted difference between the input audio and the gain scaled core layer output audio in the MDCT spectral domain. In other embodiments where different domains are considered, the above expression may be modified based on the respective processing domain.
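Equations (3) and (4) can be sketched as follows, assuming the weighted MDCT vectors MDCT{Ws} and MDCT{Wsc} have already been computed and that each Gj is diagonal, so the matrix product reduces to elementwise scaling; the names are illustrative:

```python
import numpy as np

def candidate_errors(S, Sc, gains):
    """Eqs. (3)-(4): scale the transformed core-layer output by each
    diagonal gain candidate and form the corresponding error vectors.

    S     -- weighted MDCT of the input audio, MDCT{Ws}, shape (N,)
    Sc    -- weighted MDCT of the core-layer output, MDCT{Wsc}, shape (N,)
    gains -- candidate gain vectors gj stacked as rows, shape (M, N)
    """
    Sj = gains * Sc   # diagonal Gj acts as elementwise scaling (eq. 3)
    Ej = S - Sj       # one error vector per candidate (eq. 4)
    return Sj, Ej
```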
Gain selector 404 is then used to evaluate the plurality of error signal vectors Ej, in accordance with the first embodiment of the present invention, to produce an optimal error vector E*, an optimal gain parameter g*, and subsequently, a corresponding gain index ig. The gain selector 404 may use a variety of methods to determine the optimal parameters, E* and g*, which may involve closed loop methods (e.g., minimization of a distortion metric), open loop methods (e.g., heuristic classification, model performance estimation, etc.), or a combination of both methods. In the preferred embodiment, a biased distortion metric may be used, which is given as the biased energy difference between the original audio signal vector S and the composite reconstructed signal vector:

j* = argmin_j { βj·∥S − (Sj + Êj)∥² }; 0 ≤ j < M, (5)
where Êj may be the quantized estimate of the error signal vector Ej, and βj may be a bias term which is used to supplement the decision of choosing the perceptually optimal gain error index j*. An exemplary method for vector quantization of a signal vector is given in U.S. patent application Ser. No. 11/531,122, entitled APPARATUS AND METHOD FOR LOW COMPLEXITY COMBINATORIAL CODING OF SIGNALS, although many other methods are possible. Recognizing that Ej = S − Sj, equation (5) may be rewritten as:

j* = argmin_j { βj·εj }; 0 ≤ j < M. (6)
In this expression, the term εj = ∥Ej − Êj∥² represents the energy of the difference between the unquantized and quantized error signals. For clarity, this quantity may be referred to as the “residual energy”, and may further be used to evaluate a “gain selection criterion”, by which the optimum gain parameter g* is selected. One such gain selection criterion is given in equation (6), although many others are possible.
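A minimal numpy sketch of this selection loop follows, with the enhancement-layer vector quantizer left as a stand-in argument (the text points to FPC-style vector quantization for Êj); the function and variable names are illustrative, not from the source:

```python
import numpy as np

def select_gain(Ej, quantize, beta):
    """Eqs. (5)-(6): choose j* minimizing the biased residual energy
    beta_j * ||Ej - Ej_hat||^2.  `quantize` is a stand-in for the
    enhancement-layer vector quantizer (e.g., FPC-based)."""
    Ej_hat = np.array([quantize(e) for e in Ej])  # quantized candidates
    eps = np.sum((Ej - Ej_hat) ** 2, axis=1)      # residual energies
    j_star = int(np.argmin(beta * eps))           # biased criterion (eq. 6)
    return j_star, Ej[j_star]                     # index and optimal E*
```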
The need for a bias term βj may arise from the case where the error weighting function W in equations (3) and (4) may not adequately produce equally perceptible distortions across vector Êj. For example, although the error weighting function W may be used to attempt to “whiten” the error spectrum to some degree, there may be certain advantages to placing more weight on the low frequencies, due to the perception of distortion by the human ear. As a result of increased error weighting in the low frequencies, the high frequency signals may be under-modeled by the enhancement layer. In these cases, there may be a direct benefit to biasing the distortion metric towards values of gj that do not attenuate the high frequency components of Sj, such that the under-modeling of high frequencies does not result in objectionable or unnatural sounding artifacts in the final reconstructed audio signal. One such example would be the case of an unvoiced speech signal. In this case, the input audio is generally made up of mid to high frequency noise-like signals produced from turbulent flow of air from the human mouth. The core layer encoder may not code this type of waveform directly, but may use a noise model to generate a similar sounding audio signal. This may result in a generally low correlation between the input audio and the core layer output audio signals. Since these signals may not be correlated very well, the energy of the error signal Ej may not necessarily be lower than the energy of either the input audio or the core layer output audio. In that case, minimization of the error in equation (6) may result in gain scaling that is too aggressive, which may lead to audible artifacts.
In another case, the bias factors βj may be based on other signal characteristics of the input audio and/or core layer output audio signals. For example, the peak-to-average ratio of the spectrum of a signal may give an indication of that signal's harmonic content. Signals such as speech and certain types of music may have a high harmonic content and thus a high peak-to-average ratio. However, a music signal processed through a speech codec may result in poor quality due to coding model mismatch, and as a result, the core layer output signal spectrum may have a reduced peak-to-average ratio when compared to the input signal spectrum. In this case, it may be beneficial to reduce the amount of bias in the minimization process in order to allow the core layer output audio to be gain scaled to a lower energy, thereby allowing the enhancement layer coding to have a more pronounced effect on the composite output audio. Conversely, certain types of speech or music input signals may exhibit lower peak-to-average ratios, in which case the signals may be perceived as noisier and may therefore benefit from less scaling of the core layer output audio by increasing the error bias. An example of a function to generate the bias factors βj is given as:
where λ may be some threshold, and the peak-to-average ratio φy of a vector y may be given as:
and where yk denotes the k-th element of the vector y.
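Because equation (8) is not reproduced above, the sketch below uses the conventional peak-to-average definition (peak energy over mean energy) as a stand-in; the exact normalization in the source may differ:

```python
import numpy as np

def peak_to_average(y):
    # Conventional peak-to-average ratio of a spectral vector y:
    # maximum coefficient energy divided by the mean coefficient energy.
    # A stand-in for eq. (8); the source's exact form may differ.
    p = np.abs(y) ** 2
    return p.max() / p.mean()
```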
Once the optimum gain index j* is determined from equation (6), the associated codeword ig is generated and the optimum error vector E* is sent to error signal encoder 410, where E* is coded into a form that is suitable for multiplexing with other codewords (by MUX 408) and transmitted for use by a corresponding decoder. In the preferred embodiment, error signal encoder 410 uses Factorial Pulse Coding (FPC). This method is advantageous from a processing complexity point of view, since the enumeration process associated with the coding of vector E* is independent of the vector generation process that is used to generate Êj.
Enhancement layer decoder 416 reverses these processes to produce the enhanced audio output ŝ(n). More specifically, ig and iE are received by decoder 416, with iE being sent to error signal decoder 412, where the optimum error vector E* is derived from the codeword. The optimum error vector E* is passed to signal combiner 414, where the received ŝc(n) is modified as in equation (2) to produce ŝ(n).
A second embodiment of the present invention involves a multi-layer embedded coding system as shown in FIG. 5.
E3=S−S2, (9)
where S=MDCT{Ws} is the weighted transformed input signal, and S2=MDCT{Ws2} is the weighted transformed signal generated from the layer 1/2 decoder 506. In this embodiment, layer 3 may be a low rate quantization layer, and as such, there may be relatively few bits for coding the corresponding quantized error signal Ê3=Q{E3}. In order to provide good quality under these constraints, only a fraction of the coefficients within E3 may be quantized. The positions of the coefficients to be coded may be fixed or may be variable, but if they are allowed to vary, additional information may need to be sent to the decoder to identify these positions. If, for example, the range of coded positions starts at ks and ends at ke, where 0 ≤ ks < ke < N, then the quantized error signal vector Ê3 may contain non-zero values only within that range, and zeros for positions outside that range. The position and range information may also be implicit, depending on the coding method used. For example, it is well known in audio coding that a band of frequencies may be deemed perceptually important, and that coding of a signal vector may focus on those frequencies. In these circumstances, the coded range may be variable, and may not span a contiguous set of frequencies. In any case, once this signal is quantized, the composite coded output spectrum may be constructed as:
S3=Ê3+S2, (10)
which is then used as input to layer 4 encoder 512.
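A sketch of this layer 3 partial quantization over a fixed coded range follows, with the coefficient quantizer again a stand-in argument and the range taken as inclusive of both ks and ke; the names are illustrative:

```python
import numpy as np

def layer3_quantize(E3, ks, ke, quantize):
    # Quantize only positions ks..ke (inclusive) of the layer 3 error
    # vector; all other positions remain zero, as described above.
    E3_hat = np.zeros_like(E3)
    E3_hat[ks:ke + 1] = quantize(E3[ks:ke + 1])
    return E3_hat
```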
Layer 4 encoder 512 is similar to the enhancement layer encoder 406 of the previous embodiment. Using the gain vector candidate gj, the corresponding error vector may be described as:
E4(j)=S−GjS3, (11)
where Gj may be a gain matrix with vector gj as the diagonal component. In the current embodiment, however, the gain vector gj may be related to the quantized error signal vector Ê3 in the following manner. Since the quantized error signal vector Ê3 may be limited in frequency range, for example, starting at vector position ks and ending at vector position ke, the layer 3 output signal S3 is presumed to be coded fairly accurately within that range. Therefore, in accordance with the present invention, the gain vector gj is adjusted based on the coded positions of the layer 3 error signal vector, ks and ke. More specifically, in order to preserve the signal integrity at those locations, the corresponding individual gain elements may be set to a constant value α. That is:

gj(k) = α for ks ≤ k ≤ ke, and gj(k) = γj(k) otherwise, (12)
where generally 0 ≤ γj(k) ≤ 1 and gj(k) is the gain of the k-th position of the j-th candidate vector. In the preferred embodiment, the value of the constant is one (α=1), although many values are possible. In addition, the frequency range may span multiple starting and ending positions. That is, equation (12) may be segmented into non-contiguous ranges of varying gains that are based on some function of the error signal Ê3, and may be written more generally as:

gj(k) = α where Ê3(k) ≠ 0, and gj(k) = γj(k) where Ê3(k) = 0. (13)
For this example, a fixed gain α is used to generate gj(k) when the corresponding positions in the previously quantized error signal Ê3 are non-zero, and the gain function γj(k) is used when the corresponding positions in Ê3 are zero. One possible gain function may be defined as:

γj(k) = α·10^(−jΔ/20) for kl ≤ k ≤ kh, and γj(k) = α otherwise; 0 ≤ j < M, (14)
where Δ is a step size (e.g., Δ ≈ 2.2 dB), α is a constant, M is the number of candidates (e.g., M=4, which can be represented using only 2 bits), and kl and kh are the low and high frequency cutoffs, respectively, over which the gain reduction may take place. The introduction of parameters kl and kh is useful in systems where scaling is desired only over a certain frequency range. For example, in a given embodiment, the high frequencies may not be adequately modeled by the core layer, so the energy within the high frequency band may be inherently lower than that in the input audio signal. In that case, there may be little or no benefit from scaling the layer 3 output signal in that region, since the overall error energy may increase as a result.
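Combining equations (12) through (14) as reconstructed above, and assuming the stepped dB attenuation form of γj(k), the candidate generation might be sketched as follows; parameter defaults echo the examples in the text, and all names are illustrative:

```python
import numpy as np

def gain_candidates(E3_hat, M=4, alpha=1.0, delta_db=2.2, kl=0, kh=None):
    """Eqs. (12)-(14), sketched: positions already coded in layer 3
    (E3_hat nonzero) keep the constant gain alpha; elsewhere candidate j
    applies a j*delta_db dB attenuation, restricted to the band [kl, kh]."""
    N = len(E3_hat)
    kh = N - 1 if kh is None else kh
    k = np.arange(N)
    in_band = (k >= kl) & (k <= kh)
    coded = E3_hat != 0
    g = np.empty((M, N))
    for j in range(M):
        gamma_j = np.where(in_band, alpha * 10.0 ** (-j * delta_db / 20.0), alpha)
        g[j] = np.where(coded, alpha, gamma_j)   # eq. (13)
    return g
```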
Summarizing, the plurality of gain vector candidates gj is based on some function of the coded elements of a previously coded signal vector, in this case Ê3. This can be expressed in general terms as:
gj(k)=f(k,Ê3). (15)
The corresponding decoder operations are shown on the right hand side of FIG. 5.
where ê2(n) is the layer 2 time domain enhancement layer signal, and Ŝ2 = MDCT{Ws2} is the weighted MDCT vector corresponding to the layer 2 audio output ŝ2(n). In this expression, the overall output signal ŝ(n) may be determined from the highest level of consecutive bit-stream layers that are received. In this embodiment, it is assumed that lower level layers have a higher probability of being properly received from the channel; therefore, the codeword sets {i1}, {i1 i2}, {i1 i2 i3}, etc., determine the appropriate level of enhancement layer decoding in equation (16).
The scaled audio Sj is output from scaling unit 601 and received by error signal generator 602. As discussed above, error signal generator 602 receives the transformed input audio signal S and determines an error vector Ej for each scaling vector utilized by scaling unit 601. These error vectors are passed to gain selector circuitry 604, along with the gain values used in determining them, and the gain selector produces a particular error vector E* based on the optimal gain value g*. A codeword (ig) representing the optimal gain g* is output from gain selector 604, and the optimal error vector E* is passed to error signal encoder 610, where codeword iE is determined and output. Both ig and iE are output to multiplexer 608 and transmitted via channel 110 to layer 4 decoder 522.
During operation of layer 4 decoder 522, ig and iE are received and demultiplexed. Gain codeword ig and the layer 3 error vector Ê3 are used as input to the frequency selective gain generator 616 to produce gain vector g* according to the corresponding method of encoder 512. Gain vector g* is then applied to the layer 3 reconstructed audio vector Ŝ3 within scaling unit 618, the output of which is then combined with the layer 4 enhancement layer error vector E*, which was obtained from error signal decoder 612 through decoding of codeword iE, to produce the layer 4 reconstructed audio output Ŝ4.
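A sketch of this decoder-side combining follows, assuming the decoder regenerates g* from the received gain index and Ê3 with the same candidate generator used at the encoder (here passed in as a function); the names are illustrative:

```python
import numpy as np

def layer4_decode(S3_hat, E3_hat, j_star, E_star, gain_candidates_fn):
    """Scale the layer 3 reconstruction by the regenerated gain vector
    and add the decoded layer 4 error vector (weighted MDCT domain)."""
    g_star = gain_candidates_fn(E3_hat)[j_star]  # mirrors the encoder (eq. 15)
    return g_star * S3_hat + E_star              # reconstructed S4_hat
```

For example, gain_candidates_fn could be the gain_candidates sketch shown earlier, so that encoder and decoder derive identical gain vectors from Ê3 and the 2-bit index.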
The logic flow begins at step 701, where a core layer encoder receives an input signal to be coded and codes the input signal to produce a coded audio signal. Enhancement layer encoder 406 receives the coded audio signal (sc(n)), and scaling unit 401 scales the coded audio signal with a plurality of gain values to produce a plurality of scaled coded audio signals, each having an associated gain value (step 703). At step 705, error signal generator 402 determines a plurality of error values existing between the input signal and each of the plurality of scaled coded audio signals. Gain selector 404 then chooses a gain value from the plurality of gain values (step 707). As discussed above, the gain value (g*) is associated with a scaled coded audio signal resulting in a low error value (E*) existing between the input signal and the scaled coded audio signal. Finally, at step 709, transmitter 418 transmits the low error value (E*) along with the gain value (g*) as part of an enhancement layer to the coded audio signal. As one of ordinary skill in the art will recognize, both E* and g* are properly encoded prior to transmission.
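Tying steps 703 through 709 together in the transform domain (illustrative names; the bias terms βj of equation (6) are omitted here, i.e., βj = 1, and the vector quantizer is again a stand-in):

```python
import numpy as np

def enhancement_layer_encode(S, Sc, gains, quantize):
    """Steps 703-709, sketched: scale (703), measure errors (705),
    select the gain with lowest residual energy (707), and return the
    (g*, E*) pair for encoding and transmission (709)."""
    Sj = gains * Sc                                # step 703
    Ej = S - Sj                                    # step 705
    Ej_hat = np.array([quantize(e) for e in Ej])
    eps = np.sum((Ej - Ej_hat) ** 2, axis=1)
    j = int(np.argmin(eps))                        # step 707 (unbiased)
    return gains[j], Ej[j]                         # step 709: g*, E* (E* then coded, e.g. FPC)
```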
As discussed above, at the receiver side, the coded audio signal will be received along with the enhancement layer. The enhancement layer is an enhancement to the coded audio signal that comprises the gain value (g*) and the error signal (E*) associated with the gain value.
While the invention has been particularly shown and described with reference to a particular embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. For example, while the above techniques are described in terms of transmitting and receiving over a channel in a telecommunications system, the techniques may apply equally to a system which uses the signal compression system for the purposes of reducing storage requirements on a digital media device, such as a solid-state memory device or computer hard disk. It is intended that such changes come within the scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
4560977 | Murakami et al. | Dec 1985 | A |
4670851 | Murakami et al. | Jun 1987 | A |
4727354 | Lindsay | Feb 1988 | A |
4853778 | Tanaka | Aug 1989 | A |
5006929 | Barbero et al. | Apr 1991 | A |
5067152 | Kisor et al. | Nov 1991 | A |
5268855 | Mason et al. | Dec 1993 | A |
5327521 | Savic et al. | Jul 1994 | A |
5394473 | Davidson | Feb 1995 | A |
5956674 | Smyth et al. | Sep 1999 | A |
5974435 | Abbott | Oct 1999 | A |
6108626 | Cellario et al. | Aug 2000 | A |
6236960 | Peng et al. | May 2001 | B1 |
6253185 | Arean et al. | Jun 2001 | B1 |
6263312 | Kolesnik et al. | Jul 2001 | B1 |
6304196 | Copeland et al. | Oct 2001 | B1 |
6453287 | Unno et al. | Sep 2002 | B1 |
6493664 | Udaya Bhaskar et al. | Dec 2002 | B1 |
6504877 | Lee | Jan 2003 | B1 |
6593872 | Makino et al. | Jul 2003 | B2 |
6658383 | Koishida et al. | Dec 2003 | B2 |
6662154 | Mittal et al. | Dec 2003 | B2 |
6691092 | Udaya Bhaskar et al. | Feb 2004 | B1 |
6704705 | Kabal et al. | Mar 2004 | B1 |
6813602 | Thyssen | Nov 2004 | B2 |
6940431 | Hayami | Sep 2005 | B2 |
6975253 | Dominic | Dec 2005 | B1 |
7031493 | Fletcher et al. | Apr 2006 | B2 |
7130796 | Tasaki | Oct 2006 | B2 |
7161507 | Tomie | Jan 2007 | B2 |
7180796 | Tanzawa et al. | Feb 2007 | B2 |
7212973 | Toyama et al. | May 2007 | B2 |
7230550 | Mittal et al. | Jun 2007 | B1 |
7231091 | Keith | Jun 2007 | B2 |
7414549 | Yang et al. | Aug 2008 | B1 |
7461106 | Mittal et al. | Dec 2008 | B2 |
7761290 | Koishida et al. | Jul 2010 | B2 |
7840411 | Hotho et al. | Nov 2010 | B2 |
7885819 | Koishida et al. | Feb 2011 | B2 |
7889103 | Mittal et al. | Feb 2011 | B2 |
20020052734 | Unno et al. | May 2002 | A1 |
20030004713 | Makino et al. | Jan 2003 | A1 |
20030009325 | Kirchherr et al. | Jan 2003 | A1 |
20030220783 | Streich et al. | Nov 2003 | A1 |
20040252768 | Suzuki et al. | Dec 2004 | A1 |
20050261893 | Toyama et al. | Nov 2005 | A1 |
20060022374 | Chen et al. | Feb 2006 | A1 |
20060173675 | Ojanpera | Aug 2006 | A1 |
20060190246 | Park | Aug 2006 | A1 |
20060241940 | Ramprashad | Oct 2006 | A1 |
20070171944 | Schuijers et al. | Jul 2007 | A1 |
20070239294 | Brueckner et al. | Oct 2007 | A1 |
20070271102 | Morii | Nov 2007 | A1 |
20080065374 | Mittal et al. | Mar 2008 | A1 |
20080120096 | Oh et al. | May 2008 | A1 |
20090024398 | Mittal et al. | Jan 2009 | A1 |
20090030677 | Yoshida | Jan 2009 | A1 |
20090076829 | Ragot et al. | Mar 2009 | A1 |
20090100121 | Mittal et al. | Apr 2009 | A1 |
20090234642 | Mittal et al. | Sep 2009 | A1 |
20090259477 | Ashley et al. | Oct 2009 | A1 |
20090306992 | Ragot et al. | Dec 2009 | A1 |
20090326931 | Ragot et al. | Dec 2009 | A1 |
20100088090 | Ramabadran | Apr 2010 | A1 |
20100169087 | Ashley et al. | Jul 2010 | A1 |
20100169099 | Ashley et al. | Jul 2010 | A1 |
20100169100 | Ashley et al. | Jul 2010 | A1 |
20100169101 | Ashley et al. | Jul 2010 | A1 |
Number | Date | Country |
---|---|---|
1483759 | Aug 2004 | EP |
1533789 | May 2005 | EP |
0932141 | Aug 2005 | EP |
1619664 | Jan 2006 | EP |
1818911 | Aug 2007 | EP |
1845519 | Oct 2007 | EP |
1912206 | Apr 2008 | EP |
1959431 | Jun 2010 | EP |
2137179 | Sep 1999 | RU |
9715983 | May 1997 | WO |
03073741 | Sep 2003 | WO |
2007012794 | Feb 2007 | WO |
2007063910 | Jun 2007 | WO |
2010003663 | Jan 2010 | WO |
Number | Date | Country | |
---|---|---|---|
20090112607 A1 | Apr 2009 | US |
Number | Date | Country | |
---|---|---|---|
60982566 | Oct 2007 | US |