The technology disclosed herein generally relates to audio encoding and decoding techniques, and more particularly to multi-channel audio encoding such as stereo coding.
There is a high market need to transmit and store audio signals at low bit rates while maintaining high audio quality. Particularly in cases where transmission resources or storage are limited, low bit rate operation is an essential cost factor. This is typically the case, for example, in streaming and messaging applications in mobile communication systems such as GSM, UMTS, or CDMA.
A general example of an audio transmission system using multi-channel coding and decoding is schematically illustrated in
The simplest way of stereophonic or multi-channel coding of audio signals is to encode the signals of the different channels separately as individual and independent signals, as illustrated in
Another basic way used in stereo FM radio transmission and which ensures compatibility with legacy mono radio receivers is to transmit a sum and a difference signal of the two involved channels.
State-of-the-art audio codecs such as MPEG-1/2 Layer III and MPEG-2/4 AAC make use of so-called joint stereo coding. According to this technique, the signals of the different channels are processed jointly rather than separately and individually. The two most commonly used joint stereo coding techniques are known as ‘Mid/Side’ (M/S) stereo coding and intensity stereo coding, which are usually applied to sub-bands of the stereo or multi-channel signals to be encoded.
M/S stereo coding is similar to the procedure used in stereo FM radio, in the sense that it encodes and transmits the sum and difference signals of the channel sub-bands and thereby exploits redundancy between the channel sub-bands. The structure and operation of a coder based on M/S stereo coding is described, e.g., in reference [1].
Intensity stereo, on the other hand, is able to make use of stereo irrelevancy. It transmits the joint intensity of the channels (of the different sub-bands) along with some location information indicating how the intensity is distributed among the channels. Intensity stereo only provides spectral magnitude information of the channels; phase information is not conveyed. For this reason, and since temporal inter-channel information (more specifically the inter-channel time difference) is of major psycho-acoustical relevance particularly at lower frequencies, intensity stereo can only be used at high frequencies above e.g. 2 kHz. An intensity stereo coding method is described, e.g., in reference [2].
A recently developed stereo coding method called Binaural Cue Coding (BCC) is described in reference [3]. This method is a parametric multi-channel audio coding method. The basic principle of this kind of parametric coding technique is that, at the encoding side, the input signals from N channels are combined into one mono signal. The mono signal is audio encoded using any conventional monophonic audio codec. In parallel, parameters are derived from the channel signals which describe the multi-channel image. The parameters are encoded and transmitted to the decoder, along with the audio bit stream. The decoder first decodes the mono signal and then regenerates the channel signals based on the parametric description of the multi-channel image.
The principle of the Binaural Cue Coding (BCC) method is that it transmits the encoded mono signal and so-called BCC parameters. The BCC parameters comprise coded inter-channel level differences and inter-channel time differences for sub-bands of the original multi-channel input signal. The decoder regenerates the different channel signals by applying sub-band-wise level and phase and/or delay adjustments of the mono signal based on the BCC parameters. The advantage over e.g. M/S or intensity stereo is that stereo information comprising temporal inter-channel information is transmitted at much lower bit rates. However, BCC is computationally demanding and generally not perceptually optimized.
Another technique, described in reference [4], uses the same principle of encoding of the mono signal and so-called side information. In this case, the side information consists of predictor filters and optionally a residual signal. The predictor filters, estimated by an LMS algorithm, allow prediction of the multi-channel audio signals when applied to the mono signal. With this technique one is able to reach very low bit rate encoding of multi-channel audio sources, however at the expense of a quality drop.
The basic principles of such parametric stereo coding are illustrated in
Finally, for completeness, a technique is to be mentioned that is used in 3D audio. This technique synthesizes the right and left channel signals by filtering sound source signals with so-called head-related filters. However, this technique requires the different sound source signals to be separated and can thus not generally be applied for stereo or multi-channel coding.
The technology disclosed herein overcomes these and other drawbacks of the prior art arrangements.
It is a general object of the technology disclosed herein to provide high multi-channel audio quality at low bit rates.
In particular it is desirable to provide an efficient encoding process that is capable of accurately representing stereophonic or multi-channel information using a relatively low number of encoding bits. For stereo coding, for example, it is important that the dynamics of the stereo image are well represented so that the quality of stereo signal reconstruction is enhanced.
It is also an object of the technology disclosed herein to make efficient use of the available bit budget for a multi-stage side signal encoder.
It is a particular object of the technology disclosed herein to provide a method and apparatus for encoding a multi-channel audio signal.
Another particular object of the technology disclosed herein is to provide a method and apparatus for decoding an encoded multi-channel audio signal.
Yet another object of the technology disclosed herein is to provide an improved audio transmission system based on audio encoding and decoding techniques.
Today, there are no standardized codecs available providing high stereophonic or multi-channel audio quality at bit rates which are economically interesting for use in e.g. mobile communication systems. What is possible with available codecs is monophonic transmission and/or storage of the audio signals. To some extent also stereophonic transmission or storage is available, but bit rate limitations usually require limiting the stereo representation quite drastically.
The technology disclosed herein overcomes these problems by proposing a solution that allows stereophonic or multi-channel information to be separated from the audio signal and accurately represented at a low bit rate.
A basic idea of the technology disclosed herein is to provide a highly efficient technique for encoding a multi-channel audio signal. The technology disclosed herein relies on the basic principle of encoding a first signal representation of one or more of the multiple channels in a first signal encoding process and encoding a second signal representation of one or more of the multiple channels in a second, multi-stage, signal encoding process. This procedure is significantly enhanced by adaptively allocating a number of encoding bits among the different encoding stages of the second, multi-stage, signal encoding process in dependence on multi-channel audio signal characteristics.
For example, if the performance of one of the stages in the multi-stage encoding process is saturating, there is no use in increasing the number of bits allocated for encoding/quantization at this particular encoding stage. Instead it may be better to allocate more bits to another encoding stage in the multi-stage encoding process so as to provide a greater overall improvement in performance. For this reason it has turned out to be particularly beneficial to perform bit allocation based on estimated performance of at least one encoding stage. The allocation of bits to a particular encoding stage may for example be based on estimated performance of that encoding stage. Alternatively, however, the encoding bits are jointly allocated among the different encoding stages based on the overall performance of a combination of encoding stages.
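By way of a non-limiting illustration, the following Python sketch shows one possible way of searching for such a bit split between two stages; the quality estimators `stage1_quality` and `stage2_quality` are hypothetical placeholders and are not prescribed by this description.

```python
import numpy as np

def allocate_bits(total_bits, stage1_quality, stage2_quality):
    """Split a bit budget between two encoding stages of a multi-stage encoder.

    stage1_quality(b) and stage2_quality(b) are assumed to return an estimated
    quality measure (e.g. an SNR-like value) for the respective stage when
    given b bits; their exact form is not prescribed here.
    """
    best_b1, best_quality = 0, -np.inf
    for b1 in range(total_bits + 1):
        overall = stage1_quality(b1) + stage2_quality(total_bits - b1)
        if overall > best_quality:
            best_b1, best_quality = b1, overall
    return best_b1, total_bits - best_b1

# A saturating first (parametric) stage loses bits to a steadily improving second stage.
saturating = lambda b: 10.0 * (1.0 - np.exp(-b / 8.0))
steady = lambda b: 0.4 * b
print(allocate_bits(32, saturating, steady))
```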
For example, the first encoding process may be a main encoding process and the first signal representation may be a main signal representation. The second encoding process, which is a multi-stage process, may for example be a side signal process, and the second signal representation may then be a side signal representation such as a stereo side signal.
Preferably, the bit budget available for the second, multi-stage, signal encoding process is adaptively allocated among the different encoding stages based on inter-channel correlation characteristics of the multi-channel audio signal. This is particularly useful when the second multi-stage signal encoding process includes a parametric encoding stage such as an inter-channel prediction (ICP) stage. In the event of low inter-channel correlation, the parametric (ICP) filter, as a means for multi-channel or stereo coding, will normally produce a relatively poor estimate of the target signal. Therefore, increasing the number of allocated bits for filter quantization does not lead to significantly better performance. The effect of saturation of performance of the ICP filter and in general of parametric coding makes these techniques quite inefficient in terms of bit usage. In fact, the bits could be used for different encoding in another encoding stage, such as e.g. non-parametric coding, which in turn could result in greater overall improvement in performance.
In a particular embodiment, the technology disclosed herein involves a hybrid parametric and non-parametric encoding process and overcomes the problem of parametric quality saturation by exploiting the strengths of (inter-channel prediction) parametric representations and non-parametric representations based on efficient allocation of available encoding bits among the parametric and non-parametric encoding stages.
Preferably, the procedure of allocating bits to a particular encoding stage is based on assessment of estimated performance of the encoding stage as a function of the number of bits to be allocated to the encoding stage.
In general, the bit-allocation can also be made dependent on performance of an additional stage or the overall performance of two or more stages. For example, the bit allocation can be based on the overall performance of the combination of both parametric and non-parametric representations.
For example, consider the case of a first adaptive inter-channel prediction (ICP) stage for second-signal prediction. The estimated performance of the ICP encoding stage is normally based on determining a relevant quality measure. Such a quality measure could for example be estimated based on the so-called second-signal prediction error, preferably together with an estimation of a quantization error as a function of the number of bits allocated for quantization of second signal reconstruction data generated by the inter-channel prediction. The second signal reconstruction data is typically the inter-channel prediction (ICP) filter coefficients.
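As a purely illustrative sketch of such a quality estimate, the following function combines the minimum prediction error of the unquantized ICP filter with a simple uniform-quantizer model of the filter quantization error as a function of the allocated bits; the quantizer model and its parameters are assumptions made for the example only.

```python
import numpy as np

def icp_stage_quality(R, r, side_power, bits, coeff_range=1.0):
    """Sketch of an estimated quality measure for an ICP encoding stage.

    R is the covariance matrix and r the correlation vector used to derive
    the ICP filter; side_power is the energy of the target (side) signal.
    The uniform-quantizer error model and coeff_range are illustrative
    assumptions, not values taken from this description.
    """
    n = len(r)
    h_opt = np.linalg.solve(R, r)
    pred_mse = side_power - float(r @ h_opt)             # unquantized minimum MSE
    step = coeff_range * 2.0 ** (-(bits / max(n, 1)))    # quantizer step per coefficient
    quant_mse = float(np.trace(R)) * step ** 2 / 12.0    # expected weighted quantization error
    total_mse = max(pred_mse + quant_mse, 1e-12)
    return 10.0 * np.log10(side_power / total_mse)       # SNR-like quality in dB
```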
In a particularly advantageous embodiment, the second, multi-stage, signal encoding process further comprises an encoding process in a second encoding stage for encoding a representation of the signal prediction error from the first stage.
The second signal encoding process normally generates output data representative of the bit allocation, as this will be needed on the decoding side to correctly interpret the encoded/quantized information in the form of second signal reconstruction data. On the decoding side, a decoder receives bit allocation information representative of how the bit budget has been allocated among the different signal encoding stages during the second signal encoding process. This bit allocation information is used for interpreting the second signal reconstruction data in a corresponding second, multi-stage, signal decoding process for the purpose of correctly decoding the second signal representation.
For further improvement of the multi-channel audio encoding mechanism, it is also possible to use an efficient variable dimension/variable-rate bit allocation based on the performance of the second encoding process or at least one of the encoding stages thereof. In practice, this normally means that a combination of the number of bits to be allocated to the first encoding stage and the filter dimension/length is selected so as to optimize a measure representative of the performance of the first stage or a combination of stages. The use of longer filters leads to better performance, but the quantization of a longer filter yields a larger quantization error if the bit-rate is fixed. With increased filter length comes the possibility of increased performance, but more bits are needed to reach it. There is thus a trade-off between the selected filter dimension/length and the imposed quantization error, and the idea is to use a performance measure and find an optimum value by varying the filter length and the required amount of bits accordingly.
Although bit allocation and encoding/decoding is often performed on a frame-by-frame basis, it is possible to perform bit allocation and encoding/decoding on variable sized frames, allowing signal adaptive optimized frame processing.
In particular, variable filter dimension and bit-rate can be used on fixed frames but also on variable frame lengths.
For variable frame lengths, an encoding frame can generally be divided into a number of sub-frames according to various frame division configurations. The sub-frames may have different sizes, but the sum of the lengths of the sub-frames of any given frame division configuration is equal to the length of the overall encoding frame. In a preferred exemplary embodiment of the technology disclosed herein, the idea is to select a combination of frame division configuration, as well as bit allocation and filter length/dimension for each sub-frame, so as to optimize a measure representative of the performance of the considered second encoding process (i.e. at least one of the signal encoding stages thereof) over an entire encoding frame. The second signal representation is then encoded separately for each of the sub-frames of the selected frame division configuration in accordance with the selected combination of bit allocation and filter dimension. In addition to the general high-quality, low bit-rate performance offered by the signal adaptive bit allocation of the technology disclosed herein, a significant advantage of the variable frame length processing scheme is that the dynamics of the stereo or multi-channel image are very well represented.
The second signal encoding process here preferably generates output data, for transfer to the decoding side, representative of the selected frame division configuration, and for each sub-frame of the selected frame division configuration, bit allocation and filter length. However, to reduce the bit-rate requirements on signaling from the encoding side to the decoding side in an audio transmission system, the filter length, for each sub frame, is preferably selected in dependence on the length of the sub-frame. This means that an indication of frame division configuration of an encoding frame into a set of sub-frames at the same time provides an indication of selected filter dimension for each sub-frame, thereby reducing the required signaling.
The technology disclosed herein offers the following advantages:
Other advantages offered by the technology disclosed herein will be appreciated when reading the below description of embodiments of the technology disclosed herein.
The technology disclosed herein, together with further objects and advantages thereof, will be best understood by reference to the following description taken together with the accompanying drawings, in which:
Throughout the drawings, the same reference characters will be used for corresponding or similar elements.
The technology disclosed herein relates to multi-channel encoding/decoding techniques in audio applications, and particularly to stereo encoding/decoding in audio transmission systems and/or for audio storage. Examples of possible audio applications include phone conference systems, stereophonic audio transmission in mobile communication systems, various systems for supplying audio services, and multi-channel home cinema systems.
For a better understanding of the technology disclosed herein, it may be useful to begin with a brief overview and analysis of problems with existing technology. Today, there are no standardized codecs available providing high stereophonic or multi-channel audio quality at bit rates which are economically interesting for use in e.g. mobile communication systems, as mentioned previously. What is possible with available codecs is monophonic transmission and/or storage of the audio signals. To some extent also stereophonic transmission or storage is available, but bit rate limitations usually require limiting the stereo representation quite drastically.
The problem with the state-of-the-art multi-channel coding techniques is that they require high bit rates in order to provide good quality. Intensity stereo, if applied at bit rates as low as e.g. only a few kbps, suffers from the fact that it does not provide any temporal inter-channel information. As this information is perceptually important at low frequencies below e.g. 2 kHz, intensity stereo is unable to provide a stereo impression at such low frequencies.
BCC on the other hand is able to reproduce the stereo or multi-channel image even at low frequencies at low bit rates of e.g. 3 kbps since it also transmits temporal inter-channel information. However, this technique requires computationally demanding time-frequency transforms on each of the channels both at the encoder and the decoder. Moreover, BCC does not attempt to find a mapping from the transmitted mono signal to the channel signals in a sense that their perceptual differences to the original channel signals are minimized.
The LMS technique, also referred to as inter-channel prediction (ICP), for multi-channel encoding, see [4], allows lower bit rates by omitting the transmission of the residual signal. To derive the channel reconstruction filter, an unconstrained error minimization procedure calculates the filter such that its output signal matches best the target signal. In order to compute the filter, several error measures may be used. The mean square error or the weighted mean square error are well known and are computationally cheap to implement.
One could say that, in general, most of the state-of-the-art methods have been developed for coding of high-fidelity audio signals or pure speech. In speech coding, where the signal energy is concentrated in the lower frequency regions, sub-band coding is rarely used. Although methods such as BCC allow for low bit-rate stereo speech, the sub-band transform coding processing increases both complexity and delay.
There has been a long debate on whether linear inter-channel prediction (ICP) applied to audio coding would increase the compression rate for multi-channel signals.
Research concludes that even though ICP coding techniques do not provide good results for high-quality stereo signals, for stereo signals with energy concentrated in the lower frequencies, redundancy reduction is possible [7]. The whitening effects of the ICP filtering increase the energy in the upper frequency regions, resulting in a net coding loss for perceptual transform coders. These results have been confirmed in [9] and [10] where quality enhancements have been reported only for speech signals.
The accuracy of the ICP reconstructed signal is governed by the present inter-channel correlations. Bauer et al. [11] did not find any linear relationship between left and right channels in audio signals. However, as can be seen from the cross spectrum of the mono and side signals in
In the event of low inter-channel correlations, the ICP filter, as a means for stereo coding, will produce a poor estimate of the target signal. The produced estimate is poor even before quantization of the filters. Therefore, increasing the number of allocated bits for filter quantization does not lead to better performance, or the improvement in performance is quite small.
This effect of saturation of performance of ICP and in general of parametric methods makes these techniques quite inefficient in terms of bit usage. Some bits could be used for e.g. non-parametric coding techniques instead, which in turn could result in greater overall improvement in performance. Moreover, these parametric techniques are not asymptotically optimal since even at a high bit rate, characteristic artifacts inherent in the coding method will not disappear.
The multi-channel or polyphonic signal may be provided to the optional pre-processing unit 110, where different signal conditioning procedures may be performed. The signals of the input channels can be provided from an audio signal storage (not shown) or “live”, e.g. from a set of microphones (not shown). The audio signals are normally digitized, if not already in digital form, before entering the multi-channel encoder.
The (optionally pre-processed) signals may be provided to an optional signal combination unit 120, which includes a number of combination modules for performing different signal combination procedures, such as linear combinations of the input signals to produce at least a first signal and a second signal. For example, the first encoding process may be a main encoding process and the first signal representation may be a main signal representation. The second encoding process, which is a multi-stage process, may for example be an auxiliary (side) signal process, and the second signal representation may then be an auxiliary (side) signal representation such as a stereo side signal. In traditional stereo coding, for example, the L and R channels are summed, and the sum signal is divided by a factor of two in order to provide a traditional mono signal as the first (main) signal. The L and R channels may also be subtracted, and the difference signal is divided by a factor of two to provide a traditional side signal as the second signal. According to the technology disclosed herein, any type of linear combination, or any other type of signal combination for that matter, may be performed in the signal combination unit with weighted contributions from at least part of the various channels. The signal combination used by the technology disclosed herein is not limited to two channels but may of course involve multiple channels. It is also possible to generate more than one additional (side) signal, as indicated in
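A minimal sketch of the traditional two-channel combination described above (assuming equal weights; the technology itself permits arbitrary weighted combinations over any number of channels) could look as follows:

```python
import numpy as np

def to_main_and_side(left, right):
    """Traditional mono/side combination of the left and right channels."""
    left, right = np.asarray(left, dtype=float), np.asarray(right, dtype=float)
    main = 0.5 * (left + right)   # sum signal divided by two -> traditional mono (main) signal
    side = 0.5 * (left - right)   # difference signal divided by two -> traditional side signal
    return main, side

def to_left_and_right(main, side):
    """Inverse combination, used when reconstructing the channels."""
    return main + side, main - side
```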
A first signal representation is provided to the first encoder 130, which encodes the first (main) signal according to any suitable encoding principles. Such principles are available in the prior art and will therefore not be further discussed here.
A second signal representation is provided to a second, multi-stage, coder 140 for encoding the second (auxiliary/side) signal.
The overall encoder also comprises a controller 150, which includes at least a bit allocation module for adaptively allocating the available bit budget for the second, multi-stage, signal encoding among the encoding stages of the multi-stage signal encoder 140. The multi-stage encoder may also be referred to as a multi-unit encoder having two or more encoding units.
For example, if the performance of one of the stages in the multi-stage encoder 140 is saturating, there is little point in increasing the number of bits allocated to this particular encoding stage. Instead it may be better to allocate more bits to another encoding stage in the multi-stage encoder to provide a greater overall improvement in performance. For this reason it turns out to be particularly beneficial to perform bit allocation based on estimated performance of at least one encoding stage. The allocation of bits to a particular encoding stage may for example be based on estimated performance of that encoding stage. Alternatively, however, the encoding bits are jointly allocated among the different encoding stages based on the overall performance of a combination of encoding stages.
Of course, there is an overall bit budget for the entire multi-channel encoder apparatus, which overall bit budget is divided between the first encoder 130 and the multi-stage encoder 140 and possible other encoder modules according to known principles. In the following, we will mainly focus on how to allocate the bit budget available for the multi-stage encoder among the different encoding stages thereof.
Preferably, the bit budget available for the second signal encoding process is adaptively allocated among the different encoding stages of the multi-stage encoder based on predetermined characteristics of the multi-channel audio signal such as inter-channel correlation characteristics. This is particularly useful when the second multi-stage encoder includes a parametric encoding stage such as an inter-channel prediction (ICP) stage. In the event of low inter-channel correlation (e.g. between the first and second signal representations of the input channels), the parametric filter, as a means for multi-channel or stereo coding, will normally produce a relatively poor estimate of the target signal. Therefore, increasing the number of allocated bits for filter quantization does not lead to significantly better performance. The effect of saturation of the performance of the (ICP) filter, and in general of parametric coding, makes these techniques quite inefficient in terms of bit usage. In fact, the bits could be used for different encoding in another encoding stage, such as e.g. non-parametric coding, which in turn could result in greater overall improvement in performance.
In a particular embodiment, the technology disclosed herein involves a hybrid parametric and non-parametric multi-stage signal encoding process and overcomes the problem of parametric quality saturation by exploiting the strengths of parametric representations and non-parametric coding based on efficient allocation of available encoding bits among the parametric and non-parametric encoding stages.
For a particular encoding stage, bits may, as an example, be allocated based on the following procedure:
If only two stages are used, and a first amount of bits have been allocated to a first stage based on estimated performance, bits may be allocated to a second stage by simply assigning the remaining amount of encoding bits to the second encoding stage.
In general, the bit-allocation can also be made dependent on performance of an additional stage or the overall performance of two or more stages. In the former case, bits can be allocated to an additional encoding stage based on estimated performance of the additional stage. In the latter case, the bit allocation can be based for example on the overall performance of the combination of both parametric and non-parametric representations.
For example, the bit allocation may be determined as the allocation of bits among the different stages of the multi-stage encoder when a change in bit allocation does not lead to significantly better performance according to a suitable criterion. In particular, with respect to performance saturation the number of bits to be allocated to a certain stage may be determined as the number of bits when an increase of the number of allocated bits does not lead to significantly better performance of that stage according to a suitable criterion.
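As a simple illustrative sketch, such a saturation-based rule could be expressed as follows, where `stage_quality` is a hypothetical per-stage quality estimator and `min_gain` an illustrative threshold:

```python
def bits_until_saturation(stage_quality, max_bits, min_gain=0.1):
    """Allocate bits to a stage until the marginal quality gain saturates.

    stage_quality(b) is a hypothetical estimator of the stage's quality when
    given b bits; min_gain is an illustrative saturation threshold.
    """
    bits = 0
    while bits < max_bits and stage_quality(bits + 1) - stage_quality(bits) >= min_gain:
        bits += 1
    return bits
```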
As discussed above, the second multi-stage encoder may include an adaptive inter-channel prediction (ICP) stage for second-signal prediction based on the first signal representation and the second signal representation, as indicated in
Preferably, the controller 150 is configured to perform bit allocation in response to the first signal representation and the second signal representation and the performance of one or more stages in the multi-stage (side) encoder 140.
As illustrated in
The output signals of the various encoders 130, 140, including bit allocation information from the controller 150, are preferably multiplexed into a single transmission (or storage) signal in the multiplexer unit 160. However, alternatively, the output signals may be transmitted (or stored) separately.
In an extension of the technology disclosed herein it may also be possible to select a combination of bit allocation and filter dimension/length to be used (e.g. for inter-channel prediction) so as to optimize a measure representative of the performance of the second signal encoding process. There will be a trade-off between selected filter dimension/length and the imposed quantization error, and the idea is to use a performance measure and find an optimum value by varying the filter length and the required amount of bits accordingly.
Although encoding/decoding and the associated bit allocation is often performed on a frame-by-frame basis, it is envisaged that encoding/decoding and bit allocation can be performed on variable sized frames, allowing signal adaptive optimized frame processing. This also enables the possibility to provide an even higher degree of freedom to optimize the performance measure, as will be explained later on.
The overall decoding process is generally quite straight forward and basically involves reading the incoming data stream, interpreting data, inverse quantization and final reconstruction of the multi-channel audio signal. More details on the decoding procedure will be given later on with reference to an exemplary embodiment of the technology disclosed herein.
Although the following description of exemplary embodiments mainly relates to stereophonic (two-channel) encoding and decoding, it should be kept in mind that the technology disclosed herein is generally applicable to multiple channels. Examples include but are not limited to encoding/decoding 5.1 (front left, front centre, front right, rear left and rear right and subwoofer) or 2.1 (left, right and center subwoofer) multi-channel sound.
For a more thorough understanding of the technology disclosed herein, the technology disclosed herein will now be described in more detail with reference to various exemplary embodiments based on parametric coding principles such as inter-channel prediction.
Parametric Stereo Coding Using Inter-channel Prediction
In general, inter-channel prediction (ICP) techniques utilize the inherent inter-channel correlation between the channels. In stereo coding, the channels are usually represented by the left and right signals l(n) and r(n); an equivalent representation is the mono signal m(n) (a special case of the main signal) and the side signal s(n). Both representations are equivalent and are normally related by the traditional matrix operation:

$$ \begin{bmatrix} m(n) \\ s(n) \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} l(n) \\ r(n) \end{bmatrix} $$
As illustrated in
It should be noted that the same approach could be applied directly on the left and right channels.
The ICP filter derived at the encoder may for example be estimated by minimizing the mean squared error (MSE), or a related performance measure, for instance the psycho-acoustically weighted mean squared error, of the side signal prediction error e(n). The MSE is typically given by:

$$ \mathrm{MSE}(\mathbf{h}) = \sum_{n=0}^{L-1} e^{2}(n) = \sum_{n=0}^{L-1} \left( s(n) - \sum_{i=0}^{N-1} h_{i}\, m(n-i) \right)^{2} \qquad (3) $$
where L is the frame size and N is the length/order/dimension of the ICP filter. Simply speaking, the performance of the ICP filter, thus the magnitude of the MSE, is the main factor determining the final stereo separation. Since the side signal describes the differences between the left and right channels, accurate side signal reconstruction is essential to ensure a wide enough stereo image.
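For illustration only, the prediction and the frame-wise error of (3) can be sketched as follows (the inputs are assumed to be one frame of mono and side samples):

```python
import numpy as np

def icp_predict_side(mono, h):
    """FIR prediction of the side signal: s_hat(n) = sum_i h[i] * m(n - i)."""
    mono = np.asarray(mono, dtype=float)
    return np.convolve(mono, h)[:len(mono)]

def icp_frame_mse(mono, side, h):
    """Sum-of-squares prediction error of (3) over one frame."""
    e = np.asarray(side, dtype=float) - icp_predict_side(mono, h)
    return float(e @ e)
```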
The optimal filter coefficients are found by minimizing the MSE of the prediction error over all samples and are given by:
$$ \mathbf{R}\,\mathbf{h}_{opt} = \mathbf{r} \quad\Longleftrightarrow\quad \mathbf{h}_{opt} = \mathbf{R}^{-1}\mathbf{r} \qquad (4) $$
In (4) the correlation vector $\mathbf{r}$ and the covariance matrix $\mathbf{R}$ are defined as:
$$ \mathbf{r} = \mathbf{M}\mathbf{s}, \qquad \mathbf{R} = \mathbf{M}\mathbf{M}^{T} \qquad (5) $$
where $\mathbf{M}$ is the matrix whose rows contain the mono signal delayed by i = 0, …, N−1 samples, i.e. the samples m(n−i) for n = 0, …, L−1, and $\mathbf{s}$ is the vector of side signal samples s(n) over the frame.
Inserting (5) into (3) one gets a simplified algebraic expression for the Minimum MSE (MMSE) of the (unquantized) ICP filter:
$$ \mathrm{MMSE} = \mathrm{MSE}(\mathbf{h}_{opt}) = P_{ss} - \mathbf{r}^{T}\mathbf{R}^{-1}\mathbf{r} \qquad (7) $$
where $P_{ss}$ is the power of the side signal, also expressed as $\mathbf{s}^{T}\mathbf{s}$.
Inserting $\mathbf{r} = \mathbf{R}\mathbf{h}_{opt}$ into (7) yields:

$$ \mathrm{MMSE} = P_{ss} - \mathbf{r}^{T}\mathbf{R}^{-1}\mathbf{R}\,\mathbf{h}_{opt} = P_{ss} - \mathbf{r}^{T}\mathbf{h}_{opt} \qquad (8) $$
$\mathbf{L}\mathbf{D}\mathbf{L}^{T}$ factorization [12] of $\mathbf{R}$ gives us the equation system

$$ \mathbf{L}\mathbf{D}\mathbf{L}^{T}\mathbf{h} = \mathbf{r}, $$

where we first solve for the intermediate vector $\mathbf{z}$, defined by $\mathbf{L}\mathbf{z} = \mathbf{r}$, in an iterative fashion:
Now we introduce a new vector $\mathbf{q} = \mathbf{L}^{T}\mathbf{h}$. Since the matrix $\mathbf{D}$ only has non-zero values on the diagonal, finding $\mathbf{q}$ from $\mathbf{D}\mathbf{q} = \mathbf{z}$ is straightforward:
The sought filter vector $\mathbf{h}$ can now be calculated iteratively in the same way as (10):
Besides the computational savings compared to regular matrix inversion, this solution offers the possibility of efficiently calculating the filter coefficients corresponding to different dimensions n (filter lengths):
$$ \mathcal{H} = \left\{ \mathbf{h}_{opt}^{(n)} \right\}_{n=1}^{N} \qquad (13) $$
The optimal ICP (FIR) filter coefficients $\mathbf{h}_{opt}$ may be estimated, quantized and sent to the decoder on a frame-by-frame basis.
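The following sketch illustrates, under the assumption of one frame of mono and side samples as input, how the filters and the corresponding minimum errors for all dimensions up to N could be obtained; for clarity it re-solves the leading sub-systems directly instead of explicitly reusing the L and D factors:

```python
import numpy as np

def icp_filters_all_dims(mono, side, max_order):
    """Optimal ICP filters and corresponding MMSE for dimensions 1..max_order.

    Builds R and r as in (5) from one frame of mono and side samples and
    solves the normal equations for every leading sub-system.
    """
    mono = np.asarray(mono, dtype=float)
    side = np.asarray(side, dtype=float)
    frame_len = len(side)
    M = np.array([np.concatenate((np.zeros(i), mono[:frame_len - i]))
                  for i in range(max_order)])
    R = M @ M.T                        # covariance matrix of (5)
    r = M @ side                       # correlation vector of (5)
    side_power = float(side @ side)
    filters, mmse = [], []
    for n in range(1, max_order + 1):
        h_n = np.linalg.solve(R[:n, :n], r[:n])
        filters.append(h_n)
        mmse.append(side_power - float(r[:n] @ h_n))   # eq. (8) for dimension n
    return filters, mmse
```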
Multistage Hybrid Multi-channel Coding by Residual Coding
Adaptive Bit Allocation
The technology disclosed herein is based on the recognition that low inter-channel correlation may lead to bad side signal prediction. On the other hand, high inter-channel correlation usually leads to good side signal prediction.
It can be seen that high inter-channel correlation yields a good estimate of the target signal, whereas low inter-channel correlation yields a quite poor estimate of the target signal. If the produced estimate is poor even before quantization of the filter, there is usually no sense in allocating a lot of bits for filter quantization. Instead it may be more useful to use at least part of the bits for different encoding such as non-parametric encoding of the side signal prediction error, which could lead to better overall performance. In the case of higher correlation, it may sometimes be possible to quantize the filter with relatively few bits and still get a quite good result. In other instances a larger amount of bits will have to be used for quantization even if the correlation is relatively high, and it has to be decided if it is “economical” from a bit allocation perspective to use this amount of bits.
In a particular exemplary embodiment, the codec is preferably designed based on combining the strengths of both parametric stereo representation as provided by the ICP filters and non-parametric representation such as residual error coding in a way that is made adaptive in dependence on the characteristics of the stereo input signal.
As hinted above, to fully exploit the available bit budget and in order to further enhance the quality of the stereo signal reconstruction, at least a second quantizer will have to be used to prevent all bits from going to the quantization of the prediction filter. The use of a second quantizer provides an additional degree of freedom that is exploited by the technology disclosed herein. The multi-stage encoder thus includes a first parametric stage with a filter such as an ICP filter and an associated first quantizer $Q_1$, and a second stage based on a second quantizer $Q_2$.
Preferably, the prediction error of the ICP filter, i.e. e(n)=s(n)−ŝ(n), is quantized by using a non-parametric coder, typically a waveform coder or a transform coder or a combination of both. It should though be understood that it is possible to use other types of coding of the prediction error such as CELP (Code Excited Linear Prediction) coding.
It is assumed that the total bit budget for the side signal encoding process is $B = b_{ICP} + b_{2}$, where $b_{ICP}$ is the number of bits for quantization of the ICP filter, and $b_{2}$ is the number of bits for quantization of the residual error e(n).
Optimally, the bits are jointly allocated among the different encoding stages based on the overall performance of the encoding stages, as schematically indicated by the inputs of e(n) and e2(n) into the bit allocation module of
In a simpler and more straightforward implementation, the bit allocation module allocates bits to the first quantizer depending on the performance of the first parametric (ICP) filtering procedure, and allocates the remaining bits to the second quantizer. Performance of the parametric (ICP) filter is preferably based on a fidelity criterion such as the MSE or perceptually weighted MSE of the prediction error e(n).
The performance of the parametric (ICP) filter is typically varying with the characteristics of the different signal frames as well as the available bit-rate.
For instance, in the event of low inter-channel correlations, the ICP filtering procedure will produce a poor estimate of the target (side) signal even prior to filter quantization. Thus, allocating more bits will not lead to big performance improvement. Instead, it is better to allocate more bits to the second quantizer.
In other instances, the redundancy between the mono signal and the side signal is fully removed by the sole use of the ICP filter quantized with a certain bit-rate, and thus allocating more bits to the second quantizer would be inefficient.
The inherent limitations of the performance of ICP follow as a direct consequence of the degree of correlation between the mono and the side signal. The performance of the ICP is always limited by the maximum achievable performance provided by the un-quantized filters.
There is a minimum bit-rate $b_{min}$ for which the use of ICP provides an improvement, characterized by a value of $Q_{snr}$ greater than 1, i.e. 0 dB. Obviously, when the bit-rate increases, the performance approaches that of the unquantized filter, $Q_{max}$. On the other hand, allocating more than $b_{max}$ bits for quantization would lead to quality saturation.
Typically, a lower bit-rate is selected (bopt in
For some problematic signals, where the mono/side correlation is close to zero, it is better not to use any ICP filtering at all, and instead allocate the whole bit budget to the secondary quantizer. For the same type of signals, if the performance of the secondary quantizer is insufficient, then the signal may be coded using pure parametric ICP filtering.
In general, the filter coefficients are treated as vectors, which are efficiently quantized using vector quantization (VQ). The quantization of the filter coefficients is one of the most important aspects of the ICP coding procedure. As will be seen, the quantization noise introduced on the filter coefficients can be directly related to the loss in MSE.
The MMSE has previously been defined as:
$$ \mathrm{MMSE} = \mathbf{s}^{T}\mathbf{s} - \mathbf{r}^{T}\mathbf{h}_{opt} = \mathbf{s}^{T}\mathbf{s} - 2\,\mathbf{h}_{opt}^{T}\mathbf{r} + \mathbf{h}_{opt}^{T}\mathbf{R}\,\mathbf{h}_{opt} \qquad (15) $$
Quantizing $\mathbf{h}_{opt}$ introduces a quantization error $\mathbf{e}$: $\hat{\mathbf{h}} = \mathbf{h}_{opt} + \mathbf{e}$. The new MSE can now be written as:
Since $\mathbf{R}\mathbf{h}_{opt} = \mathbf{r}$, the last two terms in (16) cancel out and the MSE of the quantized filter becomes:
$$ \mathrm{MSE}(\hat{\mathbf{h}}) = \mathbf{s}^{T}\mathbf{s} - \mathbf{r}^{T}\mathbf{h}_{opt} + \mathbf{e}^{T}\mathbf{R}\,\mathbf{e} \qquad (17) $$
This means that, in order to have any prediction gain at all, the quantization error term has to be lower than the prediction term, i.e. $\mathbf{r}^{T}\mathbf{h}_{opt} > \mathbf{e}^{T}\mathbf{R}\,\mathbf{e}$.
From
Direct quantization of the filter coefficients generally leads to poor results; rather, one should quantize the filters so as to minimize the term $\mathbf{e}^{T}\mathbf{R}\,\mathbf{e}$. An example of a desired distortion measure is given by:

$$ d(\mathbf{h}, \hat{\mathbf{h}}) = (\mathbf{h} - \hat{\mathbf{h}})^{T}\mathbf{R}\,(\mathbf{h} - \hat{\mathbf{h}}) $$
This suggests the usage of a weighted vector quantization (VQ) procedure. Similar weighted quantizers have been used in [8] for speech compression algorithms.
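A minimal sketch of such an R-weighted codebook search (with a hypothetical, pre-trained codebook) could look as follows:

```python
import numpy as np

def weighted_vq(h, R, codebook):
    """Select the codebook vector minimizing the R-weighted error (h - c)' R (h - c).

    `codebook` is assumed to be an iterable of candidate filter vectors of the
    same length as h; how the codebook is trained is outside this sketch.
    """
    best_idx, best_dist, best_vec = -1, np.inf, None
    for idx, c in enumerate(codebook):
        e = h - c
        dist = float(e @ R @ e)
        if dist < best_dist:
            best_idx, best_dist, best_vec = idx, dist, c
    return best_idx, best_vec
```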
A clear benefit could also be gained in terms of bit-rate if one uses predictive weighted vector quantization. In fact, prediction filters that result from the above-described concepts are in general correlated in time.
Returning once again to
With reference to
It is important to note that the side signal quality, and thus the stereo quality, is affected both by the accuracy of the mono reproduction and the ICP filter quantization as well as the residual error encoding.
Variable Rate—Variable Dimension Filtering
As previously mentioned, it is also possible to select a combination of bit allocation and filter dimension/length to be used (e.g. for inter-channel prediction) so as to optimize a given performance measure.
It may for example be convenient to select a combination of number of bits to be allocated to the first encoding stage and filter length to be used in the first encoding stage so as to optimize a measure representative of the performance of the first encoding stage or a combination of encoding stages in a multi-stage (auxiliary/side) encoder.
For example, given that a non-parametric coder accompanies a parametric coder, the target of the ICP filtering may be to minimize the MSE of the prediction error. Increasing the filter dimension is known to decrease the MSE. However, for some signal frames the mono and side signals only differ in amplitude and not in time alignment. Thus, one filter coefficient would suffice for this case.
As discussed earlier, it is possible to calculate the filter coefficients for the different dimensions iteratively. Since the filter is completely determined by the symmetric $\mathbf{R}$ matrix and the $\mathbf{r}$ vector, it is also possible to calculate the MMSE for the different dimensions iteratively. Inserting $\mathbf{q} = \mathbf{L}^{T}\mathbf{h}_{opt}$ into (8) yields:

$$ \mathrm{MMSE}(n) = \mathbf{s}^{T}\mathbf{s} - \sum_{i=1}^{n} d_{i}\, q_{i}^{2}, $$

where $d_{i} \geq 0$, $\forall i$. Thus increasing the filter order decreases the MMSE. Hence, it is possible to compute the gain provided by an additional filter dimension without having to re-calculate $\mathbf{r}^{T}\mathbf{h}_{opt}$ for every dimension.
For some frames, the gain of using long filters is noticeable, whereas for others the performance increase by using long filters is nearly negligible. This is explained by the fact that maximum de-correlation between the channels can be achieved without using a long filter. This holds especially true for frames where the amount of inter-channel correlation is low.
The idea of the variable rate/variable dimension scheme is to utilize the varying performance of the (ICP) filter so that accurate filter quantization is only performed for those frames where more bits results in a noticeably better performance.
$$ \mathrm{MSE}(\hat{\mathbf{h}}^{(n)}, n) = \mathbf{s}^{T}\mathbf{s} - (\mathbf{r}^{(n)})^{T}\mathbf{h}_{opt}^{(n)} + (\mathbf{e}^{(n)})^{T}\mathbf{R}^{(n)}\mathbf{e}^{(n)} \qquad (21) $$
It can be seen that the performance is a trade-off between the selected filter dimension n and the imposed quantization error. This is illustrated in
Allocating the necessary bits for the (ICP) filter is efficiently performed based on the $Q_{N,max}$ curve. This optimal performance/rate curve $Q_{N,max}$ shows the optimum performance obtained by varying the filter dimension and the required amount of bits accordingly. It is also interesting to notice that this curve exhibits regions where the increase in bit rate (and the associated dimension) leads to a very small improvement in the performance/quality measure $Q_{snr}$. Typically, for these plateau regions, there is no noticeable gain achieved by increasing the amount of bits for the quantization of the (ICP) filter.
A simpler but suboptimal approach consists in varying the total amount of bits in proportion to the dimension, for instance to make the ratio between the total number of bits and dimension constant. The variable-rate/variable-dimension coding then involves selecting the dimension (or equivalently the bit-rate), which leads to the minimization of the MSE.
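For illustration, the simpler proportional-rate variant could be sketched as follows, where `quantize` is a hypothetical filter quantizer:

```python
import numpy as np

def select_dimension(R, r, side_power, bits_per_coeff, quantize):
    """Sketch of the simpler variable-rate scheme with bits proportional to dimension.

    quantize(h, bits) is a hypothetical filter quantizer returning a quantized
    copy of h.  For each candidate dimension n the quantized-filter MSE of
    eq. (21) is evaluated and the dimension giving the lowest MSE is kept.
    """
    best = None
    for n in range(1, len(r) + 1):
        h = np.linalg.solve(R[:n, :n], r[:n])
        e = quantize(h, n * bits_per_coeff) - h
        mse = side_power - float(r[:n] @ h) + float(e @ R[:n, :n] @ e)   # eq. (21)
        if best is None or mse < best[0]:
            best = (mse, n)
    return best   # (lowest MSE, selected dimension)
```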
In another embodiment, the dimension is held fixed and the bit-rate is varied. A set of thresholds determines whether or not it is feasible to spend more bits on quantizing the filter, e.g. by selecting additional stages in an MSVQ [13] scheme depicted in
Variable rate coding is well motivated by the varying characteristic of the correlation between the main (mono) and the side signal. For low correlation cases, only a few bits are allocated to encode a low dimensional filter while the rest of the bit budget could be used for encoding the residual error with a non-parametric coder.
Improved Parametric Coding Based on Inter-Channel Prediction
As mentioned briefly, for cases where the main/side correlation is close to zero, it may be better not to use any ICP filtering at all, and instead allocate the whole bit budget to the secondary quantizer. For the same type of signals, if the performance of the secondary quantizer is insufficient, the signal may be coded using pure parametric ICP filtering. In the latter case, it may be advantageous to make some modifications to the ICP filtering procedure to provide acceptable stereo or multi-channel reconstruction.
These modifications are intended in order to operate stereo or multi-channel coding based solely on inter-channel prediction (ICP), thus allowing low bit-rate operation. In fact, a scheme where the side signal reconstruction is based solely on ICP filtering will normally suffer from quality degradation when the correlation between mono and side signal is weak. This holds especially true after quantization of the filter coefficients.
Covariance Matrix Modification
If only a parametric representation is used, then the target is no longer to minimize the MSE alone but to combine it with smoothing and regularization in order to be able to cope with the cases where there is no correlation between the mono and the side signal.
Informal listening tests reveal that coding artifacts introduced by ICP filtering are perceived as more annoying than a temporary reduction in stereo width. Therefore, the stereo width, i.e. the side signal energy, is intentionally reduced whenever a problematic frame is encountered. In the worst-case scenario, i.e. no ICP filtering at all, the resulting stereo signal is reduced to pure mono.
It is possible to calculate the expected prediction gain from the covariance matrix R and the correlation vector r, without having to perform the actual filtering. It has been found that coding artifacts are mainly present in the reconstructed side signal when the anticipated prediction gain is low or equivalently when the correlation between the mono and the side signal is low. Hence, a frame classification algorithm has been constructed, which performs classification based on estimated level of prediction gain. When the prediction gain (or the correlation) falls below a certain threshold, the covariance matrix used to derive the ICP filter is modified according to:
$$ \mathbf{R}^{*} = \mathbf{R} + \rho\,\mathrm{diag}(\mathbf{R}) \qquad (22) $$
The value of ρ can be made adaptive to facilitate different levels of modification. The modified ICP filter is computed as $\mathbf{h}^{*} = (\mathbf{R}^{*})^{-1}\mathbf{r}$. Evidently, the energy of the ICP filter is reduced, thus reducing the energy of the reconstructed side signal. Other schemes for reducing the introduced estimation errors are also plausible.
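A sketch of this classification-and-modification step, with an illustrative gain threshold and value of ρ, could look as follows:

```python
import numpy as np

def regularized_icp_filter(R, r, side_power, gain_threshold_db=1.0, rho=0.1):
    """Sketch of the covariance-modification safety net of eq. (22).

    The expected prediction gain is estimated from R and r without running
    the actual filtering; if it falls below a threshold, the diagonal of R
    is emphasized, which shrinks the filter energy and hence the energy of
    the reconstructed side signal.  The threshold and rho are illustrative
    assumptions.
    """
    h = np.linalg.solve(R, r)
    mmse = max(side_power - float(r @ h), 1e-12)
    gain_db = 10.0 * np.log10(side_power / mmse)
    if gain_db < gain_threshold_db:
        R_mod = R + rho * np.diag(np.diag(R))      # eq. (22)
        h = np.linalg.solve(R_mod, r)              # h* = (R*)^-1 r
    return h
```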
Filter Smoothing
Rapid changes in the ICP filter characteristics between consecutive frames create disturbing aliasing artifacts and instability in the reconstructed stereo image. This comes from the fact that the predictive approach introduces large spectral variations as opposed to a fixed filtering scheme.
Similar effects are also present in BCC when spectral components of neighboring sub-bands are modified differently [5]. To circumvent this problem, BCC uses overlapping windows in both analysis and synthesis.
The use of overlapping windows solves the aliasing problem for ICP filtering as well. However, this comes at the expense of a rather large increase in MSE, since the filter coefficients are no longer optimal for the present frame. A modified cost function is therefore suggested. It is defined as:
where $\mathbf{h}_{t}$ and $\mathbf{h}_{t-1}$ are the ICP filters at frames t and t−1, respectively. Calculating the partial derivative of (23) and setting it to zero yields the new smoothed ICP filter:
The smoothing factor μ determines the contribution of the previous ICP filter, thereby controlling the level of smoothing. The proposed filter smoothing effectively removes coding artifacts and stabilizes the stereo image. However this comes at the expense of a reduced stereo image.
The problem of stereo image width reduction due to smoothing can be overcome by making the smoothing factor adaptive. A large smoothing factor is used when the prediction gain of the previous filter applied to the current frame is high. However, if the previous filter leads to deterioration in the prediction gain, then the smoothing factor is gradually decreased.
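Since equations (23) and (24) are not reproduced here, the following sketch should be read only as one plausible realization of adaptive smoothing, not as the exact formula of this description:

```python
import numpy as np

def smooth_icp_filter(h_new, h_prev, R, r, side_power, mu_max=0.5):
    """Illustrative adaptive smoothing of consecutive ICP filters.

    The convex combination below is an assumed smoothing rule; the smoothing
    factor is kept large only if the previous filter still predicts the
    current frame reasonably well.
    """
    def prediction_gain_db(h):
        mse = max(side_power - 2.0 * float(r @ h) + float(h @ R @ h), 1e-12)
        return 10.0 * np.log10(side_power / mse)

    mu = mu_max if prediction_gain_db(h_prev) > 0.5 * prediction_gain_db(h_new) else 0.1 * mu_max
    return (1.0 - mu) * h_new + mu * h_prev
```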
Frequency Band Processing
The previously suggested algorithms benefit from frequency band processing. In fact, spatial psychoacoustics teaches that the dominant cues for sound localization in the lower frequencies are inter-channel time differences [6], while at high frequencies it is the inter-channel level differences. This suggests that the stereo or multi-channel reconstruction can benefit from coding different regions of the spectrum using different methods and different bit-rates. For example, hybrid parametric and non-parametric coding with adaptively controlled bit allocation could be performed in the low-frequency range, whereas some other coding scheme(s) could be used in higher frequency regions.
Variable-Length Optimized Frame Processing
For variable frame lengths, an encoding frame can generally be divided into a number of sub-frames according to various frame division configurations. The sub-frames may have different sizes, but the sum of the lengths of the sub-frames of any given frame division configuration is normally equal to the length of the overall encoding frame. As described in our co-pending U.S. patent application Ser. No. 11/011,765, which is incorporated herein as an example by this reference, and the corresponding International Application PCT/SE2004/001867, a number of encoding schemes are provided, where each encoding scheme is characterized by or associated with a respective set of sub-frames together constituting an overall encoding frame (also referred to as a master frame). A particular encoding scheme is selected, preferably at least in part dependent on the signal content of the signal to be encoded, and then the signal is encoded in each of the sub-frames of the selected set of sub-frames separately.
In general, encoding is typically performed one frame at a time, and each frame normally comprises audio samples within a pre-defined time period. The division of the samples into frames will in any case introduce some discontinuities at the frame borders. Shifting sounds will give shifting encoding parameters, changing basically at each frame border. This will give rise to perceptible errors. One way to compensate somewhat for this is to base the encoding not only on the samples that are to be encoded, but also on samples in the immediate vicinity of the frame. In such a way, there will be a softer transition between the different frames. As an alternative, or complement, interpolation techniques are sometimes also utilised for reducing perception artefacts caused by frame borders. However, all such procedures require large additional computational resources, and for certain specific encoding techniques it might also be difficult to achieve such compensation at all.
In this view, it is beneficial to utilise frames that are as long as possible, since the number of frame borders will then be small. Also, the coding efficiency typically becomes high and the necessary transmission bit-rate will typically be minimised. However, long frames give rise to problems with pre-echo artefacts and ghost-like sounds.
By instead utilising shorter frames, anyone skilled in the art realises that the coding efficiency may be decreased, the transmission bit-rate may have to be higher and the problems with frame border artefacts will increase. However, shorter frames suffer less from other perception artefacts, such as ghost-like sounds and pre-echoing. In order to minimise the coding error as much as possible, one should use as short a frame length as possible.
Thus, there seems to be conflicting requirements on the length of the frames. Therefore, it is beneficial for the audio perception to use a frame length that is dependent on the present signal content of the signal to be encoded. Since the influence of different frame lengths on the audio perception will differ depending on the nature of the sound to be encoded, an improvement can be obtained by letting the nature of the signal itself affect the frame length that is used. In particular, this procedure has turned out to be advantageous for side signal encoding.
Due to small temporal variations, it may e.g. in some cases be beneficial to encode the side signal with use of relatively long frames. This may be the case with recordings with a great amount of diffuse sound field such as concert recordings. In other cases, such as stereo speech conversation, short frames are preferable.
For example, the lengths of the sub-frames used could be selected according to:
$$ l_{sf} = l_{f} / 2^{n}, $$

where $l_{sf}$ are the lengths of the sub-frames, $l_{f}$ is the length of the overall encoding frame and n is an integer. However, it should be understood that this is merely an example. Any frame lengths will be possible to use as long as the total length of the set of sub-frames is kept constant.
The decision on which frame length to use can typically be performed in two basic ways: closed loop decision or open loop decision.
When a closed loop decision is used, the input signal is typically encoded by all available encoding schemes. Preferably, all possible combinations of frame lengths are tested and the encoding scheme with an associated set of sub-frames that gives the best objective quality, e.g. signal-to-noise ratio or a weighted signal-to-noise ratio, is selected.
Alternatively, the frame length decision is an open loop decision, based on the statistics of the signal. In other words, the spectral characteristics of the (side) signal will be used as a basis for deciding which encoding scheme is going to be used. As before, different encoding schemes characterised by different sets of sub-frames are available. However, in this embodiment, the input (side) signal is first analyzed and then a suitable encoding scheme is selected and utilized.
The advantage of an open loop decision is that only one actual encoding has to be performed. The disadvantage is, however, that the analysis of the signal characteristics may be very complicated and it may be difficult to predict possible behaviours in advance. A lot of statistical analysis of sound has to be performed, and any small change in the encoding schemes may completely change the statistical behaviour.
By using closed loop selection, encoding schemes may be exchanged without making any changes in the rest of the implementation. On the other hand, if many encoding schemes are to be investigated, the computational requirements will be high.
The benefit of such variable frame length coding for the input (side) signal is that one can select between a fine temporal resolution and coarse frequency resolution on the one hand, and coarse temporal resolution and fine frequency resolution on the other. The above embodiments will preserve the multi-channel or stereo image in the best possible manner.
There are also some requirements on the actual encoding utilised in the different encoding schemes. In particular when closed loop selection is used, the computational resources needed to perform a number of more or less simultaneous encodings can be large. The more complicated the encoding process is, the more computational power is needed. Furthermore, a low transmission bit rate is also preferable.
The Variable Length Optimized Frame Processing according to an exemplary embodiment of the technology disclosed herein takes as input a large “master-frame” and given a certain number of frame division configurations, selects the best frame division configuration with respect to a given distortion measure, e.g. MSE or weighted MSE.
The frame divisions may have different sizes, but the sum of all frame divisions covers the whole length of the master-frame.
In order to illustrate an exemplary procedure, consider a master-frame of length L ms and the possible frame divisions illustrated in
In a particular exemplary embodiment of the technology disclosed herein, the idea is to select a combination of encoding scheme with associated frame division configuration, as well as filter length/dimension for each sub-frame, so as to optimize a measure representative of the performance of the considered encoding process or signal encoding stage(s) thereof over an entire encoding frame (master-frame). The possibility to adjust the filter length for each sub-frame provides an added degree of freedom, and generally results in improved performance.
However, to reduce the signalling requirements during transmission from the encoding side to the decoding side, each sub-frame of a certain length is preferably associated with a predefined filter length. Usually long filters are assigned to long frames and short filters to short frames.
Possible frame configurations are listed in the following table, in the form (m1, m2, m3, m4), where mk denotes the frame type selected for the kth (sub)frame of length L/4 ms inside the master-frame such that, for example, mk = 0 corresponds to an L/4-ms (sub)frame with filter length P, mk = 1 to an L/2-ms (sub)frame with filter length 2×P, and mk = 2 to the full L-ms frame with filter length 4×P.
For example, the configuration (0, 0, 1, 1) indicates that the L-ms master-frame is divided into two L/4-ms (sub)frames with filter length P, followed by an L/2-ms (sub)frame with filter length 2×P. Similarly, the configuration (2, 2, 2, 2) indicates that the L-ms frame is used with filter length 4×P. This means that frame division configuration as well as filter length information are simultaneously indicated by the information (m1, m2, m3, m4).
The optimal configuration is selected, for example, based on the MSE or, equivalently, the maximum SNR. For instance, if the configuration (0, 0, 1, 1) is used, then the total number of filters is three: two filters of length P and one filter of length 2×P.
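Purely as an illustration, the expansion of a configuration (m1, m2, m3, m4) into sub-frames and filter lengths, and the resulting filter count, might be coded as follows; the function name and the example values L = 20 ms and P = 16 are assumptions made only for the example.

```python
# Expands a configuration (m1, m2, m3, m4) into (sub-frame length, filter length) pairs,
# following the mapping given in the text: 0 -> L/4 ms and P, 1 -> L/2 ms and 2*P,
# 2 -> L ms and 4*P. The example values L_ms=20 and P=16 are illustrative assumptions.

def expand_configuration(config, L_ms, P):
    """Return a list of (subframe_length_ms, filter_length) pairs for one master-frame."""
    subframes = []
    k = 0
    while k < 4:
        m = config[k]
        if m == 0:        # one L/4-ms (sub)frame with filter length P
            subframes.append((L_ms / 4, P))
            k += 1
        elif m == 1:      # two positions form one L/2-ms (sub)frame with filter length 2*P
            subframes.append((L_ms / 2, 2 * P))
            k += 2
        elif m == 2:      # all four positions form the full L-ms frame with filter length 4*P
            subframes.append((L_ms, 4 * P))
            k += 4
        else:
            raise ValueError("unknown frame type: %r" % (m,))
    return subframes

# (0, 0, 1, 1) gives three filters in total: two of length P and one of length 2*P.
print(expand_configuration((0, 0, 1, 1), L_ms=20, P=16))   # [(5.0, 16), (5.0, 16), (10.0, 32)]
print(expand_configuration((2, 2, 2, 2), L_ms=20, P=16))   # [(20, 64)]
```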
The frame configuration, with its corresponding filters and their respective lengths, that leads to the best performance (measured by SNR or MSE) is usually selected.
The filter computation, prior to frame selection, may be performed either open-loop or closed-loop, the latter by including the filter quantization stages.
The advantage of this scheme is that the dynamics of the stereo or multi-channel image are well represented. The transmitted parameters are the frame configuration as well as the encoded filters.
Because of the variable frame length processing involved, the overlap between analysis windows in the encoder can be of different lengths. In the decoder, it is therefore essential for the synthesis of the channel signals to apply corresponding windows and to overlap-add segments of different lengths.
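A non-limiting sketch of such synthesis windowing and overlap-add over segments of different lengths is given below; the linear cross-fades and the per-segment overlap bookkeeping are illustrative assumptions rather than the particular windows used in any specific codec.

```python
# Overlap-add of synthesis segments of different lengths: each segment is windowed with
# fade-in/fade-out regions matching its overlaps, then added into the output at a running
# position. Window shape (linear fades) and interface are illustrative assumptions.
import numpy as np

def synthesis_window(frame_len, rise, fall):
    """Flat window with linear fade-in (rise samples) and fade-out (fall samples);
    complementary fades of adjacent segments sum to one over each overlap region."""
    w = np.ones(frame_len)
    if rise > 0:
        w[:rise] = (np.arange(rise) + 0.5) / rise
    if fall > 0:
        w[frame_len - fall:] = (np.arange(fall, 0, -1) - 0.5) / fall
    return w

def overlap_add(segments, overlaps):
    """segments[i] overlaps segments[i+1] by overlaps[i] samples (overlaps[-1] is 0)."""
    starts, pos = [], 0
    for seg, ov in zip(segments, overlaps):
        starts.append(pos)
        pos += len(seg) - ov
    out = np.zeros(max(s + len(seg) for s, seg in zip(starts, segments)))
    prev_ov = 0
    for seg, ov, start in zip(segments, overlaps, starts):
        w = synthesis_window(len(seg), rise=prev_ov, fall=ov)
        out[start:start + len(seg)] += w * np.asarray(seg, dtype=float)
        prev_ov = ov
    return out
```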
It is often the case that for stationary signals the stereo image is quite stable and the estimated channel filters are quite stationary. In this case, one would benefit from an FIR filter with longer impulse response, i.e. better modeling of the stereo image.
It has turned out to be particularly beneficial to add yet another degree of freedom by also incorporating the previously described bit allocation procedure into the variable frame length and adjustable filter length processing. In a preferred exemplary embodiment of the technology disclosed herein, the idea is to select a combination of frame division configuration, as well as bit allocation and filter length/dimension for each sub-frame, so as to optimize a measure representative of the performance of the considered encoding process or signal encoding stage(s) over an entire encoding frame. The considered signal representation is then encoded separately for each of the sub-frames of the selected frame division configuration in accordance with the selected bit allocation and filter dimension.
Preferably, the considered signal is a side signal and the encoder is a multi-stage encoder comprising a parametric (ICP) stage and an auxiliary stage such as a non-parametric stage. The bit allocation information controls how many quantization bits should go to the parametric stage and to the auxiliary stage, and the filter length information preferably relates to the length of the parametric (ICP) filter.
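For illustration only, the encoding of one side-signal sub-frame under a given bit split might be sketched as follows, with ICP read here as inter-channel prediction of the side signal from the mono signal; the quantizer and residual-encoder callables are assumptions and do not represent any particular implementation.

```python
# Two-stage encoding of one side-signal sub-frame (sketch): a least-squares FIR filter
# of length `filter_len` predicts the side signal from the mono signal and is quantized
# with `param_bits` (parametric/ICP stage); the prediction residual is then encoded with
# `aux_bits` (auxiliary stage). `quantize_filter` and `encode_residual` are assumed helpers.
import numpy as np

def encode_side_subframe(mono, side, filter_len, param_bits, aux_bits,
                         quantize_filter, encode_residual):
    """Returns (error_energy, payload) for one sub-frame and one bit allocation."""
    mono = np.asarray(mono, dtype=float)
    side = np.asarray(side, dtype=float)
    # Delay matrix: column k holds mono delayed by k samples (zero-padded at the start).
    X = np.column_stack([np.concatenate((np.zeros(k), mono[:len(mono) - k]))
                         for k in range(filter_len)])
    h, *_ = np.linalg.lstsq(X, side, rcond=None)                 # unquantized ICP filter
    h_q = quantize_filter(h, param_bits)                         # parametric (ICP) stage
    residual = side - X @ h_q
    res_payload, res_hat = encode_residual(residual, aux_bits)   # auxiliary stage
    error = float(np.sum((side - (X @ h_q + res_hat)) ** 2))
    return error, {"icp_filter": h_q, "residual": res_payload}
```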
The signal encoding process here preferably generates output data, for transfer to the decoding side, representative of the selected frame division configuration, and for each sub-frame of the selected frame division configuration, bit allocation and filter length.
With a higher degree of freedom, it is possible to find a truly optimal selection. However, the amount of control information to be transferred to the decoding side increases. In order to reduce the bit-rate requirements on signaling from the encoding side to the decoding side in an audio transmission system, the filter length for each sub-frame is preferably selected in dependence on the length of the sub-frame, as described above. This means that an indication of the frame division configuration of an encoding frame or master-frame into a set of sub-frames at the same time provides an indication of the selected filter dimension for each sub-frame, thereby reducing the required signaling.
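As a final non-limiting illustration, the joint closed-loop selection of frame division configuration and per-sub-frame bit allocation, with the filter length tied to the sub-frame length as described above, might be sketched as follows; the candidate lists and the callables are assumptions made only for the example.

```python
# Closed-loop joint selection over one master-frame (sketch): for every candidate frame
# division configuration, each sub-frame is encoded with every candidate bit split
# between the parametric stage and the auxiliary stage, and the combination with the
# lowest total error over the whole master-frame is selected. The filter length is
# assumed to follow from the sub-frame length, so it is not a separate search axis.

def select_frame_processing(subframes_per_config, bit_splits, encode_subframe):
    """
    subframes_per_config: dict mapping a configuration id, e.g. (0, 0, 1, 1),
                          to the list of side-signal sub-frames it produces.
    bit_splits:           candidate (parametric_bits, auxiliary_bits) pairs.
    encode_subframe:      callable(subframe, bit_split) -> (error_energy, payload).
    Returns (configuration id, list of (bit_split, payload) per sub-frame).
    """
    best = None  # (total_error, config_id, per-sub-frame choices)
    for config_id, subframes in subframes_per_config.items():
        total_error, choices = 0.0, []
        for subframe in subframes:
            best_sub = None
            for split in bit_splits:
                error, payload = encode_subframe(subframe, split)
                if best_sub is None or error < best_sub[0]:
                    best_sub = (error, split, payload)
            total_error += best_sub[0]
            choices.append((best_sub[1], best_sub[2]))
        if best is None or total_error < best[0]:
            best = (total_error, config_id, choices)
    return best[1], best[2]
```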
The embodiments described above are merely given as examples, and it should be understood that the present technology disclosed herein is not limited thereto. Further modifications, changes and improvements which retain the basic underlying principles disclosed and claimed herein are within the scope of the technology disclosed herein.
This application is the U.S. national phase of International Application No. PCT/SE2005/002033, filed 22 Dec. 2005, which designated the U.S. and claims priority to U.S. Provisional Application No. 60/654,956, filed 23 Feb. 2005, the entire contents of each of which are hereby incorporated by reference.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/SE2005/002033 | 12/22/2005 | WO | 00 | 3/18/2008 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2006/091139 | 8/31/2006 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5285498 | Johnston | Feb 1994 | A |
5394473 | Davidson | Feb 1995 | A |
5434948 | Holt et al. | Jul 1995 | A |
5694332 | Maturi | Dec 1997 | A |
5812971 | Herre | Sep 1998 | A |
5956674 | Smyth et al. | Sep 1999 | A |
5974380 | Smyth et al. | Oct 1999 | A |
6012031 | Oliver et al. | Jan 2000 | A |
6341165 | Gbur et al. | Jan 2002 | B1 |
6446037 | Fielder et al. | Sep 2002 | B1 |
6487535 | Smyth et al. | Nov 2002 | B1 |
6591241 | Absar et al. | Jul 2003 | B1 |
7263480 | Minde et al. | Aug 2007 | B2 |
7340391 | Herre et al. | Mar 2008 | B2 |
7356748 | Taleb | Apr 2008 | B2 |
7437299 | Aarts et al. | Oct 2008 | B2 |
7447629 | Breebaart | Nov 2008 | B2 |
7725324 | Bruhn et al. | May 2010 | B2 |
7809579 | Bruhn et al. | Oct 2010 | B2 |
7822617 | Taleb et al. | Oct 2010 | B2 |
20030061055 | Taori et al. | Mar 2003 | A1 |
20030115041 | Chen et al. | Jun 2003 | A1 |
20030115052 | Chen et al. | Jun 2003 | A1 |
20030231797 | Cullen et al. | Dec 2003 | A1 |
20040267543 | Ojanpera | Dec 2004 | A1 |
20050165611 | Mehrotra et al. | Jul 2005 | A1 |
Number | Date | Country |
---|---|---|
0 497 413 | Aug 1992 | EP |
0 559 383 | Sep 1993 | EP |
0 965 123 | Dec 1999 | EP |
1 391 880 | Feb 2004 | EP |
11-032399 | Feb 1999 | JP |
2000-513888 | Oct 2000 | JP |
2001-184090 | Jul 2001 | JP |
2001-255892 | Sep 2001 | JP |
2001-255899 | Sep 2001 | JP |
2002-132295 | May 2002 | JP |
2002-169598 | Jun 2002 | JP |
2003-345398 | Dec 2003 | JP |
2004-509367 | Mar 2004 | JP |
2004-301954 | Oct 2004 | JP |
WO 9747102 | Dec 1997 | WO |
WO 0223528 | Mar 2002 | WO
WO 03090207 | Oct 2003 | WO
WO 03090206 | Oct 2003 | WO |
WO 03090208 | Oct 2003 | WO |
WO 2005001813 | Jan 2005 | WO |
Entry |
---|
International Search Report for PCT/SE2005/002033, mailed Jun. 30, 2006. |
Fuchs, “Improving Joint Stereo Audio Coding by Adaptive Inter-Channel Prediction,” (1993) pp. 39-42. |
Jean et al., “Two-Stage Bit Allocation Algorithm for Stereo Audio Coder,” (1996) pp. 331-336. |
Purnhagen, “Low Complexity Parametric Stereo Coding in MPEG-4,” (2004) pp. 163-168. |
Supplementary European Search Report mailed Apr. 19, 2010 in corresponding EP Application 05822014.6. |
Chinese Office Action mailed Mar. 25, 2010 in corresponding Chinese Application 200580048503.5. |
Linde, Y., et al., “An Algorithm for Vector Quantizer Design,” IEEE Transactions on Communications, vol. Com-28, No. 1, Jan. 1980, pp. 84-95. |
Golub et al., "Matrix Computations", second edition, chapter 4, pp. 137-138, The Johns Hopkins University Press, 1989. |
English Translation of Japanese Office Action issued in Japanese Application No. 2007-216374, dated Oct. 30, 2010. |
Japanese Office Action issued in Japanese Application Serial No. 2006-518596, dated May 7, 2008. |
Summary of the Japanese Office Action in Japanese Application Serial No. 2006-518596, dated May 7, 2008. |
3GPP Tech. Spec. TS 26.290, V6.1.0, 3rd Generation Partnership Project; Tech. Spec. Group Service and System Aspects; Audio Codec Processing Functions; Extended Adaptive Multi-Rate—Wideband (AMR-WB+) Codec; Transcoding Functions (Release 6), Dec. 2004. |
B. Edler and G. Schuller; Audio Coding Using a Psychoacoustic Pre- and Post-Filter; pp. 881-884 (2000). |
D. Bauer and D. Seitzer; “Statistical Properties of High Quality Stereo Signals in the Time Domain;” pp. 2045-2048, (1989). |
Shyh-Shiaw Kuo and James D. Johnston; “A Study of Why Cross Channel Prediction is Not Applicable to Perceptual Audio Coding;” IEEE Signal Processing Letters, vol. 8, No. 9, Sep. 2001; pp. 245-247. |
4.1.2 Symmetry and the LDLT Factorization; Chapter 4 Special Linear Systems; pp. 137-138. |
Herre, Jurgen; Brandenburg, Karlheinz; Lederer, D. Intensity Stereo Coding. AES Convention:96 (Feb. 1994) Paper No. 3799 Affiliation: Fraunhofer Gesellschaft, Institut fur Integrierte Schaltungen, Erlangen, Germany. |
Bosi, Marina; Brandenburg, Karlheinz; Quackenbush, Schuyler; Fielder, Louis; Akagiri, Kenzo; Fuchs, Hendrik; Dietz, Martin. ISO/IEC MPEG-2 Advanced Audio Coding. JAES vol. 45 Issue 10 pp. 789-814; Oct. 1997. |
Baumgarte, Frank; Faller, Christof. Why Binaural Cue Coding is Better than Intensity Stereo Coding. Media Signal Processing Research, Agere Systems, Murray Hill, NJ. AES Convention: 112 (Apr. 2002) Paper No. 5575. |
Yang, Dai; Ai, Hongmei; Kyriakakis, Chris; Kuo, C.-C. Jay. An Inter-Channel Redundancy Removal Approach for High-Quality Multichannel Audio Compression. Affiliation: Integrated Media Systems Center, University of Southern California, Los Angeles, CA. AES Convention: 109 (Sep. 2000) Paper No. 5238. |
Oomen, W. et al.; Advances in Parametric Coding for High-Quality Audio. Philips Digital Systems Laboratories, Eindhoven, The Netherlands; Philips Research Laboratories, Eindhoven, The Netherlands, AES Convention: 114 (Mar. 2003). |
Canadian Office Action issued in Canadian Application No. 2,527,971, dated Jun. 17, 2008. |
International Search Report and Written Opinion issued in PCT Application No. PCT/SE2004/001907, dated Mar. 17, 2005. |
L.R. Rabiner and R.W. Schafer. Digital Processing of Speech Signals. Upper Saddle River, New Jersey: Prentice Hall, Inc., 1978. pp. 116-130. |
International Search Report and Written Opinion issued in PCT Application No. PCT/SE2004/001867 dated Mar. 17, 2005. |
International Search Report issued in PCT Application No. PCT/SE2006/000235 dated Jun. 30, 2006. |
Christof Faller and Frank Baumgarte; “Efficient Representation of Spatial Audio Using Perceptual Parametrization;” Applications of Signal Processing to Audio and Acoustics; 2001 IEEE Workshop on Publication date Oct. 21-24, 2001; pp. W2001-1 through W2001-4. |
European Office Action issued in European Application No. 04 809 080.7 dated Feb. 22, 2010. |
European Search Report issued in European Application No. 06 716 925.0 dated Jun. 29, 2010. |
Juang, B.H., et al., “Multiple Stage Vector Quantization for Speech Coding,” Signal Technology Inc., 15 W. De La Guerra, Santa Barbara, CA 93101, pp. 597-600. |
Faller et al., "Binaural cue coding applied to stereo and multi-channel audio compression", 112th AES Convention, May 2002, Munich, Germany. |
Faller et al, “Binaural cue coding—Part I: Psychoacoustic fundamentals and design principles”, IEEE Trans. Speech Audio Processing, vol. 11, pp. 509-519, Nov. 2003. |
Stuart, “The psychoacoustics of multichannel audio”, Meridian Audio Ltd, Jun. 1998. |
B. Edler, C. Faller and G. Schuller, "Perceptual audio coding using a time-varying linear pre- and post-filter", in AES Convention, Los Angeles, Calif., Sep. 2000. |
Japanese Office Action mailed Jun. 3, 2011 in corresponding JP application 2007-552087. |
Related Publications:
Number | Date | Country
---|---|---
20080262850 A1 | Oct 2008 | US

Provisional Applications:
Number | Date | Country
---|---|---
60654956 | Feb 2005 | US