The present invention relates to coding of audio signals, and in particular to the coding of arbitrary audio signals, whether speech, music, or a combination thereof.
In the prior art there are speech coders specifically designed to code speech signals, basing the coding on a source model of the signal, i.e. the human vocal system. These coders cannot handle arbitrary audio signals, such as music, or any other non-speech signal. Additionally, the prior art includes music coders, commonly referred to as audio coders, that base their coding on assumptions about the human auditory system rather than on a source model of the signal. These coders handle arbitrary signals very well; however, at low bit rates for speech signals, a dedicated speech coder gives superior audio quality. Hence, no general coding structure has so far existed for coding arbitrary audio signals that performs as well as a speech coder for speech and as well as a music coder for music when operated at low bit rates.
Thus, there is a need for an enhanced audio encoder and decoder with improved audio quality and/or reduced bit rates.
The present invention relates to efficiently coding arbitrary audio signals at a quality level equal to or better than that of a system specifically tailored to a specific signal type.
The present invention is directed at audio codec algorithms that contain both a linear prediction coding (LPC) part and a transform coder part operating on an LPC-processed signal.
The present invention further relates to efficiently making use of a bit reservoir in an audio encoder with a variable frame size.
The present invention further relates to the operation of long term prediction in combination with a transform coder having a variable frame size.
The present invention further relates to an encoder for encoding audio signals and generating a bitstream, and a decoder for decoding the bitstream and generating a reconstructed audio signal that is perceptually indistinguishable from the input audio signal.
The present invention provides an audio coding system that is based on a transform coder and includes fundamental prediction and shaping modules from a speech coder. The inventive system comprises a linear prediction unit for filtering an input signal based on an adaptive filter; a transformation unit for transforming a frame of the filtered input signal into a transform domain; a quantization unit for quantizing a transform domain signal; a long term prediction unit for estimating the frame of the filtered input signal based on a reconstruction of a previous segment of the filtered input signal; and a transform domain signal combination unit for combining, in the transform domain, the long term prediction estimation and the transformed input signal to generate the transform domain signal that is input to the quantization unit.
The audio coding system may further comprise an inverse quantization and inverse transformation unit for generating a time domain reconstruction of the frame of the filtered input signal. Furthermore, a long term prediction buffer for storing time domain reconstructions of previous frames of the filtered input signal may be provided. These units may be arranged in a feedback loop from the quantization unit to a long term prediction extraction unit that searches, in the long term prediction buffer, for the reconstructed segment that best matches the present frame of the filtered input signal. In addition, a long term prediction gain estimation unit may be provided that adjusts the gain of the selected segment from the long term prediction buffer so that it best matches the present frame. Preferably, the long term prediction estimation is subtracted from the transformed input signal in the transform domain. Therefore, a second transform unit for transforming the selected segment into the transform domain may be provided. The long term prediction loop may further include adding the long term prediction estimation in the transform domain to the feedback signal after inverse quantization and before inverse transformation into the time-domain. Thus, a backward adaptive long term prediction scheme may be used that predicts, in the transform domain, the present frame of the filtered input signal based on previous frames. In order to be more efficient, the long term prediction scheme may be further adapted in different ways, as set out below for some examples.
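By way of illustration, the signal flow of this feedback loop may be sketched as follows. This is a simplified sketch, not the claimed implementation: all names are hypothetical, a DCT-IV stands in for the MDCT, and windowing, lag/gain search and entropy coding are omitted.

    import numpy as np
    from scipy.fft import dct, idct
    from scipy.signal import lfilter

    def encode_frame(x, a, ltp_buffer, lag, gain, step=0.1):
        """One pass of the sketched loop; 'a' holds the LPC coefficients of A(z)."""
        residual = lfilter(a, [1.0], x)              # linear prediction unit: whiten input
        X = dct(residual, type=4, norm='ortho')      # transformation unit (MDCT stand-in)
        start = len(ltp_buffer) - lag                # reconstructed segment 'lag' samples back
        seg = ltp_buffer[start:start + len(x)]
        P = gain * dct(seg, type=4, norm='ortho')    # LTP estimate in the transform domain
        Q = np.round((X - P) / step)                 # combine, then quantize the difference
        recon = idct(Q * step + P, type=4, norm='ortho')  # feedback reconstruction
        return Q, np.concatenate([ltp_buffer, recon])     # extend the LTP buffer

    buf = np.zeros(512)
    x = np.sin(2 * np.pi * 0.01 * np.arange(256))
    Q, buf = encode_frame(x, a=[1.0, -0.9], ltp_buffer=buf, lag=300, gain=0.8)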
The adaptive filter for filtering the input signal is preferably based on a Linear Prediction Coding (LPC) analysis including a LPC filter producing a whitened input signal. LPC parameters for the present frame of input data may be determined by algorithms known in the art. A LPC parameter estimation unit may calculate, for the frame of input data, any suitable LPC parameter representation such as polynomials, transfer functions, reflection coefficients, line spectral frequencies, etc. The particular type of LPC parameter representation that is used for coding or other processing depends on the respective requirements. As is known to the skilled person, some representations are more suited for certain operations than others and are therefore preferred for carrying out these operations. The linear prediction unit may operate on a first frame length that is fixed, e.g. 20 msec. The linear prediction filtering may further operate on a warped frequency axis to selectively emphasize certain frequency ranges, such as low frequencies, over other frequencies.
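By way of example only, one such known algorithm is the autocorrelation method with the Levinson-Durbin recursion; the sketch below (hypothetical interface, fixed order) yields both the polynomial coefficients and the reflection coefficients, the first of which reappears below in connection with the spectral tilt.

    import numpy as np

    def levinson_durbin(x, order=10):
        """Autocorrelation-method LPC; returns A(z) coefficients and reflection coefficients."""
        r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order]
        a = np.zeros(order + 1)
        a[0], err, refl = 1.0, r[0], []
        for i in range(1, order + 1):
            k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
            refl.append(k)
            a[1:i + 1] += k * np.concatenate([a[i - 1:0:-1], [1.0]])
            err *= 1.0 - k * k
        return a, refl    # A(z) = 1 + a[1] z^-1 + ... + a[order] z^-order

For a 20 msec frame at 16 kHz sampling rate, a call such as levinson_durbin(frame, order=16) would be typical; the returned coefficients can then be converted to line spectral frequencies or another representation as the respective operation requires.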
The transformation applied to the frame of the filtered input signal is preferably a Modified Discrete Cosine Transform (MDCT) operating on a variable second frame length. The audio coding system may comprise a window sequence control unit determining, for a block of the input signal, the frame lengths for overlapping MDCT windows by minimizing a coding cost function, preferably a simplistic perceptual entropy, for the entire input signal block including several frames. Thus, an optimal segmentation of the input signal block into MDCT windows having respective second frame lengths is derived. In consequence, a transform domain coding structure is proposed, including speech coder elements, with an adaptive length MDCT frame as the only basic unit for all processing except the LPC. As the MDCT frame lengths can take on many different values, an optimal sequence can be found and abrupt frame size changes can be avoided, as are common in prior art where only a small window size and a large window size are applied. In addition, transitional transform windows having sharp edges, as used in some prior art approaches for the transition between small and large window sizes, are not necessary.
Preferably, consecutive MDCT window lengths change at most by a factor of two (2) and/or the MDCT window lengths are dyadic values. More particularly, the MDCT window lengths may be dyadic partitions of the input signal block. The MDCT window sequence is therefore limited to predetermined sequences which are easy to encode with a small number of bits. In addition, the window sequence has smooth transitions of frame sizes, thereby excluding abrupt frame size changes.
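An optimal dyadic segmentation of this kind can be found by a simple recursive search, sketched below under the assumption of a hypothetical per-window cost function; for brevity the sketch enforces only the dyadic structure, while the factor-of-two constraint between neighbors would additionally be checked when concatenating candidate sequences.

    def best_segmentation(cost, start, length, min_len=128):
        """Return (total_cost, window_lengths) over dyadic splits of one interval.
        'cost(start, length)' is a hypothetical coding-cost estimate, e.g. a
        simplistic perceptual entropy, for a single window of that size."""
        whole = (cost(start, length), [length])
        if length // 2 < min_len:
            return whole                          # cannot split further
        left = best_segmentation(cost, start, length // 2, min_len)
        right = best_segmentation(cost, start + length // 2, length // 2, min_len)
        split = (left[0] + right[0], left[1] + right[1])
        return min(whole, split, key=lambda c: c[0])

    toy_cost = lambda s, n: 0.9 * n + 40          # toy model: per-sample entropy plus overhead
    total, windows = best_segmentation(toy_cost, 0, 2048)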
A window sequence encoder for jointly encoding MDCT window lengths and window shapes in a window sequence may be provided. A joint encoding may remove redundancy and require fewer bits. The window sequence encoder may consider window size constraints when encoding the window lengths and shapes of a window sequence so as to omit unnecessary information (bits) that can be reconstructed in the decoder.
The window sequence control unit may be further configured to consider long term prediction estimations, generated by the long term prediction unit, for window length candidates when searching for the sequence of MDCT window lengths that minimizes the coding cost function for the input signal block. In this embodiment, the long term prediction loop is closed when determining the MDCT window lengths which results in an improved sequence of MDCT windows applied for encoding.
Further, a time warp unit for uniformly aligning a pitch component in the frame of the filtered signal by resampling the filtered input signal according to a time-warp curve may be provided. The time-warp curve is preferably determined so as to uniformly align the pitch components in the frame. Thus, the transformation unit and/or the long term prediction unit may operate on time-warped signals having constant pitch, which improves the accuracy of the signal analysis.
The audio coding system may further comprise a LPC encoder for recursively coding, at a variable rate, line spectral frequencies or other appropriate LPC parameter representations generated by the linear prediction unit for storage and/or transmission to a decoder. According to an embodiment, a linear prediction interpolation unit is provided to interpolate linear prediction parameters generated on a rate corresponding to the first frame length so as to match the variable frame lengths of the transform domain signal.
According to an aspect of the invention, the audio coding system may comprise a perceptual modeling unit that modifies a characteristic of the adaptive filter by chirping and/or tilting a LPC polynomial generated by the linear prediction unit for a LPC frame. The perceptual model obtained by the modification of the adaptive filter characteristics may be used for many purposes in the system. For instance, it may be applied as perceptual weighting function in quantization or long term prediction.
Another independent aspect of the invention relates to extending the bandwidth of an audio encoder by providing separate means for encoding a highband component of the input signal. According to an embodiment, a highband encoder for encoding the highband component of the input signal is provided. Preferably, the highband encoder is a spectral band replication (SBR) encoder. The separate coding of the highband with the highband encoder allows the quantization unit to apply different quantization steps to components of the transform domain signal belonging to the highband than to components belonging to a lowband of the input signal. More particularly, the quantizer may apply a coarser quantization to the highband signal component that is also encoded by the highband encoder, which reduces the bit rate.
According to another embodiment, a frequency splitting unit for splitting the input signal into the lowband component and the highband component is provided. The highband component is then encoded by the highband encoder, and the lowband component is input to the linear prediction unit and encoded by the above proposed transform encoder. Preferably, the frequency splitting unit comprises a quadrature mirror filter bank and a quadrature mirror filter synthesis unit configured to downsample the input signal that is to be input to the linear prediction unit. The signal from the quadrature mirror filter bank may be input directly to the highband encoder. This is particularly useful when the highband encoder is a spectral band replication encoder that can be fed directly by the quadrature mirror filter bank signal. In addition, the combination of quadrature mirror filter bank and quadrature mirror filter synthesis unit serves as a high-quality downsampler for the lowband component.
The boundary between the lowband and the highband may be variable and the frequency splitting unit may dynamically determine the cross-over frequency between the lowband and the highband. This allows an adaptive frequency allocation, e.g. based on input signal properties and/or encoder bandwidth requirements.
According to another aspect, the audio coding system may comprise a second quadrature mirror filter synthesis unit that transfers the highband component into a low-pass signal. This downmodulated high frequency range can then be encoded by a second transform-based encoder, possibly with a lower resolution, i.e. larger quantization steps. This is particularly useful when the high frequency band is further encoded by other means as well, e.g. a spectral band replication encoder. Then, a combination of both ways to encode the high frequency band may be more efficient.
Different signal representations covering the same frequency range may be combined by a signal representation combination unit that exploits correlations in the signal representations in order to reduce the necessary bit rate. The signal representation combination unit may further generate signaling data indicating how the signal representations are combined. This signaling data may be stored or transmitted to the decoder for reconstructing the encoded audio signal from the different signal representations.
A spectral band replication unit may further be provided in the long term prediction unit for introducing energy into the high frequency components of the long term prediction estimations. This serves to improve the efficiency of the long term prediction.
According to an embodiment, a stereo signal having left and right input channels is input to a parametric stereo unit for calculating a parametric stereo representation of the stereo signal including a mono representation of the input signal. The mono representation may then be input to the LPC analysis unit and the subsequent transformation coder as proposed above. Thus, an efficient means to encode the stereo signal is obtained where essentially only the mono representation is waveform coded and the stereo effect is achieved with the low bit rate parametric stereo representation.
Further enhancements of the quality of the coded signal relate to the usage of a harmonic prediction analysis unit for predicting harmonic signal components in the frequency/MDCT-domain.
Another independent encoder specific aspect of the invention relates to bit reservoir handling for variable frame sizes. In an audio coding system that can code frames of variable length, the bit reservoir is controlled by distributing the available bits among the frames. Given a reasonable difficulty measure for the individual frames and a bit reservoir of a defined size, a certain deviation from a required constant bit rate allows for a better overall quality without a violation of the buffer requirements that are imposed by the bit reservoir size. The present invention extends the concept of using a bit reservoir to a bit reservoir control for a generalized audio codec with variable frame sizes. An audio coding system may therefore comprise a bit reservoir control unit for determining the number of bits granted to encode a frame of the filtered signal based on the length of the frame and a difficulty measure of the frame. Preferably, the bit reservoir control unit has separate control equations for different frame difficulty measures and/or different frame sizes. Difficulty measures for different frame sizes may be normalized so they can be compared more easily. In order to control the bit allocation for a variable rate encoder, the bit reservoir control unit preferably sets the lower allowed limit of the granted bit control algorithm to the average number of bits for the largest allowed frame size.
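A minimal sketch of such a control may make this concrete; the control equation and its constants below are purely illustrative assumptions, not the claimed algorithm.

    def granted_bits(reservoir, frame_len, difficulty, bitrate, fs,
                     max_frame_len, reservoir_size):
        """Bits granted for one frame (sketch). 'difficulty' is normalized so
        that 1.0 denotes an average frame; constants are illustrative only."""
        avg = bitrate * frame_len / fs             # mean bits for this frame length
        avg_max = bitrate * max_frame_len / fs     # mean bits for the largest frame
        fullness = reservoir / reservoir_size      # 0 = empty, 1 = full
        grant = avg * difficulty * (0.5 + fullness)     # one control equation per size
        grant = min(grant, reservoir - avg_max + avg)   # lower limit: reservoir never
        return max(int(grant), 0)                       # drops below avg_max after refill

After encoding, the reservoir would be refilled at the constant rate, e.g. reservoir = min(reservoir - used + avg, reservoir_size), mirroring the buffer update described further below.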
The present invention further relates to the aspect of quantizing MDCT lines in a transform encoder. This aspect is applicable independently of whether the encoder uses an LPC analysis or a long term prediction. The proposed quantization strategy is conditioned on input signal characteristics, e.g. the transform frame size. It is suggested that the quantization unit may decide, based on the frame size applied by the transformation unit, to encode the transform domain signal with a model-based quantizer or a non-model-based quantizer. Preferably, the quantization unit is configured to encode a transform domain signal for a frame with a frame size smaller than a threshold value by means of a model-based entropy constrained quantization. The model-based quantization may be conditioned on assorted parameters. Large frames may be quantized, e.g., by a scalar quantizer with Huffman-based entropy coding, as is used in the AAC codec.
The switching between different quantization methods for the MDCT lines is another aspect of a preferred embodiment of the invention. By employing different quantization strategies for different transform sizes, the codec can do all the quantization and coding in the MDCT-domain without needing a specific time domain speech coder running in parallel or serial to the transform domain codec. The present invention teaches that for speech-like signals, where there is an LTP gain, the signal is preferably coded using a short transform and a model-based quantizer. The model-based quantizer is particularly suited for the short transform, and gives, as will be outlined later, the advantages of a time-domain speech specific vector quantizer (VQ), while still being operated in the MDCT-domain, and without any requirement that the input signal is a speech signal. In other words, when the model-based quantizer is used for the short transform segments in combination with the LTP, the efficiency of the dedicated time-domain speech coder VQ is retained without loss of generality and without leaving the MDCT-domain.
In addition, for more stationary music signals, it is preferred to use a transform of relatively large size, as is commonly used in audio codecs, and a quantization scheme that can take advantage of sparse spectral lines discriminated by the large transform. Therefore, the present invention teaches to use this kind of quantization scheme for long transforms.
Thus, the switching of quantization strategy as a function of frame size enables the codec to retain both the properties of a dedicated speech codec, and the properties of a dedicated audio codec, simply by choice of transform size. This avoids all the problems in prior art systems that strive to handle speech and audio signals equally well at low rates, since these systems inevitably run into the problems and difficulties of efficiently combining time-domain coding (the speech coder) with frequency domain coding (the audio coder).
According to another aspect of the invention, the quantization uses adaptive step sizes. Preferably, the quantization step size(s) for components of the transform domain signal is/are adapted based on linear prediction and/or long term prediction parameters. The quantization step size(s) may further be configured to be frequency dependent. In embodiments of the invention, the quantization step size is determined based on at least one of: the polynomial of the adaptive filter, a coding rate control parameter, a long term prediction gain value, and an input signal variance.
Another aspect of the invention relates to long term prediction (LTP), in particular to long term prediction in the MDCT-domain, MDCT frame adapted LTP and MDCT weighted LTP search. These aspects are applicable irrespective of whether an LPC analysis is present upstream of the transform coder.
According to an embodiment, the long term prediction unit comprises a long term prediction extractor for determining a lag value specifying the reconstructed segment of the filtered signal that best fits the current frame of the filtered signal. A long term prediction gain estimator may estimate a gain value applied to the signal of the selected segment of the filtered signal. Preferably, the lag value and the gain value are determined so as to minimize a distortion criterion relating to the difference, in a perceptual domain, between the long term prediction estimation and the transformed input signal. Preferably, the distortion criterion is minimized by searching the lag value and the gain value in the perceptual domain. A modified linear prediction polynomial may be applied as MDCT-domain equalization gain curve when minimizing the distortion criterion.
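The lag and gain search may be sketched as follows, assuming the perceptual weighting is supplied as an MDCT-domain equalization gain curve w; the names and the exhaustive search are illustrative, and only lags of at least one frame length are probed (shorter lags require the virtual vectors described further below).

    import numpy as np
    from scipy.fft import dct

    def ltp_search(X, ltp_buffer, frame_len, w, lags):
        """Return (lag, gain) minimizing the perceptually weighted MDCT error.
        X: MDCT of the current frame; w: perceptual equalization gain curve."""
        best_lag, best_gain, best_err = None, 0.0, np.sum((w * X) ** 2)
        for lag in lags:                              # lags >= frame_len only
            start = len(ltp_buffer) - lag
            seg = ltp_buffer[start:start + frame_len]
            P = dct(seg, type=4, norm='ortho')        # candidate in the transform domain
            den = np.dot(w * P, w * P)
            if den <= 0.0:
                continue
            g = np.dot(w * X, w * P) / den            # optimal gain for this lag
            err = np.sum((w * (X - g * P)) ** 2)      # weighted distortion criterion
            if err < best_err:
                best_lag, best_gain, best_err = lag, g, err
        return best_lag, best_gain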
The long term prediction unit may comprise a transformation unit for transforming the reconstructed signal of segments from the LTP buffer into the transform domain. For an efficient implementation of a MDCT transformation, the transformation is preferably a type-IV Discrete-Cosine Transformation.
Virtual vectors may be used to generate an extended segment of the reconstructed signal when a lag value is smaller than the MDCT frame length. The virtual vectors are preferably generated by an iterative fold-in fold-out procedure to refine the generated segment of the reconstructed signal. Thus, not yet existing segments of the reconstructed signal are generated during the lag search procedure of the long term prediction.
The reconstructed signal in the long term prediction buffer may be resampled based on a time-warp curve when the transformation unit is operating on time-warped signals. This allows a time-warped LTP extraction matching a time-warped MDCT.
According to an embodiment, a variable rate encoder to encode the long term prediction lag and gain values may be provided to achieve low bit rates. Further, the long term prediction unit may comprise a noise vector buffer and/or a pulse vector buffer to enhance the prediction accuracy, e.g., for noisy or transient signals.
A joint coding unit to jointly encode pitch related information, such as long term prediction parameters, harmonic prediction parameters and time-warp parameters, may be provided. The joint encoding can further reduce the necessary bit rate by exploiting correlations in these parameters.
Another aspect of the invention relates to an audio decoder for decoding the bitstream generated by embodiments of the above encoder. The audio decoder comprises a de-quantization unit for de-quantizing a frame of the input bitstream; an inverse transformation unit for inverse transforming a transform domain signal; a long term prediction unit for determining an estimation of the de-quantized frame; a transform domain signal combination unit for combining, in the transform domain, the long term prediction estimation and the de-quantized frame to generate the transform domain signal; and a linear prediction unit for filtering the inverse transformed transform domain signal.
In addition, the decoder may comprise many of the aspects as disclosed above for the encoder. In general, the decoder will mirror the operations of the encoder, although some operations are only performed in the encoder and will have no corresponding components in the decoder. Thus, what is disclosed for the encoder is considered to be applicable for the decoder as well, if not stated otherwise.
The above aspects of the invention may be implemented as a device, apparatus, method, or computer program operating on a programmable device. The inventive aspects may further be embodied in signals, data structures and bitstreams.
Thus, the application further discloses an audio encoding method and an audio decoding method. An exemplary audio encoding method comprises the steps of: filtering an input signal based on an adaptive filter; transforming a frame of the filtered input signal into a transform domain; quantizing a transform domain signal; estimating the frame of the filtered input signal based on a reconstruction of a previous segment of the filtered input signal; and combining, in the transform domain, the long term prediction estimation and the transformed input signal to generate the transform domain signal.
An exemplary audio decoding method comprises the steps of: de-quantizing a frame of an input bitstream; inverse transforming a transform domain signal; determining an estimation of the de-quantized frame; combining, in the transform domain, the long term prediction estimation and the de-quantized frame to generate the transform domain signal; filtering the inversely transformed transform domain signal; and outputting a reconstructed audio signal.
These are only examples of preferred audio encoding/decoding methods and computer programs that are taught by the present application and that a person skilled in the art can derive from the following description of exemplary embodiments.
The present invention will now be described by way of illustrative examples, not limiting the scope or spirit of the invention, with reference to the accompanying drawings, in which:
a is another illustration of aspects of the encoder according to an embodiment of the invention;
a illustrates how a delta-curve is derived from LPC and LTP parameters by means of a delta-adapt module;
a illustrates another preferred embodiment of a model-based quantizer according to the present invention;
a illustrates schematically adjacent windows of an MDCT transform in an embodiment of the present invention;
b illustrates an embodiment of the present invention using four different MDCT window shapes;
c describes an example of the window sequence encoding method according to an embodiment of the present invention;
a illustrates an MDCT adapted LTP extraction process;
b illustrates an iterative refinement of an initial LTP extracted signal;
c illustrates an alternative implementation of a refinement unit;
d illustrates another alternative implementation of a refinement unit;
a illustrates the basic concept of a bit reservoir control;
b illustrates the concept of a bit reservoir control for variable frame sizes, according to the present invention;
a illustrates the effects of time-warped MDCT analysis;
The below-described embodiments are merely illustrative of the principles of the present invention for an audio encoder and decoder. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the accompanying patent claims and not by the specific details presented by way of description and explanation of the embodiments herein. Similar components of embodiments are numbered by similar reference numbers.
In
In
An important aspect of the above embodiment is that the MDCT frame is the only basic unit for coding, although the LPC has its own (and in one embodiment constant) frame size and LPC parameters are coded, too. The embodiment starts from a transform coder and introduces fundamental prediction and shaping modules from a speech coder. As will be discussed later, the MDCT frame size is variable and is adapted to a block of the input signal by determining the optimal MDCT window sequence for the entire block by minimizing a simplistic perceptual entropy cost function. This allows scaling to maintain optimal time/frequency control. Further, the proposed unified structure avoids switched or layered combinations of different coding paradigms.
In
In
In
In
Frequency-warped LPC is based on non-uniform sampling of the frequency axis to allow frequency selective control of LPC error contributions when determining the LPC filter parameters. While normal LPC is based on minimizing the MSE over a linear frequency axis so that the LPC polynomial is mostly accurate in the areas of spectral peaks, frequency-warped LPC allows a frequency selective focus when determining the LPC filter parameters. For instance, when operating on a higher bandwidth such as 16 or 24 kHz sampling rate, warping the frequency axis allows focusing the accuracy of the LPC polynomial on the lower frequency band such as frequencies up to 4 kHz.
In
In
SBR (Spectral Band Replication) provides an efficient way of coding the high frequency part of a spectrum. It recreates the high frequencies of an audio signal from the low frequencies and a small amount of additional control information. Since the SBR method enables a reduction of the core coder bandwidth, and the SBR technique requires a significantly lower bit rate to code the high frequency range than a wave-form coder would, a coding gain can be achieved by reducing the bit rate allocated to the wave-form core coder while maintaining full audio bandwidth. Naturally, this gives the possibility to almost continuously decrease the total data rate by lowering the crossover frequency between the core coder and the SBR part.
A perceptual audio coder may reduce bit rate by shaping the quantization noise so that it is always masked by the signal. This leads to a rather low signal to noise ratio, but as long as the quantization noise is put below the masking curve this does not matter. The distortion that the quantization represents is inaudible. However, when operated at low bit rates, the masking threshold will be violated, and the distortion becomes audible. One method that a perceptual audio coder can employ is to low pass filter the signal, i.e. only coding parts of the spectrum, since there is simply not enough bits to code the entire frequency range of the signal. For this situation, the SBR algorithm is very beneficial since it enables full audio bandwidth at low bit rates.
The SBR decoding concept comprises the following aspects:
In
The two new signals, representing the stereo signal in combination with the extracted parameters, may subsequently be input, e.g., to the QMF synthesis modules and SBR modules as outlined in
In more detail, the PS module compares the two input signals (left and right) for corresponding time/frequency tiles. The frequency bands of the tiles are designed to approximate a psycho-acoustically motivated scale, while the length of the segments is closely matched to known limitations of the binaural hearing system. Essentially, three parameters are extracted per time/frequency tile, representing the perceptually most important spatial properties:
Subsequent to parameter extraction, the input signals are downmixed to form a mono signal. The downmix can be made by a trivial summing process, but preferably more advanced methods incorporating time alignment and energy preservation techniques are used to avoid potential phase cancellation in the downmix. On the decoder side, a PS decoding module is provided that basically comprises the reverse process of the corresponding encoder and reconstructs stereo output signals based on the PS parameters.
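A downmix with a simple energy-preservation gain might, for instance, look as follows; this is a sketch only, and a real implementation would additionally time-align the channels and bound the gain per frequency band.

    import numpy as np

    def downmix(left, right, eps=1e-12):
        """Mono downmix with an energy-preserving gain (sketch). A plain average
        can lose energy through phase cancellation; the gain restores the mean
        channel energy, capped for nearly cancelling frames."""
        m = 0.5 * (left + right)
        target = 0.5 * (np.sum(left ** 2) + np.sum(right ** 2))
        g = np.sqrt(target / (np.sum(m ** 2) + eps))
        return min(g, 4.0) * m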
In
In addition to the earlier examples, the high subband samples may also be input to a QMF synthesis module 920 that synthesizes the higher frequency range to a low-pass signal, thus containing a down-modulated high frequency range. This signal is subsequently coded by an additional MDCT-based coder 930. The output from the additional MDCT-based coder 930 may be combined with the SBR encoder output in an optional combination unit 940. Signaling is generated and sent to the decoder indicating which part is coded with SBR, and which part is coded with the MDCT-based wave-form coder. This enables a smooth transition from SBR encoding to wave-form coding. Further, freedom of choice with regards to transform sizes used in the MDCT coding for the lower frequencies and the higher frequencies is enabled, since they are coded with separate MDCT transforms.
In
In
The decoder according to the embodiment reads the provided bitstream and produces an audio output signal, psycho-acoustically resembling the original signal.
a is another illustration of aspects of an encoder 1100 according to an embodiment of the invention. The encoder 1100 comprises an LPC module 1101, a MDCT module 1104, a LTP module 1105 (shown only simplified), a quantization module 1103 and an inverse quantization module 1104 for feeding back reconstructed signals to the LTP module 1105. Further provided are a pitch estimation module 1150 for estimating the pitch of the input signal, and a window sequence determination module 1151 for determining the optimal MDCT window sequence for a larger block of the input signal (e.g. 1 second). In this embodiment, the MDCT window sequence is determined based on an open-loop approach where a sequence of MDCT window size candidates is determined that minimizes a coding cost function, e.g. a simplistic perceptual entropy. The contribution of the LTP module 1105 to the coding cost function that is minimized by the window sequence determination module 1151 may optionally be considered when searching for the optimal MDCT window sequence. Preferably, for each evaluated window size candidate, the best long term prediction contribution to the MDCT frame corresponding to the window size candidate is determined, and the respective coding cost is estimated. In general, short MDCT frame sizes are more appropriate for speech input while long transform windows having a fine spectral resolution are preferred for audio signals.
Perceptual weights or a perceptual weighting function are determined based on the LPC parameters as calculated by the LPC module 1101, which will be explained in more detail below. The perceptual weights are supplied to the LTP module 1105 and the quantization module 1103, both operating in the MDCT-domain, for weighting error or distortion contributions of frequency components according to their respective perceptual importance.
Next, the coexistence of LPC and MDCT data and the emulation of the effect of the LPC in the MDCT, both for counteraction and actual filtering omission, will be discussed.
According to an embodiment, the LP module filters the input signal so that the spectral shape of the signal is removed, and the subsequent output of the LP module is a spectrally flat signal. This is advantageous for the operation of, e.g., the LTP. However, other parts of the codec operating on the spectrally flat signal may benefit from knowing what the spectral shape of the original signal was prior to LP filtering. Since the encoder modules, after the filtering, operate on the MDCT transform of the spectrally flat signal, the present invention teaches that the spectral shape of the original signal prior to LP filtering can, if needed, be re-imposed on the MDCT representation of the spectrally flat signal by mapping the transfer function of the used LP filter (i.e. the spectral envelope of the original signal) to a gain curve, or equalization curve, that is applied on the frequency bins of the MDCT representation of the spectrally flat signal. Conversely, the LP module can omit the actual filtering, and only estimate a transfer function that is subsequently mapped to a gain curve which can be imposed on the MDCT representation of the signal, thus removing the need for time domain filtering of the input signal.
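A sketch of this mapping evaluates the magnitude response of 1/A(z) at the MDCT bin center frequencies; the resulting curve can be multiplied onto the MDCT lines of the spectrally flat signal to re-impose the envelope. Names are hypothetical.

    import numpy as np

    def lpc_to_gain_curve(a, n_bins):
        """Equalization curve |1/A(e^{jw_k})| at the MDCT bin centers
        w_k = pi * (k + 0.5) / n_bins (sketch)."""
        w = np.pi * (np.arange(n_bins) + 0.5) / n_bins
        m = np.arange(len(a))
        A = np.exp(-1j * np.outer(w, m)) @ np.asarray(a, dtype=float)
        return 1.0 / np.abs(A)    # spectral envelope of the original signal

    # re-impose the envelope:
    # shaped_lines = mdct_lines * lpc_to_gain_curve(a, len(mdct_lines))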
One prominent aspect of embodiments of the present invention is that an MDCT-based transform coder is operated using a flexible window segmentation, on a LPC whitened signal. This is outlined in
The coexistence of LPC and MDCT data in the encoder may be exploited, for instance, to reduce the bit requirements of encoding MDCT scalefactors by taking into account a perceptual masking curve estimated from the LPC parameters. Furthermore, LPC derived perceptual weighting may be used when determining quantization distortion. As illustrated and as will be discussed below, the quantizer operates in two modes and generates two types of frames (ECQ frames and AAC frames) depending on the frame size of received data, i.e. corresponding to the MDCT frame or window size.
Now, specifics of the LPC-based perceptual model are discussed by referring to
The MDCT coding operating on the LPC residual has, in one implementation of the invention, scalefactors to control the resolution of the quantizer or the quantization step sizes (and, thus, the noise introduced by quantization). These scalefactors are estimated by a scalefactor estimation module 1360 on the original input signal. For example, the scalefactors are derived from a perceptual masking threshold curve estimated from the original signal. In an embodiment, a separate frequency transform (having possibly a different frequency resolution) may be used to determine the masking threshold curve, but this is not always necessary. Alternatively, the masking threshold curve is estimated from the MDCT lines generated by the transformation module. The bottom right part of
If a LPC filter is connected upstream of the MDCT transformation module, a whitened signal is transformed to the MDCT-domain. As this signal has a white spectrum, it is not well suited to derive a perceptual masking curve from it. Thus, a MDCT-domain equalization gain curve generated to compensate the whitening of the spectrum may be used when estimating the masking threshold curve and/or the scalefactors. This is because the scalefactors need to be estimated on a signal that has absolute spectrum properties of the original signal, in order to correctly estimate perceptually masking.
The calculation of the MDCT-domain equalization gain curve from the LPC polynomial is discussed in more detail with reference to
Using the above outlined approach, the data transmitted between the encoder and decoder contains both the LP polynomial from which the relevant perceptual information as well as a signal model can be derived when a model-based quantizer is used, and the scalefactors commonly used in a transform codec.
In more detail, returning to
Normally, the scalefactors are transmitted to the decoder, and so is the LP polynomial. Now, given that they are both estimated from the original input signal and that they both are somewhat correlated to the absolute spectrum properties of the original input signal, it is proposed to code a delta representation between the two, in order to remove any redundancy that may occur if both were transmitted separately. According to an embodiment, this correlation is exploited as follows. Since the LPC polynomial, when correctly chirped and tilted, strives to represent a masking threshold curve, the two representations may be combined so that the transmitted scalefactors of the transform coder represent the difference between the desired scalefactors and those that can be derived from the transmitted LPC polynomial. The scalefactor adaptation module 1361 shown in
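A sketch of this delta coding (hypothetical names, scalefactors expressed in dB) may read:

    import numpy as np

    def scalefactor_deltas(desired_sf_db, lpc_curve):
        """Transmit only the difference between the desired scalefactors and
        those implied by the LPC-derived masking curve (sketch)."""
        lpc_sf_db = 20.0 * np.log10(lpc_curve)
        return np.round(desired_sf_db - lpc_sf_db).astype(int)

    def reconstruct_scalefactors(deltas, lpc_curve):
        """Decoder side: add the received deltas back onto the LPC-derived curve."""
        return 20.0 * np.log10(lpc_curve) + deltas

The deltas are typically small and would, in practice, be entropy coded rather than transmitted as raw integers.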
In the following, the quantization strategy conditioned on frame-size, and the model-based quantization conditioned on assorted parameters according to an embodiment of the invention will be explained. One aspect of the present invention is that it utilizes different quantization strategies for different transform sizes or frame sizes. This is illustrated in
According to an independent aspect of the present invention, it is suggested to switch between different quantization strategies as a function of frame size in order to be able to use the optimal quantization strategy given a particular frame size. As an example, the window-sequence may dictate the usage of a long transform for a very stationary tonal music segment of the signal. For this particular signal type, using a long transform, it is highly beneficial to employ a quantization strategy that can take advantage of the “sparse” character (i.e. well-defined discrete tones) of the signal spectrum.
A quantization method as used in AAC in combination with Huffman tables and grouping of spectral lines, also as used in AAC, is very beneficial. However, and on the contrary, for speech segments, the window-sequence may, given the coding gain of the LTP, dictate the usage of short transforms. For this signal type and transform size it is beneficial to employ a quantization strategy that does not try to find or introduce sparseness in the spectrum, but instead maintains a broadband energy that, given the LTP, will retain the pulse like character of the original input signal.
A more general visualization of this concept is given in
According to another aspect of the invention, the quantizer step size is adapted as function of LPC and/or LTP data. This allows a determination of the step size depending on the difficulty of a frame and controls the number of bits that are allocated for encoding the frame. In
A preferred perceptual weighting function derived from LPC data is given in the following equation:

W(z) = A(z/ρ) · (1 − τ·r1·z⁻¹)

where A(z) is the LPC polynomial, τ is a tilting parameter, ρ controls the chirping and r1 is the first reflection coefficient calculated from the A(z) polynomial. It is to be noted that the A(z) polynomial can be re-calculated to an assortment of different representations in order to extract relevant information from the polynomial. If one is interested in the spectral slope, in order to apply a “tilt” to counter the slope of the spectrum, re-calculation of the polynomial to reflection coefficients is preferred, since the first reflection coefficient represents the slope of the spectrum.
In addition, the delta values Δ may be adapted as a function of the input signal variance σ, the LTP gain g, and the first reflection coefficient r1 derived from the prediction polynomial. For instance, the adaptation may be based on the following equation:
Δ′ = Δ(1 + r1(1 − g²))
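In a sketch, chirping A(z) into A(z/ρ) scales the m-th coefficient by ρ^m, and the tilt is a first-order filter; the step-size adaptation follows the equation above. The composition of the weighting follows the form shown above, and the names are hypothetical.

    import numpy as np

    def perceptual_weighting(a, rho, tau, r1):
        """Coefficients of W(z) = A(z/rho) * (1 - tau * r1 * z^-1) (sketch)."""
        chirped = np.asarray(a, dtype=float) * rho ** np.arange(len(a))  # A(z/rho)
        tilt = np.array([1.0, -tau * r1])       # first-order tilt against the slope
        return np.convolve(chirped, tilt)

    def adapted_step(delta, r1, g):
        """Step-size adaptation: delta' = delta * (1 + r1 * (1 - g^2))."""
        return delta * (1.0 + r1 * (1.0 - g * g))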
In the following, aspects of model-based quantizers according to an embodiment of the present invention are outlined. In
A local gain of the MDCT lines may be estimated as the RMS value of the MDCT lines, and the MDCT lines normalized in gain normalization module 2120 before input to the MBMLQ encoder 2100. The local gain normalizes the MDCT lines and is a complement to the LP gain normalization. Whereas the LP gain adapts to variations in signal level on a larger time scale, the local gain adapts to variations on a smaller time scale, yielding improved quality of transient sounds and on-sets in speech. The local gain is encoded by fixed rate or variable rate coding and transmitted to the decoder.
A rate control module 2110 may be employed to control the number of bits used to encode an MDCT frame. A rate control index controls the number of bits used. The rate control index points into a list of nominal quantizer step sizes. The table may be sorted with step sizes in descending order.
The MBMLQ encoder is run with a set of different rate control indices, and the rate control index that yields a bit count which is lower than the number of granted bits given by the bit reservoir control is used for the frame. The rate control index varies slowly and this can be exploited to reduce search complexity and to encode the index efficiently. The set of indices that is tested can be reduced if testing is started around the index of the previous MDCT frame. Likewise, efficient entropy coding of the index is obtained if the probabilities peak around the previous value of the index. E.g., for a list of 32 step sizes, the rate control index can be coded using 2 bits per MDCT frame on the average.
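A sketch of this search, exploiting the slowly varying index, might read as follows; the bit_count callback, which would run the MBMLQ encoder once per probe, and all other names are hypothetical.

    def choose_rate_index(bit_count, step_sizes, granted, prev_index):
        """Walk from the previous frame's index: take finer steps while the
        grant still holds, coarser steps otherwise (sketch). 'step_sizes' is
        sorted in descending order, so a higher index means a smaller step
        and a higher bit count."""
        i = prev_index
        if bit_count(step_sizes[i]) <= granted:
            while i + 1 < len(step_sizes) and bit_count(step_sizes[i + 1]) <= granted:
                i += 1          # a finer step still fits: spend the granted bits
        else:
            while i > 0 and bit_count(step_sizes[i]) > granted:
                i -= 1          # over budget: fall back to a coarser step
        return i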
a illustrates schematically the model-based entropy constrained encoder 2140 in more detail. The input MDCT lines are perceptually weighted by dividing them by the values of the perceptual masking curve, preferably derived from the LPC polynomial, resulting in the weighted MDCT line vector y = (y1, . . . , yN). The aim of the subsequent coding is to introduce white quantization noise to the MDCT lines in the perceptual domain. In the decoder, the inverse of the perceptual weighting is applied, which results in quantization noise that follows the perceptual masking curve.
Random offsets were discussed previously in the context of the quantizer as means for avoiding spectral holes due to coarse quantization. An additional method for avoiding spectral holes is to incorporate an SBR module 2212 in the LTP loop, as outlined in
In
In one embodiment of the present invention the SBR module in the LTP loop is a simple copy-up mechanism (i.e. low frequency lines are copied to high frequency lines). In another embodiment a harmonic high frequency regeneration module is used. It should be noted that for harmonic signals, an SBR module that creates a high frequency spectrum harmonically related to the lowband spectrum is preferred, since the high frequencies subtracted from the input signal prior to quantization may then coincide well with the original high frequencies and thus reduce the energy of the signal going into the quantizer, making it easier to quantize at a given bit rate requirement. In a third embodiment, the SBR module in the LTP loop can adapt the manner in which it re-creates the high frequencies depending on the transform size and thus, implicitly, the signal characteristics.
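The copy-up variant of the first embodiment might be sketched as follows; the split bin and gain are hypothetical parameters and would in practice follow the SBR control data.

    import numpy as np

    def copy_up(mdct_lines, split_bin, gain=0.5):
        """Simple copy-up SBR for the LTP loop (sketch): replicate lowband MDCT
        lines into the highband so the LTP estimate carries energy there too."""
        out = mdct_lines.copy()
        low = mdct_lines[:split_bin]
        n_high = len(mdct_lines) - split_bin
        reps = -(-n_high // split_bin)               # ceiling division
        out[split_bin:] = gain * np.tile(low, reps)[:n_high]
        return out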
The present invention further incorporates a new window sequence coding format. According to an embodiment of the invention, as visualized in
The hyper-frame structure is useful when operating the coder in a real-world system, where certain decoder configuration parameters need to be transmitted in order to be able to start the decoder. This data is commonly stored in a header field in the bitstream describing the coded audio signal. In order to minimize bitrate, the header is not transmitted for every frame of coded data, particularly in a system as proposed by the present invention, where the MDCT frame-sizes may vary from very short to very large. It is therefore proposed by the present invention to group a certain amount of MDCT frames together into a hyper frame, where the header data is transmitted at the beginning of the hyper frame. The hyper frame is typically defined as a specific length in time. Therefore, care needs to be taken so that the variations of MDCT frame-sizes fit into a constant length, pre-defined hyper frame length. The above outlined inventive window-sequence ensures that the selected window sequence always fits into a hyper-frame structure.
a shows a preferred compatibility requirement for adjacent windows of an MDCT transform, as given by MDCT theory. The left window accommodates a transform size L1 and the right window a transform size L2. The overlap between the windows is supported on a time interval of diameter, or duration, D. For the MDCT transform taught by an embodiment of the present invention, the transform sizes can either be equal, L1=L2 or differ in size by a factor of two, L1=2L2 or L2=2L1. The figure depicts the latter situation. Moreover, as another preferred constraint, the position of the transform size intervals must be obtained by a dyadic partition of a regular equidistant hyperframe sequence. That is, the transform interval positions must result from a succession of splitting intervals in halves, starting from a hyperframe interval. Even when the transform size intervals are given, there is some freedom left in choosing the overlap diameter D. According to an embodiment of the present invention, diameters D very much smaller than the neighboring transform sizes L1, L2 are avoided, since such sharp edges lead to poor frequency resolution of the resulting MDCT transforms.
b schematically illustrates an embodiment of the present invention using four different MDCT window shapes. The four shapes are denoted by
The MDCT windows used are re-scaled versions of these four window types, where the rescaling is by a factor equal to a power of two. The tick marks on the time axis in
c describes by an example the window sequence encoding method according to an embodiment of the present invention. The scale of the time axis is normalized to units of the smallest transform size. The hyperframe size is 16 of those units, and the left edge of the hyperframe defines the origin t=0 of the time scale. Also it is assumed for simplicity that the largest transform size allowed is 4 = 2^N with N = 2. The transform size intervals form a dyadic partition of the hyperframe interval [0,16], consisting of the 7 intervals [0,4], [4,6], [6,8], [8,9], [9,10], [10,12], [12,16] having lengths 4, 2, 2, 1, 1, 2, 4, respectively. As can be seen, these lengths obey the condition of at most changing size by a factor of two between neighbors. All 7 windows are obtained by rescaling of one of the four basic shapes of
Since transform sizes are kept, doubled, or halved, a first approach to encoding them recursively is to keep track of this choice with a ternary symbol along the window sequence. This would however lead to an overcoding of transform sizes and an ambiguous description of window shapes. The former arises because it is sometimes impossible to double the transform size, due to the requirement of using a dyadic partition.
For example, after the interval [4,6] a doubling would result in the interval [6,10] which is not a dyadic subinterval of [0,16]. The latter ambiguous description of window shape holds in the example of
Instead, the principle of coding according to an embodiment is as follows: For each window, a maximum of 2 bits is defined as follows
Stated differently, the mapping from the bit vector (b1, b2) to the window type of
However, if one of the bits can be deduced from either the constraint of dyadic transform intervals or the limits on transform size, then it is not transmitted.
Returning to the specific example of
but after using information available at both encoder and decoder it is reduced to
which is 9 bits for coding 7 windows.
It is apparent for those skilled in the art that a further reduction of bit rate can be achieved by entropy coding of these purely descriptive bits.
In
According to an embodiment, virtual LTP vectors in the MDCT-domain are used, as outlined in
This iterative process is illustrated in the following: From the LTP buffer, a first extraction of a signal is performed by the LTP extraction module 2512. The result of this first extraction is refined by the refinement module 2518, the purpose of which is to improve the quality of the LTP signal when the chosen lag T is smaller than the duration of the MDCT window of the frame to be coded. The iterative process to refine an LTP contribution for a time lag that is smaller than the analyzed frame is briefly outlined first by referring to
This iterative process is preferably done 2 to 4 times.
The MDCT adapted LTP extraction process is depicted in more detail in
Again, it is assumed that the window w(t) is zero outside the known range [t1-r1, t2+r2].
Another, but equivalent, view of the operations from x(t) to x2(t) is to perform the steps

(i) x̃2(t) = w(t + T) · x(t);

(ii) x2(t) = x̃2(t − T);
where step (i) amounts to a windowing with a window supported on (t1−r1−T, t2+r2−T) and step (ii) shifts the result by the LTP lag T.
For the depicted example, the values of the signs are (ε1, ε2)=(−1, 1) corresponding to a given implementation of the MDCT transform, other possibilities are (1,−1), (1,1) or (−1,−1).
The operations from x2(t) to x4(t) can also be combined into one operation of adding or subtracting mirror images of the signal parts on the intervals [t1−r1, t1+r1] and [t2−r2, t2+r2].
It is apparent for those skilled in the art that the combined operation from x1(t) to y(t) is equivalent to an MDCT analysis followed by an MDCT synthesis, and that this realizes an orthogonal projection onto the current MDCT frame subspace.
It is important to note that in the case of no overlap, that is r1=r2=0, nothing happens to x2(t) due to the operations in d) to f). The windowing then consists of a simple extraction of the signal x1(t) in the interval [t1, t2]. In this case the LTP extraction module 2512 performs exactly what a prior art LTP extractor would do.
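Under these conventions, steps (i)-(ii) together with the mirror-image fold at the two overlap regions might be sketched as follows; the names are hypothetical, the signs (eps1, eps2) follow the implementation-dependent choice above, and the buffer is assumed long enough that t1 − r1 − T ≥ 0.

    import numpy as np

    def ltp_extract(x, w, t1, r1, t2, r2, T, eps1=-1.0, eps2=1.0):
        """MDCT-adapted LTP extraction (sketch): window the buffer x with w
        shifted by the lag T, then add/subtract mirror images over the left
        overlap [t1-r1, t1+r1] and the right overlap [t2-r2, t2+r2]."""
        n = np.arange(t1 - r1, t2 + r2)
        x2 = w * x[n - T]                      # steps (i) and (ii) combined
        y = x2.copy()
        if r1 > 0:
            left = x2[:2 * r1]
            y[:2 * r1] += eps1 * left[::-1]    # fold at the left edge, sign eps1
        if r2 > 0:
            right = x2[-2 * r2:]
            y[-2 * r2:] += eps2 * right[::-1]  # fold at the right edge, sign eps2
        return y                               # r1 = r2 = 0: plain extraction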
c illustrates the iterative refinement of an initial LTP extracted signal y1(t). It consists of applying the LTP extract operation N−1 times and adding the results to the initial signal. If S denotes the LTP extract operation, the iteration is defined by the formulas

Δ0 = y1;

Δk = S(Δk−1), k = 1, . . . , N−1;

yk = yk−1 + Δk−1, k = 2, . . . , N.
If the LTP lag T > max(2r1, 2r2), it can be seen from
In the case of no overlap, that is r1=r2=0, the method coincides with the virtual vectors creation of prior art methods.
d shows an alternative implementation of the refinement unit, which performs the iteration

yk = y1 + S(yk−1), k = 2, . . . , N.
In both implementations the final output from the iteration can be written as

yN = (S + S^2 + . . . + S^N)(x),

where x is the LTP buffer signal.
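The equivalence of the two implementations, and the closed form above, can be checked numerically with an arbitrary linear map standing in for the extraction operator S; this is a toy verification only, not part of the codec.

    import numpy as np

    rng = np.random.default_rng(0)
    M = 0.3 * rng.standard_normal((8, 8))     # stand-in for the linear operator S
    S = lambda v: M @ v
    x = rng.standard_normal(8)                # stand-in for the LTP buffer signal
    N = 4

    y = d = S(x)                              # first implementation (delta form)
    for _ in range(N - 1):
        d = S(d)
        y = y + d

    z = S(x)                                  # second implementation (direct form)
    for _ in range(N - 1):
        z = S(x) + S(z)

    closed = sum(np.linalg.matrix_power(M, k) @ x for k in range(1, N + 1))
    assert np.allclose(y, z) and np.allclose(y, closed)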
According to an embodiment of the present invention, the LTP lag and the LTP gain are coded in a variable rate fashion. This is advantageous since, due to the LTP effectiveness for stationary periodic signals, the LTP lag tends to be the same over somewhat long segments. Hence, this can be exploited by means of arithmetic coding, resulting in a variable rate LTP lag and LTP gain coding.
Similarly, an embodiment of the present invention takes advantage of a bit reservoir and variable rate coding also for the coding of the LP parameters. In addition, recursive LP coding is taught by the present invention.
As outlined previously, techniques that are designed to improve coding of harmonic signals may be utilized. Such techniques are, e.g., harmonic prediction, LTP, and time-warping. All the aforementioned tools rely implicitly or explicitly on some sort of pitch or pitch-related information. In an embodiment of the present invention, this different information needed by the different techniques may be efficiently coded given that a dependency or correlation exists. This is visualized in
As outlined above, the codec according to an embodiment may utilize a LTP in the MDCT-domain. In order to improve the performance of the LTP in the MDCT-domain, two additional LTP buffers 2512, 2513 may be introduced. As illustrated by
Another aspect of the present invention is the handling of a bit reservoir for variable frame sizes in the encoder. A bit reservoir control unit is taught. In addition to a difficulty measure provided as input, the bit reservoir control unit also receives information on the frame length of the current frame. An example of a difficulty measure for usage in the bit reservoir control unit is perceptual entropy, or the logarithm of the power spectrum. Bit reservoir control is important in a system where the frame lengths can vary over a set of different frame lengths. The suggested bit reservoir control unit takes the frame length into account when calculating the number of granted bits for the frame to be coded as will be outlined below.
The bit reservoir is defined here as a certain fixed amount of bits in a buffer, which has to be larger than the average number of bits a frame is allowed to use for a given bit rate. If it were of the same size, no variation in the number of bits for a frame would be possible. The bit reservoir control always looks at the level of the bit reservoir before taking out the bits that are granted to the encoding algorithm as the allowed number of bits for the current frame. Thus, a full bit reservoir means that the number of bits available in the bit reservoir equals the bit reservoir size. After encoding of the frame, the number of used bits is subtracted from the buffer, and the bit reservoir is updated by adding the number of bits that represents the constant bit rate. Therefore, the bit reservoir is empty if the number of bits in the bit reservoir before coding a frame equals the average number of bits per frame.
In
When calculating the number of granted bits, the limits on the lower end of the bit reservoir have to be obeyed in order not to take out more bits from the buffer than allowed. A bit reservoir control scheme including the calculation of the granted bits by a control line as shown in
For such a control mechanism being able to handle a set of variable frame sizes, this simple control algorithm has to be adapted. The difficulty measure to be used has to be normalized so that the difficulty values of different frame sizes are comparable. For every frame size, there will be a different allowed range for the granted bits, and because the average number of bits per frame is different for a variable frame size, consequently each frame size has its own control equation with its own limitations. One example is shown in
The difficulty measure may be based, e.g., on a perceptual entropy (PE) calculation that is derived from the masking thresholds of a psychoacoustic model, as is done in AAC, or, as an alternative, on the bit count of a quantization with fixed step size, as is done in the ECQ part of an encoder according to an embodiment of the present invention. These values may be normalized with respect to the variable frame sizes, which may be accomplished by a simple division by the frame length, and the result will be a PE or a bit count per sample, respectively. Another normalization step may take place with regard to the average difficulty. For that purpose, a moving average over the past frames can be used, resulting in a difficulty value greater than 1.0 for difficult frames or less than 1.0 for easy frames. In the case of a two-pass encoder or of a large lookahead, difficulty values of future frames could also be taken into account for this normalization of the difficulty measure.
According to an aspect of the present invention, the time-warped MDCT is used in combination with LTP. In this case, the LTP search is done in a constant pitch segment domain in the encoder. This is particularly useful for long MDCT frames comprising several pitch pulses which, due to the pitch variation, are not arranged equidistantly in the MDCT frame. Thus, a constant pitch segment from the LTP buffer will not fit properly over the plurality of pitch pulses. According to an embodiment, all segments in the LTP buffer are resampled based on the warping curve of the present MDCT frame. Also in the decoder, the selected segment in the LTP buffer is resampled to the warp data of the present frame, given the warp data information. The warp information may be transmitted to the decoder as part of the bitstream.
In the top of
Another layered SBR reconstruction approach according to an embodiment of the present invention is illustrated in
In another embodiment of the invention, the upper frequency range of the LP spectrum is quantized and coded dependent on frame size and signal properties. For certain frame sizes and signals, the frequency range is coded according to the above, and for other transform sizes sparse quantization and noise-fill techniques are employed.
While the foregoing has been disclosed with reference to particular embodiments of the present invention, it is to be understood that the inventive concept is not limited to the described embodiments. On the contrary, the disclosure presented in this application will enable a skilled person to understand and carry out the invention. It will be understood by those skilled in the art that various modifications can be made without departing from the spirit and scope of the invention as set out exclusively by the accompanying claims.