The present invention relates to the field of signal processing, and more particularly to an encoding and decoding method and device that analyze audio signals at multiple resolutions and vector quantize them.
Generally, an audio encoding method comprises the steps of psychoacoustic model calculation, time-frequency mapping, quantization, encoding, etc., wherein time-frequency mapping refers to mapping the input audio signal from the time domain into the frequency domain or the time-frequency domain.
Time-frequency mapping, also called transforming or filtering, is a basic operation of audio signal encoding and can enhance encoding efficiency: most of the information contained in the time-domain signal can be transformed or collected into a subset of the frequency-domain or time-frequency-domain coefficients by such an operation. One of the basic operations of a perceptual audio encoder is mapping the input audio signal from the time domain into the frequency domain or the time-frequency domain. The basic idea is: decompose the signal into the components of each frequency band; once the input signal is expressed in the frequency domain, use the psychoacoustic model to eliminate the perceptually irrelevant components; group the components in each frequency band; and finally distribute the bits rationally to express the frequency parameters of each group. If the audio signal shows strong quasi-periodicity, this process can greatly decrease the amount of data and increase encoding efficiency.

At present, the commonly used time-frequency mapping methods include: the Discrete Fourier Transform (DFT) method, the Discrete Cosine Transform (DCT) method, the Quadrature Mirror Filter (QMF) method, the Pseudo Quadrature Mirror Filter (PQMF) method, the Cosine Modulation Filter (CMF) method, the Modified Discrete Cosine Transform (MDCT) method, the Discrete Wavelet (Packet) Transform (DW(P)T) method, etc. However, these methods either adopt a single transform/filter configuration to compress and express an input signal frame, or adopt an analysis filter bank or transform with a smaller time-domain interval to express rapidly varying signals in order to eliminate the effect of pre-echo on the decoded signal. When an input signal frame comprises components with different transient characteristics, a single transform configuration cannot meet the requirement of optimally compressing the different signal sub-frames; and if a rapidly changing signal is simply processed with an analysis filter bank or transform with a smaller time-domain interval, the frequency resolution of the obtained coefficients is low, so that the frequency granularity of the low-frequency part becomes much coarser than the critical bandwidth of the human ear, which greatly reduces encoding efficiency.
In the process of audio encoding, when the time-domain signal is mapped into time-frequency-domain signals, using the vector quantization technique can increase encoding efficiency. At present, the audio encoding method that applies vector quantization to audio encoding is the Transform-domain Weighted Interleave Vector Quantization (TWINVQ) encoding method. In this method, after the signal is MDCT transformed, the vectors to be quantized are constructed by interleaved selection of the spectral parameters, and the quality of audio encoded at low bit rates is obviously improved by the high efficiency of vector quantization. However, because it cannot effectively control the quantization noise according to the masking characteristics of the human ear, the TWINVQ encoding method is essentially an encoding method with perceptual loss, and requires further improvement when a higher subjective audio quality is sought. At the same time, since the TWINVQ encoding method organizes vectors by interleaving coefficients, although this ensures the statistical coherence between the vectors, the phenomenon that the signal energy is concentrated in a local time-frequency area cannot be used effectively, and further improvement of the encoding efficiency is restricted. Furthermore, since the MDCT transform is essentially a filter bank with equal bandwidths, it cannot divide the signal according to the convergence of the signal energy in the time-frequency plane, which limits the efficiency of the TWINVQ encoding method.
Therefore, how to effectively use the time-frequency local convergence of the signal together with the high efficiency of the vector quantization technique is a core problem in improving encoding efficiency. In particular, it involves two aspects: first, the time-frequency plane should be divided effectively so that the between-class distance of the signal components is as large as possible while the within-class distance is as small as possible, which is the multi-resolution filtering problem of the signal; secondly, vectors need to be rebuilt, selected and quantized on the basis of the effectively divided time-frequency plane so as to maximize the encoding gain, which is the multi-resolution vector quantization problem of the signal.
The present invention provides a method and device of multi-resolution vector quantization for audio encoding and decoding, which can adjust the time-frequency resolution according to different types of input signals and effectively use the local convergence of the signals in the time-frequency domain for vector quantization so as to increase encoding efficiency.
A method of multi-resolution vector quantization for audio encoding of the present invention comprises: adaptively filtering an input audio signal so as to obtain time-frequency filter coefficients and outputting a filtered signal; dividing the filtered signal into vectors in the time-frequency plane so as to obtain a vector combination; selecting vectors to be quantized; quantizing the selected vectors and calculating a residual error of quantization; transmitting the quantized codebook information as side information of the encoder to an audio decoder; and quantizing and encoding the residual error of quantization.
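As a minimal runnable sketch only, the five steps above might be arranged as follows under heavy simplifying assumptions: an orthonormal DCT stands in for the adaptive multi-resolution filtering, the vectors are fixed-size groups of adjacent coefficients, and the normalization factor is the absolute maximum of each vector. All names (encode_frame, dim, and so on) are illustrative and are not terms defined by this specification.

```python
import numpy as np
from scipy.fft import dct

def encode_frame(frame, codebook, n_select=4, dim=8):
    # 1. "Adaptive filtering": an orthonormal DCT stands in for the
    #    multi-resolution filter of the specification.
    coeffs = dct(frame, norm="ortho")
    # 2. Divide the coefficients into fixed-size vectors.
    usable = (len(coeffs) // dim) * dim
    vectors = coeffs[:usable].reshape(-1, dim)
    # 3. Select the vectors to be quantized (here: the highest-energy ones).
    energies = np.sum(vectors ** 2, axis=1)
    selected = np.argsort(energies)[::-1][:n_select]
    # 4. Quantize each selected vector: normalize, pick the nearest code word,
    #    subtract the rebuilt vector to form the residual of quantization.
    side_info, residual = [], coeffs.copy()
    for i in selected:
        v = vectors[i]
        g = max(np.max(np.abs(v)), 1e-12)            # normalization factor
        idx = int(np.argmin(np.sum((codebook - v / g) ** 2, axis=1)))
        residual[i * dim:(i + 1) * dim] -= g * codebook[idx]
        side_info.append((int(i), idx, float(g)))
    # 5. side_info is transmitted to the decoder as side information; the
    #    residual goes on to the quantization and entropy-coding stages.
    return side_info, residual
```

For example, encode_frame(np.random.randn(1024), np.random.randn(256, 8)) returns the side information list and the residual coefficients.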
A method of multi-resolution vector quantization for audio decoding of the present invention comprises the following steps: demultiplexing a code stream to obtain the side information of the multi-resolution vector quantization, the energy of the selected points and the location information of the vector quantization; inverse quantizing the vectors according to the above information to obtain normalized vectors, and calculating the normalization factors to rebuild the quantized vectors in the original time-frequency plane; adding the rebuilt vectors to the residual errors of the corresponding time-frequency coefficients according to the location information; and obtaining a rebuilt audio signal by multi-resolution inverse filtering and frequency-to-time mapping.
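A matching decoder-side sketch for the encoder example above; the inverse DCT merely stands in for the multi-resolution inverse filtering and frequency-to-time mapping of the specification, and all names remain illustrative.

```python
import numpy as np
from scipy.fft import idct

def decode_frame(side_info, residual, codebook, dim=8):
    coeffs = residual.copy()
    for i, idx, g in side_info:
        # Inverse vector quantization: look up the normalized code word and
        # restore its dynamic range with the transmitted normalization factor.
        coeffs[i * dim:(i + 1) * dim] += g * codebook[idx]
    # Stand-in for multi-resolution inverse filtering and frequency-to-time
    # mapping: the exact inverse of the orthonormal DCT in the encoder sketch.
    return idct(coeffs, norm="ortho")
```

Fed with the unquantized residual produced by the encoder sketch, this recovers the frame exactly; in the actual method the residual is itself quantized and entropy coded before transmission, so the reconstruction is approximate.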
A device of multi-resolution vector quantization for audio encoding of the present invention comprises: a time-frequency mapper, a multi-resolution filter, a multi-resolution vector quantizer, a psychoacoustic calculation module and a quantization encoder; the time-frequency mapper is for receiving an input audio signal, mapping it from the time domain to the frequency domain and outputting it to the multi-resolution filter; the multi-resolution filter is for adaptively filtering the signal and outputting the filtered signal to the psychoacoustic calculation module and the multi-resolution vector quantizer; the multi-resolution vector quantizer is for vector quantizing the filtered signal, calculating a residual error of quantization, transmitting the quantized information as side information to an audio decoder, and outputting the residual error of quantization to the quantization encoder; the psychoacoustic calculation module is for calculating a masking threshold of the psychoacoustic model according to the input audio signal and outputting it to the quantization encoder so as to control the noise allowed in quantization; the quantization encoder is for quantizing and entropy coding the residual error output by the multi-resolution vector quantizer, under the restriction of the allowed noise output by the psychoacoustic calculation module, to obtain an encoded code stream.
A device of multi-resolution vector quantization for audio decoding of the present invention comprises: a decoding and inverse-quantizing device, a multi-resolution inverse vector quantizer, a multi-resolution inverse filter and a frequency-time mapper; the decoding and inverse-quantizing device is for demultiplexing, entropy decoding and inverse quantizing a code stream to obtain side information and encoded data, and outputting them to the multi-resolution inverse vector quantizer; the multi-resolution inverse vector quantizer is for inverse quantizing the vectors to rebuild the quantized vectors, adding the rebuilt vectors to the residual coefficients of the time-frequency plane and outputting the result to the multi-resolution inverse filter; the multi-resolution inverse filter is for inverse filtering the sum signal obtained by adding the rebuilt vectors to the residual error coefficients and outputting the result to the frequency-time mapper; the frequency-time mapper is for mapping the signal from the frequency domain to the time domain to obtain the final rebuilt audio signal.
The audio encoding and decoding methods and devices based on the Multi-resolution Vector Quantization (MRVQ) technique of the present invention can adaptively filter the audio signal, exploit more effectively, through multi-resolution filtering, the phenomenon that the signal energy converges locally in the time-frequency plane, and adaptively adjust the time and frequency resolutions according to the type of signal. The result of the multi-resolution time-frequency analysis can be used effectively by reorganizing the filter coefficients with different organization policies chosen according to the signal's convergence feature; vector quantizing these areas improves encoding efficiency and also allows the quantization precision to be controlled and optimized simply.
Now, the present invention will be described in detail with reference to the accompanying drawings and the preferred embodiments.
The flow chart shown in
A flow chart of multi-resolution filtering for the audio signal is shown in
As mentioned above, filtering both the gradually varying signal and the fast-varying signal is based on the cosine modulation filter bank technique, which comprises two filtering methods: the traditional Cosine Modulation Filter (CMF) method and the Modified Discrete Cosine Transform (MDCT) method. The signal source encoding/decoding system based on the Cosine Modulation Filter method is shown in
The impulse response of the traditional Cosine Modulation Filter technique is:
wherein 0≦k≦M−1, 0≦n≦2KM−1, and K is an integer greater than 0,
Here, let the length of the impulse response of the analysis window (analysis prototype filter) pa(n) of the M sub-band cosine modulation filter bank be Na, and the length of the impulse response of the synthesis window (also called the synthesis prototype filter) ps(n) of the M sub-band cosine modulation filter bank be Ns; then the delay D of the entire system can be limited within the range [M−1, Ns+Na−M+1], and the system delay is D=2sM+d (0≦d≦2M−1).
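Since the expressions (F-1) and (F-2) themselves are not reproduced in this text, the sketch below builds the analysis and synthesis filters in the form commonly given for cosine-modulated filter banks in the Vaidyanathan reference cited below, using the prototype windows pa(n) and ps(n), the number of sub-bands M and the system delay D defined above; the specification's exact normalization and phase convention may differ.

```python
import numpy as np

def cosine_modulated_bank(p_a, p_s, M, D):
    """Analysis filters h_k(n) and synthesis filters f_k(n) of an M-band
    cosine-modulated filter bank with system delay D (common textbook form).
    Assumes p_a and p_s have the same length, as in the orthogonal case (F-3)."""
    N = len(p_a)                          # prototype length, N = 2KM
    n = np.arange(N)
    h = np.empty((M, N))
    f = np.empty((M, N))
    for k in range(M):
        phase = (-1) ** k * np.pi / 4
        arg = np.pi / M * (k + 0.5) * (n - D / 2.0)
        h[k] = 2.0 * p_a * np.cos(arg + phase)   # analysis: modulated p_a(n)
        f[k] = 2.0 * p_s * np.cos(arg - phase)   # synthesis: modulated p_s(n)
    return h, f
```

With pa(n) = ps(n) symmetric as in (F-3) and (F-4) and D chosen as the full prototype delay 2KM−1, this form typically gives an orthogonal bank.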
When the analysis window equals the synthesis window, that is:
pa(n)=ps(n), and Na=Ns (F-3)
the cosine modulation filter bank represented by formulas (F-1) and (F-2) is an orthogonal filter bank; in this case the matrices H and F ([H]n,k=hk(n), [F]n,k=fk(n)) are orthogonal transform matrices. To obtain a linear-phase filter bank, a symmetric window is further defined:
pa(2KM−1−n)=pa(n) (F-4)
For the conditions that the window function should satisfy in order to ensure complete reconstruction of the orthogonal and bi-orthogonal systems, please refer to P. P. Vaidyanathan, "Multirate Systems and Filter Banks", Prentice Hall, Englewood Cliffs, N.J., 1993.
Another filtering method is the Modified Discrete Cosine Transform (MDCT) method, also called the TDAC (Time Domain Aliasing Cancellation) cosine modulation filter bank, whose impulse response is:
wherein 0≦k≦M−1, 0≦n≦2KM−1, and K is an integer greater than 0; pa(n) and ps(n) respectively represent the analysis window (analysis prototype filter) and the synthesis window (synthesis prototype filter).
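Expressions (F-5) and (F-6) are likewise not reproduced here. As an illustration only, the following implements the windowed MDCT/IMDCT pair that a TDAC cosine modulation filter bank computes in its common K = 1 form, following the usual textbook indexing and scaling, which may differ from the specification's.

```python
import numpy as np

def mdct(block, window):
    """MDCT of one 2M-sample block (the transform behind the TDAC filter bank)."""
    M = len(block) // 2
    n = np.arange(2 * M)
    k = np.arange(M)
    # Cosine kernel of the TDAC / MDCT filter bank.
    C = np.cos(np.pi / M * (n[None, :] + 0.5 + M / 2) * (k[:, None] + 0.5))
    return C @ (window * block)

def imdct(coeffs, window):
    """Inverse MDCT of one block; overlap-adding consecutive half-blocks
    cancels the time-domain aliasing."""
    M = len(coeffs)
    n = np.arange(2 * M)
    k = np.arange(M)
    C = np.cos(np.pi / M * (n[:, None] + 0.5 + M / 2) * (k[None, :] + 0.5))
    return (2.0 / M) * window * (C @ coeffs)
```

With, for example, M = 512 and the SINE window w(n) = sin(pi/(2M)(n + 0.5)), overlap-adding the second half of each inverse-transformed block with the first half of the next one reconstructs the input exactly.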
Likewise, when the analysis window equals the synthesis window, that is:
pa (n)=ps(n) (F-7)
the cosine modulation filter bank represented by formulas (F-5) and (F-6) is an orthogonal filter bank; in this case the matrices H and F ([H]n,k=hk(n), [F]n,k=fk(n)) are orthogonal transform matrices. To obtain a linear-phase filter bank, a symmetric window is further defined:
pa(2KM−1−n)=pa(n) (F-8)
In order to ensure complete reconstruction, the analysis window and the synthesis window should satisfy:
wherein s=0, …, K−1 and n=0, …, M/2−1.
If the constraint of (F-7) is relaxed, i.e., the requirement that the analysis window equal the synthesis window is removed, the cosine modulation filter bank becomes a bi-orthogonal filter bank.
It can be proven by time-domain analysis that the bi-orthogonal filter bank obtained according to (F-5) and (F-6) still satisfies the complete reconstruction property, as long as
wherein s=0, …, K−1 and n=0, …, M−1.
According to the above analysis, the analysis window and the synthesis window of the cosine modulation filter bank (including the MDCT) can adopt any window shape that satisfies the complete reconstruction condition of the filter bank, such as the SINE and KBD windows commonly used in audio encoding.
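As a small self-check under the scaling used in the MDCT sketch above, the SINE window satisfies the symmetry condition (F-8) for K = 1 together with the well-known Princen-Bradley condition for complete reconstruction with identical analysis and synthesis windows; the KBD window is constructed to satisfy the same condition.

```python
import numpy as np

M = 256                                    # number of sub-bands; window length is 2M
n = np.arange(2 * M)
sine_window = np.sin(np.pi / (2 * M) * (n + 0.5))

# Symmetry p(2M - 1 - n) = p(n), cf. (F-8) with K = 1.
assert np.allclose(sine_window, sine_window[::-1])

# Princen-Bradley condition p(n)^2 + p(n + M)^2 = 1, i.e. the K = 1 orthogonal
# complete-reconstruction condition for identical analysis/synthesis windows.
assert np.allclose(sine_window[:M] ** 2 + sine_window[M:] ** 2, 1.0)
```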
In addition, the filtering of the cosine modulation filter bank can use the Fast Fourier Transform to improve calculation efficiency. Please refer to P. Duhamel, Y. Mahieux and J. P. Petit, "A New Algorithm for the Implementation of Filter Banks Based on 'Time Domain Aliasing Cancellation'", Proc. ICASSP, May 1991, pp. 2209-2212.
Likewise, the wavelet transform technique is a well-known technique in the field of signal processing. Please refer to the detailed discussion of the wavelet transform technique in "Wavelet Transform Theory and Its Application in Signal Processing" (Chen Fengshi, China National Defense Industry Press, 1998).
After multi-resolution analysis and filtering, the signal energy is redistributed and concentrated in the time-frequency plane, as shown in
In the multi-resolution time-frequency distribution, the frequency resolution of the low-frequency part is high, and the frequency resolution of the intermediate- and high-frequency parts is low. Since the components inducing the pre-echo phenomenon lie mainly in the intermediate- and high-frequency parts, pre-echo can be effectively restrained if the encoding quality of these components is improved. An important purpose of multi-resolution vector quantization is to optimize the error introduced in quantizing these important filter coefficients; it is therefore very important to use a highly efficient encoding policy for these coefficients. The important filter coefficients can be reorganized and classified effectively according to the obtained time-frequency distribution of the multi-resolution filter coefficients. It can be seen from the above analysis that the energy distribution of the multi-resolution filtered signal shows a strong regularity, so introducing vector quantization can effectively exploit this property when organizing the coefficients. Using a specific vector organization method, the areas of the time-frequency plane are organized into a one-dimensional vector matrix; all or part of the matrix elements of this vector matrix are then vector quantized. The quantized information is transmitted to the decoder as side information of the encoder, and the residual error of quantization together with the un-quantized coefficients forms a residual signal to be quantized and encoded.
After the vector dividing process, it is determined which vectors are to be quantized; two selection methods can be adopted. The first method is to select all the vectors in the entire time-frequency plane to be quantized, where "all the vectors" refers to the vectors covering all the time-frequency grid points obtained according to a certain division, e.g. all the vectors of the type I vector array, or all the vectors of the type II vector array, or all the vectors of the type III vector array; it is only necessary to select all the vectors of one of these arrays. Which vector set is selected is determined by the quantization gain, i.e. the ratio of the energy before quantization to the energy of the quantization error: the vector array with the largest gain among the above arrays is selected.
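As a sketch, the gain comparison described above might be implemented as follows; the uniform rounding quantizer in the example is only a stand-in for the codebook-based vector quantizer described later in this specification.

```python
import numpy as np

def quantization_gain(vectors, quantize):
    """Quantization gain of one vector array: energy before quantization
    divided by the energy of the quantization error."""
    signal = np.concatenate(list(vectors))
    error = np.concatenate([v - quantize(v) for v in vectors])
    return float(np.sum(signal ** 2) / max(np.sum(error ** 2), 1e-12))

def pick_vector_array(candidate_arrays, quantize):
    """Among the candidate divisions (e.g. the type I, II and III vector
    arrays), choose the one with the largest quantization gain."""
    gains = [quantization_gain(a, quantize) for a in candidate_arrays]
    return int(np.argmax(gains)), gains

# Example with a stand-in uniform quantizer:
#   arrays = [[np.random.randn(8) for _ in range(16)] for _ in range(3)]
#   best, gains = pick_vector_array(arrays, lambda v: np.round(v * 4) / 4)
```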
The second method is to select only the most important vectors to be quantized. The most important vectors can be vectors in the frequency direction, vectors in the time direction, or vectors in a time-frequency area. In the case where only part of the vectors is selected for quantization, the serial numbers of these vectors need to be included in the side information in addition to the quantization indexes. The detailed vector selection methods are described below.
Vector quantization proceeds after the vectors to be quantized have been determined. Whether all the vectors or only the important vectors are selected, the basic unit is the quantization of a single vector. For a single D-dimensional vector, as a compromise between the dynamic range and the size of the codebook, the vector should be normalized before quantization to obtain a normalization factor, a value that reflects the dynamic energy range of different vectors and varies from vector to vector. Quantizing the normalized vectors comprises quantization of the codebook index and quantization of the normalization factor. Considering the limits on coding rate and encoding gain, the number of bits occupied by quantizing the normalization factor should be as small as possible while the precision requirement is still satisfied. In the present invention, methods such as curve and surface fitting, multi-resolution decomposition and prediction are used to calculate an envelope of the multi-resolution time-frequency coefficients in order to obtain the normalization factor.
In
If not all the vectors are quantized, the vectors need to be selected by importance. In said embodiment, the basis for selecting a vector is the energy of the vector and the variance of its components. When calculating the variance, the absolute values of the vector elements should be taken to remove the effect of the signs of the values. Let the set V = {vf} ∪ {vt} ∪ {vt-f}. The detailed process of selecting the vectors is as follows: first, calculate the energy Evi = |vi|² of each vector in the set V, and at the same time calculate dEvi of each vector, wherein dEvi represents the variance of the components of the i-th vector. Sort the elements of the set V by energy from the largest to the smallest; then re-sort the sorted elements by variance from the smallest to the largest. Determine the number M of vectors to be selected according to the ratio of the total energy of the signal to the total energy of the currently selected vectors; a typical value of M is an integer from 3 to 50. Then select the first M vectors to be quantized; if vectors covering the same area appear in the type I, type II and type III vector arrays at the same time, the selection is made according to the ordering of the variance. The M vectors to be quantized are selected via the above steps.
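The following sketch shows one possible reading of this selection rule: vectors are ranked primarily by energy (largest first) and secondarily by the variance of the absolute values of their components (smallest first), and M grows until the selected vectors carry a given share of the total energy. The 0.7 energy threshold, the tie-breaking interpretation of the two sort passes and the omission of the duplicate-area rule are simplifying assumptions of the example, not requirements of the specification.

```python
import numpy as np

def select_vectors(vectors, energy_ratio=0.7, m_max=50):
    """Rank the candidate vectors and return the indexes of the first M."""
    energies = np.array([np.sum(np.asarray(v) ** 2) for v in vectors])
    variances = np.array([np.var(np.abs(np.asarray(v))) for v in vectors])
    # np.lexsort sorts by the last key first: energy descending is primary,
    # variance ascending breaks ties.
    order = np.lexsort((variances, -energies))

    total = float(np.sum(energies))
    selected, running = [], 0.0
    for idx in order:
        if (running >= energy_ratio * total and len(selected) >= 3) or len(selected) >= m_max:
            break
        selected.append(int(idx))
        running += float(energies[idx])
    return selected
```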
After the M vectors are selected, the quantization search for each order of difference is completed by using the Taylor approximation formula and a respective distortion measure rule. For more efficient quantization, the vectors need to be normalized twice. In the first normalization, the global absolute maximum is adopted. In the second normalization, the signal envelope is estimated from a limited number of points, and the vectors at the corresponding positions are normalized for the second time by the estimated value. The dynamic range of the vector variation is controlled effectively after the two normalizations. The estimation of the signal envelope is realized by the Taylor formula, which is described in the following. Vector quantization proceeds in the following steps: first, determine the parameters in the Taylor approximation formula so that the Taylor formula can represent the approximate energy of any vector in the entire time-frequency plane, and work out its maximum energy or absolute maximum; then perform the first normalization of the selected vectors; afterwards, calculate the approximate energy of the vector to be quantized by the Taylor formula and perform the second normalization; at last, quantize the normalized vectors based on the least distortion, and calculate the residual error of quantization. The above steps are described here in detail. In the time-frequency plane, the coefficient of each time-frequency grid corresponds to a certain energy value. The coefficient energy of a time-frequency grid is defined as the square or the absolute value of the coefficient; the vector energy is defined as the sum of the coefficient energies of all the time-frequency grids forming the vector, or the absolute maximum of these coefficient values; the energy of a time-frequency plane area is defined as the sum of the coefficient energies of all the time-frequency grids forming the area, or the absolute maximum of these coefficient values. In order to obtain the vector energy, the energy sum or the absolute maximum of the coefficients of all the time-frequency grids contained in the vector needs to be calculated. Therefore, the dividing methods of
The M values of the one-variable function Y=f(X) form a discrete sequence {y1, y2, y3, …, yM}, and the first-order, second-order and third-order difference sequences DY, D2Y and D3Y can be obtained from Y by a regression method.
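For illustration, plain repeated differencing (rather than the regression method mentioned above) already yields the three difference sequences; the sample values below are arbitrary.

```python
import numpy as np

# Illustrative sequence of M = 7 function values.
Y = np.array([1.0, 1.8, 2.9, 4.5, 6.8, 9.9, 14.0])

# Difference sequences; the text obtains DY, D2Y and D3Y by regression instead.
DY = np.diff(Y, n=1)     # first-order differences
D2Y = np.diff(Y, n=2)    # second-order differences
D3Y = np.diff(Y, n=3)    # third-order differences
```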
What is shown in
Gain=Global_Gain*Local_Gain (2)
wherein Local_Gain does not need quantization at the encoder end. At the decoder end, Local_Gain can be obtained by the same process according to Taylor formula (1). Multiplying Global_Gain by the rebuilt normalized vector gives the rebuilt value of the current vector. Therefore, the side information to be encoded at the encoder end consists of the function values and the first-order and second-order differences of the selected round points in
f(x0+Δ)=f(x0)+f(1)(x0)Δ (3)
Therefore, to quantize the first-order difference, a few code words with the least distortion are first searched in the corresponding codebook according to the Euclidean distance; then the quantization distortion at each point of a small neighborhood of the current vector x0 is calculated by using formula (3); and lastly the distortions are summed to form the distortion measure, that is:
wherein f(x+Δk) represents the true value before quantization, f̂(x+Δk) represents the approximate value obtained by the Taylor formula, and M represents the size of the neighborhood. The quantization of the second-order difference can use the same process. Through the above processes, three quantized code word indexes are finally obtained and transmitted to the decoder as side information, and the residual error of quantization is then quantized and coded.
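Since the body of formula (4) is not reproduced above, the sketch below assumes it is the sum of squared differences between the true values and their Taylor approximations over the neighborhood, as the prose suggests; the candidate pre-selection, the neighborhood offsets and the crude estimate of the first-order difference are illustrative assumptions.

```python
import numpy as np

def quantize_first_order(f_true, x0, f0, df_codebook, deltas, n_candidates=4):
    """Choose the code word for the first-order difference at x0 whose Taylor
    approximation, formula (3), best matches f(x0 + Delta_k) over a small
    neighborhood (assumed form of formula (4))."""
    # Crude estimate of the first-order difference (unit grid spacing assumed);
    # a real encoder would take it from the difference sequence DY.
    df_estimate = f_true(x0 + 1.0) - f_true(x0)
    # Pre-select a few candidate code words by plain Euclidean distance.
    candidates = np.argsort(np.abs(df_codebook - df_estimate))[:n_candidates]

    best_idx, best_dist = int(candidates[0]), np.inf
    for idx in candidates:
        approx = f0 + df_codebook[idx] * deltas              # formula (3)
        true = np.array([f_true(x0 + d) for d in deltas])    # true values
        dist = float(np.sum((true - approx) ** 2))           # summed distortion
        if dist < best_dist:
            best_idx, best_dist = int(idx), dist
    return best_idx, best_dist

# Example:
#   quantize_first_order(np.cos, 0.3, np.cos(0.3),
#                        np.linspace(-1.0, 1.0, 64),
#                        np.array([-0.2, -0.1, 0.1, 0.2]))
```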
The above methods can easily be extended to the case of two-dimensional surfaces.
Identical to the embodiment shown in
The constant (degree 0) B spline function on the i-th sub-interval is
The B spline function of degree m on the interval [xi, xi+m+1] is defined as:
Therefore, by using the B spline basis functions as the basis, any spline can be represented as:
In this case, the function value of the spline at a given point x can be calculated according to formulas (5), (6) and (7). The points used for interpolation are also called guide points.
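Formulas (5) to (7) are not reproduced above; as a hedged illustration, the following implements the standard Cox-de Boor form of the degree-0 basis on one sub-interval, the recursion up to degree m, and the spline as a weighted sum of basis functions, which is what those formulas conventionally state.

```python
import numpy as np

def bspline_basis(i, m, x, knots):
    """B spline basis function: constant (degree 0) on the i-th sub-interval,
    then the usual recursion up to degree m."""
    if m == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left_den = knots[i + m] - knots[i]
    right_den = knots[i + m + 1] - knots[i + 1]
    left = 0.0 if left_den == 0 else (x - knots[i]) / left_den * bspline_basis(i, m - 1, x, knots)
    right = 0.0 if right_den == 0 else (knots[i + m + 1] - x) / right_den * bspline_basis(i + 1, m - 1, x, knots)
    return left + right

def spline_value(x, coeffs, knots, m=3):
    """Spline expressed in the B spline basis: s(x) = sum_i c_i * B_{i,m}(x).
    Requires len(knots) >= len(coeffs) + m + 1."""
    return sum(c * bspline_basis(i, m, x, knots) for i, c in enumerate(coeffs))

# Example: spline_value(5.3, np.ones(8), np.arange(12.0)) evaluates a cubic
# spline with unit guide-point coefficients on a uniform knot grid.
```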
In the same way,
Gain=Global_Gain*Local_Gain (8)
wherein Local_Gain does not need quantization at the encoder end. Likewise, at the decoder end, Local_Gain can be obtained by the same process according to the fitting formula (7). Multiplying the total gain by the rebuilt normalized vector gives the rebuilt value of the current vector. Therefore, the side information to be encoded at the encoder end consists of the function values of the selected round points shown in
The process of vector quantization is described as follows: pre-select the function values f(x) of M areas to form an M-dimensional vector Y. Vector Y can be further decomposed into several component vectors to control the size of the vectors and improve the precision of the vector quantization; these vectors are called the vectors of the selected points. Then quantize these vectors respectively. At the encoder end, the corresponding vector codebooks can be obtained by a codebook training algorithm. The process of quantization is the process of searching for the best-matched code words, and the code word indexes obtained by the search are transmitted to the decoder as side information. The residual error of quantization then undergoes the next quantization and encoding.
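A sketch of this sub-vector quantization step; the sub-vector dimension and the random stand-in codebooks in the usage comment are illustrative only, and in a real system the codebooks would come from a codebook training algorithm such as LBG.

```python
import numpy as np

def quantize_selected_points(Y, codebooks, dim=4):
    """Split the vector of selected function values into sub-vectors and
    quantize each against its own codebook by a nearest-code-word search.
    Returns the code word indexes (side information) and the residual error."""
    Y = np.asarray(Y, dtype=float)
    indexes, rebuilt = [], np.zeros_like(Y)
    for j in range(0, len(Y), dim):
        sub = Y[j:j + dim]
        cb = codebooks[j // dim][:, :len(sub)]       # tolerate a short last sub-vector
        idx = int(np.argmin(np.sum((cb - sub) ** 2, axis=1)))
        indexes.append(idx)
        rebuilt[j:j + dim] = cb[idx]
    return indexes, Y - rebuilt

# Example with random stand-in codebooks:
#   Y = np.random.randn(10)
#   codebooks = [np.random.randn(64, 4) for _ in range(3)]
#   side_info, residual = quantize_selected_points(Y, codebooks)
```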
The above methods can easily be extended to the case of two-dimensional surfaces.
As shown in
As shown in
As shown in
As shown in
As shown in
As shown in
It will be understood that the above embodiments are used only to explain, not to limit, the present invention. Despite the detailed description of the present invention with reference to the above preferred embodiments, it should be understood that various modifications, changes or equivalents can be made by those skilled in the art without departing from the spirit and scope of the present invention.
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/CN03/00790 | 9/17/2003 | WO | | 7/21/2006