The technology disclosed herein relates to audio encoding/decoding based on an efficient representation of auto-regression (AR) coefficients.
AR analysis is commonly used in both time-domain [1] and transform-domain [2] audio coding. Different applications use AR vectors of different lengths. The model order mainly depends on the bandwidth of the coded signal: from 10 coefficients for signals with a bandwidth of 4 kHz up to 24 coefficients for signals with a bandwidth of 16 kHz. These AR coefficients are conventionally quantized with split, multistage vector quantization (VQ), which guarantees nearly transparent reconstruction. However, conventional quantization schemes are not designed for the case when the AR coefficients model high audio frequencies, for example above 6 kHz, and when quantization has to operate with very limited bit budgets that do not allow transparent coding of the coefficients. Using these conventional schemes at such non-optimal frequency ranges and bitrates introduces large perceptual errors in the reconstructed signal.
An object of the disclosed technology is to provide a more efficient quantization scheme for the auto-regressive coefficients. This object may be achieved with several of the embodiments disclosed herein.
A first aspect of the technology described herein involves a method of encoding a parametric spectral representation of auto-regressive coefficients that partially represent an audio signal. An example method includes the following steps: encoding a low-frequency part of the parametric spectral representation by quantizing elements of the parametric spectral representation that correspond to a low-frequency part of the audio signal; and encoding a high-frequency part of the parametric spectral representation by weighted averaging based on the quantized elements flipped around a quantized mirroring frequency, which separates the low-frequency part from the high-frequency part, and a frequency grid determined from a frequency grid codebook in a closed-loop search procedure.
A second aspect of the technology described herein involves a method of decoding an encoded parametric spectral representation of auto-regressive coefficients that partially represent an audio signal. An example method includes the following steps: reconstructing elements of a low-frequency part of the parametric spectral representation corresponding to a low-frequency part of the audio signal from at least one quantization index encoding that part of the parametric spectral representation; and reconstructing elements of a high-frequency part of the parametric spectral representation by weighted averaging based on the decoded elements flipped around a decoded mirroring frequency, which separates the low-frequency part from the high-frequency part, and a decoded frequency grid.
A third aspect of the technology described herein involves an encoder for encoding a parametric spectral representation of auto-regressive coefficients that partially represent an audio signal. An example encoder includes: a low-frequency encoder configured to encode a low-frequency part of the parametric spectral representation by quantizing elements of the parametric spectral representation that correspond to a low-frequency part of the audio signal; and a high-frequency encoder configured to encode a high-frequency part of the parametric spectral representation by weighted averaging based on the quantized elements flipped around a quantized mirroring frequency, which separates the low-frequency part from the high-frequency part, and a frequency grid determined from a frequency grid codebook in a closed-loop search procedure. A fourth aspect of the technology described herein involves a UE including the encoder in accordance with the third aspect.
A fifth aspect involves a decoder for decoding an encoded parametric spectral representation of auto-regressive coefficients that partially represent an audio signal. An example decoder includes: a low-frequency decoder configured to reconstruct elements of a low-frequency part of the parametric spectral representation corresponding to a low-frequency part of the audio signal from at least one quantization index encoding that part of the parametric spectral representation; and a high-frequency decoder configured to reconstruct elements of a high-frequency part of the parametric spectral representation by weighted averaging based on the decoded elements flipped around a decoded mirroring frequency, which separates the low-frequency part from the high-frequency part, and a decoded frequency grid. A sixth aspect of the technology described herein involves a UE including the decoder in accordance with the fifth aspect.
The technology detailed below provides a low-bitrate scheme for compression or encoding of auto-regressive coefficients. In addition to perceptual improvements, the technology also has the advantage of reducing the computational complexity in comparison to full-spectrum quantization methods.
The disclosed technology, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
The disclosed technology requires as input a vector a of AR coefficients (another commonly used name is linear prediction (LP) coefficients). These are typically obtained by first computing the autocorrelations r(j) of the windowed audio segment s(n), n = 1, ..., N, i.e.:

$r(j) = \sum_{n=j+1}^{N} s(n)\, s(n-j), \quad j = 0, 1, \ldots, M$  (1)

where M is the pre-defined model order. Then the AR coefficients a are obtained from the autocorrelation sequence r(j) through the Levinson-Durbin algorithm [3].
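For concreteness, a minimal NumPy sketch of this analysis front end is given below. The function names and loop structure are illustrative only; the model order M (e.g. 10 for a 4 kHz bandwidth, 24 for 16 kHz, as noted above) is passed in by the caller.

```python
import numpy as np

def autocorrelation(s, M):
    """r(j) = sum over n of s(n) * s(n - j) for lags j = 0..M on the windowed segment s."""
    N = len(s)
    return np.array([np.dot(s[j:], s[:N - j]) for j in range(M + 1)])

def levinson_durbin(r):
    """Levinson-Durbin recursion: AR coefficients a(1..M) from r(0..M) [3]."""
    M = len(r) - 1
    a = np.zeros(M + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, M + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)                # remaining prediction error
    return a[1:], err                       # A(z) = 1 + a(1) z^-1 + ... + a(M) z^-M
```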
In an audio communication system AR coefficients have to be efficiently transmitted from the encoder to the decoder part of the system. In the disclosed technology this is achieved by quantizing only certain coefficients, and representing the remaining coefficients with only a small number of bits.
Encoder
Although the disclosed technology will be described with reference to an LSF representation, the general concepts may also be applied to an alternative implementation in which the AR vector is converted to another parametric spectral representation, such as Line Spectral Pair (LSP) or Immitance Spectral Pairs (ISP) instead of LSF.
Only the low-frequency LSF subvector $f_L$ is quantized in step S5, and its quantization indices $I_{f_L}$ are transmitted to the decoder.
In the disclosed embodiment quantization is based on a set of scalar quantizers (SQs) individually optimized on the statistical properties of the above parameters. In an alternative implementation the LSF elements could be sent to a vector quantizer (VQ) or one can even train a VQ for the combined set of parameters (LSFs, mirroring frequency, and optimal grid).
The low-frequency LSFs of subvector fL are in step S6 flipped into the space spanned by the high-frequency LSFs of subvector fH. This operation is illustrated in
$\hat{f}_m = Q\left(f(M/2) - \hat{f}(M/2-1)\right) + \hat{f}(M/2-1)$  (2)
where f denotes the entire LSF vector, Q(·) denotes quantization of the difference between the first element of $f_H$ (namely f(M/2)) and the last quantized element of $f_L$ (namely $\hat{f}(M/2-1)$), and M denotes the total number of elements in the parametric spectral representation.
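A minimal sketch of equation (2) follows. The scalar quantizer Q is stood in for by a nearest-neighbour lookup in a placeholder codebook, since the actual difference codebook is not given in this text.

```python
import numpy as np

def quantize_scalar(x, codebook):
    """Nearest-neighbour scalar quantizer; returns (index, quantized value)."""
    codebook = np.asarray(codebook)
    idx = int(np.argmin(np.abs(codebook - x)))
    return idx, codebook[idx]

def mirroring_frequency(f, f_hat_L, M, delta_codebook):
    """Equation (2): quantize the gap between f(M/2) and the last quantized
    low-band element, then add the quantized gap back to that element."""
    last_q = f_hat_L[M // 2 - 1]
    I_m, delta_q = quantize_scalar(f[M // 2] - last_q, delta_codebook)
    return I_m, delta_q + last_q            # index to transmit, and the mirroring frequency
```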
Next the flipped LSFs $f_{flip}(k)$ are calculated in accordance with:

$f_{flip}(k) = 2\hat{f}_m - \hat{f}(M/2-1-k), \quad 0 \le k \le M/2-1$  (3)
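The flip in equation (3) can be sketched as a one-line helper (function name illustrative):

```python
import numpy as np

def flip_low_band(f_hat_L, f_hat_m, M):
    """Equation (3): mirror the quantized low-frequency LSFs around the mirroring frequency."""
    return np.array([2.0 * f_hat_m - f_hat_L[M // 2 - 1 - k] for k in range(M // 2)])
```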
Then the flipped LSFs are rescaled so that they are bounded within the range [0 . . . 0.5] (as an alternative the range can be expressed in radians as [0 . . . π]) in accordance with:
The frequency grids $g_i$ are rescaled to fit into the interval between the last quantized LSF element $\hat{f}(M/2-1)$ and a maximum grid point value $g_{max}$, i.e.:

$\tilde{g}_i(k) = g_i(k)\cdot\left(g_{max} - \hat{f}(M/2-1)\right) + \hat{f}(M/2-1)$  (5)
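Equation (5) as a small helper; the default $g_{max}$ = 0.49 is taken from the example given further below and can be overridden:

```python
import numpy as np

def rescale_grid(g, last_q, g_max=0.49):
    """Equation (5): map a template grid from [0, 1] into [f_hat(M/2-1), g_max]."""
    return np.asarray(g) * (g_max - last_q) + last_q
```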
These flipped and rescaled coefficients $\tilde{f}_{flip}(k)$ (collectively denoted $\tilde{f}_H$ in the figures) are smoothed by weighted averaging with the rescaled frequency grid points $\tilde{g}_i(k)$ in accordance with:

$f_{smooth}(k) = [1-\lambda(k)]\,\tilde{f}_{flip}(k) + \lambda(k)\,\tilde{g}_i(k)$  (6)
where λ(k) and [1−λ(k)] are predefined weights.
Since equation (6) includes a free index i, a vector $f^i_{smooth}(k)$ is generated for each rescaled grid $\tilde{g}_i(k)$. Thus, equation (6) may be expressed as:

$f^i_{smooth}(k) = [1-\lambda(k)]\,\tilde{f}_{flip}(k) + \lambda(k)\,\tilde{g}_i(k)$  (7)
The smoothing is performed in step S7 in a closed-loop search over all frequency grids $g_i$, to find the grid that minimizes a pre-defined criterion (described after equation (12) below). For M/2 = 5 the weights λ(k) in equation (7) can be chosen as:
λ={0.2,0.35,0.5,0.75,0.8} (8)
In an embodiment these constants are perceptually optimized (different sets of values are suggested, and the set that maximizes quality, as reported by a panel of listeners, is finally selected). Generally the values of the elements in λ increase as the index k increases. Since a higher index corresponds to a higher frequency, the higher frequencies of the resulting spectrum are more influenced by $\tilde{g}_i(k)$ than by $\tilde{f}_{flip}(k)$ (see equation (7)). The result of this smoothing or weighted averaging is a flatter spectrum towards the high frequencies (the spectral structure potentially introduced by $\tilde{f}_{flip}$ is progressively removed towards the high frequencies).
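A sketch of the weighted averaging in equation (7), with the λ values of equation (8) as the default for M/2 = 5:

```python
import numpy as np

def smooth(f_flip_rescaled, g_rescaled, lam=np.array([0.2, 0.35, 0.5, 0.75, 0.8])):
    """Equation (7): per-element weighted average; a larger lam(k) lets the grid
    dominate at higher frequencies, flattening the high-band spectrum."""
    lam = np.asarray(lam)
    return (1.0 - lam) * np.asarray(f_flip_rescaled) + lam * np.asarray(g_rescaled)
```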
Here gmax is selected close to but less than 0.5. In this example gmax is selected equal to 0.49.
The method in this example uses 4 trained grids $g_i$ (fewer or more grids are possible). Template grid vectors on the range [0 . . . 1], pre-stored in memory, are of the form:
If we assume that the position of the last quantized LSF coefficient $\hat{f}(M/2-1)$ is 0.25, the rescaled grid vectors take the form:
An example of the effect of smoothing the flipped and rescaled LSF coefficients to the grid points is illustrated in
If $g_{max} = 0.5$ is used instead of 0.49, the frequency grid codebook may instead be formed by:
If we again assume that the position of the last quantized LSF coefficient $\hat{f}(M/2-1)$ is 0.25, the rescaled grid vectors take the form:
It is noted that the rescaled grids $\tilde{g}_i$ may differ from frame to frame, since $\hat{f}(M/2-1)$ in rescaling equation (5) is not necessarily constant but may vary with time. However, the codebook formed by the template grids $g_i$ is constant. In this sense the rescaled grids $\tilde{g}_i$ may be considered an adaptive codebook formed from a fixed codebook of template grids $g_i$.
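To illustrate this fixed-to-adaptive codebook behaviour, the short snippet below rescales two purely hypothetical template grids (they are not the trained grids of equations (9) and (11), which are not reproduced here) for two different values of $\hat{f}(M/2-1)$:

```python
import numpy as np

# Hypothetical template grids on [0, 1]; NOT the trained grids from the text.
grids = np.array([[0.10, 0.30, 0.50, 0.70, 0.90],
                  [0.05, 0.25, 0.45, 0.65, 0.85]])
g_max = 0.49

for last_q in (0.25, 0.20):                          # f_hat(M/2-1) varies from frame to frame
    rescaled = grids * (g_max - last_q) + last_q     # equation (5)
    print(last_q, rescaled.round(3))                 # the "adaptive" codebook for this frame
```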
The LSF vectors $f^i_{smooth}$ created by the weighted sum in (7) are compared to the target LSF vector $f_H$, and the optimal grid $g_i$ is selected as the one that minimizes the mean-squared error (MSE) between these two vectors. The index opt of this optimal grid may mathematically be expressed as:

$opt = \arg\min_i \sum_{k=0}^{M/2-1} \left(f^i_{smooth}(k) - f_H(k)\right)^2$  (13)

where $f_H(k)$ is a target vector formed by the elements of the high-frequency part of the parametric spectral representation.
In an alternative implementation one can use more advanced error measures that mimic spectral distortion (SD), e.g., an inverse harmonic mean or other weighting in the LSF domain.
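A sketch of the closed-loop search using the plain MSE criterion of equation (13) is given below; swapping in an SD-motivated weighting, as mentioned above, would only change the error line. The default λ and $g_{max}$ values are those of equation (8) and the 0.49 example.

```python
import numpy as np

def search_grid(f_H, f_flip_rescaled, grids, last_q, g_max=0.49,
                lam=np.array([0.2, 0.35, 0.5, 0.75, 0.8])):
    """Equation (13): pick the template grid whose rescaled, smoothed version
    is closest (in MSE) to the high-band target f_H."""
    errors = []
    for g in np.asarray(grids):
        g_resc = g * (g_max - last_q) + last_q                              # equation (5)
        f_smooth = (1.0 - lam) * np.asarray(f_flip_rescaled) + lam * g_resc  # equation (7)
        errors.append(np.sum((f_smooth - np.asarray(f_H)) ** 2))
    return int(np.argmin(errors))                                            # index opt / I_g
```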
In an embodiment the frequency grid codebook is obtained with a K-means clustering algorithm applied to a large set of LSF vectors, which have been extracted from a speech database. The grid vectors in equations (9) and (11) are selected as the ones that, after rescaling in accordance with equation (5) and weighted averaging with $\tilde{f}_{flip}$ in accordance with equation (7), minimize the squared distance to $f_H$. In other words these grid vectors, when used in equation (7), give the best representation of the high-frequency LSF coefficients.
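Training of such a grid codebook could look roughly as follows, assuming scikit-learn's KMeans. The normalization of the training vectors to the [0, 1] template range and the sorting of each centroid are assumptions; the text only states that K-means is run on LSF vectors from a speech database.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_grid_codebook(high_band_lsf_vectors, n_grids=4):
    """Cluster (assumed [0, 1]-normalized) high-band LSF vectors into n_grids template grids."""
    X = np.asarray(high_band_lsf_vectors)
    km = KMeans(n_clusters=n_grids, n_init=10, random_state=0).fit(X)
    # Sorting keeps each template grid monotonic, matching the ordered nature of LSFs (assumption).
    return np.sort(km.cluster_centers_, axis=1)
```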
The quantized low-frequency subvector $\hat{f}_L$ and the not yet encoded high-frequency subvector $f_H$ are forwarded to the high-frequency encoder 12. A mirroring frequency calculator 18 is configured to calculate the quantized mirroring frequency $\hat{f}_m$ in accordance with equation (2). The dashed lines indicate that only the last quantized element $\hat{f}(M/2-1)$ in $\hat{f}_L$ and the first element f(M/2) in $f_H$ are required for this. The quantization index $I_m$ representing the quantized mirroring frequency $\hat{f}_m$ is output for transmission to the decoder.
The quantized mirroring frequency $\hat{f}_m$ is forwarded to a quantized low-frequency subvector flipping unit 20 configured to flip the elements of the quantized low-frequency subvector $\hat{f}_L$ around the quantized mirroring frequency $\hat{f}_m$ in accordance with equation (3). The flipped elements $f_{flip}(k)$ and the quantized mirroring frequency $\hat{f}_m$ are forwarded to a flipped element rescaler 22 configured to rescale the flipped elements in accordance with equation (4).
The frequency grids $g_i(k)$ are forwarded from frequency grid codebook 24 to a frequency grid rescaler 26, which also receives the last quantized element $\hat{f}(M/2-1)$ in $\hat{f}_L$. The rescaler 26 is configured to perform rescaling in accordance with equation (5).
The flipped and rescaled LSFs $\tilde{f}_{flip}(k)$ from flipped element rescaler 22 and the rescaled frequency grids $\tilde{g}_i(k)$ from frequency grid rescaler 26 are forwarded to a weighting unit 28, which is configured to perform weighted averaging in accordance with equation (7). The resulting smoothed elements $f^i_{smooth}(k)$ and the high-frequency target vector $f_H$ are forwarded to a frequency grid search unit 30 configured to select a frequency grid $g_{opt}$ in accordance with equation (13). The corresponding index $I_g$ is transmitted to the decoder.
Decoder
The method steps performed at the decoder are illustrated by the embodiment in
In step S13 the quantized low-frequency part $\hat{f}_L$ is reconstructed from a low-frequency codebook by using the received index $I_{f_L}$.
The method steps performed at the decoder for reconstructing the high-frequency part $\hat{f}_H$ are very similar to the already described encoder processing steps in equations (3)-(7).
The flipping and rescaling steps performed at the decoder (at S14) are identical to the encoder operations, and therefore described exactly by equations (3)-(4).
The steps (at S15) of rescaling the grid (equation (5)) and smoothing with it (equation (6)) require only a slight modification in the decoder, because the closed-loop search (over the index i) is not performed; the decoder instead receives the optimal index opt from the bit stream. These equations then take the following form:
$\tilde{g}_{opt}(k) = g_{opt}(k)\cdot\left(g_{max} - \hat{f}(M/2-1)\right) + \hat{f}(M/2-1)$  (14)

and

$f_{smooth}(k) = [1-\lambda(k)]\,\tilde{f}_{flip}(k) + \lambda(k)\,\tilde{g}_{opt}(k)$  (15)
respectively. The vector $f_{smooth}$ represents the high-frequency part $\hat{f}_H$ of the decoded signal.
Finally the low- and high-frequency parts $\hat{f}_L$, $\hat{f}_H$ of the LSF vector are combined in step S16, and the resulting vector $\hat{f}$ is transformed to AR coefficients $\hat{a}$ in step S17.
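A decoder-side sketch combining equations (3), (14) and (15) and step S16 is given below. The equation-(4) rescaling of the flipped LSFs is left out because its exact form is not reproduced in this text, so this is only an approximation of the full decoding chain; the default λ and $g_{max}$ follow the earlier examples.

```python
import numpy as np

def decode_high_band(f_hat_L, f_hat_m, opt, grid_codebook, g_max=0.49,
                     lam=np.array([0.2, 0.35, 0.5, 0.75, 0.8])):
    """Reconstruct the full LSF vector from the decoded low band, mirroring
    frequency and optimal grid index (equation-(4) rescaling omitted)."""
    half = len(f_hat_L)                                                       # M/2
    f_flip = np.array([2.0 * f_hat_m - f_hat_L[half - 1 - k]
                       for k in range(half)])                                 # equation (3)
    last_q = f_hat_L[half - 1]
    g_opt = np.asarray(grid_codebook[opt]) * (g_max - last_q) + last_q        # equation (14)
    f_H = (1.0 - lam) * f_flip + lam * g_opt                                  # equation (15)
    return np.concatenate([f_hat_L, f_H])                                     # step S16
```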
The steps, functions, procedures and/or blocks described herein may be implemented in hardware using any conventional technology, such as discrete circuit or integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
Alternatively, at least some of the steps, functions, procedures and/or blocks described herein may be implemented in software for execution by suitable processing equipment. This equipment may include, for example, one or several microprocessors, one or several Digital Signal Processors (DSPs), one or several Application Specific Integrated Circuits (ASICs), video-accelerated hardware, or one or several suitable programmable logic devices such as Field Programmable Gate Arrays (FPGAs). Combinations of such processing elements are also feasible.
It should also be understood that it may be possible to reuse the general processing capabilities already present in a UE. This may, for example, be done by reprogramming of the existing software or by adding new software components.
In one example application the disclosed AR quantization-extrapolation scheme is used in a BWE context. In this case AR analysis is performed on a certain high frequency band, and AR coefficients are used only for the synthesis filter. Instead of being obtained with the corresponding analysis filter, the excitation signal for this high band is extrapolated from an independently coded low band excitation.
In another example application the disclosed AR quantization-extrapolation scheme is used in an ACELP-type coding scheme. ACELP coders model a speaker's vocal tract with an AR model. An excitation signal e(n) is generated by passing a waveform s(n) through a whitening filter, e(n) = A(z)s(n), where $A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_M z^{-M}$ is the AR model of order M. On a frame-by-frame basis a set of AR coefficients $a = [a_1\ a_2\ \dots\ a_M]^T$ and the excitation signal are quantized, and the quantization indices are transmitted over the network. At the decoder, synthesized speech is generated on a frame-by-frame basis by passing the reconstructed excitation signal through the reconstructed synthesis filter $1/A(z)$.
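The analysis and synthesis filtering described here can be sketched with scipy.signal.lfilter; the coefficient ordering assumes a = [a1 ... aM] as above, and the function names are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def whiten(s, a):
    """e(n) = A(z) s(n), with A(z) = 1 + a1 z^-1 + ... + aM z^-M."""
    return lfilter(np.concatenate(([1.0], a)), [1.0], s)

def synthesize(e, a):
    """Synthesis filtering 1/A(z) applied to the (reconstructed) excitation."""
    return lfilter([1.0], np.concatenate(([1.0], a)), e)
```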
In a further example application, the disclosed AR quantization-extrapolation scheme is used as an efficient way to parameterize the spectrum envelope in a transform audio codec. On a short-time basis the waveform is transformed to the frequency domain, and the frequency response of the AR coefficients is used to approximate the spectrum envelope and normalize the transformed vector (to create a residual vector). Next the AR coefficients and the residual vector are coded and transmitted to the decoder.
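A sketch of such envelope normalization, assuming the AR frequency response is evaluated with scipy.signal.freqz; the floor value guarding the division is an assumption added here to keep the example robust.

```python
import numpy as np
from scipy.signal import freqz

def ar_envelope(a, n_bins):
    """|1 / A(e^{jw})| at n_bins uniformly spaced frequencies in [0, pi]."""
    _, H = freqz([1.0], np.concatenate(([1.0], a)), worN=n_bins)
    return np.abs(H)

def to_residual(X, a):
    """Normalize a transform-domain vector X by the AR envelope to form the residual vector."""
    env = ar_envelope(a, len(X))
    return X / np.maximum(env, 1e-9)   # small floor avoids division by near-zero envelope values
```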
It will be understood by those skilled in the art that various modifications and changes may be made to the disclosed technology without departure from the scope thereof, which is defined by the appended claims.