The proposed technology relates to transform encoding/decoding of audio signals, especially harmonic audio signals.
Transform encoding is the main technology used to compress and transmit audio signals. The concept of transform encoding is to first convert a signal to the frequency domain, and then to quantize and transmit the transform coefficients. The decoder reconstructs the signal waveform from the received transform coefficients by applying the inverse frequency transform.
In a typical transform codec the signal waveform is transformed on a block-by-block basis (with 50% overlap), using the Modified Discrete Cosine Transform (MDCT). In an MDCT-type transform codec a signal waveform block X(n) is transformed into an MDCT vector Y(k). The length of the waveform blocks corresponds to 20-40 ms audio segments. If the block length is denoted 2L, the MDCT can be defined as:

$$Y(k) = \sum_{n=0}^{2L-1} X(n)\,\cos\!\left[\frac{\pi}{L}\left(n+\frac{1}{2}+\frac{L}{2}\right)\left(k+\frac{1}{2}\right)\right], \quad k = 0, \ldots, L-1.$$
The MDCT vector Y(k) is then split into multiple bands (sub-vectors), and the energy (or gain) G(j) in each band is calculated as:

$$G(j) = \sqrt{\frac{1}{N_j}\sum_{k=m_j}^{m_j+N_j-1} Y(k)^2},$$
where m_j is the first coefficient in band j and N_j is the number of MDCT coefficients in the corresponding band (a typical range is 8-32 coefficients). As an example of a uniform band structure, let N_j = 8 for all j; then G(0) would be the energy of the first 8 coefficients, G(1) would be the energy of the next 8 coefficients, etc.
These energy values or gains give an approximation of the spectrum envelope, which is quantized, and the quantization indices are transmitted to the decoder. Residual sub-vectors or shapes are obtained by scaling the MDCT sub-vectors with the corresponding envelope gains, e.g. the residual in each band is obtained by dividing the MDCT coefficients of that band by the corresponding envelope gain G(j).
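For illustration, the following Python sketch computes the envelope gains and residual shapes of the conventional scheme for a uniform band structure with N_j = 8. The RMS form of the gain and the function name are assumptions made for the example, and the quantization of gains and shapes is omitted.

```python
import numpy as np

def band_gains_and_shapes(Y, band_size=8):
    """Split an MDCT vector (numpy array) into uniform bands, compute the
    per-band gains (spectrum envelope) and the gain-normalized residual
    shapes.  Illustrative only; a real codec quantizes gains and shapes."""
    num_bands = len(Y) // band_size
    gains = np.zeros(num_bands)
    shapes = np.zeros(num_bands * band_size)
    for j in range(num_bands):
        band = Y[j * band_size:(j + 1) * band_size]
        # RMS gain of band j (assumed form of the "energy or gain" G(j))
        gains[j] = np.sqrt(np.mean(band ** 2)) + 1e-12
        # residual shape: the band scaled by its envelope gain
        shapes[j * band_size:(j + 1) * band_size] = band / gains[j]
    return gains, shapes
```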
The conventional transform encoding concept does not work well with very harmonic audio signals, e.g. recordings of single instruments. An example of such a harmonic spectrum is illustrated in the accompanying drawings.
An object of the proposed technology is a transform encoding/decoding scheme that is more suited for harmonic audio signals.
The proposed technology involves a method of encoding frequency transform coefficients of a harmonic audio signal. The method includes the steps of:

locating spectral peaks having magnitudes exceeding a predetermined frequency dependent threshold;

encoding peak regions including and surrounding the located peaks;

encoding at least one low-frequency set of coefficients outside the peak regions and below a crossover frequency that depends on the number of bits used to encode the peak regions;

encoding a noise-floor gain of at least one high-frequency set of not yet encoded coefficients outside the peak regions.
The proposed technology also involves an encoder for encoding frequency transform coefficients of a harmonic audio signal. The encoder includes:
a peak locator configured to locate spectral peaks having magnitudes exceeding a predetermined frequency dependent threshold;
a peak region encoder configured to encode peak regions including and surrounding the located peaks;
a low-frequency set encoder configured to encode at least one low-frequency set of coefficients outside the peak regions and below a crossover frequency that depends on the number of bits used to encode the peak regions;
a noise-floor gain encoder configured to encode a noise-floor gain of at least one high-frequency set of not yet encoded coefficients outside the peak regions.
The proposed technology also involves a user equipment (UE) including such an encoder.
The proposed technology also involves a method of reconstructing frequency transform coefficients of an encoded frequency transformed harmonic audio signal. The method includes the steps of:
decoding spectral peak regions of the encoded frequency transformed harmonic audio signal;
decoding at least one low-frequency set of coefficients;
distributing coefficients of each low-frequency set outside the peak regions;
decoding a noise-floor gain of at least one high-frequency set of coefficients outside of the peak regions;
filling each high-frequency set with noise having the corresponding noise-floor gain.
The proposed technology also involves a decoder for reconstructing frequency transform coefficients of an encoded frequency transformed harmonic audio signal. The decoder includes:

a peak region decoder configured to decode spectral peak regions of the encoded frequency transformed harmonic audio signal;

a low-frequency set decoder configured to decode at least one low-frequency set of coefficients;

a coefficient distributor configured to distribute coefficients of each low-frequency set outside the peak regions;

a noise-floor gain decoder configured to decode a noise-floor gain of at least one high-frequency set of coefficients outside the peak regions;

a noise filler configured to fill each high-frequency set with noise having the corresponding noise-floor gain.
The proposed technology also involves a user equipment (UE) including such a decoder.
The proposed harmonic audio encoding/decoding scheme provides better perceptual quality than conventional coding schemes for a large class of harmonic audio signals.
The present technology, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
The proposed technology provides an alternative audio encoding model that handles harmonic audio signals better. The main concept is that the frequency transform vector, for example an MDCT vector, is not split into an envelope part and a residual part; instead, spectral peaks are directly extracted and quantized together with neighboring MDCT bins. At high frequencies, low-energy coefficients outside the peak neighborhoods are not coded, but are noise-filled at the decoder. In other words, the signal model used in conventional encoding, {spectrum envelope + residual}, is replaced with a new model, {spectral peaks + noise-floor}. At low frequencies, coefficients outside the peak neighborhoods are still coded, since they have an important perceptual role.
Encoder
The major steps on the encoder side are the following.
First the noise-floor is estimated, then the spectral peaks are extracted by a peak-picking algorithm (the corresponding algorithms are described in more detail in APPENDICES I-II). Each peak and its 4 surrounding neighbors are normalized to unit energy at the peak position.
In the above example each peak region includes 4 neighbors that symmetrically surround the peak. However, it is also feasible to have fewer or more neighbors surrounding the peak, in either a symmetrical or an asymmetrical fashion.
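The peak-region step can be sketched as follows in Python; the normalization by the peak magnitude is an assumed reading of "unit energy at the peak position", and the quantization of the extracted regions is omitted.

```python
import numpy as np

def extract_peak_regions(Y, peak_positions, neighbors_per_side=2):
    """For each located peak, take the peak and its symmetric neighbors
    (2 per side by default, i.e. 4 neighbors) and normalize the region by
    the peak magnitude so that the peak position has unit amplitude."""
    regions = []
    for p in peak_positions:
        lo = max(p - neighbors_per_side, 0)
        hi = min(p + neighbors_per_side + 1, len(Y))
        peak_gain = abs(Y[p]) + 1e-12
        regions.append((p, peak_gain, Y[lo:hi] / peak_gain))
    return regions
```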
After the peak regions have been quantized, all available remaining bits (except bits reserved for noise-floor coding, see below) are used to quantize the low-frequency MDCT coefficients. This is done by grouping the remaining unquantized MDCT coefficients into, for example, 24-dimensional bands starting from the first bin. Thus, these bands will cover the lowest frequencies up to a certain crossover frequency. Coefficients that have already been quantized in the peak coding are not included, so the bands are not necessarily made up of 24 consecutive coefficients. For this reason the bands will also be referred to as “sets” below.
The total number of LF bands or sets depends on the number of available bits, but there are always enough bits reserved to create at least one set. When more bits are available, the first set is assigned more bits until a threshold for the maximum number of bits per set is reached. If there are still bits available, another set is created and bits are assigned to it until the threshold is reached. This procedure is repeated until all available bits have been spent. This means that the crossover frequency at which the process stops is frame dependent, since the number of peaks varies from frame to frame. The crossover frequency is thus determined by the number of bits that remain for LF encoding once the peak regions have been encoded.
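The greedy formation of LF sets may be sketched as follows; the per-set bit cap and the helper name are placeholders, not values taken from the codec.

```python
def form_lf_sets(unquantized_lf_bins, available_bits,
                 set_size=24, max_bits_per_set=80):
    """Group the lowest-frequency bins that were not consumed by the peak
    coding into sets of `set_size` coefficients, assigning bits to one set
    at a time until the budget is spent.  Returns (sets_of_bins, bits_per_set).
    The per-set bit cap of 80 is a placeholder value."""
    sets, bits = [], []
    pos = 0
    while available_bits > 0 and pos < len(unquantized_lf_bins):
        sets.append(unquantized_lf_bins[pos:pos + set_size])
        assigned = min(available_bits, max_bits_per_set)
        bits.append(assigned)
        available_bits -= assigned
        pos += set_size
    return sets, bits
```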
Quantization of the LF sets can be done with any suitable vector quantization scheme, but typically some type of gain-shape encoding is used. For example, factorial pulse coding may be used for the shape vector, and a scalar quantizer may be used for the gain.
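A gain-shape quantizer for one LF set might look like the sketch below; the log-domain scalar gain quantizer and the rounded-pulse shape code are simplified stand-ins for the scalar quantizer and the factorial pulse coding mentioned above.

```python
import numpy as np

def gain_shape_quantize(x, gain_step_db=1.0, num_pulses=8):
    """Gain-shape quantization of one LF set: a scalar quantizer for the
    gain (log domain) and a rounded-pulse code as a simplified stand-in
    for factorial pulse coding of the shape."""
    gain = np.linalg.norm(x) + 1e-12
    shape = x / gain
    gain_index = int(round(20.0 * np.log10(gain) / gain_step_db))
    # integer pulse pattern whose magnitudes roughly sum to num_pulses
    pulses = np.rint(shape * num_pulses / (np.sum(np.abs(shape)) + 1e-12))
    return gain_index, pulses

def gain_shape_dequantize(gain_index, pulses, gain_step_db=1.0):
    """Reconstruct the set from the gain index and the pulse pattern."""
    gain = 10.0 ** (gain_index * gain_step_db / 20.0)
    norm = np.linalg.norm(pulses)
    return gain * pulses / norm if norm > 0 else np.zeros_like(pulses)
```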
A certain number of bits are always reserved for encoding a noise-floor gain of at least one high-frequency band of coefficients outside the peak regions and above the upper frequency of the LF bands. Preferably two gains are used for this purpose. These gains may be obtained from the noise-floor algorithm described in APPENDIX I. If factorial pulse coding is used for encoding the low-frequency bands, some LF coefficients may not be encoded. These coefficients can instead be included in the high-frequency band encoding. As in the case of the LF bands, the HF bands are not necessarily made up of consecutive coefficients. For this reason these bands will also be referred to as “sets” below.
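The reserved noise-floor gains could, for example, be computed as sketched below, assuming one gain for the lower and one for the upper half of the remaining high-frequency coefficients; the RMS form is an assumption, and the gains could equally be taken from the noise-floor estimator of APPENDIX I.

```python
import numpy as np

def hf_noise_floor_gains(Y, uncoded_hf_bins):
    """Compute two noise-floor gains (RMS levels) for the not-yet-coded
    high-frequency coefficients: one for the lower and one for the upper
    half of the remaining bins."""
    bins = np.asarray(sorted(uncoded_hf_bins), dtype=int)
    half = len(bins) // 2
    lower, upper = Y[bins[:half]], Y[bins[half:]]
    g_low = float(np.sqrt(np.mean(lower ** 2))) if lower.size else 0.0
    g_high = float(np.sqrt(np.mean(upper ** 2))) if upper.size else 0.0
    return g_low, g_high
```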
If applicable, the spectrum envelope for a bandwidth extension (BWE) region is also encoded and transmitted. The number of bands and the transition frequency where the BWE starts are bitrate dependent; for example, the transition frequency is 5.6 kHz at 24 kbps and 6.4 kHz at 32 kbps.
Decoder
The major steps on the decoder side are the following.
The audio decoder extracts, from the bit-stream, the number of peak regions and the quantization indices {I_position, I_gain, I_sign, I_shape} in order to reconstruct the coded peak regions. These quantization indices contain information about the spectral peak position, the gain and sign of the peak, as well as the index of the codebook vector that provides the best match for the peak neighborhood.
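Reconstruction of one peak region from its indices can be sketched as follows; the dequantization tables, the sign convention and the codebook layout are illustrative assumptions rather than the actual bit-stream format.

```python
import numpy as np

def reconstruct_peak_region(Y_hat, i_position, i_gain, i_sign, i_shape,
                            gain_table, shape_codebook):
    """Place one decoded peak region into the MDCT vector Y_hat.
    shape_codebook is assumed to hold 4-dimensional neighbor vectors
    (2 bins on each side of the peak); gain_table and the sign convention
    are assumed dequantization tables, shown only for illustration."""
    gain = gain_table[i_gain] * (1.0 if i_sign == 0 else -1.0)
    n = shape_codebook[i_shape]        # [p-2, p-1, p+1, p+2] relative to the peak
    region = gain * np.array([n[0], n[1], 1.0, n[2], n[3]])
    Y_hat[i_position - 2:i_position + 3] = region   # assumes the region lies inside the vector
    return Y_hat
```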
The MDCT low-frequency coefficients outside the peak regions are reconstructed from the encoded LF coefficients.
The MDCT high-frequency coefficients outside the peak regions are noise-filled at the decoder. The noise-floor level is received by the decoder, preferably in the form of two coded noise-floor gains (one for the lower and one for the upper half or part of the vector).
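The noise filling of the remaining high-frequency bins might be sketched as follows, assuming that the two decoded gains apply to the lower and upper halves of the un-coded high-frequency region.

```python
import numpy as np

def noise_fill(Y_hat, uncoded_hf_bins, g_low, g_high, seed=0):
    """Fill the not-yet-reconstructed high-frequency bins with pseudo-random
    noise scaled to the two decoded noise-floor gains (RMS levels)."""
    rng = np.random.default_rng(seed)
    bins = np.asarray(sorted(uncoded_hf_bins), dtype=int)
    half = len(bins) // 2
    Y_hat[bins[:half]] = g_low * rng.standard_normal(half)
    Y_hat[bins[half:]] = g_high * rng.standard_normal(len(bins) - half)
    return Y_hat
```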
If applicable, the audio decoder performs a BWE from a pre-defined transition frequency with the received envelope gains for HF MDCT coefficients.
In an example embodiment the decoding of a low-frequency set is based on a gain-shape decoding scheme.
In an example embodiment the gain-shape decoding scheme is based on scalar gain decoding and factorial pulse shape decoding.
An example embodiment includes the step of decoding a noise-floor gain for each of two high-frequency sets.
The steps, functions, procedures and/or blocks described herein may be implemented in hardware using any conventional technology, such as discrete circuit or integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
Alternatively, at least some of the steps, functions, procedures and/or blocks described herein may be implemented in software for execution by suitable processing equipment. This equipment may include, for example, one or several microprocessors, one or several Digital Signal Processors (DSP), one or several Application Specific Integrated Circuits (ASIC), video accelerated hardware or one or several suitable programmable logic devices, such as Field Programmable Gate Arrays (FPGA). Combinations of such processing elements are also feasible.
It should also be understood that it may be possible to reuse the general processing capabilities already present in the encoder/decoder. This may, for example, be done by reprogramming of the existing software or by adding new software components.
The technology described above is intended to be used in an audio encoder/decoder, which can be used in a mobile device (e.g. mobile phone, laptop) or a stationary device, such as a personal computer. Here the term User Equipment (UE) will be used as a generic name for such devices.
The decision of the harmonic signal detector 78 is based on the noise-floor energy Ē_nf and the peak energy Ē_p defined in APPENDICES I and II. The logic is as follows:

IF Ē_p/Ē_nf is above a threshold AND the number of detected peaks is in a predefined range, THEN the signal is classified as harmonic. Otherwise the signal is classified as non-harmonic. The classification, and thus the encoding mode, is explicitly signaled to the decoder.
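The detector logic amounts to a simple test, sketched below; the threshold and the peak-count range are placeholder values, not values from the text.

```python
def is_harmonic(peak_energy, noise_floor_energy, num_peaks,
                ratio_threshold=10.0, peak_count_range=(5, 40)):
    """Classify a frame as harmonic if the peak-to-noise-floor energy ratio
    exceeds a threshold and the number of detected peaks lies in a
    predefined range.  The numeric values here are placeholders."""
    ratio_ok = peak_energy / max(noise_floor_energy, 1e-12) > ratio_threshold
    count_ok = peak_count_range[0] <= num_peaks <= peak_count_range[1]
    return ratio_ok and count_ok
```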
Specific implementation details for a 24 kbps mode are given below.
This variability in how many bits are used in the different stages of the coding poses no problem, since the low-frequency band coding comes last and simply uses whatever bits remain. However, the system is designed so that enough bits always remain to encode at least one low-frequency band.
The table below presents results from a listening test performed in accordance with the procedure described in ITU-R BS.1534-1 MUSHRA (Multiple Stimuli with Hidden Reference and Anchor). The scale in a MUSHRA test is 0 to 100, where low values correspond to low perceived quality, and high values correspond to high quality. Both codecs operated at 24 kbps. Test results are averaged over 24 music items and votes from 8 listeners.
It will be understood by those skilled in the art that various modifications and changes may be made to the proposed technology without departure from the scope thereof, which is defined by the appended claims.
The noise-floor estimation algorithm operates on the absolute values of transform coefficients |Y(k)|. Instantaneous noise-floor energies E_nf(k) are estimated according to the recursion:
The particular form of the weighting factor α minimizes the effect of high-energy transform coefficients and emphasizes the contribution of low-energy coefficients. Finally, the noise-floor level Ē_nf is estimated by simply averaging the instantaneous energies E_nf(k).
The peak-picking algorithm requires knowledge of the noise-floor level and the average level of the spectral peaks. The peak energy estimation algorithm is similar to the noise-floor estimation algorithm, but instead of low energies it tracks high spectral energies:
In this case the weighting factor β minimizes the effect of low-energy transform coefficients and emphasizes the contribution of high-energy coefficients. The overall peak energy Ē_p is estimated by simply averaging the instantaneous energies.
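Since the exact recursions and weighting factors are not reproduced here, the sketch below uses an assumed asymmetric first-order smoothing that captures the described behavior of the noise-floor and peak-energy trackers; it is an illustration, not the codec's estimator.

```python
import numpy as np

def track_levels(Y_abs, fast=0.64, slow=0.96):
    """Asymmetric first-order smoothing of the coefficient energies |Y(k)|^2.
    The noise-floor tracker adapts quickly toward low energies and slowly
    toward high energies; the peak tracker does the opposite.  The smoothing
    constants and the exact recursion form are assumptions, not the codec's."""
    e = np.asarray(Y_abs, dtype=float) ** 2
    e_nf = np.empty_like(e)
    e_p = np.empty_like(e)
    e_nf[0] = e_p[0] = e[0]
    for k in range(1, len(e)):
        a = fast if e[k] < e_nf[k - 1] else slow   # emphasize low energies
        e_nf[k] = a * e_nf[k - 1] + (1.0 - a) * e[k]
        b = fast if e[k] > e_p[k - 1] else slow    # emphasize high energies
        e_p[k] = b * e_p[k - 1] + (1.0 - b) * e[k]
    # overall levels: simple averages of the instantaneous energies
    return float(e_nf.mean()), float(e_p.mean())
```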
When the peak and noise-floor levels are calculated, a threshold level θ is formed as:
with γ=0.88579. Transform coefficients are compared to the threshold, and those with amplitudes above it form a vector of peak candidates. Since natural sources do not typically produce peaks that are very close to each other, e.g. within 80 Hz, the vector of peak candidates is further refined. Vector elements are extracted in decreasing order, and the neighborhood of each extracted element is set to zero. In this way only the largest element in a given spectral region remains, and the set of these elements forms the spectral peaks for the current frame.
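The candidate refinement described above can be sketched as follows; the neighborhood width in bins, corresponding to roughly 80 Hz, is an assumption that depends on the transform resolution.

```python
import numpy as np

def pick_peaks(Y_abs, threshold, min_distance_bins=4):
    """Select candidates above the threshold, then keep only the largest
    candidate within each neighborhood: candidates are taken in decreasing
    magnitude order and the other candidates in their neighborhoods are
    discarded."""
    candidates = list(np.where(np.asarray(Y_abs) > threshold)[0])
    peaks = []
    while candidates:
        peak_bin = max(candidates, key=lambda k: Y_abs[k])
        peaks.append(int(peak_bin))
        candidates = [k for k in candidates
                      if abs(k - peak_bin) > min_distance_bins]
    return sorted(peaks)
```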
This application is a continuation of U.S. application Ser. No. 16/737,451 filed on 8 Jan. 2020, which is a continuation of U.S. application Ser. No. 15/228,395 filed on 4 Aug. 2016, now issued as U.S. Pat. No. 10,566,003, which is a continuation of U.S. application Ser. No. 14/387,367 filed on 23 Sep. 2014, now issued as U.S. Pat. No. 9,437,204, which is a U.S. National Phase Application of PCT/SE2012/051177 filed on 30 Oct. 2012, which claims benefit of Provisional Application No. 61/617,216 filed on 29 Mar. 2012. The entire contents of each aforementioned application are incorporated herein by reference.