To improve the quality of coded tonal signals, especially at low bit-rates, modern audio transform coders employ very long transforms and/or long-term prediction or pre-/post-filtering. A long transform, however, implies a long algorithmic delay, which is undesirable for low-delay communication scenarios. Hence, predictors with very low delay based on the instantaneous fundamental pitch have gained popularity recently. The IETF (Internet Engineering Task Force) Opus codec utilizes pitch-adaptive pre- and postfiltering in its frequency-domain CELT (Constrained-Energy Lapped Transform) coding path [J. M. Valin, K. Vos, and T. Terriberry, “Definition of the Opus audio codec,” 2012, IETF RFC 6716. http://tools.ietf.org/html/rfc6716.], and the 3GPP (3rd Generation Partnership Project) EVS (Enhanced Voice Services) codec provides a long-term harmonic post-filter for perceptual improvement of transform-decoded signals [3GPP TS 26.443, “Codec for Enhanced Voice Services (EVS),” Release 12, December 2014.]. Both of these approaches operate in the time domain on the fully decoded signal waveform, making it difficult and/or computationally expensive to apply them frequency-selectively (both schemes only offer a simple low-pass filter for some frequency selectivity). A welcome alternative to time-domain long-term prediction (LTP) or pre-/post-filtering (PPF) is thus provided by frequency-domain prediction (FDP), as supported in MPEG-2 AAC [ISO/IEC 13818-7, “Information technology—Part 7: Advanced Audio Coding (AAC),” 2006.]. This method, although facilitating frequency selectivity, has its own disadvantages, as described hereafter.
The FDP method introduced above has two drawbacks over the other tools. First, the FDP method involves high computational complexity. In detail, linear predictive coding of at least order two (i.e., operating on the transform bins of the last two frames of a channel) is applied to hundreds of spectral bins per frame and channel in the worst case of prediction in all scale factor bands [ISO/IEC 13818-7, “Information technology—Part 7: Advanced Audio Coding (AAC),” 2006.]. Second, the FDP method offers only a limited overall prediction gain. More precisely, the efficiency of the prediction is limited because noisy components between the predictable harmonic, tonal spectral parts are subjected to the prediction as well, introducing errors since these noisy parts are typically not predictable.
The high complexity is due to the backward adaptivity of the predictors. This means that the prediction coefficients for each bin have to be calculated based on previously transmitted bins. Therefore, numerical inaccuracies between encoder and decoder can lead to reconstruction errors due to diverging prediction coefficients. To overcome this problem, bit exact identical adaptation has to be guaranteed. Furthermore, even if groups of predictors are disabled in certain frames, the adaptation has to be performed in order to keep the prediction coefficients up to date.
According to an embodiment, an encoder for encoding an audio signal may be configured to encode the audio signal in a transform domain or filter-bank domain, wherein the encoder is configured to determine spectral coefficients of the audio signal for a current frame and at least one previous frame, wherein the encoder is configured to selectively apply predictive encoding to a plurality of individual spectral coefficients or groups of spectral coefficients, wherein the encoder is configured to determine a spacing value, wherein the encoder is configured to select the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive encoding is applied based on the spacing value.
According to another embodiment, a decoder for decoding an encoded audio signal may be configured to decode the encoded audio signal in a transform domain or filter-bank domain, wherein the decoder is configured to parse the encoded audio signal to acquire encoded spectral coefficients of the audio signal for a current frame and at least one previous frame, and wherein the decoder is configured to selectively apply predictive decoding to a plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients, wherein the decoder is configured to acquire a spacing value, wherein the decoder is configured to select the plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients to which predictive decoding is applied based on the spacing value.
According to another embodiment, a method for encoding an audio signal in a transform domain or filter-bank domain may have the following steps: determining spectral coefficients of the audio signal for a current frame and at least one previous frame; determining a spacing value; and selectively applying predictive encoding to a plurality of individual spectral coefficients or groups of spectral coefficients, wherein the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive encoding is applied are selected based on the spacing value.
According to another embodiment, a method for decoding an encoded audio signal in a transform domain or filter-bank domain may have the following steps: parsing the encoded audio signal to acquire encoded spectral coefficients of the audio signal for a current frame and at least one previous frame; acquiring a spacing value; and selectively applying predictive decoding to a plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients, wherein the plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients to which predictive decoding is applied are selected based on the spacing value.
According to another embodiment, a non-transitory digital storage medium having a computer program stored thereon to perform the method for encoding an audio signal in a transform domain or filter-bank domain, the method having the following steps: determining spectral coefficients of the audio signal for a current frame and at least one previous frame; determining a spacing value; and selectively applying predictive encoding to a plurality of individual spectral coefficients or groups of spectral coefficients, wherein the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive encoding is applied are selected based on the spacing value, when said computer program is run by a computer.
According to another embodiment, a non-transitory digital storage medium having a computer program stored thereon to perform the method for decoding an encoded audio signal in a transform domain or filter-bank domain, the method having the following steps: parsing the encoded audio signal to acquire encoded spectral coefficients of the audio signal for a current frame and at least one previous frame; acquiring a spacing value; and selectively applying predictive decoding to a plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients, wherein the plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients to which predictive decoding is applied are selected based on the spacing value, when said computer program is run by a computer.
According to another embodiment, an encoder for encoding an audio signal may be configured to encode the audio signal in a transform domain or filter-bank domain, wherein the encoder is configured to determine spectral coefficients of the audio signal for a current frame and at least one previous frame, wherein the encoder is configured to selectively apply predictive encoding to a plurality of individual spectral coefficients or groups of spectral coefficients, wherein the encoder is configured to determine a spacing value, wherein the encoder is configured to select the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive encoding is applied based on the spacing value; wherein the encoder is configured to select individual spectral coefficients or groups of spectral coefficients spectrally arranged according to a harmonic grid defined by the spacing value for a predictive encoding.
According to another embodiment, a decoder for decoding an encoded audio signal may be configured to decode the encoded audio signal in a transform domain or filter-bank domain, wherein the decoder is configured to parse the encoded audio signal to acquire encoded spectral coefficients of the audio signal for a current frame and at least one previous frame, and wherein the decoder is configured to selectively apply predictive decoding to a plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients, wherein the decoder is configured to acquire a spacing value, wherein the decoder is configured to select the plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients to which predictive decoding is applied based on the spacing value; wherein the decoder is configured to select individual spectral coefficients or groups of spectral coefficients spectrally arranged according to a harmonic grid defined by the spacing value for a predictive decoding.
Embodiments provide an encoder for encoding an audio signal. The encoder is configured to encode the audio signal in a transform domain or filter-bank domain, wherein the encoder is configured to determine spectral coefficients of the audio signal for a current frame and at least one previous frame, wherein the encoder is configured to selectively apply predictive encoding to a plurality of individual spectral coefficients or groups of spectral coefficients, wherein the encoder is configured to determine a spacing value, wherein the encoder is configured to select the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive encoding is applied based on the spacing value which may be transmitted as side information with the encoded audio signal.
Further embodiments provide a decoder for decoding an encoded audio signal (e.g., encoded with the above described encoder). The decoder is configured to decode the encoded audio signal in a transform domain or filter-bank domain, wherein the decoder is configured to parse the encoded audio signal to obtain encoded spectral coefficients of the audio signal for a current frame and at least one previous frame, and wherein the decoder is configured to selectively apply predictive decoding to a plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients, wherein the decoder may be configured to select the plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients to which predictive decoding is applied based on a transmitted spacing value.
According to the concept of the present invention, predictive encoding is (only) applied to selected spectral coefficients. The spectral coefficients to which predictive encoding is applied can be selected in dependence on signal characteristics. For example, by not applying predictive encoding to noisy signal components the aforementioned errors introduced by predicting non-predictable noisy signal components are avoided. At the same time computational complexity can be reduced since predictive encoding is only applied to selected spectral components.
For example, perceptual coding of tonal audio signals can be performed (e.g., by the encoder) by means of transform coding with guided/adaptive spectral-domain inter-frame prediction methods. The efficiency of frequency domain prediction (FDP) can be increased, and the computational complexity can be reduced, by applying the prediction only to spectral coefficients, for example, around harmonic signal components located at integer multiples of a fundamental frequency or pitch, which can be signaled in an appropriate bit-stream from an encoder to a decoder, e.g. as a spacing value. Embodiments of the present invention can be advantageously implemented or integrated into the MPEG-H 3D audio codec, but are applicable to any audio transform coding system, such as, e.g., MPEG-2 AAC.
Further embodiments provide a method for encoding an audio signal in a transform domain or filter-bank domain, the method comprising: determining spectral coefficients of the audio signal for a current frame and at least one previous frame; determining a spacing value; and selectively applying predictive encoding to a plurality of individual spectral coefficients or groups of spectral coefficients, wherein the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive encoding is applied are selected based on the spacing value.
Further embodiments provide a method for decoding an encoded audio signal in a transform domain or filter-bank domain, the method comprising: parsing the encoded audio signal to acquire encoded spectral coefficients of the audio signal for a current frame and at least one previous frame; acquiring a spacing value; and selectively applying predictive decoding to a plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients, wherein the plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients to which predictive decoding is applied are selected based on the spacing value.
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals.
In the following description, a plurality of details are set forth to provide a more thorough explanation of embodiments of the present invention. However, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring embodiments of the present invention. In addition, features of the different embodiments described hereinafter may be combined with each other unless specifically noted otherwise.
In other words, the encoder 100 is configured to selectively apply predictive encoding to a plurality of individual spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 selected based on a single spacing value transmitted as side information.
This spacing value may correspond to a frequency (e.g. a fundamental frequency of a harmonic tone (of the audio signal 102)), which defines together with its integer multiples the centers of all groups of spectral coefficients for which prediction is applied: The first group can be centered around this frequency, the second group can be centered around this frequency multiplied by two, the third group can be centered around this frequency multiplied by three, and so on. The knowledge of these center frequencies enables the calculation of prediction coefficients for predicting corresponding sinusoidal signal components (e.g. fundamental and overtones of harmonic signals). Thus, complicated and error prone backward adaptation of prediction coefficients is no longer needed.
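As a rough illustration of this harmonic grid, the following Python sketch (function and parameter names are illustrative assumptions, not part of any codec standard) lists the group center frequencies as integer multiples of the signaled fundamental:

```python
def harmonic_group_centers(f0_hz, max_freq_hz):
    """Center frequencies of the predicted groups: the signaled
    fundamental f0 and its integer multiples, up to a band limit."""
    centers = []
    k = 1
    while k * f0_hz <= max_freq_hz:
        centers.append(k * f0_hz)
        k += 1
    return centers
```

For a fundamental of 440 Hz and a 2 kHz band limit, this yields group centers at 440, 880, 1320, and 1760 Hz; knowledge of these centers is what makes forward-adaptive calculation of the prediction coefficients possible.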
In embodiments, the encoder 100 can be configured to determine one spacing value per frame.
In embodiments, the plurality of individual spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 can be separated by at least one spectral coefficient 106_t0_f3.
In embodiments, the encoder 100 can be configured to apply the predictive encoding to a plurality of individual spectral coefficients which are separated by at least one spectral coefficient, such as to two individual spectral coefficients which are separated by at least one spectral coefficient. Further, the encoder 100 can be configured to apply the predictive encoding to a plurality of groups of spectral coefficients (each of the groups comprising at least two spectral coefficients) which are separated by at least one spectral coefficient, such as to two groups of spectral coefficients which are separated by at least one spectral coefficient. Further, the encoder 100 can be configured to apply the predictive encoding to a plurality of individual spectral coefficients and/or groups of spectral coefficients which are separated by at least one spectral coefficient, such as to at least one individual spectral coefficient and at least one group of spectral coefficients which are separated by at least one spectral coefficient.
In the example shown in
Note that the term “selectively” as used herein refers to applying predictive encoding (only) to selected spectral coefficients. In other words, predictive encoding is not necessarily applied to all spectral coefficients, but rather only to selected individual spectral coefficients or groups of spectral coefficients, the selected individual spectral coefficients and/or groups of spectral coefficients which can be separated from each other by at least one spectral coefficient. In other words, predictive encoding can be disabled for at least one spectral coefficient by which the selected plurality of individual spectral coefficients or groups of spectral coefficients are separated.
In embodiments, the encoder 100 can be configured to selectively apply predictive encoding to a plurality of individual spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 of the current frame 108_t0 based on at least a corresponding plurality of individual spectral coefficients 106_t-1_f2 or groups of spectral coefficients 106_t-1_f4 and 106_t-1_f5 of the previous frame 108_t-1.
For example, the encoder 100 can be configured to predictively encode the plurality of individual spectral coefficients 106_t0_f2 or the groups of spectral coefficients 106_t0_f4 and 106_t0_f5 of the current frame 108_t0, by coding prediction errors between a plurality of predicted individual spectral coefficients 110_t0_f2 or groups of predicted spectral coefficients 110_t0_f4 and 110_t0_f5 of the current frame 108_t0 and the plurality of individual spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 of the current frame (or quantized versions thereof).
In
In other words, the second spectral coefficient 106_t0_f2 is coded by coding the prediction error (or difference) between the predicted second spectral coefficient 110_t0_f2 and the (actual or determined) second spectral coefficient 106_t0_f2, wherein the fourth spectral coefficient 106_t0_f4 is coded by coding the prediction error (or difference) between the predicted fourth spectral coefficient 110_t0_f4 and the (actual or determined) fourth spectral coefficient 106_t0_f4, and wherein the fifth spectral coefficient 106_t0_f5 is coded by coding the prediction error (or difference) between the predicted fifth spectral coefficient 110_t0_f5 and the (actual or determined) fifth spectral coefficient 106_t0_f5.
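The per-coefficient error coding described above can be sketched as follows (a minimal Python illustration with hypothetical helper names): the encoder transmits the difference per selected bin, and the decoder adds it back onto its own prediction:

```python
def code_prediction_errors(actual, predicted):
    """Encoder side: the prediction error (difference) per selected bin
    is what gets quantized and transmitted instead of the bin itself."""
    return [a - p for a, p in zip(actual, predicted)]

def reconstruct_from_errors(predicted, errors):
    """Decoder side: add the transmitted errors back onto the
    decoder's own predictions to recover the coefficients."""
    return [p + e for p, e in zip(predicted, errors)]
```

Since the prediction errors of well-predicted harmonic bins are small, they cost fewer bits than the coefficients themselves.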
In an embodiment, the encoder 100 can be configured to determine the plurality of predicted individual spectral coefficients 110_t0_f2 or groups of predicted spectral coefficients 110_t0_f4 and 110_t0_f5 for the current frame 108_t0 by means of corresponding actual versions of the plurality of individual spectral coefficients 106_t-1_f2 or of the groups of spectral coefficients 106_t-1_f4 and 106_t-1_f5 of the previous frame 108_t-1.
In other words, in the above-described determination process, the encoder 100 may directly use the plurality of actual individual spectral coefficients 106_t-1_f2 or the groups of actual spectral coefficients 106_t-1_f4 and 106_t-1_f5 of the previous frame 108_t-1, where 106_t-1_f2, 106_t-1_f4 and 106_t-1_f5 represent the original, not yet quantized spectral coefficients or groups of spectral coefficients, respectively, as obtained by the encoder 100 operating in the transform domain or filter-bank domain 104.
For example, the encoder 100 can be configured to determine the second predicted spectral coefficient 110_t0_f2 of the current frame 108_t0 based on a corresponding not yet quantized version of the second spectral coefficient 106_t-1_f2 of the previous frame 108_t-1, the predicted fourth spectral coefficient 110_t0_f4 of the current frame 108_t0 based on a corresponding not yet quantized version of the fourth spectral coefficient 106_t-1_f4 of the previous frame 108_t-1, and the predicted fifth spectral coefficient 110_t0_f5 of the current frame 108_t0 based on a corresponding not yet quantized version of the fifth spectral coefficient 106_t-1_f5 of the previous frame.
By way of this approach, the predictive encoding and decoding scheme can exhibit a kind of harmonic shaping of the quantization noise, since a corresponding decoder, an embodiment of which is described later with respect to
While such harmonic noise shaping, as it is, for example, traditionally performed by long-term prediction (LTP) in the time domain, can be subjectively advantageous for predictive coding, in some cases it may be undesirable since it may lead to an unwanted, excessive amount of tonality introduced into a decoded audio signal. For this reason, an alternative predictive encoding scheme, which is fully synchronized with the corresponding decoding and, as such, only exploits any possible prediction gains but does not lead to quantization noise shaping, is described hereafter. According to this alternative encoding embodiment, the encoder 100 can be configured to determine the plurality of predicted individual spectral coefficients 110_t0_f2 or groups of predicted spectral coefficients 110_t0_f4 and 110_t0_f5 for the current frame 108_t0 using corresponding quantized versions of the plurality of individual spectral coefficients 106_t-1_f2 or the groups of spectral coefficients 106_t-1_f4 and 106_t-1_f5 of the previous frame 108_t-1.
For example, the encoder 100 can be configured to determine the second predicted spectral coefficient 110_t0_f2 of the current frame 108_t0 based on a corresponding quantized version of the second spectral coefficient 106_t-1_f2 of the previous frame 108_t-1, the predicted fourth spectral coefficient 110_t0_f4 of the current frame 108_t0 based on a corresponding quantized version of the fourth spectral coefficient 106_t-1_f4 of the previous frame 108_t-1, and the predicted fifth spectral coefficient 110_t0_f5 of the current frame 108_t0 based on a corresponding quantized version of the fifth spectral coefficient 106_t-1_f5 of the previous frame.
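A minimal sketch of this synchronized variant (Python; the order-1 predictor and the simple quantizer are illustrative assumptions, not the codec's actual ones): the encoder predicts from the quantized previous-frame bin and keeps the reconstructed value, so its state matches the decoder's bit-exactly:

```python
def encode_bin(actual, prev_quant, quantize):
    """Synchronized predictive coding of one spectral bin (sketch).

    The prediction uses the *quantized* previous-frame value, so the
    encoder tracks the decoder state exactly. An order-1 predictor
    stands in here for the codec's actual two-tap predictor."""
    predicted = prev_quant                  # illustrative order-1 prediction
    residual = actual - predicted           # prediction error
    q_residual = quantize(residual)         # only this enters the bit-stream
    reconstructed = predicted + q_residual  # value the decoder will also hold
    return q_residual, reconstructed
```

Because the reconstructed value, not the original, is carried forward, no quantization-noise shaping occurs and only the prediction gain is exploited, as described above.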
Further, the encoder 100 can be configured to derive prediction coefficients 112_f2, 114_f2, 112_f4, 114_f4, 112_f5 and 114_f5 from the spacing value, and to calculate the plurality of predicted individual spectral coefficients 110_t0_f2 or groups of predicted spectral coefficients 110_t0_f4 and 110_t0_f5 for the current frame 108_t0 using corresponding quantized versions of the plurality of individual spectral coefficients 106_t-1_f2 and 106_t-2_f2 or groups of spectral coefficients 106_t-1_f4, 106_t-2_f4, 106_t-1_f5, and 106_t-2_f5 of at least two previous frames 108_t-1 and 108_t-2 and using the derived prediction coefficients 112_f2, 114_f2, 112_f4, 114_f4, 112_f5 and 114_f5.
For example, the encoder 100 can be configured to derive prediction coefficients 112_f2 and 114_f2 for the second spectral coefficient 106_t0_f2 from the spacing value, to derive prediction coefficients 112_f4 and 114_f4 for the fourth spectral coefficient 106_t0_f4 from the spacing value, and to derive prediction coefficients 112_f5 and 114_f5 for the fifth spectral coefficient 106_t0_f5 from the spacing value.
For example, the prediction coefficients can be derived in the following way: if the spacing value corresponds to a frequency f0 (or a coded version thereof), the center frequency of the K-th group of spectral coefficients for which prediction is enabled is fc=K*f0. If the sampling frequency is fs and the transform hop size (shift between successive frames) is N, the ideal predictor coefficients in the K-th group, assuming a sinusoidal signal with frequency fc, are:
p1=2*cos(N*2*pi*fc/fs) and p2=−1.
If, for example, both spectral coefficients 106_t0_f4 and 106_t0_f5 are within this group, the prediction coefficients are:
112_f4=112_f5=2*cos(N*2*pi*fc/fs) and 114_f4=114_f5=−1.
For stability reasons, a damping factor d can be introduced, leading to modified prediction coefficients:
112_f4′=112_f5′=d*2*cos(N*2*pi*fc/fs) and 114_f4′=114_f5′=−d^2.
Since the spacing value is transmitted in the coded audio signal 120, the decoder can derive exactly the same prediction coefficients 212_f4=212_f5=2*cos(N*2*pi*fc/fs) and 214_f4=214_f5=−1. If a damping factor is used, the coefficients can be modified accordingly.
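The coefficient derivation above can be sketched in Python (variable names are illustrative); for d=1 a sinusoid sampled at the frame hop satisfies the two-tap recursion x[n]=p1*x[n−1]+p2*x[n−2] exactly:

```python
import math

def predictor_coeffs(K, f0, fs, N, d=1.0):
    """Two-tap prediction coefficients for the K-th harmonic group,
    assuming a sinusoid at fc = K * f0; d < 1 adds damping."""
    fc = K * f0
    p1 = d * 2.0 * math.cos(N * 2.0 * math.pi * fc / fs)
    p2 = -d * d  # reduces to -1 for d = 1
    return p1, p2
```

Because both sides derive p1 and p2 from the transmitted spacing value alone, no backward adaptation of coefficients (and hence no risk of encoder/decoder divergence) is involved.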
As indicated in
Thus, the encoder 100 may only use the prediction coefficients 112_f2 to 114_f5 for calculating the plurality of predicted individual spectral coefficients 110_t0_f2 or groups of predicted spectral coefficients 110_t0_f4 and 110_t0_f5, and therefrom the prediction errors between the predicted individual spectral coefficient 110_t0_f2 or group of predicted spectral coefficients 110_t0_f4 and 110_t0_f5 and the individual spectral coefficient 106_t0_f2 or group of spectral coefficients 106_t0_f4 and 106_t0_f5 of the current frame, but will provide neither the individual spectral coefficient 106_t0_f2 (or a quantized version thereof) nor the groups of spectral coefficients 106_t0_f4 and 106_t0_f5 (or quantized versions thereof) nor the prediction coefficients 112_f2 to 114_f5 in the encoded audio signal 120. Hence, a decoder, an embodiment of which is described later with respect to
In other words, the encoder 100 can be configured to provide the encoded audio signal 120 including quantized versions of the prediction errors instead of quantized versions of the plurality of individual spectral coefficients 106_t0_f2 or of the groups of spectral coefficients 106_t0_f4 and 106_t0_f5 for the plurality of individual spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 to which predictive encoding is applied.
Further, the encoder 100 can be configured to provide the encoded audio signal 120 including quantized versions of the spectral coefficients 106_t0_f3 by which the plurality of individual spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 are separated, such that there is an alternation of spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 for which quantized versions of the prediction errors are included in the encoded audio signal 120 and spectral coefficients 106_t0_f3 or groups of spectral coefficients for which quantized versions are provided without using predictive encoding.
In embodiments, the encoder 100 can be further configured to entropy encode the quantized versions of the prediction errors and the quantized versions of the spectral coefficients 106_t0_f3 by which the plurality of individual spectral coefficients 106_t0_f2 or groups of spectral coefficients 106_t0_f4 and 106_t0_f5 are separated, and to include the entropy encoded versions in the encoded audio signal 120 (instead of the non-entropy encoded versions thereof).
As shown in
In other words, as indicated in
In embodiments, the encoder 100 can be configured to determine a spacing value (indicated in
The spacing value can be, for example, a spacing (or distance) between two characteristic frequencies of the audio signal 102, such as the peaks 124_1 and 124_2 of the audio signal. Further, the spacing value can be an integer number of spectral coefficients (or indices of spectral coefficients) approximating the spacing between the two characteristic frequencies of the audio signal. Naturally, the spacing value can also be a real number, or a fraction or multiple of the integer number of spectral coefficients describing the spacing between the two characteristic frequencies of the audio signal.
In embodiments, the encoder 100 can be configured to determine an instantaneous fundamental frequency of the audio signal (102) and to derive the spacing value from the instantaneous fundamental frequency or a fraction or a multiple thereof.
For example, the first peak 124_1 of the audio signal 102 can be an instantaneous fundamental frequency (or pitch, or first harmonic) of the audio signal 102. Therefore, the encoder 100 can be configured to determine the instantaneous fundamental frequency of the audio signal 102 and to derive the spacing value from the instantaneous fundamental frequency or a fraction or a multiple thereof. In that case, the spacing value can be an integer number (or a fraction, or a multiple thereof) of spectral coefficients approximating the spacing between the instantaneous fundamental frequency 124_1 and a second harmonic 124_2 of the audio signal 102.
Naturally, the audio signal 102 may comprise more than two harmonics. For example, the audio signal 102 shown in
In embodiments, the encoder 100 can be configured to select groups 116_1 to 116_6 of spectral coefficients (or individual spectral coefficients) spectrally arranged according to a harmonic grid defined by the spacing value for a predictive encoding. Thereby, the harmonic grid defined by the spacing value describes the periodic spectral distribution (equidistant spacing) of harmonics in the audio signal 102. In other words, the harmonic grid defined by the spacing value can be a sequence of spacing values describing the equidistant spacing of harmonics of the audio signal.
Further, the encoder 100 can be configured to select spectral coefficients (e.g. only those spectral coefficients), spectral indices of which are equal to or lie within a range (e.g. predetermined or variable) around a plurality of spectral indices derived on the basis of the spacing value, for a predictive encoding.
From the spacing value the indices (or numbers) of the spectral coefficients which represent the harmonics of the audio signal 102 can be derived. For example, assuming that a fourth spectral coefficient 106_t0_f4 represents the instantaneous fundamental frequency of the audio signal 102 and assuming that the spacing value is five, the spectral coefficient having the index nine can be derived on the basis of the spacing value. As can be seen in
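The index derivation can be sketched as follows (Python; the function name and the ±halfwidth environment parameter are illustrative assumptions). For the example above, a fundamental at bin 4 with a spacing value of 5 selects bins 4, 9, 14, and so on:

```python
def predicted_bin_indices(first_harmonic_bin, spacing, num_bins, halfwidth=1):
    """Indices of spectral coefficients selected for prediction: bins
    within +/- halfwidth around each harmonic position on the grid
    defined by the (possibly fractional) spacing value."""
    selected = set()
    pos = float(first_harmonic_bin)
    while round(pos) < num_bins:
        center = round(pos)  # nearest bin to the (fractional) position
        for b in range(center - halfwidth, center + halfwidth + 1):
            if 0 <= b < num_bins:
                selected.add(b)
        pos += spacing  # step along the harmonic grid
    return sorted(selected)
```

Accumulating the fractional position and rounding per harmonic is one way to realize the ±1-bin tolerance for non-integer spacing values discussed below; the actual mapping used by a codec may differ.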
Further, the encoder 100 can be configured to select the groups 116_1 to 116_6 of spectral coefficients (or plurality of individual spectral coefficients) to which predictive encoding is applied such that there is a periodic alternation, periodic with a tolerance of +/−1 spectral coefficient, between groups 116_1 to 116_6 of spectral coefficients (or the plurality of individual spectral coefficients) to which predictive encoding is applied and the spectral coefficients by which groups of spectral coefficients (or the plurality of individual spectral coefficients) to which predictive encoding is applied are separated. The tolerance of +/−1 spectral coefficient may be used when a distance between two harmonics of the audio signal 102 is not equal to an integer spacing value (integer with respect to indices or numbers of spectral coefficients) but rather to a fraction or multiple thereof. This can also be seen in
In other words, the audio signal 102 can comprise at least two harmonic signal components 124_1 to 124_6, wherein the encoder 100 can be configured to selectively apply predictive encoding to those plurality of groups 116_1 to 116_6 of spectral coefficients (or individual spectral coefficients) which represent the at least two harmonic signal components 124_1 to 124_6 or spectral environments around the at least two harmonic signal components 124_1 to 124_6 of the audio signal 102. The spectral environments around the at least two harmonic signal components 124_1 to 124_6 can be, for example, +/−1, 2, 3, 4 or 5 spectral components.
Thereby, the encoder 100 can be configured to not apply predictive encoding to those groups 118_1 to 118_5 of spectral coefficients (or plurality of individual spectral coefficients) which do not represent the at least two harmonic signal components 124_1 to 124_6 or spectral environments of the at least two harmonic signal components 124_1 to 124_6 of the audio signal 102. In other words, the encoder 100 can be configured to not apply predictive encoding to those plurality of groups 118_1 to 118_5 of spectral coefficients (or individual spectral coefficients) which belong to a non-tonal background noise between signal harmonics 124_1 to 124_6.
Further, the encoder 100 can be configured to determine a harmonic spacing value indicating a spectral spacing between the at least two harmonic signal components 124_1 to 124_6 of the audio signal 102, the harmonic spacing value indicating those plurality of individual spectral coefficients or groups of spectral coefficients which represent the at least two harmonic signal components 124_1 to 124_6 of the audio signal 102.
Furthermore, the encoder 100 can be configured to provide the encoded audio signal 120 such that the encoded audio signal 120 includes the spacing value (e.g., one spacing value per frame) or (alternatively) a parameter from which the spacing value can be directly derived.
Embodiments of the present invention address the abovementioned two issues of the FDP method by introducing a harmonic spacing value into the FDP process, signaled from the encoder (transmitter) 100 to a respective decoder (receiver) such that both can operate in a fully synchronized fashion. Said harmonic spacing value may serve as an indicator of an instantaneous fundamental frequency (or pitch) of one or more spectra associated with a frame to be coded and identifies which spectral bins (spectral coefficients) shall be predicted. More specifically, only those spectral coefficients around harmonic signal components located (in terms of their indexing) at integer multiples of the fundamental pitch (as defined by the harmonic spacing value) shall be subjected to the prediction.
Comparing
Note that the harmonic spacing value does not necessarily need to correspond to the actual instantaneous pitch of the input signal but that it could represent a fraction or multiple of the true pitch if this yields an overall improvement of the efficiency of the prediction process. In addition, it may be emphasized that the harmonic spacing value does not have to reflect an integer multiple of the bin indexing or bandwidth unit but may include a fraction of said units.
Subsequently, an advantageous implementation into an MPEG-style audio coder is described.
The pitch-adaptive prediction is advantageously integrated into the MPEG-2 AAC [ISO/IEC 13818-7, “Information technology—Part 7: Advanced Audio Coding (AAC),” 2006.] or, utilizing a similar predictor as in AAC, into the MPEG-H 3D audio codec [ISO/IEC 23008-3, “Information technology—High efficiency coding, part 3: 3D audio,” 2015.]. In particular, a one-bit flag can be written to, and read from, a respective bit-stream for each frame and channel which is not independently coded (for independent frame channels, the flag may not be transmitted since prediction can be disabled to ensure the independence). If the flag is set to one, another 8 bits can be written and read. These 8 bits represent a quantized version of (e.g. an index to) the harmonic spacing value for the given frame and channel. Employing the harmonic spacing value derived from the quantized version using either a linear or non-linear mapping function, the prediction process can be carried out in the manner according to an embodiment shown in
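The signalling just described (a one-bit flag, followed by 8 bits indexing the harmonic spacing value if the flag is set) can be sketched as follows. This is only an assumed example: the linear mapping with MIN_SPACING and STEP is illustrative, whereas the actual codec may use a different, possibly non-linear, mapping table.

```python
# Illustrative sketch of the per-frame/channel side information:
# one flag bit, plus an 8-bit quantized spacing index if the flag is set.
# MIN_SPACING and STEP are assumed example parameters of a linear mapping.

MIN_SPACING = 2.0   # assumed smallest transmittable spacing, in bins
STEP = 0.5          # assumed quantizer step, in bins

def encode_spacing(spacing):
    """Quantize a spacing value to an 8-bit index (0..255)."""
    index = int(round((spacing - MIN_SPACING) / STEP))
    return max(0, min(255, index))

def decode_spacing(index):
    """Map the transmitted 8-bit index back to a spacing value."""
    return MIN_SPACING + index * STEP

def write_side_info(prediction_active, spacing):
    """Return the side bits for one frame and channel: the flag,
    followed by 8 index bits (MSB first) only if the flag is one."""
    if not prediction_active:
        return [0]
    idx = encode_spacing(spacing)
    return [1] + [(idx >> b) & 1 for b in range(7, -1, -1)]
```

The decoder reads the flag first and consumes the 8 index bits only when the flag is one, so encoder and decoder stay synchronized on the bitstream.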
In embodiments, the decoder 200 can be configured to apply the predictive decoding to a plurality of individual encoded spectral coefficients which are separated by at least one encoded spectral coefficient, such as to two individual encoded spectral coefficients which are separated by at least one encoded spectral coefficient. Further, the decoder 200 can be configured to apply the predictive decoding to a plurality of groups of encoded spectral coefficients (each of the groups comprising at least two encoded spectral coefficients) which are separated by at least one encoded spectral coefficient, such as to two groups of encoded spectral coefficients which are separated by at least one encoded spectral coefficient. Further, the decoder 200 can be configured to apply the predictive decoding to a plurality of individual encoded spectral coefficients and/or groups of encoded spectral coefficients which are separated by at least one encoded spectral coefficient, such as to at least one individual encoded spectral coefficient and at least one group of encoded spectral coefficients which are separated by at least one encoded spectral coefficient.
In the example shown in
Note that the term “selectively” as used herein refers to applying predictive decoding (only) to selected encoded spectral coefficients. In other words, predictive decoding is not applied to all encoded spectral coefficients, but rather only to selected individual encoded spectral coefficients or groups of encoded spectral coefficients, the selected individual encoded spectral coefficients and/or groups of encoded spectral coefficients being separated from each other by at least one encoded spectral coefficient. In other words, predictive decoding is not applied to the at least one encoded spectral coefficient by which the selected plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients are separated.
In embodiments the decoder 200 can be configured to not apply the predictive decoding to the at least one encoded spectral coefficient 206_t0_f3 by which the individual encoded spectral coefficients 206_t0_f2 or the group of spectral coefficients 206_t0_f4 and 206_t0_f5 are separated.
The decoder 200 can be configured to entropy decode the encoded spectral coefficients, to obtain quantized prediction errors for the spectral coefficients 206_t0_f2, 206_t0_f4 and 206_t0_f5 to which predictive decoding is to be applied, and quantized spectral coefficients 206_t0_f3 for the at least one spectral coefficient to which predictive decoding is not to be applied. Thereby, the decoder 200 can be configured to apply the quantized prediction errors to a plurality of predicted individual spectral coefficients 210_t0_f2 or groups of predicted spectral coefficients 210_t0_f4 and 210_t0_f5, to obtain, for the current frame 208_t0, decoded spectral coefficients associated with the encoded spectral coefficients 206_t0_f2, 206_t0_f4 and 206_t0_f5 to which predictive decoding is applied.
For example, the decoder 200 can be configured to obtain a second quantized prediction error for a second quantized spectral coefficient 206_t0_f2 and to apply the second quantized prediction error to the predicted second spectral coefficient 210_t0_f2, to obtain a second decoded spectral coefficient associated with the second encoded spectral coefficient 206_t0_f2, wherein the decoder 200 can be configured to obtain a fourth quantized prediction error for a fourth quantized spectral coefficient 206_t0_f4 and to apply the fourth quantized prediction error to the predicted fourth spectral coefficient 210_t0_f4, to obtain a fourth decoded spectral coefficient associated with the fourth encoded spectral coefficient 206_t0_f4, and wherein the decoder 200 can be configured to obtain a fifth quantized prediction error for a fifth quantized spectral coefficient 206_t0_f5 and to apply the fifth quantized prediction error to the predicted fifth spectral coefficient 210_t0_f5, to obtain a fifth decoded spectral coefficient associated with the fifth encoded spectral coefficient 206_t0_f5.
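A minimal sketch of this reconstruction step, with purely illustrative names and values: for each selected bin, the decoded value is a quantized prediction error that is added to the predicted coefficient, while non-selected bins carry plain quantized coefficients.

```python
# Illustrative sketch only: reconstruct one frame's spectral
# coefficients. For bins in `predicted_bins`, decoded_values[i] is a
# quantized prediction error; for all other bins it is a directly
# coded quantized coefficient.

def reconstruct_frame(decoded_values, predicted, predicted_bins):
    """Add each prediction error to its predicted coefficient;
    pass directly coded bins through unchanged."""
    out = []
    for i, v in enumerate(decoded_values):
        if i in predicted_bins:
            out.append(predicted[i] + v)   # error + prediction
        else:
            out.append(v)                  # directly coded bin
    return out
```

For instance, with predicted values 9, 11 and 6 at bins 1, 3 and 4 and transmitted errors 1, −1 and 4, the reconstructed coefficients at those bins become 10, 10 and 10, while bins 0, 2 and 5 are taken directly from the bitstream.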
Further, the decoder 200 can be configured to determine the plurality of predicted individual spectral coefficients 210_t0_f2 or groups of predicted spectral coefficients 210_t0_f4 and 210_t0_f5 for the current frame 208_t0 based on a corresponding plurality of the individual encoded spectral coefficients 206_t-1_f2 (e.g., using a plurality of previously decoded spectral coefficients associated with the plurality of the individual encoded spectral coefficients 206_t-1_f2) or groups of encoded spectral coefficients 206_t-1_f4 and 206_t-1_f5 (e.g., using groups of previously decoded spectral coefficients associated with the groups of encoded spectral coefficients 206_t-1_f4 and 206_t-1_f5) of the previous frame 208_t-1.
For example, the decoder 200 can be configured to determine the second predicted spectral coefficient 210_t0_f2 of the current frame 208_t0 using a previously decoded (quantized) second spectral coefficient associated with the second encoded spectral coefficient 206_t-1_f2 of the previous frame 208_t-1, the fourth predicted spectral coefficient 210_t0_f4 of the current frame 208_t0 using a previously decoded (quantized) fourth spectral coefficient associated with the fourth encoded spectral coefficient 206_t-1_f4 of the previous frame 208_t-1, and the fifth predicted spectral coefficient 210_t0_f5 of the current frame 208_t0 using a previously decoded (quantized) fifth spectral coefficient associated with the fifth encoded spectral coefficient 206_t-1_f5 of the previous frame 208_t-1.
Furthermore, the decoder 200 can be configured to derive prediction coefficients from the spacing value, and to calculate the plurality of predicted individual spectral coefficients 210_t0_f2 or groups of predicted spectral coefficients 210_t0_f4 and 210_t0_f5 for the current frame 208_t0 using a corresponding plurality of previously decoded individual spectral coefficients or groups of previously decoded spectral coefficients of at least two previous frames 208_t-1 and 208_t-2 and using the derived prediction coefficients.
For example, the decoder 200 can be configured to derive prediction coefficients 212_f2 and 214_f2 for the second encoded spectral coefficient 206_t0_f2 from the spacing value, to derive prediction coefficients 212_f4 and 214_f4 for the fourth encoded spectral coefficient 206_t0_f4 from the spacing value, and to derive prediction coefficients 212_f5 and 214_f5 for the fifth encoded spectral coefficient 206_t0_f5 from the spacing value.
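An order-2 predictor of this kind can be sketched as below. How the prediction coefficients 212 and 214 are actually derived from the spacing value is codec-specific and not detailed in the text; the cosine-based mapping here is only an assumed example, motivated by the fact that a sinusoid with constant per-frame phase advance w satisfies x[t] = 2·cos(w)·x[t−1] − x[t−2].

```python
import math

# Hedged sketch only: derive two tap weights per bin from the harmonic
# spacing value, then predict the bin from the two previous frames.
# The offset-to-phase mapping is an assumption for illustration, not
# the codec's actual derivation.

def prediction_coeffs(bin_index, spacing):
    """Derive (a1, a2) for one bin from the harmonic spacing value."""
    # assumed: fractional offset of the bin from its nearest harmonic
    offset = bin_index / spacing - round(bin_index / spacing)
    w = math.pi * offset  # assumed per-frame phase advance
    return 2.0 * math.cos(w), -1.0

def predict(prev1, prev2, coeffs):
    """Order-2 prediction from the same bin in the two previous frames."""
    a1, a2 = coeffs
    return a1 * prev1 + a2 * prev2
```

A bin lying exactly on a harmonic (offset 0) gets the taps (2, −1), which extrapolate a linear trend: previous values 1 and 2 predict 3 for the current frame.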
Note that the decoder 200 can be configured to decode the encoded audio signal 120 in order to obtain quantized prediction errors instead of a plurality of individual quantized spectral coefficients or groups of quantized spectral coefficients for the plurality of individual encoded spectral coefficients or groups of encoded spectral coefficients to which predictive decoding is applied.
Further, the decoder 200 can be configured to decode the encoded audio signal 120 in order to obtain quantized spectral coefficients by which the plurality of individual spectral coefficients or groups of spectral coefficients are separated, such that there is an alternation of encoded spectral coefficients 206_t0_f2 or groups of encoded spectral coefficients 206_t0_f4 and 206_t0_f5 for which quantized prediction errors are obtained and encoded spectral coefficients 206_t0_f3 or groups of encoded spectral coefficients for which quantized spectral coefficients are obtained.
The decoder 200 can be configured to provide a decoded audio signal 220 using the decoded spectral coefficients associated with the encoded spectral coefficients 206_t0_f2, 206_t0_f4 and 206_t0_f5 to which predictive decoding is applied, and using entropy decoded spectral coefficients associated with the encoded spectral coefficients 206_t0_f1, 206_t0_f3 and 206_t0_f6 to which predictive decoding is not applied.
In embodiments, the decoder 200 can be configured to obtain a spacing value, wherein the decoder 200 can be configured to select the plurality of individual encoded spectral coefficients 206_t0_f2 or groups of encoded spectral coefficients 206_t0_f4 and 206_t0_f5 to which predictive decoding is applied based on the spacing value.
As already mentioned above with respect to the description of the corresponding encoder 100, the spacing value can be, for example, a spacing (or distance) between two characteristic frequencies of the audio signal. Further, the spacing value can be an integer number of spectral coefficients (or indices of spectral coefficients) approximating the spacing between the two characteristic frequencies of the audio signal. Naturally, the spacing value can also be a fraction or multiple of the integer number of spectral coefficients describing the spacing between the two characteristic frequencies of the audio signal.
The decoder 200 can be configured to select individual spectral coefficients or groups of spectral coefficients spectrally arranged according to a harmonic grid defined by the spacing value for predictive decoding. The harmonic grid defined by the spacing value may describe the periodic spectral distribution (equidistant spacing) of harmonics in the audio signal 102. In other words, the harmonic grid defined by the spacing value can be a sequence of spectral positions, spaced apart by the spacing value, describing the equidistant spacing of harmonics of the audio signal 102.
Furthermore, the decoder 200 can be configured to select, for predictive decoding, spectral coefficients (e.g., only those spectral coefficients) whose spectral indices are equal to, or lie within a range (e.g., a predetermined or variable range) around, a plurality of spectral indices derived on the basis of the spacing value. Thereby, the decoder 200 can be configured to set a width of the range in dependence on the spacing value.
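A width-adaptive selection of this kind can be sketched as follows. The specific rule half_width = spacing // 4 is an assumption, chosen only to illustrate a range width that grows with the spacing value.

```python
# Illustrative sketch only: select all bins within a range around each
# harmonic index, with the half-width of the range depending on the
# spacing value. The width rule is an assumption for illustration.

def adaptive_selection(spacing, num_bins):
    """Bins within +/-half_width of each harmonic index k*spacing."""
    half_width = max(0, int(spacing) // 4)  # assumed width rule
    selected = set()
    k = 1
    while round(k * spacing) - half_width < num_bins:
        center = int(round(k * spacing))
        for b in range(center - half_width, center + half_width + 1):
            if 0 <= b < num_bins:
                selected.add(b)
        k += 1
    return sorted(selected)
```

For a small spacing of 3 bins the range degenerates to the harmonic bins themselves; for a spacing of 8 bins each harmonic is selected together with its two neighbours on either side.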
In embodiments, the encoded audio signal can comprise the spacing value or an encoded version thereof (e.g., a parameter from which the spacing value can be directly derived), wherein the decoder 200 can be configured to extract the spacing value or the encoded version thereof from the encoded audio signal to obtain the spacing value.
Alternatively, the decoder 200 can be configured to determine the spacing value by itself, i.e. the encoded audio signal does not include the spacing value. In that case, the decoder 200 can be configured to determine an instantaneous fundamental frequency (of the encoded audio signal 120 representing the audio signal 102) and to derive the spacing value from the instantaneous fundamental frequency or a fraction or a multiple thereof.
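Converting such a fundamental-frequency estimate into a spacing value amounts to expressing f0 in units of the transform's bin width. The sketch below assumes an MDCT producing transform_length output bins per frame, so that the bin width is sample_rate/(2·transform_length); how the decoder obtains f0 itself (e.g., by pitch analysis of previously decoded frames) is not shown.

```python
# Illustrative sketch only: derive the (possibly fractional) harmonic
# spacing value, in MDCT bins, from an instantaneous fundamental
# frequency estimate. Parameter values below are assumed examples.

def spacing_from_f0(f0_hz, sample_rate, transform_length):
    """Spacing in MDCT bins for a fundamental frequency of f0_hz.
    An MDCT with `transform_length` output bins has a bin width of
    sample_rate / (2 * transform_length) Hz."""
    bin_width_hz = sample_rate / (2.0 * transform_length)
    return f0_hz / bin_width_hz
```

For example, at 48 kHz with a 960-bin MDCT the bin width is 25 Hz, so a 200 Hz pitch corresponds to a spacing value of 8 bins; a fraction or multiple of this value may be used instead, as noted above.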
In embodiments, the decoder 200 can be configured to select the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive decoding is applied such that there is a periodic alternation, periodic with a tolerance of +/−1 spectral coefficient, between the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive decoding is applied and the spectral coefficients by which the plurality of individual spectral coefficients or groups of spectral coefficients to which predictive decoding is applied are separated.
In embodiments, the audio signal 102 represented by the encoded audio signal 120 comprises at least two harmonic signal components, wherein the decoder 200 is configured to selectively apply predictive decoding to those individual encoded spectral coefficients 206_t0_f2 or groups of encoded spectral coefficients 206_t0_f4 and 206_t0_f5 which represent the at least two harmonic signal components, or spectral environments around the at least two harmonic signal components, of the audio signal 102. The spectral environments around the at least two harmonic signal components can be, for example, +/−1, 2, 3, 4 or 5 spectral coefficients.
Thereby, the decoder 200 can be configured to identify the at least two harmonic signal components, and to selectively apply predictive decoding to those individual encoded spectral coefficients 206_t0_f2 or groups of encoded spectral coefficients 206_t0_f4 and 206_t0_f5 which are associated with the identified harmonic signal components (e.g., which represent the identified harmonic signal components or which surround the identified harmonic signal components).
Alternatively, the encoded audio signal 120 may comprise information (e.g., the spacing value) identifying the at least two harmonic signal components. In that case, the decoder 200 can be configured to selectively apply predictive decoding to those individual encoded spectral coefficients 206_t0_f2 or groups of encoded spectral coefficients 206_t0_f4 and 206_t0_f5 which are associated with the identified harmonic signal components (e.g., which represent the identified harmonic signal components or which surround the identified harmonic signal components).
In both of the aforementioned alternatives, the decoder 200 can be configured to not apply predictive decoding to those individual encoded spectral coefficients 206_t0_f1, 206_t0_f3 and 206_t0_f6 or groups of encoded spectral coefficients which do not represent the at least two harmonic signal components, or spectral environments of the at least two harmonic signal components, of the audio signal 102.
In other words, the decoder 200 can be configured to not apply predictive decoding to those individual encoded spectral coefficients 206_t0_f1, 206_t0_f3 and 206_t0_f6 or groups of encoded spectral coefficients which belong to a non-tonal background noise between signal harmonics of the audio signal 102.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
The inventive encoded audio signal can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive method is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.
The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
This application is a continuation of copending U.S. patent application Ser. No. 15/697,042, filed on Sep. 6, 2017, which in turn is a continuation of copending International Application No. PCT/EP2016/054831, filed Mar. 7, 2016, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP 15158253.3, filed Mar. 9, 2015, and International Application No. PCT/EP2015/063658, filed Jun. 17, 2015, all of which are incorporated herein by reference in their entirety.

Embodiments relate to audio coding, in particular, to a method and apparatus for encoding an audio signal using predictive encoding and to a method and apparatus for decoding an encoded audio signal using predictive decoding. Advantageous embodiments relate to methods and apparatuses for pitch-adaptive spectral prediction. Further advantageous embodiments relate to perceptual coding of tonal audio signals by means of transform coding with spectral-domain inter-frame prediction tools.
Number | Date | Country |
---|---|---|
2896814 | Aug 2014 | CA |
1114122 | Dec 1995 | CN |
1465137 | Dec 2003 | CN |
1467703 | Jan 2004 | CN |
1496559 | May 2004 | CN |
1503968 | Jun 2004 | CN |
1647154 | Jul 2005 | CN |
1659927 | Aug 2005 | CN |
1677491 | Oct 2005 | CN |
1677493 | Oct 2005 | CN |
1813286 | Aug 2006 | CN |
1864436 | Nov 2006 | CN |
1905373 | Jan 2007 | CN |
1918631 | Feb 2007 | CN |
1918632 | Feb 2007 | CN |
101006494 | Jul 2007 | CN |
101067931 | Nov 2007 | CN |
101083076 | Dec 2007 | CN |
101185124 | May 2008 | CN |
101185127 | May 2008 | CN |
101238510 | Aug 2008 | CN |
101325059 | Dec 2008 | CN |
101502122 | Aug 2009 | CN |
101521014 | Sep 2009 | CN |
101552005 | Oct 2009 | CN |
101609680 | Dec 2009 | CN |
101622669 | Jan 2010 | CN |
101689961 | Mar 2010 | CN |
101335000 | Apr 2010 | CN |
101849258 | Sep 2010 | CN |
101933086 | Dec 2010 | CN |
101939782 | Jan 2011 | CN |
101946526 | Jan 2011 | CN |
102089758 | Jun 2011 | CN |
102105930 | Jun 2011 | CN |
101223573 | Jul 2011 | CN |
102227910 | Oct 2011 | CN |
101847413 | Nov 2011 | CN |
102798870 | Nov 2012 | CN |
103038819 | Apr 2013 | CN |
103038821 | Apr 2013 | CN |
103165136 | Jun 2013 | CN |
103971699 | Aug 2014 | CN |
0751493 | Feb 1997 | EP |
1734511 | Dec 2006 | EP |
1446797 | May 2007 | EP |
2077551 | Jul 2009 | EP |
2144171 | Jan 2010 | EP |
2077551 | Mar 2011 | EP |
2830056 | Jan 2015 | EP |
2830059 | Jan 2015 | EP |
2830063 | Jan 2015 | EP |
3021322 | May 2016 | EP |
2470385 | Nov 2010 | GB |
07336231 | Dec 1995 | JP |
2001053617 | Feb 2001 | JP |
2002050967 | Feb 2002 | JP |
2002268693 | Sep 2002 | JP |
2003108197 | Apr 2003 | JP |
2003140692 | May 2003 | JP |
2004046179 | Feb 2004 | JP |
2006293400 | Oct 2006 | JP |
2006323037 | Nov 2006 | JP |
3898218 | Mar 2007 | JP |
3943127 | Jul 2007 | JP |
2007532934 | Nov 2007 | JP |
2009501358 | Jan 2009 | JP |
2010526346 | Jul 2010 | JP |
2010538318 | Dec 2010 | JP |
2011154384 | Aug 2011 | JP |
2011527447 | Oct 2011 | JP |
2012027498 | Feb 2012 | JP |
2012037582 | Feb 2012 | JP |
2013125187 | Jun 2013 | JP |
2013521538 | Jun 2013 | JP |
2013524281 | Jun 2013 | JP |
2013532851 | Aug 2013 | JP |
6031198 | Oct 2016 | JP |
6666356 | Feb 2020 | JP |
7078592 | May 2022 | JP |
100406674 | Jan 2004 | KR |
1020070118173 | Dec 2007 | KR |
20130025963 | Mar 2013 | KR |
2323469 | Apr 2008 | RU |
2325708 | May 2008 | RU |
2388068 | Apr 2010 | RU |
2422922 | Jun 2011 | RU |
2428747 | Sep 2011 | RU |
2455709 | Jul 2012 | RU |
2459282 | Aug 2012 | RU |
2470385 | Dec 2012 | RU |
2477532 | Mar 2013 | RU |
2481650 | May 2013 | RU |
2482554 | May 2013 | RU |
2487427 | Jul 2013 | RU |
412719 | Nov 2000 | TW |
200537436 | Nov 2005 | TW |
200638336 | Nov 2006 | TW |
200912897 | Mar 2009 | TW |
200939206 | Sep 2009 | TW |
201007696 | Feb 2010 | TW |
201009812 | Mar 2010 | TW |
201034001 | Sep 2010 | TW |
201205558 | Feb 2012 | TW |
201243833 | Nov 2012 | TW |
201316327 | Apr 2013 | TW |
201333933 | Aug 2013 | TW |
201506908 | Feb 2015 | TW |
9602050 | Jan 1996 | WO |
WO-0122402 | Mar 2001 | WO |
2005104094 | Nov 2005 | WO |
2005109240 | Nov 2005 | WO |
2006049204 | May 2006 | WO |
2006107840 | Oct 2006 | WO |
2006113921 | Oct 2006 | WO |
2008084427 | Jul 2008 | WO |
2009121298 | Oct 2009 | WO |
2010070770 | Jun 2010 | WO |
2010114123 | Oct 2010 | WO |
2010136459 | Dec 2010 | WO |
2011047887 | Apr 2011 | WO |
2011110499 | Sep 2011 | WO |
2012012414 | Jan 2012 | WO |
2012110482 | Aug 2012 | WO |
2013035257 | Mar 2013 | WO |
2013061530 | May 2013 | WO |
2013147666 | Oct 2013 | WO |
2013147668 | Oct 2013 | WO |
2014108393 | Jul 2014 | WO |
2014161996 | Oct 2014 | WO |
2015010949 | Jan 2015 | WO |
Entry |
---|
Hamdy, K. N. et al., "Low bit rate high quality audio coding with combined harmonic and wavelet representations", 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing, Conference Proceedings vol. 2, pp. 1045-1048. |
Zhe, Ji, "Research on Low-Rate Speech Coding Algorithms", CNKI China Doctoral Dissertation, 2012. |
"Information technology—MPEG audio technologies—Part 3: Unified speech and audio coding", ISO/IEC FDIS 23003-3:2011(E); ISO/IEC JTC 1/SC 29/WG 11; STD Version 2.1c2, 2011, 286 pages. |
3GPP TS 26.443, "3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); ANSI C code (floating-point) (Release 12)", Dec. 2014, pp. 1-9. |
Annadana, Raghuram et al., “New Results in Low Bit Rate Speech Coding and Bandwidth Extension”, Audio Engineering Society Convention 121, Audio Engineering Society Convention Paper 6876, Oct. 5, 2006, pp. 1-6. |
Bosi, Marina et al., “ISO/IEC MPEG-2 Advanced Audio Coding”, J. Audio Eng. Soc., vol. 45, No. 10, Oct. 1997, pp. 789-814. |
Daudet, Laurent et al., “MDCT analysis of sinusoids: exact results and applications to coding artifacts reduction”, IEEE Transactions on Speech and Audio Processing, IEEE, vol. 12, No. 3, May 2004, pp. 302-312. |
Den Brinker, A. C. et al., "An overview of the coding standard MPEG-4 audio amendments 1 and 2: HE-AAC, SSC, and HE-AAC v2", EURASIP Journal on Audio, Speech, and Music Processing, 2009, Feb. 24, 2009, 24 pages. |
Dietz, Martin et al., "Spectral Band Replication, a Novel Approach in Audio Coding", Audio Engineering Society Convention 112, Audio Engineering Society Convention Paper 5553, May 10, 2002, pp. 1-8. |
Ekstrand, Per, "Bandwidth Extension of Audio Signals by Spectral Band Replication", Proc. 1st IEEE Benelux Workshop on Model based Processing and Coding of Audio (MPCA-2002), Nov. 15, 2002, pp. 53-58. |
Ferreira, Anibal J. et al., “Accurate Spectral Replacement”, Audio Engineering Society Convention, 118, Audio Engineering Society Convention Paper No. 6383, May 28, 2005, pp. 1-11. |
Geiser, Bernd et al., “Bandwidth Extension for Hierarchical Speech and Audio Coding in ITU-T Rec. G.729.1”, IEEE Transactions on Audio, Speech and Language Processing, IEEE Service Center, vol. 15, No. 8, Nov. 2007, pp. 2496-2509. |
Herre, Jurgen et al., "Extending the MPEG-4 AAC Codec by Perceptual Noise Substitution", Audio Engineering Society Convention 104, Audio Engineering Society Preprint 4720, May 16, 1998, pp. 1-14. |
Herre, Jurgen, "Temporal Noise Shaping, Quantization and Coding Methods in Perceptual Audio Coding: A Tutorial Introduction", Audio Engineering Society Conference: 17th International Conference: High-Quality Audio Coding, Audio Engineering Society, Aug. 1, 1999, pp. 312-325. |
ISO/IEC 13818-3:1998(E), "Information Technology—Generic Coding of Moving Pictures and Associated Audio, Part 3: Audio", Second Edition, ISO/IEC, Apr. 15, 1998, 132 pages. |
ISO/IEC 13818-7, "Information technology—Generic coding of moving pictures and associated audio information—Part 7: Advanced Audio Coding (AAC)", 2006, pp. 1-202. |
ISO/IEC 14496-3:2001, "Information Technology—Coding of audio-visual objects—Part 3: Audio, AMENDMENT 1: Bandwidth Extension", ISO/IEC JTC1/SC29/WG11/N5570, ISO/IEC 14496-3:2001/FDAM 1:2003(E), Mar. 2003, 127 pages. |
ISO/IEC 23008-3:2015(E), "Information Technology—High Efficiency Coding and Media Delivery in Heterogeneous Environments—Part 3: 3D audio", Feb. 20, 2015, 438 pages. |
ISO/IEC FDIS 23003-3:2011(E), "Information Technology—MPEG audio technologies—Part 3: Unified speech and audio coding, Final Draft", ISO/IEC, 2010, 286 pages. |
McAulay, Robert J. et al., "Speech Analysis/Synthesis Based on a Sinusoidal Representation", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-34, No. 4, Aug. 1986, pp. 744-754. |
Mehrotra, Sanjeev et al., “Hybrid low bitrate audio coding using adaptive gain shape vector quantization”, Multimedia Signal Processing, 2008 IEEE 10th Workshop On, IEEE, Piscataway, NJ, USA XP031356759 ISBN: 978-1-4344-3394-4, Oct. 8, 2008, pp. 927-932. |
Nagel, Frederik et al., “A Continuous Modulated Single Sideband Bandwidth Extension”, ICASSP International Conference on Acoustics, Speech and Signal Processing, Apr. 2010, pp. 357-360. |
Nagel, Frederik et al., “A Harmonic Bandwidth Extension Method for Audio Codecs”, International Conference on Acoustics, Speech and Signal Processing, XP002527507, Apr. 19, 2009, pp. 145-148. |
Neuendorf, Max et al., “MPEG Unified Speech and Audio Coding—The ISO/MPEG Standard for High-Efficiency Audio Coding of all Content Types”, Audio Engineering Society Convention Paper 8654, Presented at the 132nd Convention, Apr. 26, 2012, pp. 1-22. |
Purnhagen, Heiko et al., “HILN-the MPEG-4 parametric audio coding tools”, Proceedings ISCAS 2000 Geneva, The 2000 IEEE International Symposium on Circuits and Systems, 2000, pp. 201-204. |
Sinha, Deepen et al., "A Novel Integrated Audio Bandwidth Extension Toolkit (ABET)", Audio Engineering Society Convention, Paris, France, 2006, pp. 1-12. |
Smith, Julius O. et al., "PARSHL: An analysis/synthesis program for non-harmonic sounds based on a sinusoidal representation", Proceedings of the International Computer Music Conference, 1987, pp. 1-22. |
Valin, JM et al., "Definition of the Opus Audio Codec", IETF RFC 6716, Sep. 2012, pp. 1-326. |
Zernicki, Tomasz et al., "Audio bandwidth extension by frequency scaling of sinusoidal partials", Audio Engineering Society Convention, San Francisco, USA, 2008, pp. 1-7. |
EVS Codec Detailed Algorithmic Description (3GPP TS 26.445 version 12.0.0 Release 12), ETSI TS 126 445 V12.0.0, Nov. 2014. |
"Information technology—Generic coding of moving pictures and associated audio information—Part 7: Advanced Audio Coding (AAC)", International Standard ISO/IEC 13818-7, Fourth edition, Jan. 15, 2006, pp. 172-174. |
Motlicek, Petr, et al., "Wide-Band Perceptual Audio Coding Based on Frequency-Domain Linear Prediction". |
Number | Date | Country | |
---|---|---|---|
20200227058 A1 | Jul 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15697042 | Sep 2017 | US |
Child | 16802397 | US | |
Parent | PCT/EP2016/054831 | Mar 2016 | WO |
Child | 15697042 | US |