The present invention is concerned with a linear prediction based audio codec using frequency domain noise shaping such as the TCX mode known from USAC.
As a relatively new audio codec, USAC has recently been finalized. USAC is a codec which supports switching between several coding modes, such as an AAC-like coding mode, a time-domain coding mode using linear prediction coding, namely ACELP, and transform coded excitation (TCX) coding forming an intermediate coding mode, according to which spectral domain shaping is controlled using the linear prediction coefficients transmitted via the data stream. In WO 2011147950, a proposal has been made to render the USAC coding scheme more suitable for low delay applications by excluding the AAC-like coding mode from availability and restricting the coding modes to ACELP and TCX only. Further, it has been proposed to reduce the frame length.
However, it would be favorable to be able to reduce the complexity of a linear prediction based coding scheme using spectral domain shaping while achieving similar coding efficiency in terms of, for example, the rate/distortion ratio.
Thus, it is an object of the present invention to provide such a linear prediction based coding scheme using spectral domain shaping allowing for a complexity reduction at a comparable or even increased coding efficiency.
According to an embodiment, an audio encoder may have: a spectral decomposer for spectrally decomposing, using an MDCT, an audio input signal into a spectrogram of a sequence of spectrums; an autocorrelation computer configured to compute an autocorrelation from a current spectrum of the sequence of spectrums; a linear prediction coefficient computer configured to compute linear prediction coefficients based on the autocorrelation; a spectral domain shaper configured to spectrally shape the current spectrum based on the linear prediction coefficients; and a quantization stage configured to quantize the spectrally shaped spectrum; wherein the audio encoder is configured to insert information on the quantized spectrally shaped spectrum and information on the linear prediction coefficients into a data stream, and wherein the autocorrelation computer is configured to, in computing the autocorrelation from the current spectrum, compute the power spectrum from the current spectrum, and subject the power spectrum to an inverse ODFT transform.
According to another embodiment, an audio encoding method may have the steps of: spectrally decomposing, using an MDCT, an audio input signal into a spectrogram of a sequence of spectrums; computing an autocorrelation from a current spectrum of the sequence of spectrums; computing linear prediction coefficients based on the autocorrelation; spectrally shaping the current spectrum based on the linear prediction coefficients; quantizing the spectrally shaped spectrum; and inserting information on the quantized spectrally shaped spectrum and information on the linear prediction coefficients into a data stream, wherein the computation of the autocorrelation from the current spectrum has the steps of computing the power spectrum from the current spectrum and subjecting the power spectrum to an inverse ODFT transform.
Another embodiment may have a computer program having a program code for performing, when running on a computer, the above audio encoding method.
It is a basic idea underlying the present invention that an encoding concept which is linear prediction based and uses spectral domain noise shaping may be rendered less complex at a comparable coding efficiency in terms of, for example, rate/distortion ratio, if the spectral decomposition of the audio input signal into a spectrogram comprising a sequence of spectra is used for both linear prediction coefficient computation as well as the input for a spectral domain shaping based on the linear prediction coefficients.
In this regard, it has been found that the coding efficiency remains comparable even if a lapped transform is used for the spectral decomposition which causes aliasing and necessitates time aliasing cancellation, such as a critically sampled lapped transform like the MDCT.
Embodiments of the present application are described with respect to the figures, among which
In order to ease the understanding of the main aspects and advantages of the embodiments of the present invention further described below, reference is preliminarily made to
In particular, the audio encoder of
Further, the audio encoder of
For sake of completeness only, it is noted that a temporal noise shaping module 26 may optionally subject the spectra forwarded from spectral decomposer 10 to spectral domain shaper 22 to a temporal noise shaping, and a low frequency emphasis module 28 may adaptively filter each shaped spectrum output by spectral domain shaper 22 prior to quantization 24.
The quantized and spectrally shaped spectrum is inserted into the data stream 30 along with information on the linear prediction coefficients used in spectral shaping so that, at the decoding side, the de-shaping and de-quantization may be performed.
Most parts of the audio codec, one exception being the TNS module 26, shown in
Nevertheless, more emphasis is provided in the following with regard to the linear prediction analyzer 20. As is shown in
As became clear from the above discussion, the linear prediction analysis performed by analyzer 20 causes computational overhead which fully adds to that of the spectral decomposition and the spectral domain shaping performed in blocks 10 and 22; accordingly, the overall computational overhead is considerable.
Briefly speaking, in the audio encoder of
Before describing the detailed and mathematical framework of the embodiment of
As shown in
The linear prediction coefficient computer 52 of
Internally, the autocorrelation computer 50 comprises a sequence of a power spectrum computer 54 followed by a scale warper/spectrum weighter 56 which in turn is followed by an inverse transformer 58. The details and significance of the sequence of modules 54 to 58 will be described in more detail below.
In order to understand why it is possible to co-use the spectral decomposition of decomposer 10 for both spectral domain noise shaping within shaper 22 and linear prediction coefficient computation, one should consider the Wiener-Khinchin theorem which shows that an autocorrelation can be calculated using a DFT:

R_m = (1/N) Σ_{k=0}^{N−1} |X_k|² e^{i2πkm/N}, m = 0, . . . , N−1

Thus, R_m are the autocorrelation coefficients of the autocorrelation of the signal's portion x_n of which the DFT is X_k.
Accordingly, if the spectral decomposer 10 used a DFT in order to implement the lapped transform and generate the sequence of spectra of the input audio signal 12, the autocorrelation calculator 50 would be able to perform a faster calculation of an autocorrelation at its output, merely by applying the just outlined Wiener-Khinchin theorem.
If the values for all lags m of the autocorrelation are necessitated, the DFT of the spectral decomposer 10 could be performed using an FFT and an inverse FFT could be used within the autocorrelation computer 50 so as to derive the autocorrelation therefrom using the just mentioned formula. When, however, only M<<N lags are needed, it would be faster to use an FFT for the spectral decomposition and directly apply an inverse DFT so as to obtain the relevant autocorrelation coefficients.
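The FFT route just described can be sketched numerically. The following is a minimal sketch (numpy and all variable names are illustrative assumptions, not part of the embodiments) verifying that the circular autocorrelation of a signal portion equals the inverse DFT of its power spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)

# Wiener-Khinchin: the circular autocorrelation is the inverse DFT
# of the power spectrum |X_k|^2.
X = np.fft.fft(x)
r_fft = np.fft.ifft(np.abs(X) ** 2).real

# Direct computation of the circular autocorrelation for comparison.
r_direct = np.array([np.sum(x * np.roll(x, -m)) for m in range(N)])

assert np.allclose(r_fft, r_direct)
```

When only M<<N lags are needed, the final inverse transform can be restricted to those M output values, which is the speed advantage mentioned above.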
The same holds true when the DFT mentioned above is replaced with an ODFT, i.e. an odd frequency DFT, where a generalized DFT of a time sequence x_n is defined as:

X_k^{GDFT} = Σ_{n=0}^{N−1} x_n e^{−i2π(n+a)(k+b)/N}

and a = 0, b = 1/2 is set for the ODFT (Odd Frequency DFT).
If, however, an MDCT is used in the embodiment of
X_k = Σ_{n=0}^{2N−1} x_n cos[(π/N)(n + 1/2 + N/2)(k + 1/2)], k = 0, . . . , N−1

where x_n with n=0 . . . 2N−1 defines a current windowed portion of the input audio signal 12 as output by windower 16 and X_k is, accordingly, the k-th spectral coefficient of the resulting spectrum for this windowed portion.
The power spectrum computer 54 calculates from the output of the MDCT the power spectrum by squaring each transform coefficient Xk according to:
S_k = |X_k|², k = 0, . . . , N−1
The relation between an MDCT spectrum as defined by X_k and an ODFT spectrum X_k^{ODFT} can be written as:

X_k^{MDCT} = |X_k^{ODFT}| cos(arg(X_k^{ODFT}) − θ_k), k = 0, . . . , N−1
This means that using the MDCT instead of an ODFT as input for the autocorrelation computer 50 performing the MDCT to autocorrelation procedure, is equivalent to the autocorrelation obtained from the ODFT with a spectrum weighting of
f_k^{MDCT} = |cos(arg(X_k^{ODFT}) − θ_k)|
This distortion of the autocorrelation determined is, however, transparent for the decoding side as the spectral domain shaping within shaper 22 takes place in exactly the same spectral domain as the one of the spectral decomposer 10, namely the MDCT. In other words, since the frequency domain noise shaping by frequency domain noise shaper 48 of
Accordingly, in the autocorrelation computer 50, the inverse transformer 58 performs an inverse ODFT, and an inverse ODFT of a symmetrical real input is equal to a DCT type II:

R_m = Σ_{k=0}^{N−1} S_k cos[(π/N)(k + 1/2)m], m = 0, . . . , N−1
Thus, this allows a fast computation of the MDCT based LPC in the autocorrelation computer 50 of
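The chain of power spectrum computer 54, inverse transformer 58 and linear prediction coefficient computer 52 can be illustrated as follows. This is a simplified sketch under the assumptions that the DCT-II above (up to an immaterial constant factor) yields the pseudo-autocorrelation lags and that a standard Levinson-Durbin recursion is one possible implementation of computer 52; all function names are illustrative only:

```python
import numpy as np

def pseudo_autocorr_from_mdct(X, n_lags):
    # power spectrum S_k = X_k^2 (module 54), then a DCT-II, which equals
    # an inverse ODFT of the symmetric real power spectrum (module 58)
    N = len(X)
    S = X ** 2
    k = np.arange(N)
    m = np.arange(n_lags)
    return np.cos(np.pi / N * np.outer(m, k + 0.5)) @ S

def levinson_durbin(r, order):
    # classic recursion solving the LPC normal equations from lags r[0..order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a

rng = np.random.default_rng(0)
X = rng.standard_normal(64)          # stand-in for one MDCT spectrum
order = 4
r = pseudo_autocorr_from_mdct(X, order + 1)
a = levinson_durbin(r, order)

# sanity check: the coefficients satisfy the Toeplitz normal equations
R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
assert np.allclose(np.linalg.solve(R, -r[1:order + 1]), a[1:])
```

The point of the sketch is that no additional time-domain windowing or separate analysis transform is needed: the MDCT coefficients already available in the coding path feed the LPC computation directly.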
Details regarding the scale warper/spectrum weighter 56 have not yet been described. In particular, this module is optional and may be omitted or replaced by a frequency domain decimator. Details regarding possible measures performed by module 56 are described in the following. Before that, however, some details regarding some of the other elements shown in
The LPC weighting thus performed approximates simultaneous masking. A constant of γ=0.92, or a value lying between 0.85 and 0.95, both inclusive, produces good results.
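The weighting with the constant γ can be sketched as follows; this assumes (as an illustration, not from the text) the standard bandwidth-expansion form A(z) → A(z/γ), i.e. each coefficient a_i is scaled by γ^i, which pulls the filter's roots toward the origin and smooths the spectral envelope:

```python
import numpy as np

def weight_lpc(a, gamma=0.92):
    # A(z) -> A(z/gamma): scale coefficient a_i by gamma**i
    return a * gamma ** np.arange(len(a))

a = np.array([1.0, -1.6, 0.64])   # example A(z) = (1 - 0.8 z^-1)^2, double zero at 0.8
aw = weight_lpc(a)

# the zeros move inward by exactly the factor gamma
assert np.allclose(np.abs(np.roots(aw)), 0.8 * 0.92)
```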
Regarding module 42 it is noted that variable bitrate coding or some other entropy coding scheme may be used in order to encode the information concerning the linear prediction coefficients into the data stream 30. As already mentioned above, the quantization could be performed in the LSP/LSF domain, but the ISP/ISF domain is also feasible.
Regarding the LPC-to-MDCT module 46, which converts the LPC coefficients into spectral weighting values, called MDCT gains in the following in the case of the MDCT domain, reference is made, for example, to the USAC codec where this transform is explained in detail. Briefly speaking, the LPC coefficients may be subjected to an ODFT so as to obtain the MDCT gains, the inverse of which may then be used as weightings for shaping the spectrum in module 48 by applying the resulting weightings onto respective bands of the spectrum. For example, 16 LPC coefficients are converted into MDCT gains. Naturally, instead of weighting using the inverse, weighting using the MDCT gains in non-inverted form is used at the decoder side in order to obtain a transfer function resembling an LPC synthesis filter, so as to shape the quantization noise as already mentioned above. Thus, summarizing, in module 46, the gains used by the FDNS 48 are obtained from the linear prediction coefficients using an ODFT and are called MDCT gains in the case of using the MDCT.
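The conversion performed in module 46 can be sketched as follows; this is a simplified illustration, not the exact USAC procedure: it assumes that the ODFT amounts to evaluating A(z) on an odd-frequency grid and that the gains are the inverse magnitudes, with the grid size and all names chosen for illustration:

```python
import numpy as np

def lpc_to_gains(a, n_bands=64):
    # evaluate A(z) at odd frequencies w_k = pi*(k + 1/2)/n_bands via an
    # ODFT-like sum, then take the inverse magnitude as the spectral gains
    i = np.arange(len(a))
    k = np.arange(n_bands)
    A = np.exp(-1j * np.pi * np.outer(k + 0.5, i) / n_bands) @ a
    return 1.0 / np.abs(A)

# with A(z) = 1 (no prediction) the shaping is flat: all gains are 1
assert np.allclose(lpc_to_gains(np.array([1.0])), 1.0)

# a pre-emphasis-like A(z) = 1 - 0.9 z^-1 gives large gains at low
# frequencies and small gains at high frequencies
g = lpc_to_gains(np.array([1.0, -0.9]))
assert g[0] > g[-1]
```

At the encoder, the spectrum would be divided by these gains bandwise (a transfer function resembling the LPC analysis filter); the decoder multiplies by them to resemble the synthesis filter.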
For sake of completeness,
The spectral domain deshaper 82 has a structure which is very similar to that of the spectral domain shaper 22 of
The time domain noise shaper 84 reverses the filtering of module 26 of
The spectral composer 86 comprises, internally, an inverse transformer 100 performing, for example, an IMDCT individually on the inbound de-shaped spectra, followed by an aliasing canceller such as an overlap-add adder 102 configured to correctly temporally register the reconstructed windowed versions output by inverse transformer 100, so as to perform time aliasing cancellation between them and to output the reconstructed audio signal at output 90.
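The retransform and overlap-add can be sketched together with the encoder-side MDCT. This is a minimal sketch assuming a sine window and the common unnormalized MDCT convention (window choice, scaling, and frame sizes are assumptions for illustration, not taken from the text):

```python
import numpy as np

N = 64                                        # hop size; each frame spans 2N samples
n = np.arange(2 * N)
k = np.arange(N)
win = np.sin(np.pi / (2 * N) * (n + 0.5))     # sine window: w_n^2 + w_{n+N}^2 = 1
C = np.cos(np.pi / N * np.outer(n + 0.5 + N / 2, k + 0.5))

def mdct(frame):
    return (win * frame) @ C                  # N coefficients from 2N samples

def imdct(X):
    return (2.0 / N) * win * (C @ X)          # windowed 2N-sample output

rng = np.random.default_rng(1)
x = rng.standard_normal(4 * N)
out = np.zeros_like(x)
for start in range(0, len(x) - 2 * N + 1, N):  # 50% overlapped frames
    out[start:start + 2 * N] += imdct(mdct(x[start:start + 2 * N]))

# time aliasing cancels wherever two windowed frames overlap
assert np.allclose(out[N:3 * N], x[N:3 * N])
```

The assertion only covers the interior samples, since the first and last N samples are contributed by a single frame and therefore still carry uncancelled aliasing.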
As already mentioned above, owing to the spectral domain shaping 22 in accordance with a transfer function corresponding to an LPC analysis filter defined by the LPC coefficients conveyed within data stream 30, the quantization noise of quantizer 24, which is, for example, spectrally flat, is shaped by the spectral domain deshaper 82 at the decoding side in such a manner as to be hidden below the masking threshold.
Different possibilities exist for implementing the TNS module 26 and its inverse in the decoder, namely module 84. Temporal noise shaping shapes the noise temporally within the time portions to which the individual spectra, spectrally shaped by the spectral domain shaper, refer. Temporal noise shaping is especially useful in case transients are present within the respective time portion to which the current spectrum refers. In accordance with a specific embodiment, the temporal noise shaper 26 is configured as a spectrum predictor configured to predictively filter the current spectrum or the sequence of spectra output by the spectral decomposer 10 along a spectral dimension. That is, spectrum predictor 26 may also determine prediction filter coefficients which may be inserted into the data stream 30. This is illustrated by a dashed line in
In other words, by predictively filtering the current spectrum, the time domain noise shaper 26 obtains a spectral remainder, i.e. the predictively filtered spectrum, which is forwarded to the spectral domain shaper 22, while the corresponding prediction coefficients are inserted into the data stream 30. The time domain noise deshaper 84, in turn, receives from the spectral domain deshaper 82 the de-shaped spectrum and reverses the filtering along the spectral dimension by inversely filtering this spectrum in accordance with the prediction filter coefficients extracted from data stream 30. In other words, time domain noise shaper 26 uses an analysis prediction filter such as a linear prediction filter, whereas the time domain noise deshaper 84 uses a corresponding synthesis filter based on the same prediction coefficients.
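The analysis/synthesis filter pair just described can be sketched as follows; a minimal sketch in which the prediction coefficients and function names are purely illustrative, showing that filtering the spectrum along the frequency axis with A(z) and inversely filtering with 1/A(z) is an exact round trip:

```python
import numpy as np

def tns_analysis(X, a):
    # FIR prediction filter along frequency: e_k = sum_i a_i * X_{k-i}, a_0 = 1
    return np.convolve(X, a)[:len(X)]

def tns_synthesis(e, a):
    # matching IIR synthesis filter restores the spectrum recursively
    X = np.empty_like(e)
    for k in range(len(e)):
        X[k] = e[k] - sum(a[i] * X[k - i] for i in range(1, min(len(a), k + 1)))
    return X

rng = np.random.default_rng(2)
spec = rng.standard_normal(128)              # stand-in for one MDCT spectrum
a = np.array([1.0, -0.7, 0.2])               # illustrative prediction filter
assert np.allclose(tns_synthesis(tns_analysis(spec, a), a), spec)
```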
As already mentioned, the audio encoder may be configured to decide to enable or disable the temporal-noise shaping depending on the filter prediction gain or a tonality or transiency of the audio input signal 12 at the respective time portion corresponding to the current spectrum. Again, the respective information on the decision is inserted into the data stream 30.
In the following, the possibility is discussed according to which the autocorrelation computer 50 is configured to compute the autocorrelation from the predictively filtered, i.e. TNS-filtered, version of the spectrum rather than the unfiltered spectrum as shown in
The TNS-filtered MDCT spectrum as output by spectral decomposer 10 can thus be used as an input or basis for the autocorrelation computation within computer 50. The TNS-filtered spectrum could be used whenever TNS is applied, or the audio encoder could decide, for spectra to which TNS was applied, between using the unfiltered spectrum or the TNS-filtered spectrum. The decision could be made, as mentioned above, depending on the audio input signal's characteristics, and could be transparent for the decoder, which merely applies the LPC coefficient information for the frequency domain deshaping. Another possibility would be that the audio encoder makes the decision between the TNS-filtered spectrum and the non-filtered spectrum, for spectra to which TNS was applied, depending on a chosen transform length of the spectral decomposer 10.
To be more precise, the decomposer 10 in
Until now, it has not yet been described which perceptually relevant modifications could be performed on the power spectrum within module 56. Now, various measures are explained; they could be applied individually or in combination to all embodiments and variants described so far. In particular, a spectrum weighting could be applied by module 56 to the power spectrum output by power spectrum computer 54. The spectrum weighting could be:
S′_k = f_k² S_k, k = 0, . . . , N−1

wherein S_k are the coefficients of the power spectrum as already mentioned above.
Spectral weighting can be used as a mechanism for distributing the quantization noise in accordance with psychoacoustical aspects. Spectrum weighting corresponding to a pre-emphasis in the sense of
Moreover, scale warping could be used within module 56. The full spectrum could be divided, for example, into M bands for spectrums corresponding to frames or time portions of a sample length of l1 and 2M bands for spectrums corresponding to time portions of frames having a sample length of l2, wherein l2 may be two times l1, wherein l1 may be 64, 128 or 256. In particular, the division could obey:
The band division could include frequency warping to an approximation of the Bark scale according to:
alternatively the bands could be equally distributed to form a linear scale according to:
For the spectrums of frames of length l1, for example, the number of bands could be between 20 and 40, and between 48 and 72 for spectrums belonging to frames of length l2, wherein 32 bands for spectrums of frames of length l1 and 64 bands for spectrums of frames of length l2 are of advantage.
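A Bark-warped band division can be sketched as follows. Since the text's exact warping formula is not reproduced here, the sketch uses Zwicker's well-known approximation of the Bark scale as a stand-in assumption, and all names are illustrative:

```python
import numpy as np

def bark(f):
    # Zwicker's approximation of the Bark scale (an assumption here;
    # the embodiment's exact warping formula may differ)
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def band_edges(f_max, n_bands):
    # divide [0, f_max] into n_bands of equal width on the Bark scale,
    # inverting the warping numerically on a dense frequency grid
    b = np.linspace(0.0, bark(f_max), n_bands + 1)
    freqs = np.linspace(0.0, f_max, 4096)
    return np.interp(b, bark(freqs), freqs)

edges = band_edges(16000.0, 32)
widths = np.diff(edges)
assert widths[-1] > widths[0]   # bands widen toward high frequencies
```

A linear scale would simply use `np.linspace(0.0, f_max, n_bands + 1)` instead, making all bands equally wide.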
Spectral weighting and frequency warping as optionally performed by optional module 56 could be regarded as a means of bit allocation (quantization noise shaping). Spectrum weighting in a linear scale corresponding to the pre-emphasis could be performed using a constant μ=0.9 or a constant lying somewhere between 0.8 and 0.95, so that the corresponding pre-emphasis would approximately correspond to Bark scale warping.
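The pre-emphasis-equivalent spectrum weighting can be sketched as follows; this assumes, as an illustration not taken from the text, that the weights f_k are the magnitude response of the pre-emphasis filter 1 − μz⁻¹ sampled at odd bin frequencies:

```python
import numpy as np

def preemphasis_weights(n_bins, mu=0.9):
    # |1 - mu * e^{-j w_k}| of the filter 1 - mu z^-1, sampled at the
    # (assumed) odd bin frequencies w_k = pi * (k + 1/2) / n_bins
    w = np.pi * (np.arange(n_bins) + 0.5) / n_bins
    return np.abs(1.0 - mu * np.exp(-1j * w))

f = preemphasis_weights(256)
# weighting S'_k = f_k^2 * S_k attenuates low and boosts high frequencies,
# tilting the bit allocation roughly like a Bark-scale warping
assert f[0] < 1.0 < f[-1] and np.all(np.diff(f) > 0)
```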
Modification of the power spectrum within module 56 may include spreading of the power spectrum, modeling simultaneous masking, and may thus replace the LPC weighting modules 44 and 94.
If a linear scale is used and the spectrum weighting corresponding to the pre-emphasis is applied, then the results of the audio encoder of
Some listening tests have been performed using the embodiments identified above. From these tests, it turned out that the conventional LPC analysis as shown in
The negligible difference between the conventional LPC analysis and the linear scale MDCT based LPC analysis probably comes from the fact that the LPC is used for quantization noise shaping and that, at 48 kbit/s, there are enough bits to code the MDCT coefficients precisely enough.
Further, it turned out that using the Bark scale, i.e. a non-linear scale obtained by applying scale warping within module 56, yields listening test results according to which the Bark scale outperforms the linear scale for the test items Applause, Fatboy, RockYou, Waiting, bohemian, fuguepremikres, kraftwerk, lesvoleurs and teardrop.
The Bark scale fails miserably for hockey and linchpin. Another item that has problems in the Bark scale is bibilolo, but it was not included in the test as it is experimental music with a specific spectral structure. Some listeners also expressed a strong dislike of the bibilolo item.
However, it is possible for the audio encoder of
It should be mentioned that the above outlined embodiments could be used as the TCX mode in a multi-mode audio codec such as a codec supporting ACELP and the above outlined embodiment as a TCX-like mode. As a framing, frames of a constant length such as 20 ms could be used. In this way, a kind of low delay version of the USAC codec could be obtained which is very efficient. As the TNS, the TNS from AAC-ELD could be used. To reduce the number of bits used for side information, the number of filters could be fixed to two, one operating from 600 Hz to 4500 Hz and a second from 4500 Hz to the end of the core coder spectrum. The filters could be independently switched on and off. The filters could be applied and transmitted as a lattice using parcor coefficients. The maximum order of a filter could be set to be eight and four bits could be used per filter coefficient. Huffman coding could be used to reduce the number of bits used for the order of a filter and for its coefficients.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which will be apparent to others skilled in the art and which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
This application is a continuation of copending International Application No. PCT/EP2012/052455, filed Feb. 14, 2012, which is incorporated herein by reference in its entirety, and additionally claims priority from U.S. Provisional Application No. 61/442,632, filed Feb. 14, 2011, which is also incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5598506 | Wigren et al. | Jan 1997 | A |
5606642 | Stautner et al. | Feb 1997 | A |
5684920 | Iwakami | Nov 1997 | A |
5727119 | Davidson et al. | Mar 1998 | A |
5848391 | Bosi et al. | Dec 1998 | A |
5890106 | Bosi-Goldberg et al. | Mar 1999 | A |
5953698 | Hayata | Sep 1999 | A |
5960389 | Jarvinen et al. | Sep 1999 | A |
6070137 | Bloebaum et al. | May 2000 | A |
6134518 | Cohen et al. | Oct 2000 | A |
6173257 | Gao | Jan 2001 | B1 |
6236960 | Peng et al. | May 2001 | B1 |
6587817 | Vähätalo et al. | Jul 2003 | B1 |
6636829 | Benyassine et al. | Oct 2003 | B1 |
6636830 | Princen et al. | Oct 2003 | B1 |
6680972 | Liljeryd et al. | Jan 2004 | B1 |
6879955 | Rao et al. | Apr 2005 | B2 |
6969309 | Carpenter | Nov 2005 | B2 |
6980143 | Linzmeier et al. | Dec 2005 | B2 |
7003448 | Lauber et al. | Feb 2006 | B1 |
7249014 | Kannan et al. | Jul 2007 | B2 |
7280959 | Bessette | Oct 2007 | B2 |
7343283 | Ashley et al. | Mar 2008 | B2 |
7363218 | Jabri et al. | Apr 2008 | B2 |
7565286 | Gracie et al. | Jul 2009 | B2 |
7587312 | Kim | Sep 2009 | B2 |
7627469 | Nettre et al. | Dec 2009 | B2 |
7707034 | Sun et al. | Apr 2010 | B2 |
7711563 | Chen | May 2010 | B2 |
7788105 | Miseki | Aug 2010 | B2 |
7801735 | Thumpudi et al. | Sep 2010 | B2 |
7809556 | Goto et al. | Oct 2010 | B2 |
7860720 | Thumpudi et al. | Dec 2010 | B2 |
7877253 | Krishnan et al. | Jan 2011 | B2 |
7917369 | Lee et al. | Mar 2011 | B2 |
7930171 | Chen et al. | Apr 2011 | B2 |
7933769 | Bessette | Apr 2011 | B2 |
7979271 | Bessette | Jul 2011 | B2 |
7987089 | Krishnan et al. | Jul 2011 | B2 |
8045572 | Li et al. | Oct 2011 | B1 |
8078458 | Zopf et al. | Dec 2011 | B2 |
8121831 | Oh et al. | Feb 2012 | B2 |
8160274 | Bongiovi et al. | Apr 2012 | B2 |
8239192 | Kovesi et al. | Aug 2012 | B2 |
8255207 | Vaillancourt et al. | Aug 2012 | B2 |
8255213 | Yoshida et al. | Aug 2012 | B2 |
8363960 | Petersohn et al. | Jan 2013 | B2 |
8364472 | Ehara | Jan 2013 | B2 |
8428936 | Mittal et al. | Apr 2013 | B2 |
8428941 | Boehm et al. | Apr 2013 | B2 |
8452884 | Wang et al. | May 2013 | B2 |
8566106 | Salami et al. | Oct 2013 | B2 |
8630862 | Geiger et al. | Jan 2014 | B2 |
8630863 | Son et al. | Jan 2014 | B2 |
8635357 | Ebersviller | Jan 2014 | B2 |
8825496 | Setiawan et al. | Sep 2014 | B2 |
8954321 | Beack et al. | Feb 2015 | B1 |
20020111799 | Bernard | Aug 2002 | A1 |
20020176353 | Atlas et al. | Nov 2002 | A1 |
20020184009 | Heikkinen | Dec 2002 | A1 |
20030009325 | Kirchherr et al. | Jan 2003 | A1 |
20030033136 | Lee | Feb 2003 | A1 |
20030046067 | Gradl | Mar 2003 | A1 |
20030078771 | Jung et al. | Apr 2003 | A1 |
20030225576 | Li et al. | Dec 2003 | A1 |
20040010329 | Lee et al. | Jan 2004 | A1 |
20040093204 | Byun et al. | May 2004 | A1 |
20040093368 | Lee et al. | May 2004 | A1 |
20040184537 | Geiger et al. | Sep 2004 | A1 |
20040193410 | Lee et al. | Sep 2004 | A1 |
20040220805 | Geiger et al. | Nov 2004 | A1 |
20050021338 | Graboi et al. | Jan 2005 | A1 |
20050065785 | Bessette | Mar 2005 | A1 |
20050080617 | Koshy et al. | Apr 2005 | A1 |
20050091044 | Ramo et al. | Apr 2005 | A1 |
20050096901 | Uvliden et al. | May 2005 | A1 |
20050130321 | Nicholson et al. | Jun 2005 | A1 |
20050165603 | Bessette et al. | Jul 2005 | A1 |
20050192798 | Vainio et al. | Sep 2005 | A1 |
20050240399 | Makinen et al. | Oct 2005 | A1 |
20050278171 | Suppappola et al. | Dec 2005 | A1 |
20060095253 | Schuller et al. | May 2006 | A1 |
20060115171 | Geiger et al. | Jun 2006 | A1 |
20060116872 | Byun et al. | Jun 2006 | A1 |
20060173675 | Ojanpera et al. | Aug 2006 | A1 |
20060206334 | Kapoor et al. | Sep 2006 | A1 |
20060210180 | Geiger et al. | Sep 2006 | A1 |
20060293885 | Gournay et al. | Dec 2006 | A1 |
20070050189 | Cruz-Zeno et al. | Mar 2007 | A1 |
20070100607 | Villemoes | May 2007 | A1 |
20070147518 | Bessette et al. | Jun 2007 | A1 |
20070160218 | Jakka et al. | Jul 2007 | A1 |
20070171931 | Manjunath et al. | Jul 2007 | A1 |
20070174047 | Anderson et al. | Jul 2007 | A1 |
20070196022 | Geiger et al. | Aug 2007 | A1 |
20070225971 | Bessette et al. | Sep 2007 | A1 |
20070282603 | Bessette | Dec 2007 | A1 |
20080010064 | Takeuchi et al. | Jan 2008 | A1 |
20080015852 | Kruger et al. | Jan 2008 | A1 |
20080027719 | Kirshnan et al. | Jan 2008 | A1 |
20080046236 | Thyssen et al. | Feb 2008 | A1 |
20080052068 | Aguilar et al. | Feb 2008 | A1 |
20080097764 | Grill et al. | Apr 2008 | A1 |
20080120116 | Schnell et al. | May 2008 | A1 |
20080147415 | Schnell et al. | Jun 2008 | A1 |
20080208599 | Rosec et al. | Aug 2008 | A1 |
20080221905 | Schnell et al. | Sep 2008 | A1 |
20080249765 | Schuijers et al. | Oct 2008 | A1 |
20080275580 | Andersen | Nov 2008 | A1 |
20090024397 | Ryu et al. | Jan 2009 | A1 |
20090076807 | Xu et al. | Mar 2009 | A1 |
20090110208 | Choo et al. | Apr 2009 | A1 |
20090204412 | Kovesi et al. | Aug 2009 | A1 |
20090226016 | Fitz et al. | Sep 2009 | A1 |
20090228285 | Schnell et al. | Sep 2009 | A1 |
20090319283 | Schnell et al. | Dec 2009 | A1 |
20090326930 | Kawashima et al. | Dec 2009 | A1 |
20090326931 | Ragot et al. | Dec 2009 | A1 |
20100017200 | Oshikiri et al. | Jan 2010 | A1 |
20100017213 | Edler et al. | Jan 2010 | A1 |
20100049511 | Ma et al. | Feb 2010 | A1 |
20100063811 | Gao et al. | Mar 2010 | A1 |
20100063812 | Gao | Mar 2010 | A1 |
20100070270 | Gao | Mar 2010 | A1 |
20100106496 | Morii et al. | Apr 2010 | A1 |
20100138218 | Geiger et al. | Jun 2010 | A1 |
20100198586 | Edler et al. | Aug 2010 | A1 |
20100217607 | Neuendorf et al. | Aug 2010 | A1 |
20100262420 | Herre et al. | Oct 2010 | A1 |
20100268542 | Kim et al. | Oct 2010 | A1 |
20110002393 | Suzuki et al. | Jan 2011 | A1 |
20110007827 | Virette et al. | Jan 2011 | A1 |
20110106542 | Bayer et al. | May 2011 | A1 |
20110153333 | Bessette | Jun 2011 | A1 |
20110173010 | Lecomte et al. | Jul 2011 | A1 |
20110173011 | Geiger et al. | Jul 2011 | A1 |
20110178795 | Bayer et al. | Jul 2011 | A1 |
20110218797 | Mittal et al. | Sep 2011 | A1 |
20110218799 | Mittal et al. | Sep 2011 | A1 |
20110218801 | Vary et al. | Sep 2011 | A1 |
20110257979 | Gao | Oct 2011 | A1 |
20110270616 | Garudadri | Nov 2011 | A1 |
20110311058 | Oh et al. | Dec 2011 | A1 |
20120226505 | Lin et al. | Sep 2012 | A1 |
20120228810 | Huang et al. | Sep 2012 | A1 |
20120271644 | Bessette et al. | Oct 2012 | A1 |
20130332151 | Fuchs et al. | Dec 2013 | A1 |
20140257824 | Taleb et al. | Sep 2014 | A1 |
Number | Date | Country |
---|---|---|
2007312667 | Apr 2008 | AU |
2730239 | Jan 2010 | CA |
1274456 | Nov 2000 | CN |
1344067 | Apr 2002 | CN |
1381956 | Nov 2002 | CN |
1437747 | Aug 2003 | CN |
1539137 | Oct 2004 | CN |
1539138 | Oct 2004 | CN |
101351840 | Oct 2006 | CN |
101110214 | Jan 2008 | CN |
101366077 | Feb 2009 | CN |
101371295 | Feb 2009 | CN |
101379551 | Mar 2009 | CN |
101388210 | Mar 2009 | CN |
101425292 | May 2009 | CN |
101483043 | Jul 2009 | CN |
101488344 | Jul 2009 | CN |
101743587 | Jun 2010 | CN |
101770775 | Jul 2010 | CN |
102008015702 | Aug 2009 | DE |
0665530 | Aug 1995 | EP |
0673566 | Sep 1995 | EP |
0758123 | Feb 1997 | EP |
0784846 | Jul 1997 | EP |
0843301 | May 1998 | EP |
1120775 | Aug 2001 | EP |
1852851 | Jul 2007 | EP |
1845520 | Oct 2007 | EP |
2107556 | Jul 2009 | EP |
2109098 | Oct 2009 | EP |
2144230 | Jan 2010 | EP |
2911228 | Jul 2008 | FR |
H08263098 | Oct 1996 | JP |
10039898 | Feb 1998 | JP |
H10214100 | Aug 1998 | JP |
H11502318 | Feb 1999 | JP |
H1198090 | Apr 1999 | JP |
2000357000 | Dec 2000 | JP |
2002-118517 | Apr 2002 | JP |
2003501925 | Jan 2003 | JP |
2003506764 | Feb 2003 | JP |
2004513381 | Apr 2004 | JP |
2004514182 | May 2004 | JP |
2005534950 | Nov 2005 | JP |
2006504123 | Feb 2006 | JP |
2007065636 | Mar 2007 | JP |
2007523388 | Aug 2007 | JP |
2007525707 | Sep 2007 | JP |
2007538282 | Dec 2007 | JP |
2008-15281 | Jan 2008 | JP |
2008513822 | May 2008 | JP |
2008261904 | Oct 2008 | JP |
2009508146 | Feb 2009 | JP |
2009075536 | Apr 2009 | JP |
2009522588 | Jun 2009 | JP |
2009-527773 | Jul 2009 | JP |
2010530084 | Sep 2010 | JP |
2010-538314 | Dec 2010 | JP |
2010539528 | Dec 2010 | JP |
2011501511 | Jan 2011 | JP |
2011527444 | Oct 2011 | JP |
1020040043278 | May 2004 | KR |
1020060025203 | Mar 2006 | KR |
1020070088276 | Aug 2007 | KR |
20080032160 | Apr 2008 | KR |
1020100059726 | Jun 2010 | KR |
1020100134709 | Apr 2015 | KR |
2169992 | Jun 2001 | RU |
2183034 | May 2002 | RU |
2003118444 | Dec 2004 | RU |
2004138289 | Jun 2005 | RU |
2296377 | Mar 2007 | RU |
2302665 | Jul 2007 | RU |
2312405 | Dec 2007 | RU |
2331933 | Aug 2008 | RU |
2335809 | Oct 2008 | RU |
2008126699 | Feb 2010 | RU |
2009107161 | Sep 2010 | RU |
2009118384 | Nov 2010 | RU |
200830277 | Oct 1996 | TW |
200943279 | Oct 1998 | TW |
201032218 | Sep 1999 | TW |
I320172 | Feb 2010 | TW
201009812 | Mar 2010 | TW |
201040943 | Nov 2010 | TW |
201103009 | Jan 2011 | TW |
9222891 | Dec 1992 | WO |
9510890 | Apr 1995 | WO |
9530222 | Nov 1995 | WO |
9629696 | Sep 1996 | WO |
0031719 | Jun 2000 | WO |
0075919 | Dec 2000 | WO |
02101724 | Dec 2002 | WO |
02101722 | Dec 2002 | WO
2005041169 | May 2005 | WO |
2005078706 | Aug 2005 | WO |
2005081231 | Sep 2005 | WO |
2005112003 | Nov 2005 | WO |
2006082636 | Aug 2006 | WO |
2007051548 | May 2007 | WO
2007073604 | Jul 2007 | WO
2007096552 | Aug 2007 | WO
2008013788 | Oct 2008 | WO
2008157296 | Dec 2008 | WO |
2009029032 | Mar 2009 | WO
2009077321 | Oct 2009 | WO |
2009121499 | Oct 2009 | WO |
2010003563 | Jan 2010 | WO |
2010003491 | Jan 2010 | WO |
2010040522 | Apr 2010 | WO
2010059374 | May 2010 | WO |
2010081892 | Jul 2010 | WO |
2011006369 | Jan 2011 | WO |
2010003532 | Feb 2011 | WO
2011048117 | Apr 2011 | WO |
2011048094 | Apr 2011 | WO
2011147950 | Dec 2011 | WO |
Entry |
---|
Britanak et al. “A new fast algorithm for the unified forward and inverse MDCT/MDST computation”, Signal Processing vol. 82, Issue 3, Mar. 2002, pp. 433-459. |
“Digital Cellular Telecommunications System (Phase 2+); Universal Mobile Telecommunications System (UMTS); LTE; Speech codec speech processing functions; Adaptive Multi-Rate-Wideband (AMR-WB) Speech Codec; Transcoding Functions (3GPP TS 26.190 version 9.0.0)”, Technical Specification, European Telecommunications Standards Institute (ETSI) 650, Route Des Lucioles; F-06921 Sophia-Antipolis; France; No. V.9.0.0, Jan. 1, 2012, 54 Pages. |
“IEEE Signal Processing Letters”, IEEE Signal Processing Society. vol. 15. ISSN 1070-9908., 2008, 9 Pages. |
“Information Technology—MPEG Audio Technologies—Part 3: Unified Speech and Audio Coding”, ISO/IEC JTC 1/SC 29 ISO/IEC DIS 23003-3, Feb. 9, 2011, 233 Pages. |
“WD7 of USAC”, International Organisation for Standardisation Organisation Internationale De Normalisation. ISO/IEC JTC1/SC29/WG11. Coding of Moving Pictures and Audio. Dresden, Germany., Apr. 2010, 148 Pages. |
3GPP, “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects. Audio Codec Processing Functions. Extended AMR Wideband Codec; Transcoding functions (Release 6).”, 3GPP Draft; 26.290, V2.0.0 3rd Generation Partnership Project (3GPP), Mobile Competence Centre; Valbonne, France., Sep. 2004, pp. 1-85. |
Ashley, J et al., “Wideband Coding of Speech Using a Scalable Pulse Codebook”, 2000 IEEE Speech Coding Proceedings., Sep. 17, 2000, pp. 148-150. |
Bessette, B et al., “The Adaptive Multirate Wideband Speech Codec (AMR-WB)”, IEEE Transactions on Speech and Audio Processing, IEEE Service Center. New York. vol. 10, No. 8., Nov. 1, 2002, pp. 620-636. |
Bessette, B et al., “Universal Speech/Audio Coding Using Hybrid ACELP/TCX Techniques”, ICASSP 2005 Proceedings. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3 Jan. 2005, pp. 301-304. |
Bessette, B et al., “Wideband Speech and Audio Codec at 16/24/32 Kbit/S Using Hybrid Acelp/Tcx Techniques”, 1999 IEEE Speech Coding Proceedings. Porvoo, Finland., Jun. 20, 1999, pp. 7-9. |
Ferreira, A et al., “Combined Spectral Envelope Normalization and Subtraction of Sinusoidal Components in the ODFT and MDCT Frequency Domains”, 2001 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics., Oct. 2001, pp. 51-54. |
Fischer, et al., “Enumeration Encoding and Decoding Algorithms for Pyramid Cubic Lattice and Trellis Codes”, IEEE Transactions on Information Theory. IEEE Press, USA, vol. 41, No. 6, Part 2., Nov. 1, 1995, pp. 2056-2061. |
Hermansky, H et al., “Perceptual linear predictive (PLP) analysis of speech”, J. Acoust. Soc. Amer. 87 (4)., Apr. 1990, pp. 1738-1751. |
Hofbauer, K et al., “Estimating Frequency and Amplitude of Sinusoids in Harmonic Signals—A Survey and the Use of Shifted Fourier Transforms”, Graz: Graz University of Technology; Graz University of Music and Dramatic Arts; Diploma Thesis, Apr. 2004, 111 pages. |
Lanciani, C et al., “Subband-Domain Filtering of MPEG Audio Signals”, 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Phoenix, AZ, USA., Mar. 15, 1999, pp. 917-920. |
Lauber, P et al., “Error Concealment for Compressed Digital Audio”, Presented at the 111th AES Convention. Paper 5460. New York, USA., Sep. 21, 2001, 12 Pages. |
Lee, Ick Don et al., “A Voice Activity Detection Algorithm for Communication Systems with Dynamically Varying Background Acoustic Noise”, Dept. of Electrical Engineering, 1998 IEEE, May 18-21, 1998, pp. 1214-1218. |
Makinen, J et al., “AMR-WB+: a New Audio Coding Standard for 3rd Generation Mobile Audio Services”, 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing. Philadelphia, PA, USA., Mar. 18, 2005, pp. 1109-1112. |
Motlicek, P et al., “Audio Coding Based on Long Temporal Contexts”, Rapport de recherche de l'IDIAP 06-30, Apr. 2006, pp. 1-10. |
Neuendorf, M et al., “A Novel Scheme for Low Bitrate Unified Speech Audio Coding—MPEG RMO”, AES 126th Convention. Convention Paper 7713. Munich, Germany, May 1, 2009, 13 Pages. |
Neuendorf, M et al., “Completion of Core Experiment on unification of USAC Windowing and Frame Transitions”, International Organisation for Standardisation Organisation Internationale De Normalisation ISO/IEC JTC1/SC29/WG11. Coding of Moving Pictures and Audio. Kyoto, Japan., Jan. 2010, 52 Pages. |
Neuendorf, M et al., “Unified Speech and Audio Coding Scheme for High Quality at Low Bitrates”, ICASSP 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway, NJ, USA., Apr. 19, 2009, 4 Pages. |
Patwardhan, P et al., “Effect of Voice Quality on Frequency-Warped Modeling of Vowel Spectra”, Speech Communication. vol. 48, No. 8., Aug. 2006, pp. 1009-1023. |
Ryan, D et al., “Reflected Simplex Codebooks for Limited Feedback MIMO Beamforming”, IEEE. XP31506379A., Jun. 14-18, 2009, 6 Pages. |
Sjoberg, J et al., “RTP Payload Format for the Extended Adaptive Multi-Rate Wideband (AMR-WB+) Audio Codec”, Memo. The Internet Society. Network Working Group. Category: Standards Track., Jan. 2006, pp. 1-38. |
Terriberry, T et al., “A Multiply-Free Enumeration of Combinations with Replacement and Sign”, IEEE Signal Processing Letters. vol. 15, 2008, 11 Pages. |
Terriberry, T et al., “Pulse Vector Coding”, Retrieved from the internet on Oct. 12, 2012. XP55025946. URL:http://people.xiph.org/~tterribe/notes/cwrs.html, Dec. 1, 2007, 4 Pages. |
Virette, D et al., “Enhanced Pulse Indexing CE for ACELP in USAC”, Organisation Internationale De Normalisation ISO/IEC JTC1/SC29/WG11. MPEG2012/M19305. Coding of Moving Pictures and Audio. Daegu, Korea., Jan. 2011, 13 Pages. |
Wang, F et al., “Frequency Domain Adaptive Postfiltering for Enhancement of Noisy Speech”, Speech Communication 12. Elsevier Science Publishers. Amsterdam, North-Holland. vol. 12, No. 1., Mar. 1993, pp. 41-56. |
Waterschoot, T et al., “Comparison of Linear Prediction Models for Audio Signals”, EURASIP Journal on Audio, Speech, and Music Processing. vol. 24., Dec. 2008, 27 pages. |
Zernicki, T et al., “Report on CE on Improved Tonal Component Coding in eSBR”, International Organisation for Standardisation Organisation Internationale De Normalisation ISO/IEC JTC1/SC29/WG11. Coding of Moving Pictures and Audio. Daegu, South Korea, Jan. 2011, 20 Pages. |
“A Silence Compression Scheme for G.729 Optimized for Terminals Conforming to Recommendation V.70”, ITU-T Recommendation G.729—Annex B, International Telecommunication Union, Nov. 1996, pp. 1-16. |
Martin, R., “Spectral Subtraction Based on Minimum Statistics”, Proceedings of European Signal Processing Conference (EUSIPCO), Edinburgh, Scotland, Great Britain, Sep. 1994, pp. 1182-1185. |
Lefebvre, R. et al., “High quality coding of wideband audio signals using transform coded excitation (TCX)”, 1994 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 19-22, 1994, pp. 1/193-1/196 (4 pages). |
3GPP, TS 26.290 Version 9.0.0; Digital cellular telecommunications system (Phase 2+); Universal Mobile Telecommunications System (UMTS); LTE; Audio codec processing functions; Extended Adaptive Multi-Rate-Wideband (AMR-WB+) codec; Transcoding functions (3GPP TS 26.290 version 9.0.0 release 9), Jan. 2010, Chapter 5.3, pp. 24-39. |
Herley, C. et al., “Tilings of the Time-Frequency Plane: Construction of Arbitrary Orthogonal Bases and Fast Tiling Algorithms”, IEEE Transactions on Signal Processing, vol. 41, No. 12, Dec. 1993, pp. 3341-3359. |
Fuchs, et al., “MDCT-Based Coder for Highly Adaptive Speech and Audio Coding”, 17th European Signal Processing Conference (EUSIPCO 2009), Glasgow, Scotland, Aug. 24-28, 2009, pp. 1264-1268. |
Song, et al., “Research on Open Source Encoding Technology for MPEG Unified Speech and Audio Coding”, Journal of the Institute of Electronics Engineers of Korea vol. 50 No. 1, Jan. 2013, pp. 86-96. |
Number | Date | Country | |
---|---|---|---|
20130332153 A1 | Dec 2013 | US |
Number | Date | Country | |
---|---|---|---|
61442632 | Feb 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2012/052455 | Feb 2012 | US |
Child | 13966601 | US |