Packet loss concealment (PLC) is used in audio codecs to conceal lost or corrupted packets during transmission from the encoder to the decoder. PLC is performed at the decoder side and works by extrapolating the decoded signal, either in the transform domain or in the time domain. Ideally, the concealed signal should be artifact-free and should have the same spectral characteristics as the missing signal.
Error-robust audio codecs, as described in [2] and [4], generally provide multiple concealment methods for different signal types, such as speech (an example of a monophonic signal), music (an example of a polyphonic signal), or noise. The selection is based on a set of signal features, which are either transmitted in the bit stream and decoded, or estimated in the decoder.
Pitch-based PLC techniques generally produce good results for speech and monophonic signals. These approaches assume that the signal is locally stationary and recover the lost signal by synthesizing a periodic signal using an extrapolated pitch period. These techniques are widely used in CELP-based speech coding, such as in ITU-T G.718 [2]. They can also be used for PCM coding, such as in ITU-T G.711 [3], and more recently they have been applied to DECT-based audio coding, the best example being the TCX time domain concealment, TCX TD-PLC, in the 3GPP EVS standard [4].
The pitch-lag is the main parameter used in pitch-based PLC. This parameter can be estimated at the encoder-side and encoded into the bit stream. In this case, the pitch-lag of the last good frame is used to conceal the current lost frame such as in [2] and [4]. If there is no pitch-lag in the bitstream, it can be estimated at the decoder-side by running a pitch detection algorithm on the decoded signal such as in [3].
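As a hedged illustration of the periodic extrapolation described above, the following sketch repeats the last pitch period of the decoded signal to fill a lost frame. The function and parameter names (`history`, `pitch_lag`) are assumptions for this example, not taken from the cited standards.

```python
# Illustrative sketch (not from the cited standards): pitch-based PLC by
# periodic extrapolation of the last decoded pitch cycle.
def conceal_pitch_based(history, pitch_lag, frame_len):
    """Fill a lost frame by cyclically repeating the last pitch period.

    history   -- last decoded time samples of the last good frame
    pitch_lag -- pitch period in samples (decoded or estimated)
    frame_len -- number of samples to synthesize for the lost frame
    """
    period = history[-pitch_lag:]          # last pitch cycle of the good signal
    out = []
    for n in range(frame_len):
        out.append(period[n % pitch_lag])  # wrap around the extrapolated cycle
    return out
```

Real codecs additionally apply gain fading and noise mixing over consecutive lost frames; this sketch only shows the core periodic extrapolation.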
For non-periodic, non-tonal, noise-like signals, a low complexity technique called frame repetition with sign scrambling has been found to be effective. It is based on repeating the last frame and multiplying the spectral coefficients by randomly generated signs to conceal the lost frame. One example of MDCT frame repetition with sign scrambling can be found in the 3GPP EVS standard [4].
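The frame-repetition-with-sign-scrambling scheme just described can be sketched as follows. This is a simplified illustration, not the EVS implementation; the deterministic seeding is only for reproducibility of the example.

```python
import random

# Hedged sketch of frame repetition with sign scrambling: the spectral
# coefficients of the last good frame are repeated with random sign flips,
# which decorrelates the repeated frame while preserving its magnitude
# spectrum (names are illustrative).
def conceal_sign_scrambling(last_spectrum, seed=1234):
    rng = random.Random(seed)  # deterministic scrambling for the example
    return [c if rng.random() < 0.5 else -c for c in last_spectrum]
```

Because only signs change, the concealed frame keeps the spectral envelope of the last good frame, which is exactly what one wants for noise-like content.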
For tonal polyphonic signals or complex music signals, a method is used which is based on predicting the phase of the spectral coefficients of any detected tonal component. This method shows a consistent improvement for stationary tonal signals. A tonal component consists of a peak that also existed in the previously received frame(s). The phase of the spectral coefficients belonging to the tonal components is determined from the power spectrum of the last received frame(s). One example of tonal MDCT concealment can be found in the 3GPP EVS standard [4].
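A minimal sketch of the phase-prediction idea, under the simplifying assumption of complex DFT bins (the cited standards operate on MDCT coefficients): for each tonal bin, the per-frame phase advance observed between the two last received spectra is applied once more to extrapolate the lost frame.

```python
import cmath

# Illustrative sketch of tonal concealment by phase prediction. All names
# and the use of complex spectra are assumptions for this example.
def predict_tonal_bins(prev_spectrum, last_spectrum, tonal_bins):
    predicted = list(last_spectrum)                 # start from frame repetition
    for k in tonal_bins:
        # phase advance of bin k between the two last received frames
        delta = cmath.phase(last_spectrum[k]) - cmath.phase(prev_spectrum[k])
        mag = abs(last_spectrum[k])                 # keep the last magnitude
        phase = cmath.phase(last_spectrum[k]) + delta
        predicted[k] = cmath.rect(mag, phase)       # advance the phase once more
    return predicted
```

Non-tonal bins are left as a plain repetition here; a real implementation would treat them separately (e.g. with sign scrambling).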
Summarizing the above, different PLC methods are known but they are specific for certain situations, i.e., for certain audio characteristics. That is, an audio coder supporting several of these PLC methods should have a mechanism to choose the most suitable PLC method at the time of encountering frame or packet loss. The most suitable PLC method is the one leading to the least noticeable substitute for the lost signal.
According to an embodiment, an audio decoder for decoding an audio signal from a data stream may have a set of different loss concealment tools and configured to: determine a first measure measuring a spectral position of a spectral centroid of a spectrum of the audio signal, determine a second measure measuring a temporal predictability of the audio signal, assign one of the set of different loss concealment tools to a portion of the audio signal affected by loss based on the first and second measures, and recover the portion of the audio signal using the one loss concealment tool assigned to the portion.
According to another embodiment, a method for performing loss concealment in decoding an audio signal from a data stream may have the steps of: determining a first measure measuring a spectral position of a spectral centroid of a spectrum of the audio signal, determining a second measure measuring a temporal predictability of the audio signal, assigning one of a set of different loss concealment tools to a portion of the audio signal affected by loss based on the first and second measures, and recovering the portion of the audio signal using the one loss concealment tool assigned to the portion.
Another embodiment may have a non-transitory digital storage medium having a computer program stored thereon to perform the method for performing loss concealment in audio decoding an audio signal from a data stream, wherein the method may be configured to: determine a first measure measuring a spectral position of a spectral centroid of a spectrum of the audio signal, determine a second measure measuring a temporal predictability of the audio signal, assign one of a set of different loss concealment tools to a portion of the audio signal affected by loss based on the first and second measures, and recover the portion of the audio signal using the one loss concealment tool assigned to the portion, when said computer program is run by a computer.
The idea of the present invention is based on the finding that the assignment of one of a set of different loss concealment tools of an audio decoder to a portion of the audio signal to be decoded from a data stream, which portion is affected by loss, i.e., the selection out of the set of different loss concealment tools, may be made in a manner leading to a more pleasant loss concealment if the assignment/selection is done based on two measures: a first measure, which measures a spectral position of a spectral centroid of a spectrum of the audio signal, and a second measure, which measures a temporal predictability of the audio signal. The assigned or selected loss concealment tool may then be used to recover the portion of the audio signal.
For instance, based on the aforementioned first and second measures, one of first and second loss concealment tools may be assigned to the lost portion, with the first being configured to recover the audio signal by audio signal synthesis using a periodic signal of a periodicity which depends on a pitch value derived from the data stream, and the second loss concealment tool being configured to recover the audio signal by detecting tonal spectral components of the audio signal, performing phase detection at the tonal spectral components, and audio signal synthesis by combining signals of periodicities which depend on the tonal spectral components, with adjustment of a mutual phase shift between the signals depending on the phase detection. In other words, based on the first and second measures, one of a tonal time domain PLC tool and a tonal frequency domain PLC tool may be assigned to the lost portion.
In accordance with an embodiment, the assignment/selection for a lost portion is performed in stages: a third measure, measuring a tonality of the spectrum of the audio signal, is determined, and one of first and second subsets of one or more loss concealment tools out of the set of different loss concealment tools is assigned to the lost portion. Merely if the first subset of one or more loss concealment tools is assigned to the lost portion is the assignment of the one PLC tool for the lost portion performed, based on the first and second measures, out of this first subset. Otherwise, the assignment/selection is performed out of the second subset.
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
The data stream 14 may be received by the audio decoder 10 in a packetized form, i.e., in units of packets. The sub-division of data stream 14 into frames 18 itself represents a kind of packetization, i.e., the frames 18 represent packets. Additionally, data stream 14 may be packed into packets of a transport stream or media file format, but this circumstance is not inspected in further detail here. Rather, it should suffice to state that the reception of the data stream 14 by audio decoder 10 is liable to data or signal loss, called packet loss in the following. That is, some continuous portion 20 of the data stream 14 might have been lost during transmission and thus was not received by audio decoder 10, so that the corresponding portion is missing and not available to the audio decoder 10. As a consequence, audio decoder 10 misses the information in the data stream 14 needed to reconstruct a portion 22 corresponding to portion 20. In other words, the audio decoder 10 is not able to reconstruct portion 22 from data stream 14 in accordance with the normal audio decoding process implemented, for instance, in an audio decoding core 24 of the audio decoder, as portion 20 of data stream 14 is missing. Rather, in order to deal with such missing portions 20, audio decoder 10 comprises a set 26 of PLC tools 28 so as to recover or synthesize the audio signal 12 within portion 22 by a substitute signal 30. The PLC tools 28 comprised by set 26 differ in their suitability for different audio signal characteristics. That is, the degree of annoyance when using a certain PLC tool for the recovery of a signal substitute 30 within a certain portion 22 of the audio signal 12 depends on the audio signal characteristic at that portion 22, and the PLC tools 28 within set 26 show mutually different degrees of annoyance for a given set of audio signal characteristics.
Accordingly, audio decoder 10 comprises an assigner 32 which assigns one of the set 26 of packet loss concealment tools 28 to portion 22 of the audio signal 12 which is affected by a packet loss such as the lost portion 20 of data stream 14. The assigner 32 tries to assign the best PLC tool 28 to portion 22, namely the one which leads to the lowest annoyance.
Once the assigner 32 has assigned a certain PLC tool 28 to a lost portion 22 of the audio signal 12, the audio decoder 10 recovers this portion 22 of the audio signal using the assigned PLC tool 28, thereby substituting the audio signal 12 within this portion 22, as it would have been reconstructed from the audio data stream 14 if the corresponding data stream portion 20 had not been lost, by a substitute signal 30 obtained using the PLC tool 28 assigned to portion 22 by assigner 32.
As already indicated above, the assignment of a particular PLC tool 28 to a certain lost portion 22 should be made signal dependent in order to render the loss concealment as little annoying as possible. Signal dependency, however, is restricted to portions of data stream 14 preceding the lost data stream portion 20 and, in accordance with the embodiment described herein, the assigner 32 acts as follows.
In order to explain this in more detail, reference is made to
Further, the assignment process triggered by loss detection comprises a determination 50 of a temporal predictability of the audio signal so as to obtain a measure 52 of this temporal predictability, see
It is obvious that the order of performing determinations 40 and 50, respectively, may be switched, or that both determinations may be performed concurrently. Based on measures 42 and 52, an assignment 60 is performed. This assignment 60 selects one of two PLC tools 28 for concealment of the loss of portion 22. This PLC tool, i.e., the assigned one 62, is then used for the concealment of the loss of portion 22.
It should be noted that the number of PLC tools 28 between which the selection by assignment 60 is performed may be greater than two.
In accordance with an embodiment further outlined below, however, the PLC tool PLC 1 of
The second PLC tool PLC 2 may be dedicated for the recovery of audio signals of polyphonic type. The concealment of this second PLC tool PLC 2 may be based on tonal frequency domain packet loss concealment.
With respect to
As described in more detail below, the assignment 60 may be done in a manner such that PLC 1 is chosen or assigned to portion 22 the more likely, the lower the spectral position 48 is and the higher the temporal predictability is, and, vice versa, PLC 2 is assigned or selected the more likely, the higher the spectral position 48 is and the lower the temporal predictability is. A higher spectral position corresponds to a higher frequency and a lower spectral position to a lower frequency. By proceeding in this manner, PLC 1 is more likely chosen in case portion 22 corresponds to lost speech, and PLC 2 is more likely selected in case portion 22 relates to polyphonic signals or music.
For the sake of completeness,
PLC 3 may be a non-tonal PLC, such as a PLC which recovers an audio signal for a portion 22 by use of frame repetition with or without replicate modification, where the replicate modification may, as indicated above, involve sign scrambling, i.e., a random sign flip of spectral coefficients of a most recently received spectrum such as spectrum 46, which is then inversely transformed and used to derive substitute signal 30.
The decision tree of
The decision B, which corresponds to assignment 60 based on determinations 40 and 50, yields a good choice between PLC #1 and PLC #2. In [6], such a choice has been made based on a stability measurement of the spectral envelope, which correlates with the short-term stationarity of the signal. However, the more stationary a signal is, the better the performance of both tonal PLC methods PLC #1 and PLC #2 is. This means stationarity is not a suitable criterion for selecting the optimal tonal concealment method. The stationarity feature indicates tonality very well; however, it cannot differentiate between speech/monophonic and polyphonic/music signals.
As discussed above, it is possible to perform the decision tree of
The decision A may be done based on a tonality indicator 86, which can be the presence of a pitch value in the last good received audio frame. The decision B may be done by using the spectral centroid 48 and a long term prediction gain 56 calculated on the last good received audio frame.
The decision B may switch between a pitch-based time domain concealment method PLC #1, best suited for monophonic and speech-like signals, and a frequency domain method PLC #2, best suited for polyphonic or complex music signals. An advantage of the classification of decision B results from the fact that:
Therefore, a weighted combination of both features 42 and 52 may be used for decision B and assignment process 60, resulting in a reliable discrimination of speech/monophonic and polyphonic/complex music signals. At the same time, the complexity may be kept low.
If the audio decoder receives a corrupted frame or if the frame is lost, i.e. encounters a lost portion 20, as detected at 38, the following may be done, wherein reference is also made to
For a positive decision A, the features 42 and 52 may be calculated based on the last good frame in the following manner:
The second measure 52, a long term prediction gain, may be computed in 50, e.g., as

xcorr = Σ_{k=0…N−1} x(k)·x(k−Tc) / √( Σ_{k=0…N−1} x(k)² · Σ_{k=0…N−1} x(k−Tc)² )

where Tc is the pitch value of the last good frame and x(k), k = 0 … N−1, are the last decoded time samples of the last good frame, and
where NF can be a limited value, such as the maximum pitch value or a frame length (for example, 10 ms).
The first measure 42, the spectral centroid, may be computed in 40, e.g., as

sc = Σ_{k=0…N−1} k·|Xs_lastGood(k)| / ( N · Σ_{k=0…N−1} |Xs_lastGood(k)| )

where N is the length of the last received spectrum Xs_lastGood(k) and |Xs_lastGood(k)| denotes the magnitude spectrum.
The two calculated features are combined with the following formula:
class=w1·xcorr+w2·sc+β
where w1, w2 and β are weights. In one embodiment, these are w1 = 520/1185, w2 = −1 and β = 287/1185. Alternatives are setting w1, w2 and β so that ¼ < w1 < ¾, −2 < w2 < −½, and −½ < β < −1/16. The weights may be normalized here to be in the range [−1, 1].
Then, PLC #1, e.g., the time domain pitch-based PLC method, may be chosen in 60 if class > 0, and PLC #2, such as a frequency domain tonal concealment, otherwise.
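The weighted decision just described can be sketched as follows, using the example weights w1 = 520/1185, w2 = −1 and β = 287/1185 from above; the features xcorr and sc are assumed to be already normalized as described.

```python
# Sketch of the weighted two-feature classifier for tonal PLC selection.
# The weights are the example values from the text; everything else
# (function name, label strings) is illustrative.
def select_plc(xcorr, sc, w1=520 / 1185, w2=-1.0, beta=287 / 1185):
    cls = w1 * xcorr + w2 * sc + beta  # class = w1*xcorr + w2*sc + beta
    if cls > 0:
        return "PLC#1 (time domain, pitch-based)"
    return "PLC#2 (frequency domain, tonal)"
```

High temporal predictability with a low spectral centroid (typical of speech) drives the class value positive, selecting the pitch-based method; the opposite constellation selects the frequency domain tonal concealment.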
Some notes shall be made with respect to the above description. For instance, the spectrum, the spectral centroid of which is measured to obtain the first measure 42, might be a so-called weighted version, such as a pre-emphasized version. Such weighting is used, for instance, to adapt the quantization noise to the psychoacoustic masking threshold. In other words, the first measure 42 might measure a spectral position 48 of a spectral centroid of a psychoacoustically scaled spectrum of the audio signal. This might be especially advantageous in cases where the normal audio decoding performed by audio decoding core 24 involves that data stream 14 has audio signal 12 encoded thereinto in the spectral domain anyway, namely in the weighted domain.
Additionally or alternatively, the spectrum, the spectral centroid of which is measured to obtain the first measure 42, is not necessarily one represented at a spectral resolution as high as the spectral resolution used in the audio decoding core 24 to transition to the time domain. Rather, it may be higher or lower. Even additionally or alternatively, it should be noted that the audio signal's spectrum also manifests itself in scale factors. Such scale factors might be transmitted in the data stream 14 along with spectral coefficients in order to, together, form a coded representation of the audio signal's spectrum. For a certain portion 22, the spectral coefficients are scaled according to the scale factors. There are more spectral coefficients than scale factors. Each scale factor, for instance, is assigned to one of several spectral bands, so-called scale factor bands, into which the audio signal's bandwidth is partitioned. The scale factors thus define the spectrum of the audio signal for a certain portion in terms of an envelope, at a spectral resolution reduced compared to the one at which the quantized spectral coefficients are coded in the data stream 14. It could even be that the spectral resolution at which the scale factors are coded in the data stream 14 is lower than the spectral resolution at which the decoding core 24 performs the dequantization of the spectral coefficients. For instance, the decoding core 24 might subject the scale factors coded into the data stream 14 to spectral interpolation to obtain interpolated scale factors of higher spectral resolution than the ones coded into the data stream, and use the interpolated scale factors for dequantization. Either the scale factors coded into the data stream or the interpolated scale factors might be used as the spectrum of the audio signal whose spectral centroid is measured by the first measure 42.
This means that the centroid measurement becomes computationally quite efficient, as the number of operations to be performed to determine the first measure is low compared to performing the centroid measurement at any higher resolution, such as the one at which the spectral coefficients are coded, or some other resolution obtained by subjecting the decoded audio signal to an extra spectral decomposition, which would increase the effort even further. Thus, as a concrete example, the first and second measures could be computed as follows based on coded down-sampled scale factors, SNS (spectral noise shaping) parameters:
Firstly, a pitch value Tc might be computed as a basis:

Tc = pitch_int, if pitch_present = 1, and Tc = 0 otherwise,

where pitch_present and pitch_int are bitstream parameters derived by the decoder from the last good frame. pitch_present can be interpreted as a tonality indicator.
As the second measure, a long term prediction gain xcorr might be computed, e.g., according to:

xcorr = Σ_{k=0…N−1} x(k)·x(k−Tc) / √( Σ_{k=0…N−1} x(k)² · Σ_{k=0…N−1} x(k−Tc)² )
where x(k), k = 0 … N−1, are the last decoded time samples and N can be a predetermined length value, such as a limited value like the maximum pitch value or a frame length NF (for example, 10 ms), for example
where pitmin is the minimal pitch value. Thus, the second measure would be computed as the self-similarity of the decoded audio time signal in the most recently received portion with itself, mutually shifted by the pitch.
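The self-similarity computation described above can be sketched as the following normalized correlation over the last N decoded samples shifted by the pitch Tc; the exact window length and any clamping are codec-specific, so this is an illustration rather than the standardized formula.

```python
import math

# Sketch of the long term prediction gain xcorr: normalized correlation of
# the last n samples of x with the same signal shifted by the pitch tc.
def long_term_prediction_gain(x, tc, n):
    num = sum(x[-n + k] * x[-n + k - tc] for k in range(n))
    den = math.sqrt(sum(x[-n + k] ** 2 for k in range(n)) *
                    sum(x[-n + k - tc] ** 2 for k in range(n)))
    return num / den if den > 0 else 0.0
```

A perfectly periodic signal with period tc yields a gain of 1; noise-like signals yield values near 0, which is what makes the feature useful for distinguishing pitched from unpitched content.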
As the first measure, a spectral centroid sc could be computed as:
where fs is the sampling rate and
and Ifs are non-uniform band indices, i.e., band indices defining for each band the lower and upper frequency border in a manner such that the band widths, defined by the difference between the associated lower and upper borders, differ from each other, such as increasing with increasing frequency, although this difference is optional. The band indices might be defined in dependency on the sampling rate/frequency of the audio signal. Further,
where scf_{Q−1}(k) is the scale factor vector stored in the bitstream of the last good frame and g_tilt is a predetermined tilt factor, which might be set by default and, possibly, depending on the sampling frequency of the audio signal. The term 2^{scf_{Q−1}(k)} is applied to invert the encoder-side pre-emphasis filter, i.e., it acts as a de-emphasis filter.
The scale factor vector is calculated at the encoder side and transmitted in the bitstream. It is determined from the energies per band of the MDCT coefficients, where the bands are non-uniform and follow the perceptually relevant Bark scale (smaller in low frequencies, larger in high frequencies). After smoothing, pre-emphasizing and transforming the energies to the logarithmic domain, they are, at the encoder side, downsampled from 64 parameters to 16 parameters to form the scale factor vector, which afterwards is coded and transmitted in the bitstream. Thus, sc is a measure of a spectral position 48 of a spectral centroid of a spectrum 46 of the audio signal, here determined based on the spectrally coarsely sampled version thereof, namely the SNS parameters.
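To illustrate how a centroid can be obtained from such a coarse scale factor vector, the following sketch uses uniform bands and omits the de-emphasis and tilt terms of the real codec; it is an assumption-laden simplification, not the formula of the standard.

```python
# Rough sketch of a spectral centroid from a coarse (e.g. 16-entry) scale
# factor vector: each band contributes its linear-domain energy 2**scf at
# its center frequency; the result is the energy-weighted mean frequency
# normalized by fs/2. Uniform band widths are a simplification here.
def spectral_centroid_from_scf(scf, fs):
    nbands = len(scf)
    bw = (fs / 2) / nbands                        # uniform band width (simplification)
    centers = [(k + 0.5) * bw for k in range(nbands)]
    energies = [2.0 ** s for s in scf]            # log-domain scale factors -> energies
    total = sum(energies)
    centroid_hz = sum(c * e for c, e in zip(centers, energies)) / total
    return centroid_hz / (fs / 2)                 # normalize to [0, 1]
```

A flat scale factor vector yields a centroid of 0.5; energy concentrated in the lowest band pulls the centroid toward 0, matching the intuition that speech has a low spectral centroid.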
The decision or selection among the various PLC methods may then be done with the criteria xcorr and sc. Frame repetition with sign scrambling might be selected if Tc=0 (which means that the tonality indicator pitch_present=0). Otherwise, the value class is calculated as follows:
class = 7640/32768 · xcorr − sc − 5112/32768 (7)
The time domain pitch-based PLC method might be chosen if class > 0; the frequency domain tonal concealment otherwise.
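The complete selection logic of this section can be sketched end-to-end as follows, with equation (7) as given above; the inputs Tc, xcorr and sc are assumed to be precomputed as described.

```python
# End-to-end sketch of the three-way PLC selection: no pitch (Tc = 0,
# i.e. pitch_present = 0) selects frame repetition with sign scrambling;
# otherwise the class value of equation (7) chooses between the time
# domain pitch-based PLC and frequency domain tonal concealment.
def choose_plc_method(tc, xcorr, sc):
    if tc == 0:                                        # pitch_present == 0
        return "frame repetition with sign scrambling"
    cls = 7640 / 32768 * xcorr - sc - 5112 / 32768     # equation (7)
    if cls > 0:
        return "time domain pitch-based PLC"
    return "frequency domain tonal concealment"
```

This mirrors the two-stage structure of decisions A and B: the tonality indicator gates the tonal methods, and the weighted feature combination discriminates between them.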
Thus, an audio decoder for decoding an audio signal 12 from a data stream 14, which comprises a set 26 of different loss concealment tools 28, might be configured to determine 40 a first measure 42 measuring a spectral position 48 of a spectral centroid of a spectrum 46 of the audio signal by deriving the spectrum from scale factors in a most recent non-lost portion of the data stream, determine 50 a second measure 52 measuring a temporal predictability of the audio signal, assign 32 one 62 of the set 26 of different loss concealment tools 28 to a portion 22 of the audio signal 12 affected by loss based on the first and second measures, and recover the portion 22 of the audio signal using the one loss concealment tool 62 assigned to the portion 22. The derivation of the spectrum might involve, as described, subjecting the scale factors coded in the data stream to spectral interpolation. Additionally or alternatively, they may be subject to de-emphasis filtering, i.e., they might be multiplied by a de-emphasis filter's transfer function. The resulting scale factors may then be subject to spectral centroid measurement. All the other details described above may then be applied as well.
That is, to mention examples which are not meant to be exclusive: the set 26 of different loss concealment tools may comprise a first loss concealment tool for audio signal recovery of monophonic portions, and a second loss concealment tool for audio signal recovery of polyphonic portions, and the audio decoder may be configured to, in assigning the one of the set of different loss concealment tools to the portion of the audio signal based on the first and second measures, assign the first loss concealment tool to the portion the more likely, the lower the spectral position of the spectral centroid is and the higher the temporal predictability is, and assign the second loss concealment tool to the portion the more likely, the higher the spectral position of the spectral centroid is and the lower the temporal predictability is. Additionally or alternatively, the audio decoder may be configured to, in assigning one of the set of different loss concealment tools to a portion 22 of the audio signal affected by loss based on the first and second measures, perform a summation over the first and second measures 42, 52 so as to obtain a scalar sum value and subject the scalar sum value to thresholding.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus.
The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
The apparatus described herein, or any components of the apparatus described herein, may be implemented at least partially in hardware and/or in software.
The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
The methods described herein, or any components of the apparatus described herein, may be performed at least partially by hardware and/or by software.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
17201142 | Nov 2017 | EP | regional |
This application is a continuation of copending International Application No. PCT/EP2018/080198, filed Nov. 5, 2018, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP 17 201 142.1, filed Nov. 10, 2017, which is incorporated herein by reference in its entirety. The present application is concerned with an audio decoder supporting a set of different loss concealment tools.
20130226594 | Fuchs et al. | Aug 2013 | A1 |
20130282369 | Ryu et al. | Oct 2013 | A1 |
20140052439 | Nanjundaswamy et al. | Feb 2014 | A1 |
20140067404 | Baumgarte | Mar 2014 | A1 |
20140074486 | Disch | Mar 2014 | A1 |
20140108020 | Bai et al. | Apr 2014 | A1 |
20140142957 | Lee et al. | May 2014 | A1 |
20140223029 | Bhaskar et al. | Aug 2014 | A1 |
20140358531 | Vos | Dec 2014 | A1 |
20150010155 | Lang et al. | Jan 2015 | A1 |
20150081312 | Fuchs et al. | Mar 2015 | A1 |
20150142452 | Lee et al. | May 2015 | A1 |
20150154969 | Craven et al. | Jun 2015 | A1 |
20150170668 | Kovesi et al. | Jun 2015 | A1 |
20150221311 | Jeon et al. | Aug 2015 | A1 |
20150228287 | Bruhn | Aug 2015 | A1 |
20150255079 | Huang | Sep 2015 | A1 |
20150302859 | Aguilar et al. | Oct 2015 | A1 |
20150325246 | Chen et al. | Nov 2015 | A1 |
20150371647 | Faure et al. | Dec 2015 | A1 |
20160019898 | Schreiner et al. | Jan 2016 | A1 |
20160027450 | Gao | Jan 2016 | A1 |
20160078878 | Ravelli et al. | Mar 2016 | A1 |
20160111094 | Dietz et al. | Apr 2016 | A1 |
20160225384 | Kjörling et al. | Aug 2016 | A1 |
20160285718 | Bruhn | Sep 2016 | A1 |
20160293174 | Atti et al. | Oct 2016 | A1 |
20160293175 | Atti et al. | Oct 2016 | A1 |
20160307576 | Fuchs | Oct 2016 | A1 |
20160365097 | Guan et al. | Dec 2016 | A1 |
20160372125 | Atti et al. | Dec 2016 | A1 |
20160372126 | Atti et al. | Dec 2016 | A1 |
20160379655 | Truman et al. | Dec 2016 | A1 |
20170011747 | Faure et al. | Jan 2017 | A1 |
20170053658 | Atti et al. | Feb 2017 | A1 |
20170078794 | Bongiovi et al. | Mar 2017 | A1 |
20170103769 | Laaksonen et al. | Apr 2017 | A1 |
20170110135 | Disch et al. | Apr 2017 | A1 |
20170133029 | Markovic et al. | May 2017 | A1 |
20170140769 | Ravelli et al. | May 2017 | A1 |
20170154631 | Bayer et al. | Jun 2017 | A1 |
20170154635 | Doehla et al. | Jun 2017 | A1 |
20170221495 | Sung et al. | Aug 2017 | A1 |
20170236521 | Atti et al. | Aug 2017 | A1 |
20170256266 | Sung et al. | Sep 2017 | A1 |
20170294196 | Bradley et al. | Oct 2017 | A1 |
20170303114 | Johansson et al. | Oct 2017 | A1 |
20190027156 | Sung et al. | Jan 2019 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
101140759 | Mar 2008 | CN |
102779526 | Nov 2012 | CN |
107103908 | Aug 2017 | CN |
0716787 | Jun 1996 | EP |
0732687 | Sep 1996 | EP |
1791115 | May 2007 | EP |
2676266 | Dec 2013 | EP |
2980796 | Feb 2016 | EP |
2980799 | Feb 2016 | EP |
3111624 | Jan 2017 | EP |
2944664 | Oct 2010 | FR |
H05-281996 | Oct 1993 | JP |
H07-28499 | Jan 1995 | JP |
H0811644 | Jan 1996 | JP |
H9-204197 | Aug 1997 | JP |
H10-51313 | Feb 1998 | JP |
H1091194 | Apr 1998 | JP |
H11-330977 | Nov 1999 | JP |
2004-138756 | May 2004 | JP |
2006-527864 | Dec 2006 | JP |
2007-525718 | Sep 2007 | JP |
2009-003387 | Jan 2009 | JP |
2009-008836 | Jan 2009 | JP |
2009-538460 | Nov 2009 | JP |
2010-500631 | Jan 2010 | JP |
2010-501955 | Jan 2010 | JP |
2012-533094 | Dec 2012 | JP |
2016-523380 | Aug 2016 | JP |
2016-200750 | Dec 2016 | JP |
2017-522604 | Aug 2017 | JP |
2017-528752 | Sep 2017 | JP |
100261253 | Jul 2000 | KR |
20030031936 | Apr 2003 | KR |
10-2010-0136890 | Dec 2010 | KR |
20130019004 | Feb 2013 | KR |
20170000933 | Jan 2017 | KR |
2337414 | Oct 2008 | RU |
2376657 | Dec 2009 | RU |
2413312 | Feb 2011 | RU |
2419891 | May 2011 | RU |
2439718 | Jan 2012 | RU |
2483365 | May 2013 | RU |
2520402 | Jun 2014 | RU |
2568381 | Nov 2015 | RU |
2596594 | Sep 2016 | RU |
2596596 | Sep 2016 | RU |
2015136540 | Mar 2017 | RU |
2628162 | Aug 2017 | RU |
2016105619 | Aug 2017 | RU |
200809770 | Feb 2008 | TW |
201005730 | Feb 2010 | TW |
201126510 | Aug 2011 | TW |
201131550 | Sep 2011 | TW |
201207839 | Feb 2012 | TW |
201243832 | Nov 2012 | TW |
201612896 | Apr 2016 | TW |
201618080 | May 2016 | TW |
201618086 | May 2016 | TW |
201642246 | Dec 2016 | TW |
201642247 | Dec 2016 | TW |
201705126 | Feb 2017 | TW |
201711021 | Mar 2017 | TW |
201713061 | Apr 2017 | TW |
201724085 | Jul 2017 | TW |
201732779 | Sep 2017 | TW |
9916050 | Apr 1999 | WO |
2004072951 | Aug 2004 | WO |
2005086138 | Sep 2005 | WO |
2005086139 | Sep 2005 | WO |
2007073604 | Jul 2007 | WO |
2007138511 | Dec 2007 | WO |
2008025918 | Mar 2008 | WO |
2008046505 | Apr 2008 | WO |
2009066869 | May 2009 | WO |
2011048118 | Apr 2011 | WO |
2011086066 | Jul 2011 | WO |
2011086067 | Jul 2011 | WO |
2012000882 | Jan 2012 | WO |
2012126893 | Sep 2012 | WO |
2014165668 | Oct 2014 | WO |
2014202535 | Dec 2014 | WO |
2015063045 | May 2015 | WO |
2015063227 | May 2015 | WO |
2015071173 | May 2015 | WO |
2015174911 | Nov 2015 | WO |
2016016121 | Feb 2016 | WO |
2016142002 | Sep 2016 | WO |
2016142337 | Sep 2016 | WO |
Other Publications

Entry |
---|
O.E. Groshev, “Office Action for RU Application No. 2020118947”, dated Dec. 1, 2020, ROSPATENT, Russia. |
O.I. Starukhina, “Office Action for RU Application No. 2020118968”, dated Dec. 23, 2020, ROSPATENT, Russia. |
ETSI TS 126 445 V13.2.0 (Aug. 2016), Universal Mobile Telecommunications System (UMTS); LTE; Codec for Enhanced Voice Services (EVS); Detailed algorithmic description (3GPP TS 26.445 version 13.2.0 Release 13) [Online]. Available: http://www.3gpp.org/ftp/Specs/archive/26_series/26.445/26445-d00.zip. |
Geiger, “Audio Coding based on integer transform”, Ilmenau: https://www.db-thueringen.de/receive/dbt_mods_00010054, 2004. |
Henrique S. Malvar, “Biorthogonal and Nonuniform Lapped Transforms for Transform Coding with Reduced Blocking and Ringing Artifacts”, IEEE Transactions on Signal Processing, IEEE Service Center, New York, NY, US, Apr. 1998, vol. 46, No. 4, ISSN 1053-587X, XP011058114. |
Anonymous, “ISO/IEC 14496-3:2005/FDAM 9, AAC-ELD”, 82nd MPEG Meeting, Oct. 22-26, 2007, Shenzhen (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11), Feb. 21, 2008, No. N9499, XP030015994. |
Virette, “Low Delay Transform for High Quality Low Delay Audio Coding”, Université de Rennes 1, Dec. 10, 2012, pp. 1-195, URL: https://hal.inria.fr/tel-01205574/document, retrieved Mar. 30, 2016, XP055261425. |
ISO/IEC 14496-3:2001; Information technology—Coding of audio-visual objects—Part 3: Audio. |
3GPP TS 26.403 v14.0.0 (Mar. 2017); General audio codec audio processing functions; Enhanced aacPlus general audio codec; Encoder specification; Advanced Audio Coding (AAC) part; (Release 14). |
ISO/IEC 23003-3; Information technology—MPEG audio technologies—Part 3: Unified speech and audio coding, 2011. |
3GPP TS 26.445 V14.1.0 (Jun. 2017), 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Detailed Algorithmic Description (Release 14), http://www.3gpp.org/ftp//Specs/archive/26_series/26.445/26445-e10.zip, Section 5.1.6 “Bandwidth detection”. |
Eksler Vaclav et al., “Audio bandwidth detection in the EVS codec”, 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE, Dec. 14, 2015, doi:10.1109/GLOBALSIP.2015.7418243, pp. 488-492, XP032871707. |
Oger M et al., “Transform Audio Coding with Arithmetic-Coded Scalar Quantization and Model-Based Bit Allocation”, International Conference on Acoustics, Speech, and Signal Processing, IEEE, Apr. 15, 2007, p. IV-545, XP002464925. |
Asad et al., “An enhanced least significant bit modification technique for audio steganography”, International Conference on Computer Networks and Information Technology, Jul. 11-13, 2011. |
Makandar et al., “Least Significant Bit Coding Analysis for Audio Steganography”, Journal of Future Generation Computing, vol. 2, No. 3, Mar. 2018. |
ISO/IEC 23008-3:2015; Information technology—High efficiency coding and media delivery in heterogeneous environments—Part 3: 3D audio. |
ITU-T G.718 (Jun. 2008): Series G: Transmission Systems and Media, Digital Systems and Networks, Digital terminal equipments—Coding of voice and audio signals, Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s. |
3GPP TS 26.447 V14.1.0 (Jun. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Error Concealment of Lost Packets (Release 14). |
DVB Organization, “ISO-IEC_23008-3_A3_(E)_(H 3DA FDAM3).docx”, DVB, Digital Video Broadcasting, c/o EBU, 17A Ancienne Route, CH-1218 Grand Saconnex, Geneva, Switzerland, Jun. 13, 2016, XP017851888. |
Hill et al., “Exponential stability of time-varying linear systems,” IMA J Numer Anal, pp. 865-885, 2011. |
3GPP TS 26.090 V14.0.0 (Mar. 2017), 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mandatory Speech Codec speech processing functions; Adaptive Multi-Rate (AMR) speech codec; Transcoding functions (Release 14). |
3GPP TS 26.190 V14.0.0 (Mar. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Speech codec speech processing functions; Adaptive Multi-Rate—Wideband (AMR-WB) speech codec; Transcoding functions (Release 14). |
3GPP TS 26.290 V14.0.0 (Mar. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Audio codec processing functions; Extended Adaptive Multi-Rate—Wideband (AMR-WB+) codec; Transcoding functions (Release 14). |
Edler et al., “Perceptual Audio Coding Using a Time-Varying Linear Pre-and Post-Filter,” in AES 109th Convention, Los Angeles, 2000. |
Gray et al., “Digital lattice and ladder filter synthesis,” IEEE Transactions on Audio and Electroacoustics, vol. 21, No. 6, pp. 491-500, 1973. |
Lamoureux et al., “Stability of time variant filters,” CREWES Research Report—vol. 19, 2007. |
Herre et al., “Enhancing the performance of perceptual audio coders by using temporal noise shaping (TNS).” Audio Engineering Society Convention 101. Audio Engineering Society, 1996. |
Herre et al., “Continuously signal-adaptive filterbank for high-quality perceptual audio coding.” Applications of Signal Processing to Audio and Acoustics, 1997. 1997 IEEE ASSP Workshop on. IEEE, 1997. |
Herre, “Temporal noise shaping, quantization and coding methods in perceptual audio coding: A tutorial introduction.” Audio Engineering Society Conference: 17th International Conference: High-Quality Audio Coding. Audio Engineering Society, 1999. |
Fuchs, Guillaume et al., “Low delay LPC and MDCT-based audio coding in the EVS codec”, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, Apr. 19, 2015, doi:10.1109/ICASSP.2015.7179068, pp. 5723-5727, XP033187858. |
Niamut et al., “RD Optimal Temporal Noise Shaping for Transform Audio Coding”, 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006), Toulouse, France, May 14-19, 2006, IEEE, Piscataway, NJ, USA, doi:10.1109/ICASSP.2006.1661244, ISBN 978-1-4244-0469-8, pp. V-V, XP031015996. |
ITU-T G.711 (09/99): Series G: Transmission Systems and Media, Digital Systems and Networks, Digital transmission systems—Terminal equipments—Coding of analogue signals by pulse code modulation, Pulse code modulation (PCM) of voice frequencies, Appendix I: A high quality low-complexity algorithm for packet loss concealment with G.711. |
de Cheveigné et al., “YIN, a fundamental frequency estimator for speech and music,” The Journal of the Acoustical Society of America, vol. 111, No. 4, 2002, pp. 1917-1930. |
Ojala P et al., “A novel pitch-lag search method using adaptive weighting and median filtering”, 1999 IEEE Workshop on Speech Coding Proceedings, Porvoo, Finland, Jun. 20-23, 1999, IEEE, Piscataway, NJ, USA, doi:10.1109/SCFT.1999.781502, ISBN 978-0-7803-5651-1, pp. 114-116, XP010345546. |
“5 Functional description of the encoder”, Dec. 10, 2014, 3GPP Standard; 26445-C10_1_S05_S0501, 3rd Generation Partnership Project (3GPP), Mobile Competence Centre; 650, Route des Lucioles; F-06921 Sophia-Antipolis Cedex, France. Retrieved from the Internet: URL: http://www.3gpp.org/ftp/Specs/2014-12/Rel-12/26_series/ XP050907035. |
Sujoy Sarkar, “Examination Report for IN Application No. 202037018091”, dated Jun. 1, 2021, Intellectual Property India, India. |
“Decision on Grant Patent for Invention for RU Application No. 2020118949”, Nov. 11, 2020, ROSPATENT, Russia. |
P.A. Volkov, “Office Action for RU Application No. 2020120251”, dated Oct. 28, 2020, ROSPATENT, Russia. |
P.A. Volkov, “Office Action for RU Application No. 2020120256”, dated Oct. 28, 2020, ROSPATENT, Russia. |
D.V.Travnikov, “Decision on Grant for RU Application No. 2020118969”, Nov. 2, 2020, ROSPATENT, Russia. |
Santosh Mehtry, “Office Action for IN Application No. 202037019203”, dated Mar. 19, 2021, Intellectual Property India, India. |
Takeshi Yamashita, “Office Action for JP Application 2020-524877”, dated Jun. 24, 2021, JPO, Japan. |
Tomonori Kikuchi, “Office Action for JP Application No. 2020-524874”, dated Jun. 2, 2021, JPO Japan. |
Khalid Sayood, “Introduction to Data Compression”, Elsevier Science & Technology, 2005, Section 16.4, Figure 16.13, p. 526. |
Patterson et al., “Computer Organization and Design”, The hardware/software Interface, Revised Fourth Edition, Elsevier, 2012. |
John Tan, “Office Action for SG Application 11202004173P”, dated Jul. 23, 2021, IPOS, Singapore. |
Tetsuyuki Okumachi, “Office Action for JP Application 2020-118837”, dated Jul. 16, 2021, JPO, Japan. |
Tetsuyuki Okumachi, “Office Action for JP Application 2020-118838”, dated Jul. 16, 2021, JPO, Japan. |
Guojun Lu et al., “A Technique towards Automatic Audio Classification and Retrieval,” Fourth International Conference on Signal Processing, IEEE, Oct. 12, 1998, pp. 1142-1145. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526135”, dated May 21, 2021, JPO Japan. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526081”, dated Jun. 22, 2021, JPO, Japan. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526084”, dated Jun. 23, 2021, JPO, Japan. |
Miao Xiaohong, “Examination Report for SG Application No. 11202004228V”, dated Sep. 2, 2021, IPOS, Singapore. |
Miao Xiaohong, “Search Report for SG Application No. 11202004228V”, dated Sep. 3, 2021, IPOS, Singapore. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7015512”, dated Sep. 9, 2021, KIPO, Republic of Korea. |
Dietz, Martin et al., “Overview of the EVS codec architecture,” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2015. |
Kazunori Mochimura, “Decision to Grant a Patent for JP application No. 2020-524579”, dated Nov. 25, 2021, JPO, Japan. |
International Telecommunication Union, “G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729”, ITU-T Recommendation G.729.1, May 2006. |
3GPP TS 26.445, “Universal Mobile Telecommunications System (UMTS); LTE; Codec for Enhanced Voice Services (EVS); Detailed algorithmic description (3GPP TS 26.445 version 13.4.1 Release 13)”, ETSI TS 126 445 V13.4.1, Apr. 2017. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016100”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016224”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7015835”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016424”, dated Feb. 9, 2022, KIPO, Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016503”, dated Feb. 9, 2022, KIPO, Korea. |
Related Publications

Number | Date | Country | |
---|---|---|---|
20200265846 A1 | Aug 2020 | US |
Related U.S. Application Data

Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2018/080198 | Nov 2018 | US |
Child | 16867834 | US |