The present invention is related to audio coding and, particularly, to audio coding relying on switched audio encoders and correspondingly controlled audio decoders, particularly suitable for low-delay applications.
Several audio coding concepts relying on switched codecs are known. One well-known audio coding concept is the so-called Extended Adaptive Multi-Rate Wideband (AMR-WB+) codec, as described in 3GPP TS 26.290 V10.0.0 (2011-03). The AMR-WB+ audio codec contains all the AMR-WB speech codec modes 1 to 9 as well as AMR-WB VAD and DTX. AMR-WB+ extends the AMR-WB codec by adding TCX, bandwidth extension, and stereo.
The AMR-WB+ audio codec processes input frames of 2048 samples at an internal sampling frequency Fs, which is limited to the range of 12800 to 38400 Hz. The 2048-sample frames are split into two critically sampled, equal-width frequency bands. This results in two super-frames of 1024 samples corresponding to the low-frequency (LF) and high-frequency (HF) bands. Each super-frame is divided into four 256-sample frames. Sampling at the internal sampling rate is obtained with a variable sampling-conversion scheme that re-samples the input signal.
The LF and HF signals are then encoded using two different approaches: the LF signal is encoded and decoded using the “core” encoder/decoder based on switched ACELP and transform coded excitation (TCX). In ACELP mode, the standard AMR-WB codec is used. The HF signal is encoded with relatively few bits (16 bits/frame) using a bandwidth extension (BWE) method. The parameters transmitted from the encoder to the decoder are the mode selection bits, the LF parameters and the HF parameters. The parameters for each 1024-sample super-frame are decomposed into four packets of identical size. When the input signal is stereo, the left and right channels are combined into a mono signal for ACELP/TCX encoding, whereas the stereo encoding receives both input channels. On the decoder side, the LF and HF bands are decoded separately, after which they are combined in a synthesis filterbank. If the output is restricted to mono only, the stereo parameters are omitted and the decoder operates in mono mode. The AMR-WB+ codec applies LP analysis for both the ACELP and TCX modes when encoding the LF signal. The LP coefficients are interpolated linearly at every 64-sample subframe. The LP analysis window is a half-cosine of length 384 samples. To encode the core mono signal, either ACELP or TCX coding is used for each frame. The coding mode is selected based on a closed-loop analysis-by-synthesis method. Only 256-sample frames are possible in ACELP mode, whereas frames of 256, 512 or 1024 samples are possible in TCX mode. The window used for LPC analysis in AMR-WB+ is illustrated in
a illustrates a further encoder, the so-called AMR-WB coder and, particularly, the LPC analysis window used for calculating the analysis coefficients for the current frame. Once again, the current frame extends between 0 and 20 ms and the future frame extends between 20 and 40 ms. In contrast to
While
A linear prediction (LP) analysis using the auto-correlation approach determines the coefficients of the synthesis filter of the CELP model. In CELP, however, the long-term prediction is usually realized by the “adaptive codebook” and is thus different from the linear prediction; the linear prediction can therefore be regarded more as a short-term prediction. The auto-correlation of the windowed speech is converted to the LP coefficients using the Levinson-Durbin algorithm. The LP coefficients are then transformed to immittance spectral pairs (ISP) and subsequently to immittance spectral frequencies (ISF) for quantization and interpolation purposes. The interpolated quantized and unquantized coefficients are converted back to the LP domain to construct synthesis and weighting filters for each subframe. In case of encoding an active signal frame, two sets of LP coefficients are estimated in each frame using the two LPC analysis windows indicated at 510 and 512 in
The speech encoding parameters such as adaptive codebook delay and gain, algebraic codebook index and gain are searched by minimizing the error between the input signal and the synthesized signal in the perceptually weighted domain. Perceptual weighting is performed by filtering the signal through a perceptual weighting filter derived from the LP filter coefficients. The perceptually weighted signal is also used in open-loop pitch analysis.
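The two steps just described, converting the auto-correlation of the windowed speech into LP coefficients via the Levinson-Durbin algorithm and perceptually weighting a signal with a filter derived from those coefficients, can be sketched as follows. This is an illustrative sketch only, not the bit-exact AMR-WB/G.718 routines; the bandwidth-expansion factor `gamma` is an assumed typical value, and real codecs use a weighting filter of the more general form A(z/g1)/A(z/g2).

```python
def levinson_durbin(r, order):
    # Convert auto-correlation values r[0..order] into LP coefficients
    # a[0..order] (with a[0] == 1) and the final prediction error.
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err                      # reflection coefficient
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]  # symmetric coefficient update
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err


def perceptual_weighting(x, lpc, gamma=0.92):
    # Filter x through A(z/gamma): the LP analysis filter with
    # bandwidth-expanded coefficients a_k * gamma**k.
    w = [c * gamma ** k for k, c in enumerate(lpc)]
    return [sum(w[k] * x[n - k] for k in range(len(w)) if n - k >= 0)
            for n in range(len(x))]
```

For an AR(1)-shaped auto-correlation such as r = [1, 0.5, 0.25], the second-order recursion correctly yields a first coefficient of −0.5 and a vanishing second coefficient.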
The G.718 encoder is a pure speech coder having only a single speech coding mode. It is not a switched encoder and is therefore disadvantageous in that it provides only a single speech coding mode within the core layer. Hence, quality problems occur when this coder is applied to signals other than speech, i.e., to general audio signals, for which the model behind CELP encoding is not appropriate.
An additional switched codec is the so-called USAC codec, i.e., the unified speech and audio codec as defined in ISO/IEC CD 23003-3 dated Sep. 24, 2010. The LPC analysis window used for this switched codec is indicated in
The MDCT-based TCX decoding tool turns the weighted LP residual representation from the MDCT domain back into a time-domain signal and outputs the weighted time-domain signal, including weighted LP synthesis filtering. The IMDCT can be configured to support 256, 512 or 1024 spectral coefficients. The input to the TCX tool comprises the (inversely quantized) MDCT spectra and the inversely quantized and interpolated LPC filter coefficients. The output of the TCX tool is the time-domain reconstructed audio signal.
It is an object of the present invention to provide an improved coding concept for audio coding or decoding which, on the one hand, provides a good audio quality and which, on the other hand, results in a reduced delay.
According to an embodiment, an apparatus for encoding an audio signal having a stream of audio samples may have: a windower for applying a prediction coding analysis window to the stream of audio samples to obtain windowed data for a prediction analysis and for applying a transform coding analysis window to the stream of audio samples to obtain windowed data for a transform analysis, wherein the transform coding analysis window is associated with audio samples within a current frame of audio samples and with audio samples of a predefined portion of a future frame of audio samples being a transform-coding look-ahead portion, wherein the prediction coding analysis window is associated with at least a portion of the audio samples of the current frame and with audio samples of a predefined portion of the future frame being a prediction coding look-ahead portion, wherein the transform coding look-ahead portion and the prediction coding look-ahead portion are identical to each other or are different from each other by less than 20% of the prediction coding look-ahead portion or less than 20% of the transform coding look-ahead portion; and an encoding processor for generating prediction coded data for the current frame using the windowed data for the prediction analysis or for generating transform coded data for the current frame using the windowed data for the transform analysis.
According to another embodiment, a method of encoding an audio signal having a stream of audio samples may have the steps of: applying a prediction coding analysis window to the stream of audio samples to obtain windowed data for a prediction analysis and applying a transform coding analysis window to the stream of audio samples to obtain windowed data for a transform analysis, wherein the transform coding analysis window is associated with audio samples within a current frame of audio samples and with audio samples of a predefined portion of a future frame of audio samples being a transform-coding look-ahead portion, wherein the prediction coding analysis window is associated with at least a portion of the audio samples of the current frame and with audio samples of a predefined portion of the future frame being a prediction coding look-ahead portion, wherein the transform coding look-ahead portion and the prediction coding look-ahead portion are identical to each other or are different from each other by less than 20% of the prediction coding look-ahead portion or less than 20% of the transform coding look-ahead portion; and generating prediction coded data for the current frame using the windowed data for the prediction analysis or generating transform coded data for the current frame using the windowed data for the transform analysis.
According to still another embodiment, an audio decoder for decoding an encoded audio signal may have: a prediction parameter decoder for performing a decoding of data for a prediction coded frame from the encoded audio signal; a transform parameter decoder for performing a decoding of data for a transform coded frame from the encoded audio signal, wherein the transform parameter decoder is configured for performing a spectral-time transform and for applying a synthesis window to transformed data to obtain data for the current frame and a future frame, the synthesis window having a first overlap portion, an adjacent second non-overlapping portion and an adjacent third overlap portion, the third overlap portion being associated with audio samples for the future frame and the non-overlap portion being associated with data of the current frame; and an overlap-adder for overlapping and adding synthesis windowed samples associated with the third overlap portion of a synthesis window for the current frame and synthesis windowed samples associated with the first overlap portion of a synthesis window for the future frame to obtain a first portion of audio samples for the future frame, wherein a rest of the audio samples for the future frame are synthesis windowed samples associated with the second non-overlapping portion of the synthesis window for the future frame obtained without overlap-adding, when the current frame and the future frame have transform-coded data.
According to another embodiment, a method of decoding an encoded audio signal may have the steps of: performing a decoding of data for a prediction coded frame from the encoded audio signal; performing a decoding of data for a transform coded frame from the encoded audio signal, wherein the step of performing a decoding of data for a transform coded frame has performing a spectral-time transform and applying a synthesis window to transformed data to obtain data for the current frame and a future frame, the synthesis window having a first overlap portion, an adjacent second non-overlapping portion and an adjacent third overlap portion, the third overlap portion being associated with audio samples for the future frame and the non-overlap portion being associated with data of the current frame; and overlapping and adding synthesis windowed samples associated with the third overlap portion of a synthesis window for the current frame and synthesis windowed samples associated with the first overlap portion of a synthesis window for the future frame to obtain a first portion of audio samples for the future frame, wherein a rest of the audio samples for the future frame are synthesis windowed samples associated with the second non-overlapping portion of the synthesis window for the future frame obtained without overlap-adding, when the current frame and the future frame have transform-coded data.
Another embodiment may have a computer program having a program code for performing, when running on a computer, the method of encoding an audio signal or the method of decoding an audio signal as mentioned above.
In accordance with the present invention, a switched audio codec scheme is applied having a transform coding branch and a prediction coding branch. Importantly, the two kinds of windows, i.e., the prediction coding analysis window on the one hand and the transform coding analysis window on the other hand, are aligned with respect to their look-ahead portions, so that the transform coding look-ahead portion and the prediction coding look-ahead portion are identical or differ from each other by less than 20% of the prediction coding look-ahead portion or less than 20% of the transform coding look-ahead portion. It is to be noted that the prediction analysis window is used not only in the prediction coding branch but actually in both branches, since the LPC analysis is also used for shaping the noise in the transform domain. In other words, the look-ahead portions are identical or quite close to each other. This ensures that an optimum compromise is achieved and that neither the audio quality nor the delay is set in a sub-optimum way. For the prediction coding analysis window it has been found that the LPC analysis becomes better the longer the look-ahead is, but, on the other hand, the delay increases with a longer look-ahead portion. The same is true for the TCX window: the longer the look-ahead portion of the TCX window, the more the TCX bitrate can be reduced, since longer TCX windows result in lower bitrates in general. Therefore, in accordance with the present invention, the look-ahead portions are identical or quite close to each other and, particularly, differ by less than 20% from each other. Hence, the look-ahead portion, which is undesirable for delay reasons, is on the other hand optimally used by both encoding/decoding branches.
In view of that, the present invention provides an improved coding concept which, on the one hand, achieves low delay when the look-ahead portion for both analysis windows is set low and which, on the other hand, provides an encoding/decoding concept with good characteristics, due to the fact that the delay that has to be introduced for audio quality or bitrate reasons anyway is optimally used by both coding branches and not only by a single coding branch.
An apparatus for encoding an audio signal having a stream of audio samples comprises a windower for applying a prediction coding analysis window to the stream of audio samples to obtain windowed data for a prediction analysis and for applying a transform coding analysis window to the stream of audio samples to obtain windowed data for a transform analysis. The transform coding analysis window is associated with audio samples of a current frame of audio samples and with audio samples of a predefined look-ahead portion of a future frame of audio samples being a transform coding look-ahead portion.
Furthermore, the prediction coding analysis window is associated with at least a portion of the audio samples of the current frame and with audio samples of a predefined portion of the future frame being a prediction coding look-ahead portion.
The transform coding look-ahead portion and the prediction coding look-ahead portion are identical to each other or are different from each other by less than 20% of the prediction coding look-ahead portion or less than 20% of the transform coding look-ahead portion, and are therefore quite close to each other. The apparatus additionally comprises an encoding processor for generating prediction coded data for the current frame using the windowed data for the prediction analysis or for generating transform coded data for the current frame using the windowed data for the transform analysis.
An audio decoder for decoding an encoded audio signal comprises a prediction parameter decoder for performing a decoding of data for a prediction coded frame from the encoded audio signal and, for the second branch, a transform parameter decoder for performing a decoding of data for a transform coded frame from the encoded audio signal.
The transform parameter decoder is configured for performing a spectral-time transform, which may be an aliasing-affected transform such as an MDCT or MDST or any other such transform, and for applying a synthesis window to the transformed data to obtain data for the current frame and the future frame. The synthesis window applied by the audio decoder is such that it has a first overlap portion, an adjacent second non-overlap portion and an adjacent third overlap portion, wherein the third overlap portion is associated with audio samples for the future frame and the non-overlap portion is associated with data of the current frame. Additionally, in order to obtain a good audio quality on the decoder side, an overlap-adder is applied for overlapping and adding synthesis windowed samples associated with the third overlap portion of a synthesis window for the current frame and synthesis windowed samples associated with the first overlap portion of a synthesis window for the future frame to obtain a first portion of audio samples for the future frame, wherein the rest of the audio samples for the future frame are synthesis windowed samples associated with the second non-overlapping portion of the synthesis window for the future frame, obtained without overlap-adding, when the current frame and the future frame comprise transform coded data.
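The overlap-add behavior just described can be sketched as follows. The function below is an illustrative reconstruction of one frame from two successive synthesis-windowed TCX outputs; the argument names (`prev_tail`, `cur_windowed`, `frame_len`, `ov_len`) are hypothetical and chosen for the sketch, not taken from the text.

```python
def decode_tcx_frame(prev_tail, cur_windowed, frame_len, ov_len):
    # prev_tail: synthesis-windowed samples of the previous frame's
    #   third (overlap) portion, length ov_len, covering the start
    #   of the current frame.
    # cur_windowed: synthesis-windowed output for the current frame,
    #   covering frame_len + ov_len samples from the frame start;
    #   its tail (the third portion) belongs to the next frame.
    out = []
    for n in range(frame_len):
        if n < ov_len:
            # first overlap portion: overlap-add with previous tail
            out.append(prev_tail[n] + cur_windowed[n])
        else:
            # second, non-overlapping portion: taken directly,
            # no overlap-add needed
            out.append(cur_windowed[n])
    return out
```

The samples beyond `frame_len` in `cur_windowed` are retained as the `prev_tail` for the next call, mirroring how the third overlap portion of the current frame's window is consumed only when the future frame is decoded.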
Embodiments of the present invention have the feature that the look-aheads for the transform coding branch (such as the TCX branch) and the prediction coding branch (such as the ACELP branch) are identical to each other, so that both coding modes have the maximum available look-ahead under the delay constraint. Furthermore, it is of advantage that the TCX window overlap is restricted to the look-ahead portion, so that switching from the transform coding mode to the prediction coding mode from one frame to the next is easily possible without any aliasing issues to be addressed.
A further reason to restrict the overlap to the look-ahead is to avoid introducing additional delay at the decoder side. A TCX window with 10 ms look-ahead and, e.g., 20 ms overlap would introduce 10 ms of additional delay in the decoder. A TCX window with 10 ms look-ahead and 10 ms overlap introduces no additional delay at the decoder side. The easier switching is a welcome consequence of this.
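The delay arithmetic of the two cases above can be captured in a one-line illustrative formula (an assumption distilled from the two examples in the text, not a formula stated there): the overlap reaching beyond the look-ahead cannot be resolved until that much more future signal has arrived at the decoder.

```python
def extra_decoder_delay_ms(overlap_ms, lookahead_ms):
    # Additional decoder-side delay caused by a TCX window whose
    # overlap extends beyond the look-ahead portion.
    return max(0, overlap_ms - lookahead_ms)
```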
Therefore, it is of advantage that the second non-overlap portion of the analysis window, and of course of the synthesis window, extends until the end of the current frame, and that the third overlap portion starts only with the future frame. Furthermore, the non-zero portion of the TCX or transform coding analysis/synthesis window is aligned with the beginning of the frame so that, again, an easy and efficient switchover from one mode to the other is available.
Furthermore, it is of advantage that a whole frame consisting of a plurality of subframes, such as four subframes, can either be fully coded in the transform coding mode (such as TCX mode) or fully coded in the prediction coding mode (such as the ACELP mode).
Furthermore, it is of advantage to use not a single LPC analysis window but two different LPC analysis windows, where one LPC analysis window is aligned with the center of the fourth subframe and is an end-frame analysis window, while the other analysis window is aligned with the center of the second subframe and is a mid-frame analysis window. If the encoder is switched to transform coding, however, it is of advantage to transmit only a single LPC coefficient data set, derived from the LPC analysis based on the end-frame LPC analysis window. Furthermore, on the decoder side, it is of advantage not to use this LPC data directly for the transform coding synthesis, particularly for the spectral weighting of the TCX coefficients. Instead, it is of advantage to interpolate the LPC data obtained from the end-frame LPC analysis window of the current frame with the LPC data obtained from the end-frame LPC analysis window of the past frame, i.e., the frame immediately preceding the current frame in time. By transmitting only a single set of LPC coefficients for a whole frame in the TCX mode, a further bitrate reduction can be obtained compared to transmitting two LPC coefficient data sets for mid-frame analysis and end-frame analysis. When, however, the encoder is switched to ACELP mode, both sets of LPC coefficients are transmitted from the encoder to the decoder.
Furthermore, it is of advantage that the mid-frame LPC analysis window ends immediately at the later frame border of the current frame and additionally extends into the past frame. This does not introduce any delay, since the past frame is already available and can be used without any delay.
On the other hand, it is of advantage that the end-frame analysis window starts somewhere within the current frame and not at the beginning of the current frame. This, however, is not problematic, since, for forming the TCX weighting, an average of the end-frame LPC data set for the past frame and the end-frame LPC data set for the current frame is used, so that, in the end, all data are in a sense used for calculating the LPC coefficients. Hence, the start of the end-frame analysis window may be within the look-ahead portion of the end-frame analysis window of the past frame.
On the decoder side, a significantly reduced overhead for switching from one mode to the other is obtained. The reason is that the third overlap portion of the synthesis window, which may be symmetric within itself, is not associated with samples of the current frame but with samples of the future frame, and therefore extends within the look-ahead portion only, i.e., in the future frame only. Hence, the synthesis window is such that only the first overlap portion, advantageously starting at the immediate start of the current frame, is within the current frame, the second non-overlapping portion extends from the end of the first overlap portion to the end of the current frame, and the third overlap portion therefore coincides with the look-ahead portion. Therefore, when there is a transition from TCX to ACELP, the data obtained from the overlap portion of the synthesis window is simply discarded and replaced by prediction coded data, which is available from the very beginning of the future frame out of the ACELP branch.
On the other hand, when there is a switch from ACELP to TCX, a specific transition window is applied which starts immediately at the beginning of the current frame, i.e., the frame immediately after the switchover, with a non-overlapping portion, so that no data have to be reconstructed in order to find overlap “partners”. Instead, the non-overlap portion of the synthesis window provides correct data without any overlapping and without any overlap-add procedure in the decoder. Only for the overlap portions, i.e., the third portion of the window for the current frame and the first portion of the window for the next frame, is an overlap-add procedure useful and performed in order to have, as in a straightforward MDCT, a continuous fade-in/fade-out from one block to the next, and in order to finally obtain a good audio quality without having to increase the bitrate due to the critically sampled nature of the MDCT, also known in the art under the term “time-domain aliasing cancellation” (TDAC).
Furthermore, it is of advantage that, for the ACELP coding mode, LPC data derived from the mid-frame window and the end-frame window in the encoder are transmitted while, for the TCX coding mode, only a single LPC data set derived from the end-frame window is used. For spectrally weighting the TCX decoded data, however, the transmitted LPC data is not used as it is; instead, it is averaged with the corresponding data from the end-frame LPC analysis window obtained for the past frame.
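The averaging step described above can be sketched as a simple per-coefficient mean of the two end-frame LPC sets. This is an illustrative sketch only: the function name is hypothetical, and a real codec would typically perform such interpolation in the ISF/LSF domain rather than directly on the LP coefficients.

```python
def tcx_weighting_lpc(past_end_lpc, cur_end_lpc):
    # Average the end-frame LPC set of the past frame with that of
    # the current frame to form the LPC data used for spectrally
    # weighting the TCX coefficients.
    return [(p + c) / 2.0 for p, c in zip(past_end_lpc, cur_end_lpc)]
```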
Embodiments of the present invention are subsequently described with respect to the accompanying drawings, in which:
a illustrates a block diagram of a switched audio encoder;
b illustrates a block diagram of a corresponding switched decoder;
c illustrates more details on the transform parameter decoder illustrated in
d illustrates more details on the transform coding mode of the decoder of
a illustrates an embodiment for the windower applied in the encoder for LPC analysis on the one hand and transform coding analysis on the other hand, and is a representation of the synthesis window used in the transform coding decoder of
b illustrates a window sequence of aligned LPC analysis windows and TCX windows for a time span of more than two frames;
c illustrates a situation for a transition from TCX to ACELP and a transition window for a transition from ACELP to TCX;
a illustrates more details of the encoder of
b illustrates an analysis-by-synthesis procedure for deciding on a coding mode for a frame;
c illustrates a further embodiment for deciding between the modes for each frame;
a illustrates the calculation and usage of the LPC data derived by using two different LPC analysis windows for a current frame;
b illustrates the usage of LPC data obtained by windowing using an LPC analysis window for the TCX branch of the encoder;
a illustrates LPC analysis windows for AMR-WB;
b illustrates symmetric windows for AMR-WB+ for the purpose of LPC analysis;
c illustrates LPC analysis windows for a G.718 encoder;
d illustrates LPC analysis windows as used in USAC; and
The transform coding analysis window is associated with audio samples in a current frame of audio samples and with audio samples of a predefined portion of the future frame of audio samples being a transform coding look-ahead portion.
Furthermore, the prediction coding analysis window is associated with at least a portion of the audio samples of the current frame and with audio samples of a predefined portion of the future frame being a prediction coding look-ahead portion.
As outlined in block 102, the transform coding look-ahead portion and the prediction coding look-ahead portion are aligned with each other, which means that these portions are either identical or quite close to each other, such as different from each other by less than 20% of the prediction coding look-ahead portion or less than 20% of the transform coding look-ahead portion. Advantageously, the look-ahead portions are identical or differ from each other by even less than 5% of the prediction coding look-ahead portion or less than 5% of the transform coding look-ahead portion.
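The alignment condition of block 102 can be expressed as a small predicate. This is an illustrative helper, not part of the described apparatus; `tol` would be 0.20 for the general case and 0.05 for the advantageous case.

```python
def lookahead_aligned(tcx_la, lpc_la, tol=0.20):
    # True if the two look-ahead portions are identical or differ by
    # less than tol times either look-ahead portion.
    diff = abs(tcx_la - lpc_la)
    return diff < tol * lpc_la or diff < tol * tcx_la
```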
The encoder additionally comprises an encoding processor 104 for generating prediction coded data for the current frame using the windowed data for the prediction analysis or for generating transform coded data for the current frame using the windowed data for the transform analysis.
Furthermore, the encoder may comprise an output interface 106 for receiving, for a current frame and, in fact, for each frame, LPC data 108a and transform coded data (such as TCX data) or prediction coded data (ACELP data) over line 108b. The encoding processor 104 provides these two kinds of data and receives, as input, windowed data for a prediction analysis indicated at 110a and windowed data for a transform analysis indicated at 110b. Furthermore, the apparatus for encoding comprises an encoding mode selector or controller 112 which receives, as an input, the audio data 100 and which provides, as an output, control data to the encoding processor 104 via control lines 114a, or control data to the output interface 106 via control line 114b.
a provides additional details on the encoding processor 104 and the windower 102. The windower 102 may comprise, as a first module, the LPC or prediction coding analysis windower 102a and, as a second component or module, the transform coding windower (such as TCX windower) 102b. As indicated by arrow 300, the LPC analysis window and the TCX window are aligned with each other so that the look-ahead portions of both windows are identical to each other, which means that both look-ahead portions extend until the same time instant into a future frame. The upper branch in
Although,
Furthermore, for the transform coding branch, an MDCT processing particularly in the time-frequency conversion block 310 is of advantage, although any other spectral domain transforms can be performed as well.
Furthermore,
b illustrates the general overview for illustrating an analysis-by-synthesis or “closed-loop” determination of the coding mode for each frame. To this end, the encoder illustrated in
Based on the quality measure provided from each branch 104a, 104b to the decider 112, the decider decides whether the currently examined frame is to be encoded using ACELP or TCX. Subsequent to the decision, there are several ways to perform the coding mode selection. One way is that the decider 112 controls the corresponding encoder/decoder blocks 104a, 104b to simply output the coding result for the current frame to the output interface 106, so that it is ensured that, for a certain frame, only a single coding result is transmitted in the output coded signal at 107.
Alternatively, both devices 104a, 104b could forward their encoding result already to the output interface 106, and both results are stored in the output interface 106 until the decider controls the output interface via line 105 to either output the result from block 104b or from block 104a.
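The closed-loop decision of blocks 104a, 104b and decider 112 can be sketched as follows. All function arguments here are hypothetical hooks for the sketch: `encode_acelp` and `encode_tcx` stand for the two encode/decode branches and return (coded data, local synthesis), and `quality` stands for the quality measure used by the decider.

```python
def select_mode_closed_loop(frame, encode_acelp, encode_tcx, quality):
    # Encode and locally decode the frame in both branches, then keep
    # the branch whose synthesis scores better under the quality
    # measure (analysis-by-synthesis).
    bits_a, synth_a = encode_acelp(frame)
    bits_t, synth_t = encode_tcx(frame)
    if quality(frame, synth_a) >= quality(frame, synth_t):
        return "ACELP", bits_a
    return "TCX", bits_t
```

A usage example with stub branches: a branch that reproduces the frame perfectly wins over one that outputs silence under a negative-squared-error quality measure.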
b illustrates more details on the concept of
Alternatively, an open-loop mode for determining the coding mode for a current frame based on the signal analysis of the audio data for the current frame can be performed. In this case, the decider 112 of
a illustrates an advantageous implementation of the windower 102 and, particularly, the windows supplied by the windower.
Advantageously, the prediction coding analysis window for the current frame is centered at the center of the fourth subframe; this window is indicated at 200. Furthermore, it is of advantage to use an additional LPC analysis window, i.e., the mid-frame LPC analysis window, indicated at 202 and centered at the center of the second subframe of the current frame. Furthermore, the transform coding window such as, for example, the MDCT window 204 is placed with respect to the two LPC analysis windows 200, 202 as illustrated. Particularly, the look-ahead portion 206 of the transform coding analysis window has the same length in time as the look-ahead portion 208 of the prediction coding analysis window. Both look-ahead portions extend 10 ms into the future frame. Furthermore, it is of advantage that the transform coding analysis window not only has the overlap portion 206, but also has a non-overlap portion 208 between 10 and 20 ms and the first overlap portion 210. The overlap portions 206 and 210 are such that an overlap-adder in a decoder performs overlap-add processing in the overlap portions, whereas an overlap-add procedure is not necessary for the non-overlap portion.
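The layout of the transform coding window 204 — a rising first overlap portion, a flat non-overlap portion of ones, and a falling overlap portion lying entirely in the look-ahead — can be sketched by constructing its non-zero part. A sine-shaped overlap is assumed here for the sketch; the text does not fix the overlap shape.

```python
import math

def tcx_analysis_window(ov_len, flat_len):
    # Non-zero part of the asymmetric TCX window: rising overlap
    # (first overlap portion), flat non-overlap part of ones, and
    # a mirror-image falling overlap (look-ahead portion).
    rise = [math.sin(math.pi * (n + 0.5) / (2 * ov_len))
            for n in range(ov_len)]
    return rise + [1.0] * flat_len + rise[::-1]
```

With a 20 ms frame at 10 ms overlap, `ov_len` and `flat_len` would each correspond to 10 ms of samples, placing the falling overlap fully in the future frame.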
Advantageously, the first overlap portion 210 starts at the beginning of the frame, i.e., at zero ms, and extends until the center of the frame, i.e., 10 ms. Furthermore, the non-overlap portion extends from the end of the first overlap portion 210 until the end of the frame at 20 ms, so that the second overlap portion 206 fully coincides with the look-ahead portion. This has advantages when switching from one mode to the other. From a TCX performance point of view, it would be better to use a sine window with full overlap (20 ms overlap, as in USAC). This would, however, necessitate a technology like forward aliasing cancellation (FAC) for the transitions between TCX and ACELP. Forward aliasing cancellation is used in USAC to cancel the aliasing introduced by the missing next TCX frames (replaced by ACELP). It necessitates a significant number of bits and is thus not suitable for a constant-bitrate and, particularly, low-bitrate codec like an embodiment of the described codec. Therefore, in accordance with embodiments of the invention, instead of using FAC, the TCX window overlap is reduced and the window is shifted towards the future so that the full overlap portion 206 is placed in the future frame. Furthermore, the window illustrated in
Although
b illustrates a sequence of windows over a portion of a past frame, a subsequently following current frame, a future frame which is subsequently following the current frame and the next future frame which is subsequently following the future frame.
It becomes clear that the overlap-add portion processed by an overlap-add processor, illustrated at 250, extends from the beginning of each frame until the middle of each frame, i.e., between 20 and 30 ms for calculating the future frame data, between 40 and 50 ms for calculating TCX data for the next future frame, or between zero and 10 ms for calculating data for the current frame. For calculating the data in the second half of each frame, however, no overlap-add and, therefore, no forward aliasing cancellation technique is necessary. This is due to the fact that the synthesis window has a non-overlap part in the second half of each frame.
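This half-overlapped reconstruction can be sketched in a few lines, assuming plain Python lists for the windowed synthesis data (the function and argument names are illustrative):

```python
# Minimal sketch of the decoder-side overlap-add: only the first half of
# each frame is overlap-added with the previous frame's windowed
# look-ahead; the non-overlap second half is passed through unchanged.
def overlap_add(prev_lookahead, cur_frame):
    half = len(cur_frame) // 2
    assert len(prev_lookahead) == half
    head = [p + c for p, c in zip(prev_lookahead, cur_frame[:half])]
    return head + cur_frame[half:]

out = overlap_add([1.0, 2.0], [3.0, 4.0, 5.0, 6.0])
```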
Typically, the length of an MDCT window is twice the length of a frame. This is the case in the present invention as well. When, again,
c illustrates the two possible transitions. For a transition from TCX to ACELP, however, no special care has to be taken since, when it is assumed with respect to
When, however, a transition from ACELP to TCX is performed, then a special transition window as illustrated in
This window is, additionally, padded with zeros between −12.5 ms and zero at the beginning of the window and between 30 and 35.5 ms at the end, i.e., subsequent to the look-ahead portion 222. This results in an increased transform length: the transform length is 50 ms, whereas the straightforward analysis/synthesis window has a length of only 40 ms. This, however, does not decrease the efficiency or increase the bitrate, and this longer transform is necessitated when a switch from ACELP to TCX takes place. The transition window used in the corresponding decoder is identical to the window illustrated in
Subsequently, the decoder is discussed in more detail.
c illustrates more details on the construction of the transform parameter decoder 183.
The decoder comprises a decoder processing stage 183a which is configured for performing all processing necessitated for decoding encoded spectral data, such as arithmetic decoding or Huffman decoding or, generally, entropy decoding, and a subsequent de-quantization, noise filling, etc., to obtain decoded spectral values at the output of block 183a. These spectral values are input into a spectral weighter 183b. The spectral weighter 183b receives the spectral weighting data from an LPC weighting data calculator 183c, which is fed by LPC data generated by the prediction analysis block on the encoder-side and received, at the decoder, via the input interface 182. Then, an inverse spectral transform is performed, which may comprise, as a first stage, a DCT-IV inverse transform 183d and a subsequent defolding and synthesis windowing processing 183e, before the data for the future frame, for example, is provided to the overlap-adder 184. The overlap-adder can perform the overlap-add operation when the data for the next future frame is available. Blocks 183d and 183e together constitute the spectral/time transform or, in the embodiment in
Particularly, block 183d receives data for a frame of 20 ms, and the defolding step of block 183e doubles this data volume into data for 40 ms; subsequently, the synthesis window, having a length of 40 ms (when the zero portions at the beginning and the end of the window are included), is applied to these 40 ms of data. Then, at the output of block 183e, the data for the current block and the data within the look-ahead portion for the future block are available.
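The defolding and synthesis windowing can be illustrated with the time-domain folding identity that underlies the MDCT. The sketch below is a simplification: it uses a full-overlap sine window rather than the codec's half-overlap window, and it omits the forward/inverse DCT-IV pair (which is orthogonal and cancels between analysis and synthesis), so it shows only the fold/defold/overlap-add mechanics on a toy frame length:

```python
# fold: 2N windowed time samples -> N time-aliased samples (encoder side)
# defold: N time-aliased samples -> 2N samples (decoder side, block 183e),
# i.e., the doubling of the data volume described in the text.
import math

def fold(x):
    N = len(x) // 2
    a, b, c, d = x[:N // 2], x[N // 2:N], x[N:3 * N // 2], x[3 * N // 2:]
    return [-cr - dd for cr, dd in zip(c[::-1], d)] + \
           [aa - br for aa, br in zip(a, b[::-1])]

def defold(v):
    N = len(v)
    v1, v2 = v[:N // 2], v[N // 2:]
    return v2 + [-t for t in v2[::-1]] + [-t for t in v1[::-1]] + [-t for t in v1]

N = 8                                              # toy frame length (samples)
w = [math.sin(math.pi * (n + 0.5) / (2 * N)) for n in range(2 * N)]
x = [float(n) for n in range(3 * N)]               # test input

# two overlapping analysis/synthesis passes, hop N, synthesis windowing + OLA
y = [0.0] * (3 * N)
for start in (0, N):
    seg = [w[n] * x[start + n] for n in range(2 * N)]
    out = defold(fold(seg))                        # DCT-IV pair omitted (orthogonal)
    for n in range(2 * N):
        y[start + n] += w[n] * out[n]

# the region covered by both frames is perfectly reconstructed
assert all(abs(y[n] - x[n]) < 1e-9 for n in range(N, 2 * N))
```

The assertion demonstrates the time-domain aliasing cancellation that makes the overlap-add in block 184 sufficient, without any forward aliasing cancellation bits.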
d illustrates the corresponding encoder-side processing. The features discussed in the context of
On the other hand, on the decoder-side, the spectral weighting corresponding to block 312 in
Subsequently,
Subsequent to the application of the LPC analysis window, the autocorrelation computation is performed on the LPC-windowed data. Then, a Levinson-Durbin algorithm is applied to the autocorrelation function. Then, the 16 LP coefficients for each LP analysis, i.e., 16 coefficients for the mid-frame window and 16 coefficients for the end-frame window, are converted into ISP values. Hence, the steps from the autocorrelation calculation to the ISP conversion are, for example, performed in block 400 of
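A textbook version of the autocorrelation and Levinson-Durbin steps may be sketched as follows. This is a generic recursion, not the exact AMR-WB-style routine, which additionally applies lag windowing and white-noise correction and converts the result to ISP values; those steps are omitted here:

```python
# Autocorrelation of the windowed signal up to the LP order.
def autocorrelation(x, order):
    return [sum(x[n] * x[n - k] for n in range(k, len(x)))
            for k in range(order + 1)]

# Levinson-Durbin recursion: solves the normal equations and returns the
# LP coefficients a[0..order] (with a[0] = 1) and the final prediction error.
def levinson_durbin(r, order):
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                     # reflection coefficient
        a_prev = a[:]
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

a, err = levinson_durbin([1.0, 0.5], 1)    # small deterministic example
```

In the codec described here the recursion would be run to order 16, once per LPC analysis window (mid-frame and end-frame).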
Finally, the LPC data for the first subframe are calculated, as indicated in block 404, by forming an average between the end-frame LPC data of the last frame and the mid-frame LPC data of the current frame.
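The averaging of block 404 may be sketched as a per-coefficient mean of the two parameter sets; per the surrounding text it is applied to quantized and again dequantized ISP values, and the numeric values below are purely illustrative:

```python
# Block 404 sketch: first-subframe LPC data as the average of the past
# frame's end-frame data and the current frame's mid-frame data.
def first_subframe_lpc(end_prev, mid_cur):
    return [(p + c) / 2.0 for p, c in zip(end_prev, mid_cur)]

sf1 = first_subframe_lpc([0.2, 0.4], [0.6, 0.8])   # illustrative values
```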
For performing the ACELP encoding, both quantized LPC parameter sets, i.e., from the mid-frame analysis and the end-frame analysis are transmitted to a decoder.
Based on the results for the individual subframes calculated by blocks 401 to 404, the ACELP calculations are performed as indicated in block 405 in order to obtain the ACELP data to be transmitted to the decoder.
Subsequently,
In the encoder, however, the procedures in steps 406 to 408 are, nevertheless, to be performed in order to obtain weighting factors for weighting the MDCT spectral data of the current frame. To this end, the end-frame LPC data of the current frame and the end-frame LPC data of the past frame are interpolated. However, it is of advantage not to interpolate the LPC coefficients themselves as directly derived from the LPC analysis. Instead, it is of advantage to interpolate the quantized and again dequantized ISP values derived from the corresponding LPC coefficients. Hence, the LPC data used in block 406, as well as the LPC data used for the other calculations in blocks 401 to 404, are, advantageously, quantized and again de-quantized ISP data derived from the original 16 LPC coefficients per LPC analysis window.
The interpolation in block 406 may be a pure averaging, i.e., the corresponding values are added and divided by two. Then, in block 407, the MDCT spectral data of the current frame are weighted using the interpolated LPC data and, in block 408, the further processing of the weighted spectral data is performed in order to finally obtain the encoded spectral data to be transmitted from the encoder to a decoder. Hence, the procedure performed in step 407 corresponds to block 312, and the procedure performed in block 408 in
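Blocks 406 and 407 can be sketched as follows. The derivation of per-bin weights from the interpolated LPC data is codec-specific and not described here, so the weight vector passed to the spectral weighting below is a hypothetical placeholder:

```python
# Block 406 sketch: pure averaging of the two dequantized ISP sets
# (past frame's end-frame data and current frame's end-frame data).
def interpolate(prev_end, cur_end):
    return [(p + c) / 2.0 for p, c in zip(prev_end, cur_end)]

# Block 407 sketch: per-bin weighting of the current frame's MDCT data.
# In the codec, the weights would be derived from the interpolated data.
def weight_spectrum(mdct_bins, weights):
    return [m * w for m, w in zip(mdct_bins, weights)]

isp = interpolate([0.1, 0.3], [0.5, 0.7])          # illustrative values
shaped = weight_spectrum([2.0, 4.0], [0.5, 0.25])  # hypothetical weights
```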
The present invention is particularly useful for low-delay codec implementations. This means that such codecs are designed to have an algorithmic or systematic delay advantageously below 45 ms and, in some cases, even equal to or below 35 ms. Nevertheless, the look-ahead portions for LPC analysis and TCX analysis are necessitated for obtaining a good audio quality. Therefore, a good trade-off between the two contradictory requirements is necessitated. It has been found that a good trade-off between delay on the one hand and quality on the other hand can be obtained by a switched audio encoder or decoder having a frame length of 20 ms, but frame lengths between 15 and 30 ms also provide acceptable results. On the other hand, it has been found that a look-ahead portion of 10 ms is acceptable when it comes to delay issues, but values between 5 ms and 20 ms are also useful depending on the corresponding application. Furthermore, it has been found that a ratio of the look-ahead portion to the frame length of 0.5 is useful, but other values between 0.4 and 0.6 are useful as well. Furthermore, although the invention has been described with ACELP on the one hand and MDCT-TCX on the other hand, other algorithms operating in the time domain, such as CELP or any other prediction or waveform algorithms, are useful as well. With respect to TCX/MDCT, other transform-domain coding algorithms, such as an MDST or any other transform-based algorithm, can be applied as well.
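The delay budget discussed above can be checked with simple arithmetic; this sketch counts only framing delay plus look-ahead and ignores any additional resampling or filterbank delay:

```python
# Algorithmic delay sketch: one frame of buffering plus the look-ahead.
def algorithmic_delay_ms(frame_ms, lookahead_ms):
    return frame_ms + lookahead_ms

delay = algorithmic_delay_ms(20, 10)   # the advantageous 20 ms / 10 ms choice
ratio = 10 / 20                        # look-ahead to frame-length ratio
```

With the advantageous parameters, the budget stays below the 35 ms target, and the ratio falls at the preferred value of 0.5 within the useful 0.4 to 0.6 range.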
The same is true for the specific implementation of LPC analysis and LPC calculation. It is of advantage to rely on the procedures described before, but other procedures for calculation/interpolation and analysis can be used as well, as long as those procedures rely on an LPC analysis window.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a non-transitory data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which will be apparent to others skilled in the art and which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
This application is a continuation of copending International Application No. PCT/EP2012/052450, filed Feb. 14, 2012, which is incorporated herein by reference in its entirety, and additionally claims priority from U.S. Provisional Application No. 61/442,632, filed Feb. 14, 2011, which is also incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5537510 | Kim | Jul 1996 | A |
5606642 | Stautner et al. | Feb 1997 | A |
5848391 | Bosi et al. | Dec 1998 | A |
5953698 | Hayata | Sep 1999 | A |
5960389 | Jarvinen et al. | Sep 1999 | A |
6070137 | Bloebaum et al. | May 2000 | A |
6134518 | Cohen et al. | Oct 2000 | A |
6236960 | Peng et al. | May 2001 | B1 |
6317117 | Goff | Nov 2001 | B1 |
6532443 | Nishiguchi et al. | Mar 2003 | B1 |
6757654 | Westerlund et al. | Jun 2004 | B1 |
6879955 | Rao | Apr 2005 | B2 |
7124079 | Johansson et al. | Oct 2006 | B1 |
7280959 | Bessette | Oct 2007 | B2 |
7343283 | Ashley et al. | Mar 2008 | B2 |
7363218 | Jabri et al. | Apr 2008 | B2 |
7519535 | Spindola | Apr 2009 | B2 |
7519538 | Villemoes et al. | Apr 2009 | B2 |
7536299 | Cheng et al. | May 2009 | B2 |
7707034 | Sun et al. | Apr 2010 | B2 |
7873511 | Herre et al. | Jan 2011 | B2 |
7933769 | Bessette | Apr 2011 | B2 |
7979271 | Bessette | Jul 2011 | B2 |
7987089 | Krishnan et al. | Jul 2011 | B2 |
8121831 | Oh et al. | Feb 2012 | B2 |
8160274 | Bongiovi | Apr 2012 | B2 |
8255207 | Vaillancourt et al. | Aug 2012 | B2 |
8428936 | Mittal et al. | Apr 2013 | B2 |
8566106 | Salami et al. | Oct 2013 | B2 |
8630862 | Geiger et al. | Jan 2014 | B2 |
8630863 | Son et al. | Jan 2014 | B2 |
20020111799 | Bernard | Aug 2002 | A1 |
20020184009 | Heikkinen | Dec 2002 | A1 |
20030009325 | Kirchherr et al. | Jan 2003 | A1 |
20030078771 | Jung et al. | Apr 2003 | A1 |
20040225505 | Andersen et al. | Nov 2004 | A1 |
20050091044 | Ramo et al. | Apr 2005 | A1 |
20050130321 | Nicholson et al. | Jun 2005 | A1 |
20050131696 | Wang et al. | Jun 2005 | A1 |
20050154584 | Jelinek et al. | Jul 2005 | A1 |
20050240399 | Makinen | Oct 2005 | A1 |
20050278171 | Suppappola et al. | Dec 2005 | A1 |
20060206334 | Kapoor et al. | Sep 2006 | A1 |
20060271356 | Vos | Nov 2006 | A1 |
20060293885 | Gournay et al. | Dec 2006 | A1 |
20070016404 | Kim et al. | Jan 2007 | A1 |
20070050189 | Cruz-Zeno et al. | Mar 2007 | A1 |
20070100607 | Villemoes | May 2007 | A1 |
20070147518 | Bessette | Jun 2007 | A1 |
20070171931 | Manjunath et al. | Jul 2007 | A1 |
20070225971 | Bessette | Sep 2007 | A1 |
20070253577 | Yen et al. | Nov 2007 | A1 |
20070282603 | Bessette | Dec 2007 | A1 |
20080010064 | Takeuchi et al. | Jan 2008 | A1 |
20080015852 | Kruger et al. | Jan 2008 | A1 |
20080027719 | Krishnan et al. | Jan 2008 | A1 |
20080052068 | Aguilar et al. | Feb 2008 | A1 |
20080208599 | Rosec et al. | Aug 2008 | A1 |
20080275580 | Andersen | Nov 2008 | A1 |
20090024397 | Ryu et al. | Jan 2009 | A1 |
20090226016 | Fitz et al. | Sep 2009 | A1 |
20100017200 | Oshikiri et al. | Jan 2010 | A1 |
20100063812 | Gao | Mar 2010 | A1 |
20100070270 | Gao | Mar 2010 | A1 |
20100138218 | Geiger | Jun 2010 | A1 |
20100198586 | Edler et al. | Aug 2010 | A1 |
20100217607 | Neuendorf et al. | Aug 2010 | A1 |
20110153333 | Bessette | Jun 2011 | A1 |
20110161088 | Bayer et al. | Jun 2011 | A1 |
20110178795 | Bayer et al. | Jul 2011 | A1 |
20110218797 | Mittal et al. | Sep 2011 | A1 |
20110218799 | Mittal et al. | Sep 2011 | A1 |
20110311058 | Oh et al. | Dec 2011 | A1 |
20120022881 | Geiger et al. | Jan 2012 | A1 |
20120226505 | Lin et al. | Sep 2012 | A1 |
Number | Date | Country |
---|---|---|
2007312667 | Apr 2008 | AU |
1274456 | Nov 2000 | CN |
1344067 | Apr 2002 | CN |
1381956 | Nov 2002 | CN |
1437747 | Aug 2003 | CN |
1539137 | Oct 2004 | CN |
1539138 | Oct 2004 | CN |
101351840 | Oct 2006 | CN |
101110214 | Jan 2008 | CN |
101366077 | Feb 2009 | CN |
101371295 | Feb 2009 | CN |
101379551 | Mar 2009 | CN |
101388210 | Mar 2009 | CN |
101425292 | May 2009 | CN |
101483043 | Jul 2009 | CN |
101488344 | Jul 2009 | CN |
101743587 | Jun 2010 | CN |
101770775 | Jul 2010 | CN |
0673566 | Sep 1995 | EP |
0758123 | Feb 1997 | EP |
0843301 | May 1998 | EP |
1120775 | Aug 2001 | EP |
1852851 | Jul 2007 | EP |
2107556 | Jul 2009 | EP |
10039898 | Feb 1998 | JP |
H10214100 | Aug 1998 | JP |
H11502318 | Feb 1999 | JP |
H1198090 | Apr 1999 | JP |
2000357000 | Dec 2000 | JP |
2002-118517 | Apr 2002 | JP |
2003501925 | Jan 2003 | JP |
2003506764 | Feb 2003 | JP |
2004514182 | May 2004 | JP |
2006504123 | Feb 2006 | JP |
2007065636 | Mar 2007 | JP |
2007523388 | Aug 2007 | JP |
2007525707 | Sep 2007 | JP |
2007538282 | Dec 2007 | JP |
2008-15281 | Jan 2008 | JP |
2008261904 | Oct 2008 | JP |
2009508146 | Feb 2009 | JP |
2009522588 | Jun 2009 | JP |
2009-527773 | Jul 2009 | JP |
2010-538314 | Dec 2010 | JP |
2010539528 | Dec 2010 | JP |
2011501511 | Jan 2011 | JP |
2011527444 | Oct 2011 | JP |
2183034 | May 2002 | RU |
200830277 | Oct 1996 | TW |
200943279 | Oct 1998 | TW |
201032218 | Sep 1999 | TW |
380246 | Jan 2000 | TW |
469423 | Dec 2001 | TW |
I253057 | Apr 2006 | TW |
200703234 | Jan 2007 | TW |
200729156 | Aug 2007 | TW |
200841743 | Oct 2008 | TW |
I313856 | Aug 2009 | TW |
200943792 | Oct 2009 | TW |
I316225 | Oct 2009 | TW |
I320172 | Feb 2010 | TW |
201009810 | Mar 2010 | TW |
201009812 | Mar 2010 | TW |
I324762 | May 2010 | TW |
201027517 | Jul 2010 | TW |
201030735 | Aug 2010 | TW |
201040943 | Nov 2010 | TW |
I333643 | Nov 2010 | TW |
201103009 | Jan 2011 | TW |
9222891 | Dec 1992 | WO |
9510890 | Apr 1995 | WO |
9629696 | Sep 1996 | WO |
0075919 | Dec 2000 | WO |
02101724 | Dec 2002 | WO |
WO-02101722 | Dec 2002 | WO |
2004027368 | Apr 2004 | WO |
2005078706 | Aug 2005 | WO |
2005081231 | Sep 2005 | WO |
2005112003 | Nov 2005 | WO |
2006126844 | Nov 2006 | WO |
WO-2007051548 | May 2007 | WO |
2007083931 | Jul 2007 | WO |
WO-2007073604 | Jul 2007 | WO |
WO2007096552 | Aug 2007 | WO |
WO-2008013788 | Oct 2008 | WO |
WO-2009029032 | Mar 2009 | WO |
2009077321 | Oct 2009 | WO |
2010003491 | Jan 2010 | WO |
WO-2010003532 | Jan 2010 | WO |
WO-2010040522 | Apr 2010 | WO |
2010059374 | May 2010 | WO |
2010093224 | Aug 2010 | WO |
2011006369 | Jan 2011 | WO |
WO-2011048094 | Apr 2011 | WO |
2011147950 | Dec 2011 | WO |
Entry |
---|
A Silence Compression Scheme for G.729 Optimized for Terminals Conforming to Recommendation V.70, ITU-T Recommendation G.729—Annex B, International Telecommunication Union, pp. 1-16., Nov. 1996. |
Martin, R., Spectral Subtraction Based on Minimum Statistics, Proceedings of European Signal Processing Conference (EUSIPCO), Edinburg, Scotland, Great Britain, Sep. 1994, pp. 1182-1185. |
“Digital Cellular Telecommunications System (Phase 2+); Universal Mobile Telecommunications System (UMTS); LTE; Speech codec speech processing functions; Adaptive Multi-Rate-Wideband (AMR-WB) Speech Codec; Transcoding Functions (3GPP TS 26.190 version 9.0.0)”, Technical Specification, European Telecommunications Standards Institute (ETSI) 650, Route Des Lucioles; F-06921 Sophia-Antipolis; France; No. V9.0.0, Jan. 1, 2012, 54 Pages. |
“IEEE Signal Processing Letters”, IEEE Signal Processing Society. vol. 15. ISSN 1070-9908., 2008, 9 Pages. |
“Information Technology—MPEG Audio Technologies—Part 3: Unified Speech and Audio Coding”, ISO/IEC JTC 1/SC 29 ISO/IEC DIS 23003-3, Feb. 9, 2011, 233 Pages. |
“WD7 of USAC”, International Organisation for Standardisation Organisation Internationale De Normailisation. ISO/IEC JTC1/SC29/WG11. Coding of Moving Pictures and Audio. Dresden, Germany., Apr. 2010, 148 Pages. |
3GPP, , “3rd Generation Partnership Project; Technical Specification Group Service and System Aspects. Audio Codec Processing Functions. Extended AMR Wideband Codec; Transcoding functions (Release 6).”, 3GPP Draft; 26.290, V2.0.0 3rd Generation Partnership Project (3GPP), Mobile Competence Centre; Valbonne, France., Sep. 2004, pp. 1-85. |
Ashley, J et al., “Wideband Coding of Speech Using a Scalable Pulse Codebook”, 2000 IEEE Speech Coding Proceedings., Sep. 17, 2000, pp. 148-150. |
Bessette, B et al., “The Adaptive Multirate Wideband Speech Codec (AMR-WB)”, IEEE Transactions on Speech and Audio Processing, IEEE Service Center. New York. vol. 10, No. 8., Nov. 1, 2002, pp. 620-636. |
Bessette, B et al., “Universal Speech/Audio Coding Using Hybrid ACELP/TCX Techniques”, ICASSP 2005 Proceedings. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3,, Jan. 2005, pp. 301-304. |
Bessette, B et al., “Wideband Speech and Audio Codec at 16/24/32 Kbit/S Using Hybrid ACELP/TCX Techniques”, 1999 IEEE Speech Coding Proceedings. Porvoo, Finland., Jun. 20, 1999, pp. 7-9. |
Ferreira, A et al., “Combined Spectral Envelope Normalization and Subtraction of Sinusoidal Components in the ODFTand MDCT Frequency Domains”, 2001 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics., Oct. 2001, pp. 51-54. |
Fischer, et al., “Enumeration Encoding and Decoding Algorithms for Pyramid Cubic Lattice and Trellis Codes”, IEEE Transactions on Information Theory. IEEE Press, USA, vol. 41, No. 6, Part 2., Nov. 1, 1995, pp. 2056-2061. |
Hermansky, H et al., “Perceptual linear predictive (PLP) analysis of speech”, J. Acoust. Soc. Amer. 87 (4)., Apr. 1990, pp. 1738-1751. |
Hofbauer, K et al., “Estimating Frequency and Amplitude of Sinusoids in Harmonic Signals—A Survey and the Use of Shifted Fourier Transforms”, Graz: Graz University of Technology; Graz University of Music and Dramatic Arts; Diploma Thesis, Apr. 2004, 111 pages. |
Lanciani, C et al., “Subband-Domain Filtering of MPEG Audio Signals”, 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Phoenix, , AZ, USA., Mar. 15, 1999, pp. 917-920. |
Lauber, P et al., “Error Concealment for Compressed Digital Audio”, Presented at the 111th AES Convention. Paper 5460. New York, USA., Sep. 21, 2001, 12 Pages. |
Lee, Ick Don et al., “A Voice Activity Detection Algorithm for Communication Systems with Dynamically Varying Background Acoustic Noise”, Dept. of Electrical Engineering, 1998 IEEE, May 18-21, 1998, pp. 1214-1218. |
Makinen, J et al., “AMR-WB+: a New Audio Coding Standard for 3rd Generation Mobile Audio Services”, 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing. Philadelphia, PA, USA., Mar. 18, 2005, 1109-1112. |
Motlicek, P et al., “Audio Coding Based on Long Temporal Contexts”, Rapport de recherche de l'IDIAP 06-30, Apr. 2006, pp. 1-10. |
Neuendorf, M et al., “A Novel Scheme for Low Bitrate Unified Speech Audio Coding—MPEG RMO”, AES 126th Convention. Convention Paper 7713. Munich, Germany, May 1, 2009, 13 Pages. |
Neuendorf, M et al., “Completion of Core Experiment on unification of USAC Windowing and Frame Transitions”, International Organisation for Standardisation Organisation Internationale De Normalisation ISOIEC JTC1/SC29/WG11. Coding of Moving Pictures and Audio. Kyoto, Japan., Jan. 2010, 52 Pages. |
Neuendorf, M et al., “Unified Speech and Audio Coding Scheme for High Quality at Low Bitrates”, ICASSP 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway, NJ, USA., Apr. 19, 2009, 4 Pages. |
Patwardhan, P et al., “Effect of Voice Quality on Frequency-Warped Modeling of Vowel Spectra”, Speech Communication. vol. 48, No. 8., Aug. 2006, pp. 1009-1023. |
Ryan, D et al., “Reflected Simplex Codebooks for Limited Feedback MIMO Beamforming”, IEEE. XP31506379A., Jun. 14-18, 2009, 6 Pages. |
Sjoberg, J et al., “RTP Payload Format for the Extended Adaptive Multi-Rate Wideband (AMR-WB+) Audio Codec”, Memo. The Internet Society. Network Working Group. Category: Standards Track., Jan. 2006, pp. 1-38. |
Terriberry, T et al., “A Multiply-Free Enumeration of Combinations with Replacement and Sign”, IEEE Signal Processing Letters. vol. 15, 2008, 11 Pages. |
Terriberry, T et al., “Pulse Vector Coding”, Retrieved from the internet on Oct. 12, 2012. XP55025946. URL:http://people.xiph.org/˜tterribe/notes/cwrs.html, Dec. 1, 2007, 4 Pages. |
Virette, D et al., “Enhanced Pulse Indexing CE for ACELP in USAC”, Organisation Internationale De Normalisation ISO/IEC JTC1/SC29/WG11. MPEG2012/M19305. Coding of Moving Pictures and Audio. Daegu, Korea., Jan. 2011, 13 Pages. |
Wang, F et al., “Frequency Domain Adaptive Postfiltering for Enhancement of Noisy Speech”, Speech Communication 12. Elsevier Science Publishers. Amsterdam, North-Holland. vol. 12, No. 1., Mar. 1993, 41-56. |
Waterschoot, T et al., “Comparison of Linear Prediction Models for Audio Signals”, EURASIP Journal on Audio, Speech, and Music Processing. vol. 24., Dec. 2008, 27 pages. |
Zernicki, T et al., “Report on CE on Improved Tonal Component Coding in eSBR”, International Organisation for Standardisation Organisation Internationale De Normalisation ISO/IEC JTC1/SC29/WG11. Coding of Moving Pictures and Audio. Daegu, South Korea, Jan. 2011, 20 Pages. |
Number | Date | Country | |
---|---|---|---|
20130332148 A1 | Dec 2013 | US |
Number | Date | Country | |
---|---|---|---|
61442632 | Feb 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2012/052450 | Feb 2012 | US |
Child | 13966666 | US |