The present application is concerned with a downscaled decoding concept.
The MPEG-4 Enhanced Low Delay AAC (AAC-ELD) usually operates at sample rates up to 48 kHz, which results in an algorithmic delay of 15 ms. For some applications, e.g. lip-synchronous transmission of audio, an even lower delay is desirable. AAC-ELD already provides such an option by operating at higher sample rates, e.g. 96 kHz, and therefore provides operation modes with even lower delay, e.g. 7.5 ms. However, this operation mode comes along with an unnecessarily high complexity due to the high sample rate.
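Expressed in samples, the delay stated above is fixed by the frame and window structure rather than by the sample rate: 15 ms at 48 kHz corresponds to 0.015 s·48000 Hz = 720 samples of delay, and the same 720 samples span only 720/96000 Hz = 7.5 ms when the codec is operated at 96 kHz.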
The solution to this problem is to apply a downscaled version of the filter bank and therefore, to render the audio signal at a lower sample rate, e.g. 48 kHz instead of 96 kHz. The downscaling operation is already part of AAC-ELD as it is inherited from the MPEG-4 AAC-LD codec, which serves as a basis for AAC-ELD.
The question which remains, however, is how to find the downscaled version of a specific filter bank. That is, the only uncertainty is the way in which the window coefficients are derived while still enabling clear conformance testing of the downscaled operation modes of the AAC-ELD decoder.
In the following the principles of the down-scaled operation mode of the AAC-(E)LD codecs are described.
The downscaled operation mode of AAC-LD is described in ISO/IEC 14496-3:2009 in section 4.6.17.2.7 “Adaptation to systems using lower sampling rates” as follows:
“In certain applications it may be necessary to integrate the low delay decoder into an audio system running at lower sampling rates (e.g. 16 kHz) while the nominal sampling rate of the bitstream payload is much higher (e.g. 48 kHz, corresponding to an algorithmic codec delay of approx. 20 ms). In such cases, it is favorable to decode the output of the low delay codec directly at the target sampling rate rather than using an additional sampling rate conversion operation after decoding.
This can be approximated by appropriate downscaling of both, the frame size and the sampling rate, by some integer factor (e.g. 2, 3), resulting in the same time/frequency resolution of the codec. For example, the codec output can be generated at 16 kHz sampling rate instead of the nominal 48 kHz by retaining only the lowest third (i.e. 480/3=160) of the spectral coefficients prior to the synthesis filterbank and reducing the inverse transform size to one third (i.e. window size 960/3=320).
As a consequence, decoding for lower sampling rates reduces both memory and computational requirements, but may not produce exactly the same output as a full-bandwidth decoding, followed by band limiting and sample rate conversion.
Please note that decoding at a lower sampling rate, as described above, does not affect the interpretation of levels, which refers to the nominal sampling rate of the AAC low delay bitstream payload.”
Please note that AAC-LD works with a standard MDCT framework and two window shapes, i.e. sine-window and low-overlap-window. Both windows are fully described by formulas and therefore, window coefficients for any transformation lengths can be determined.
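Since both windows are given in closed form, their coefficients for an arbitrary (including a downscaled) transform length can be computed on the fly instead of being tabulated. As a minimal illustration, the following Python sketch computes the sine window; the function name and the use of NumPy are choices made here for illustration, and the low-overlap window, which is likewise given by a formula, is not reproduced.

```python
import numpy as np

def sine_window(length: int) -> np.ndarray:
    """Sine window w(n) = sin(pi / length * (n + 0.5)) for n = 0 .. length - 1.

    Because the window is defined by a closed-form expression, coefficients
    for any transform length, including downscaled ones, can be generated
    directly instead of being stored as tables.
    """
    n = np.arange(length)
    return np.sin(np.pi / length * (n + 0.5))

# Example: full-length AAC-LD window (frame size 480, window size 960)
# and a downscaled counterpart for one third of the sampling rate.
w_full = sine_window(960)
w_down = sine_window(320)
```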
Compared to AAC-LD, the AAC-ELD codec shows two major differences:
The IMDCT algorithm using the low delay MDCT window is described in 4.6.20.2 in [1], which is very similar to the standard IMDCT version using e.g. the sine window. The coefficients of the low delay MDCT windows (480 and 512 samples frame size) are given in Table 4.A.15 and 4.A.16 in [1]. Please note that the coefficients cannot be determined by a formula, as the coefficients are the result of an optimization algorithm.
In case the low delay SBR (LD-SBR) tool is used in conjunction with the AAC-ELD coder, the filter banks of the LD-SBR module are downscaled as well. This ensures that the SBR module operates with the same frequency resolution and, therefore, no further adaptations are required.
Thus, the above description reveals that there is a need for downscaling decoding operations such as, for example, the decoding of AAC-ELD. It would be feasible to determine the coefficients for the downscaled synthesis window function anew, but this is a cumbersome task, necessitates additional storage for storing the downscaled version and renders a conformity check between the non-downscaled decoding and the downscaled decoding more complicated or, from another perspective, does not comply with the manner of downscaling requested for AAC-ELD, for example. Depending on the downscale ratio, i.e. the ratio between the original sampling rate and the downscaled sampling rate, one could derive the downscaled synthesis window function simply by decimation, i.e. by picking out every second, third, . . . window coefficient of the original synthesis window function, but this procedure does not result in sufficient conformity between the non-downscaled decoding and the downscaled decoding. Even more sophisticated decimation procedures applied to the synthesis window function lead to unacceptable deviations from the original synthesis window function shape. Therefore, there is a need in the art for an improved downscaled decoding concept.
According to an embodiment, an audio decoder configured to decode an audio signal at a first sampling rate from a data stream into which the audio signal is transform coded at a second sampling rate, the first sampling rate being 1/Fth of the second sampling rate, may have: a receiver configured to receive, per frame of length N of the audio signal, N spectral coefficients; a grabber configured to grab-out for each frame, a low-frequency fraction of length N/F out of the N spectral coefficients; a spectral-to-time modulator configured to subject, for each frame, the low-frequency fraction to an inverse transform having modulation functions of length (E+2)·N/F temporally extending over the respective frame and E+1 previous frames so as to obtain a temporal portion of length (E+2)·N/F; a windower configured to window, for each frame, the temporal portion using a synthesis window of length (E+2)·N/F having a zero-portion of length ¼·N/F at a leading end thereof and having a peak within a temporal interval of the synthesis window, the temporal interval succeeding the zero-portion and having length 7/4·N/F so that the windower obtains a windowed temporal portion of length (E+2)·N/F; and a time domain aliasing canceler configured to subject the windowed temporal portion of the frames to an overlap-add process so that a trailing-end fraction of length (E+1)/(E+2) of the windowed temporal portion of a current frame overlaps a leading end of length (E+1)/(E+2) of the windowed temporal portion of a preceding frame, wherein the inverse transform is an inverse MDCT or inverse MDST, and wherein the synthesis window is a downsampled version of a reference synthesis window of length (E+2)·N, downsampled by a factor of F by a segmental interpolation in segments of length ¼·N.
Another embodiment may have an audio decoder according to the above inventive audio decoder, wherein E=2 so that the synthesis window function has a kernel-related half of length 2·N/F preceded by a remainder half of length 2·N/F and wherein the spectral-to-time modulator, the windower and the time domain aliasing canceler are implemented so as to cooperate in a lifting implementation according to which the spectral-to-time modulator confines the subjecting, for each frame, of the low-frequency fraction to the inverse transform having modulation functions of length (E+2)·N/F temporally extending over the respective frame and E+1 previous frames, to a transform kernel coinciding with the respective frame and one previous frame so as to obtain the temporal portion xk,n with n=0 . . . 2M−1 and M=N/F, n being a sample index and k being a frame index; the windower windowing, for each frame, the temporal portion xk,n according to zk,n=ωn·xk,n for n=0, . . . , 2M−1 so as to obtain the windowed temporal portion zk,n with n=0 . . . 2M−1; the time domain aliasing canceler generating intermediate temporal portions mk(0), . . . mk(M−1) according to mk,n=zk,n+zk−1,n+M for n=0, . . . , M−1, and the audio decoder has a lifter configured to obtain the frames uk,n with n=0 . . . M−1 according to uk,n=mk,n+In−M/2·mk−1,M−1−n for n=M/2, . . . , M−1, and uk,n=mk,n+IM−1−n·outk−1,M−1−n for n=0, . . . , M/2−1, wherein In with n=0 . . . M−1 are lifting coefficients, and wherein In with n=0 . . . M−1 and ωn with n=0, . . . , 2M−1 depend on coefficients wn with n=0 . . . (E+2)M−1 of the synthesis window.
According to another embodiment, an audio decoder configured to decode an audio signal at a first sampling rate from a data stream into which the audio signal is transform coded at a second sampling rate, the first sampling rate being 1/Fth of the second sampling rate, may have: a receiver configured to receive, per frame of length N of the audio signal, N spectral coefficients; a grabber configured to grab-out for each frame, a low-frequency fraction of length N/F out of the N spectral coefficients; a spectral-to-time modulator configured to subject, for each frame, the low-frequency fraction to an inverse transform having modulation functions of length 2·N/F temporally extending over the respective frame and a previous frame so as to obtain a temporal portion of length 2·N/F; a windower configured to window, for each frame, the temporal portion xk,n, with M=N/F, according to zk,n=ωn·xk,n for n=0, . . . , 2M−1 so as to obtain a windowed temporal portion zk,n with n=0 . . . 2M−1; a time domain aliasing canceler configured to generate intermediate temporal portions mk(0), . . . mk(M−1) according to mk,n=zk,n+zk−1,n+M for n=0, . . . , M−1; and a lifter configured to obtain frames uk,n of the audio signal with n=0 . . . M−1 according to uk,n=mk,n+In−M/2·mk−1,M−1−n for n=M/2, . . . , M−1, and uk,n=mk,n+IM−1−n·outk−1,M−1−n for n=0, . . . , M/2−1, wherein In with n=0 . . . M−1 are lifting coefficients, wherein the inverse transform is an inverse MDCT or inverse MDST, and wherein In with n=0 . . . M−1 and ωn with n=0, . . . , 2M−1 depend on coefficients wn with n=0 . . . (E+2)M−1 of a synthesis window, the synthesis window being a downsampled version of a reference synthesis window of length (E+2)·N, downsampled by a factor of F by a segmental interpolation in segments of length ¼·N.
Another embodiment may have an apparatus for generating a downscaled version of a synthesis window of one of the above inventive audio decoders, wherein the apparatus is configured to downsample a reference synthesis window of length (E+2)·N by a factor of F by a segmental interpolation in 4·(E+2) segments of equal length.
Still another embodiment may have a method for generating a downscaled version of a synthesis window of one of the above inventive audio decoders, wherein the method has downsampling a reference synthesis window of length (E+2)·N by a factor of F by a segmental interpolation in 4·(E+2) segments of equal length.
According to another embodiment, a method for decoding an audio signal at a first sampling rate from a data stream into which the audio signal is transform coded at a second sampling rate, the first sampling rate being 1/Fth of the second sampling rate, may have the steps of: receiving, per frame of length N of the audio signal, N spectral coefficients; grabbing-out for each frame, a low-frequency fraction of length N/F out of the N spectral coefficients; performing a spectral-to-time modulation by subjecting, for each frame, the low-frequency fraction to an inverse transform having modulation functions of length (E+2)·N/F temporally extending over the respective frame and E+1 previous frames so as to obtain a temporal portion of length (E+2)·N/F; windowing, for each frame, the temporal portion using a synthesis window of length (E+2)·N/F having a zero-portion of length ¼·N/F at a leading end thereof and having a peak within a temporal interval of the synthesis window, the temporal interval succeeding the zero-portion and having length 7/4·N/F so as to obtain a windowed temporal portion of length (E+2)·N/F; and performing a time domain aliasing cancellation by subjecting the windowed temporal portion of the frames to an overlap-add process so that a trailing-end fraction of length (E+1)/(E+2) of the windowed temporal portion of a current frame overlaps a leading end of length (E+1)/(E+2) of the windowed temporal portion of a preceding frame, wherein the inverse transform is an inverse MDCT or inverse MDST, and wherein the synthesis window is a downsampled version of a reference synthesis window of length (E+2)·N, downsampled by a factor of F by a segmental interpolation in segments of length ¼·N.
Another embodiment may have a non-transitory digital storage medium having stored thereon a computer program for performing the above inventive methods, when said computer program is run by a computer.
The present invention is based on the finding that a downscaled version of an audio decoding procedure may be achieved more effectively and/or with improved compliance maintenance if the synthesis window used for the downscaled audio decoding is a downsampled version of the reference synthesis window involved in the non-downscaled audio decoding procedure, downsampled by the factor by which the downscaled sampling rate deviates from the original sampling rate, and downsampled using a segmental interpolation in segments of ¼ of the frame length.
Embodiments of the present application are described below with respect to the figures, among which:
The following description starts with an illustration of an embodiment for downscaled decoding with respect to the AAC-ELD codec. That is, the following description starts with an embodiment which could form a downscaled mode for AAC-ELD. This description concurrently forms a kind of explanation of the motivation underlying the embodiments of the present application. Later on, this description is generalized, thereby leading to a description of an audio decoder and audio decoding method in accordance with an embodiment of the present application.
As described in the introductory portion of the specification of the present application, AAC-ELD uses low delay MDCT windows. In order to generate downscaled versions thereof, i.e. downscaled low delay windows, the subsequently explained proposal for forming a downscaled mode for AAC-ELD uses a segmental spline interpolation algorithm which maintains the perfect reconstruction property (PR) of the LD-MDCT window with a very high precision. Therefore, the algorithm allows the generation of window coefficients in the direct form, as described in ISO/IEC 14496-3:2009, as well as in the lifting form, as described in [2], in a compatible way. This means that both implementations generate output that is conformant within 16-bit precision.
The interpolation of the Low Delay MDCT window is performed as follows.
In general, a spline interpolation is used to generate the downscaled window coefficients in order to maintain the frequency response and, above all, the perfect reconstruction property (around 170 dB SNR). The interpolation needs to be constrained to certain segments to maintain the perfect reconstruction property. For the window coefficients c covering the DCT kernel of the transformation, the following constraint applies:
1=|(sgn·c(i)·c(2N−1−i)+c(N+i)·c(N−1−i))| for i=0 . . . N/2−1 (1)
The coefficients c(0) . . . c(2N−1) are listed along the diamond shape. The N/4 zeros in the window coefficients, which are responsible for the delay reduction of the filter bank, are marked using a bold arrow.
The second constraint is not only implemented for the segment containing the zeros but also for the other segments. Knowing that some coefficients in the DCT kernel were not determined by the optimization algorithm but were determined by formula (1) to enable PR, several discontinuities in the window shape can be explained, e.g. around c(1536+128) in
Due to that reason, the segment size of N/4 is chosen for the segmental spline interpolation to generate the downscaled window coefficients. The source window coefficients are given by the coefficients used for N=512, also for downscaling operations resulting in frame sizes of N=240 or N=120. The basic algorithm is outlined very briefly in the following as MATLAB code:
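The MATLAB listing referred to above is not reproduced here. Purely as an illustration of the idea, the following Python sketch shows one possible form of such a segmental downsampling; the use of scipy.interpolate.CubicSpline, its default boundary conditions and the alignment of the coarse sample grid are assumptions of this sketch and do not reproduce the exactly specified algorithm referred to in the following paragraph.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def downscale_window(w_ref: np.ndarray, F: int, num_segments: int = 16) -> np.ndarray:
    """Segment-wise spline downsampling of a reference window by a factor F.

    w_ref        -- reference window coefficients; the length must be divisible
                    by num_segments and by F
    F            -- downscale factor (original rate / downscaled rate)
    num_segments -- 4*(E+2) segments of N/4 coefficients each (16 for AAC-ELD, E = 2)

    Each segment is interpolated on its own, so no downscaled coefficient
    depends on reference coefficients of a neighbouring segment.
    """
    seg_in = len(w_ref) // num_segments            # reference coefficients per segment
    seg_out = seg_in // F                          # downscaled coefficients per segment
    w_down = np.empty(num_segments * seg_out)
    for s in range(num_segments):
        seg = w_ref[s * seg_in:(s + 1) * seg_in]
        spline = CubicSpline(np.arange(seg_in), seg)   # cubic spline through this segment
        grid = (np.arange(seg_out) + 0.5) * F - 0.5    # assumed coarse-grid alignment
        w_down[s * seg_out:(s + 1) * seg_out] = spline(grid)
    return w_down
```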
As the spline function may not be fully deterministic, the complete algorithm is exactly specified in the following section, which may be included into ISO/IEC 14496-3:2009, in order to form an improved downscaled mode in AAC-ELD.
In other words, the following section provides a proposal as to how the above-outlined idea could be applied to ER AAC ELD, i.e. as to how a low-complexity decoder could decode an ER AAC ELD bitstream, coded at a first sampling rate, at a second sampling rate lower than the first sampling rate. It is emphasized, however, that the definition of N as used in the following adheres to the standard. Here, N corresponds to the length of the DCT kernel, whereas hereinabove, in the claims, and in the subsequently described generalized embodiments, N corresponds to the frame length, namely the mutual overlap length of the DCT kernels, i.e. half of the DCT kernel length. Accordingly, while N was indicated to be 512 hereinabove, for example, it is indicated to be 1024 in the following.
The following paragraphs are proposed for inclusion to 14496-3:2009 via Amendment.
A.0 Adaptation to Systems Using Lower Sampling Rates
For certain applications, ER AAC LD can change the playout sample rate in order to avoid additional resampling steps (see 4.6.17.2.7). ER AAC ELD can apply similar downscaling steps using the Low Delay MDCT window and the LD-SBR tool. In case AAC-ELD operates with the LD-SBR tool, the downscaling factor is limited to multiples of 2. Without LD-SBR, the downscaled frame size needs to be an integer number.
A.1 Downscaling of Low Delay MDCT Window
The LD-MDCT window wLD for N=1024 is downscaled by a factor F using a segmental spline interpolation. The number of leading zeros in the window coefficients, i.e. N/8, determines the segment size. The downscaled window coefficients wLD_d are used for the inverse MDCT as described in 4.6.20.2 but with a downscaled window length Nd=N/F. Please note that the algorithm is also able to generate downscaled lifting coefficients of the LD-MDCT.
A.2 Downscaling of Low Delay SBR Tool
In case the Low Delay SBR tool is used in conjunction with ELD, this tool can be downscaled to lower sample rates, at least for downscaling factors of a multiple of 2. The downscale factor F controls the number of bands used for the CLDFB analysis and synthesis filter bank. The following two paragraphs describe a downscaled CLDFB analysis and synthesis filter bank, see also 4.6.19.4.
4.6.20.5.2.1 Downscaled Analysis CLDFB Filter Bank
Please note that setting F=2 provides the downsampled synthesis filter bank according to 4.6.19.4.3. Therefore, to process a downsampled LD-SBR bit stream with an additional downscale factor F, F needs to be multiplied by 2.
4.6.20.5.2.3 Downscaled Real-Valued CLDFB Filter Bank
The downscaling of the CLDFB can be applied for the real valued versions of the low power SBR mode as well. For illustration, please also consider 4.6.19.5.
For the downscaled real-valued analysis and synthesis filter bank, follow the description in 4.6.20.5.2.1 and 4.6.20.2.2 and exchange the exp( ) modulator in M by a cos( ) modulator.
A.3 Low Delay MDCT Analysis
This subclause describes the Low Delay MDCT filter bank utilized in the AAC ELD encoder. The core MDCT algorithm is mostly unchanged, but with a longer window, such that n is now running from −N to N−1 (rather than from 0 to N−1).
The spectral coefficients, xi,k, are defined as follows:
The window length N (based on the sine window) is 1024 or 960.
The window length of the low-delay window is 2*N. The windowing is extended to the past in the following way:
zi,n=wLD(N−1−n)·x′i,n
The synthesis filter bank is modified compared to the standard IMDCT algorithm using a sine window in order to adopt a low-delay filter bank. The core IMDCT algorithm is mostly unchanged, but with a longer window, such that n is now running up to 2N−1 (rather than up to N−1).
The windowing and overlap-add is conducted in the following way:
The length N window is replaced by a length 2N window with more overlap in the past, and less overlap to the future (N/8 values are actually zero).
Windowing for the Low Delay Window:
zi,n=wLD(n)·xi,n
Where the window now has a length of 2N, hence n=0, . . . , 2N−1.
Overlap and add:
Here, the paragraphs proposed for being included into 14496-3:2009 via amendment end.
Naturally, the above description of a possible downscaled mode for AAC-ELD merely represents one embodiment of the present application and several modifications are feasible.
Generally, embodiments of the present application are not restricted to an audio decoder performing a downscaled version of AAC-ELD decoding. In other words, embodiments of the present application may, for instance, be derived by forming an audio decoder capable of performing the inverse transformation process in a downscaled manner only without supporting or using the various AAC-ELD specific further tasks such as, for instance, the scale factor-based transmission of the spectral envelope, TNS (temporal noise shaping) filtering, spectral band replication (SBR) or the like.
Subsequently, a more general embodiment for an audio decoder is described. The above-outlined example for an AAC-ELD audio decoder supporting the described downscaled mode could thus represent an implementation of the subsequently described audio decoder. In particular, the subsequently explained decoder is shown in
The audio decoder of
In a manner outlined in more details below, the audio decoder 10 of
The manner in which the audio signal 22 is transform coded at the encoding or original sampling rate into the data stream is illustrated in
In particular, coefficients 28 as transmitted within data stream 24 are coefficients of a lapped transform of the audio signal 22 so that the audio signal 22, sampled at the original or encoding sampling rate, is partitioned into immediately temporally consecutive and non-overlapping frames of a predetermined length N, wherein N spectral coefficients are transmitted in data stream 24 for each frame 36. That is, transform coefficients 28 are obtained from the audio signal 22 using a critically sampled lapped transform. In the spectrotemporal spectrogram representation 26, each column of the temporal sequence of columns of spectral coefficients 28 corresponds to a respective one of frames 36 of the sequence of frames. The N spectral coefficients 28 are obtained for the corresponding frame 36 by a spectrally decomposing transform or time-to-spectral modulation, the modulation functions of which temporally extend, however, not only across the frame 36 to which the resulting spectral coefficients 28 belong, but also across E+1 previous frames, wherein E may be any integer or any even-numbered integer greater than zero. That is, the spectral coefficients 28 of one column of the spectrogram at 26 which belong to a certain frame 36 are obtained by applying a transform onto a transform window which, in addition to the respective frame, comprises E+1 frames lying in the past relative to the current frame. The spectral decomposition of the samples of the audio signal within this transform window 38, which is illustrated in
Before resuming the description of the audio decoder 10, it should be noted that the description of the transmission of the spectral coefficients 28 within the data stream 24 as provided so far has been simplified with respect to the manner in which the spectral coefficients 28 are quantized or coded into data stream 24 and/or the manner in which the audio signal 22 has been pre-processed before subjecting the audio signal to the lapped transform. For example, the audio encoder having transform coded audio signal 22 into data stream 24 may be controlled via a psychoacoustic model or may use a psychoacoustic model to keep the quantization noise introduced in quantizing the spectral coefficients 28 imperceptible for the listener and/or below a masking threshold function, thereby determining scale factors for spectral bands using which the quantized and transmitted spectral coefficients 28 are scaled. The scale factors would also be signaled in data stream 24. Alternatively, the audio encoder may have been a TCX (transform coded excitation) type of encoder. Then, the audio signal would have been subjected to a linear prediction analysis filtering before forming the spectrotemporal representation 26 of spectral coefficients 28 by applying the lapped transform onto the excitation signal, i.e. the linear prediction residual signal. For example, the linear prediction coefficients could be signaled in data stream 24 as well, and a spectrally uniform quantization could be applied in order to obtain the spectral coefficients 28.
Furthermore, the description brought forward so far has also been simplified with respect to the frame length of frames 36 and/or with respect to the low delay window function 40. In fact, the audio signal 22 may have been coded into data stream 24 in a manner using varying frame sizes and/or different windows 40. However, the description brought forward in the following concentrates on one window 40 and one frame length, although the subsequent description may easily be extended to a case where the entropy encoder changes these parameters during coding the audio signal into the data stream.
Returning back to the audio decoder 10 of
The output of receiver 12 is the sequence of N spectral coefficients, namely one set of N spectral coefficients, i.e. one column in
Grabber 14 thus receives from receiver 12 the spectrogram 26 of spectral coefficients 28 and grabs, for each frame 36, a low frequency fraction 44 of the N spectral coefficients of the respective frame 36, namely the N/F lowest-frequency spectral coefficients.
That is, spectral-to-time modulator 16 receives from grabber 14 a stream or sequence 46 of N/F spectral coefficients 28 per frame 36, corresponding to a low-frequency slice out of the spectrogram 26, spectrally registered to the lowest frequency spectral coefficients illustrated using index “0” in
The spectral-to-time modulator 16 subjects, for each frame 36, the corresponding low-frequency fraction 44 of spectral coefficients 28 to an inverse transform 48 having modulation functions of length (E+2)·N/F temporally extending over the respective frame and E+1 previous frames as illustrated at 50 in
Thus, windower 18 receives, for each frame, a temporal portion 52, the N/F samples at the leading end thereof temporally corresponding to the respective frame while the other samples of the respective temporal portion 52 belong to the corresponding temporally preceding frames. Windower 18 windows, for each frame 36, the temporal portion 52 using a unimodal synthesis window 54 of length (E+2)·N/F comprising a zero-portion 56 of length ¼·N/F at a leading end thereof, i.e. ¼·N/F zero-valued window coefficients, and having a peak 58 within its temporal interval succeeding, temporally, the zero-portion 56, i.e. the temporal interval of temporal portion 52 not covered by the zero-portion 56. The latter temporal interval may be called the non-zero portion of window 54 and has a length of 7/4·N/F measured in samples of the reduced sampling rate, i.e. 7/4·N/F window coefficients. The windower 18 weights, for instance, the temporal portion 52 using window 54. This weighting or multiplying 58 of each temporal portion 52 with window 54 results in a windowed temporal portion 60, one for each frame 36, and coinciding with the respective temporal portion 52 as far as the temporal coverage is concerned. In the above proposed section A.4, the windowing processing which may be used by windower 18 is described by the formulae relating zi,n to xi,n, where xi,n corresponds to the aforementioned temporal portions 52 not yet windowed and zi,n corresponds to the windowed temporal portions 60, with i indexing the sequence of frames/windows, and n indexing, within each temporal portion 52/60, the samples or values of the respective portions 52/60 in accordance with the reduced sampling rate.
Thus, the time domain aliasing canceler 20 receives from windower 18 a sequence of windowed temporal portions 60, namely one per frame 36. Canceler 20 subjects the windowed temporal portions 60 of frames 36 to an overlap-add process 62 by registering each windowed temporal portion 60 with its leading N/F values to coincide with the corresponding frame 36. By this measure, a trailing-end fraction of length (E+1)/(E+2) of the windowed temporal portion 60 of a current frame, i.e. the remainder having length (E+1)·N/F, overlaps with a corresponding equally long leading end of the temporal portion of the immediately preceding frame. In formulae, the time domain aliasing canceler 20 may operate as shown in the last formula of the above proposed version of section A.4, where outi,n corresponds to the audio samples of the reconstructed audio signal 22 at the reduced sampling rate.
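To make the interplay of the windowing 58 and the overlap-add 62 more tangible, the following Python sketch processes a sequence of already modulated temporal portions of length (E+2)·N/F with a hop of M=N/F samples and emits M output samples per frame; the buffer handling and the registration of the overlapping parts are a simplified reading of the overlap-add of section A.4 and are assumptions of this sketch rather than a normative formulation.

```python
import numpy as np

def window_and_overlap_add(portions, window, M):
    """Window each modulated temporal portion and overlap-add with a hop of M = N/F.

    portions -- iterable of arrays of length (E+2)*M, one per frame, as delivered
                by the spectral-to-time modulator (leading M samples: current frame)
    window   -- downsampled synthesis window of the same length, whose leading
                M/4 coefficients are zero
    M        -- downscaled frame length N/F
    Yields M reconstructed output samples per processed frame.
    """
    L = len(window)                                   # (E+2)*M
    buf = np.zeros(L)                                 # pending overlap-add contributions
    for x in portions:
        buf += window * np.asarray(x)                 # windowing and accumulation
        yield buf[:M].copy()                          # leading M samples are complete
        buf = np.concatenate([buf[M:], np.zeros(M)])  # advance the buffer by one frame
    # corresponds to out[k, n] = sum_j z[k - j, n + j*M] for j = 0 .. E+1
```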
The processes of windowing 58 and overlap-adding 62 as performed by windower 18 and time domain aliasing canceler 20 are illustrated in more detail below with respect to
Thus, in the manner outlined above, the audio decoder 10 of
As just mentioned, in order to perform the downsampling 72, the reference synthesis window 70 is processed in segments 74 of equal length. In number, there are (E+2)·4 such segments 74. Measured in the original sampling rate, i.e. in the number of window coefficients of the reference synthesis window 70, each segment 74 is ¼·N window coefficients w′ long, and measured in the reduced or downsampled sampling rate, each segment 74 is ¼·N/F window coefficients w long.
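For example, with E=2, a frame length of N=512 and a downscale factor of F=2, the reference synthesis window 70 comprises (E+2)·N=2048 window coefficients which are processed in 4·(E+2)=16 segments of ¼·N=128 reference coefficients each, each segment contributing ¼·N/F=64 coefficients to the downsampled window 54 of total length 1024.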
Naturally, it would be possible to perform the downsampling 72 for each downsampled window coefficient wi coinciding accidentally with any of the window coefficients wj′ of the reference synthesis window 70 by simply setting wi=wj′ with the sample time of wi coinciding with that of wj′ and/or by linearly interpolating any window coefficient wi residing, temporally, between two window coefficients wj′ and wj+2′, but this procedure would result in a poor approximation of the reference synthesis window 70, i.e. the synthesis window 54 used by audio decoder 10 for the downsampled decoding would represent a poor approximation of the reference synthesis window 70, thereby not fulfilling the requirement of guaranteeing conformance testing of the downscaled decoding relative to the non-downscaled decoding of the audio signal from data stream 24. Thus, the downsampling 72 involves an interpolation procedure according to which the majority of the window coefficients wi of the downsampled window 54, namely the ones positioned offset from the borders of segments 74, depend by way of the downsampling procedure 72 on more than two window coefficients w′ of the reference window 70. In particular, while the majority of the window coefficients wi of the downsampled window 54 depend on more than two window coefficients wj′ of the reference window 70 in order to increase the quality of the interpolation/downsampling result, i.e. the approximation quality, for every window coefficient wi of the downsampled version 54 it holds true that same does not depend on window coefficients wj′ belonging to different segments 74. Rather, the downsampling procedure 72 is a segmental interpolation procedure.
For example, the synthesis window 54 may be a concatenation of spline functions of length ¼·N/F. Cubic spline functions may be used. Such an example has been outlined above in section A.1 where the outer for-next loop sequentially looped over segments 74 wherein, in each segment 74, the downsampling or interpolation 72 involved a mathematical combination of consecutive window coefficients w′ within the current segment 74 at, for example, the first for next clause in the section “calculate vector r needed to calculate the coefficients c”. The interpolation applied in segments, may, however, also be chosen differently. That is, the interpolation is not restricted to splines or cubic splines. Rather, linear interpolation or any other interpolation method may be used as well. In any case, the segmental implementation of the interpolation would cause the computation of samples of the downscaled synthesis window, i.e. the outmost samples of the segments of the downscaled synthesis window, neighboring another segment, to not depend on window coefficients of the reference synthesis window residing in different segments.
It may be that windower 18 obtains the downsampled synthesis window 54 from a storage where the window coefficients wi of this downsampled synthesis window 54 have been stored after having been obtained using the downsampling 72. Alternatively, as illustrated in
It should be noted that the audio decoder 10 of
Naturally, the modulator 16 would also be responsive to F input 78, as modulator 16 would use appropriately downsampled versions of the modulation functions and the same holds true for the windower 18 and canceler 20 with respect to an adaptation of the actual length of the frames in the reduced or downsampled sampling rate.
For example, F may lie between 1.5 and 10, both inclusively.
It should be noted that the decoder of
The modulator 16 comprises an inverse type-IV discrete cosine transform frequency/time converter. Instead of outputting sequences of (E+2)·N/F long temporal portions 52, it merely outputs temporal portions 52 of length 2·N/F, all derived from the sequence of N/F long spectra 46, these shortened portions 52 corresponding to the DCT kernel, i.e. the 2·N/F newest samples of the erstwhile described portions.
The windower 18 acts as described previously and generates a windowed temporal portion 60 for each temporal portion 52, but it operates merely on the DCT kernel. To this end, windower 18 uses a window function ωi with i=0 . . . 2·N/F−1, having the kernel size. The relationship between ωi with i=0 . . . 2·N/F−1 and wi with i=0 . . . (E+2)·N/F−1 is described later, just as the relationship between the subsequently mentioned lifting coefficients and wi with i=0 . . . (E+2)·N/F−1 is.
Using the nomenclature applied above, the process described so far yields:
zk,n=ωn·xk,n for n=0, . . . ,2M−1,
The overlap/add process of the canceler 20 operates in a manner different from the above description. It generates intermediate temporal portions mk(0), . . . mk(M−1) based on the equation or expression
mk,n=zk,n+zk−1,n+M for n=0, . . . ,M−1.
In the implementation of
uk,n=mk,n+In−M/2·mk−1,M−1−n for n=M/2, . . . ,M−1,
uk,n=mk,n+IM−1−n·outk−1,M−1−n for n=0, . . . ,M/2−1
In other words, for the extended overlap of E frames into the past, only M additional multiply-add operations are needed, as can be seen in the framework of the lifter 80. These additional operations are sometimes also referred to as “zero-delay matrices”. Sometimes these operations are also known as “lifting steps”. The efficient implementation shown in
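Transcribed into code form, one frame of the lifting computation described by the above equations may look as in the following Python sketch; the temporal portion xk of length 2M is taken as given from the inverse type-IV DCT stage, outk−1 is read here as the output frame uk−1 of the preceding frame, and the state handling is an assumption of this sketch rather than part of the specification.

```python
import numpy as np

def lifting_frame(x_k, omega, lift, z_prev, m_prev, out_prev):
    """One frame of the lifting formulation given above.

    x_k      -- temporal portion of length 2*M from the inverse type-IV DCT stage
    omega    -- kernel-sized window coefficients omega_0 .. omega_{2M-1}
    lift     -- lifting coefficients l_0 .. l_{M-1}
    z_prev, m_prev, out_prev -- z, m and output of the preceding frame
                                (out_prev is read here as u_{k-1}, an assumption)
    Returns (u_k, z_k, m_k); the caller carries z_k, m_k and u_k to the next frame.
    """
    M = len(lift)
    z_k = omega * x_k                                # z_{k,n} = omega_n * x_{k,n}
    m_k = z_k[:M] + z_prev[M:2 * M]                  # m_{k,n} = z_{k,n} + z_{k-1,n+M}
    u_k = np.empty(M)
    n_lo = np.arange(M // 2)                         # n = 0 .. M/2-1
    n_hi = np.arange(M // 2, M)                      # n = M/2 .. M-1
    u_k[n_lo] = m_k[n_lo] + lift[M - 1 - n_lo] * out_prev[M - 1 - n_lo]
    u_k[n_hi] = m_k[n_hi] + lift[n_hi - M // 2] * m_prev[M - 1 - n_hi]
    return u_k, z_k, m_k
```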
As to the dependency of ωn with n=0 . . . 2M−1 and In with n=0 . . . M−1 on the synthesis window wi with i=0 . . . (E+2)M−1 (it is recalled that here E=2), the following formulae describe the relationship between them, wherein, however, the subscript indices used so far are placed into parentheses following the respective variable:
Please note that the window wi contains the peak values on the right side in this formulation, i.e. between the indices 2M and 4M−1. The above formulae relate the coefficients In with n=0 . . . M−1 and ωn with n=0, . . . , 2M−1 to the coefficients wn with n=0 . . . (E+2)M−1 of the downscaled synthesis window. As can be seen, In with n=0 . . . M−1 actually merely depend on ¾ of the coefficients of the downsampled synthesis window, namely on wn with n=0 . . . (E+1)M−1, while ωn with n=0, . . . , 2M−1 depend on all wn with n=0 . . . (E+2)M−1.
As stated above, it might be that windower 18 obtains the downsampled synthesis window 54, wn with n=0 . . . (E+2)M−1, from a storage where the window coefficients wi of this downsampled synthesis window 54 have been stored after having been obtained using the downsampling 72, and from where same are read to compute the coefficients In with n=0 . . . M−1 and ωn with n=0, . . . , 2M−1 using the above relation, but alternatively, windower 18 may retrieve the coefficients In with n=0 . . . M−1 and ωn with n=0, . . . , 2M−1, thus computed from the pre-downsampled synthesis window, from the storage directly. Alternatively, as stated above, the audio decoder 10 may comprise the segmental downsampler 76 performing the downsampling 72 of
Briefly summarizing the lifting implementation, same results in an audio decoder 10 configured to decode an audio signal 22 at a first sampling rate from a data stream 24 into which the audio signal is transform coded at a second sampling rate, the first sampling rate being 1/Fth of the second sampling rate, the audio decoder 10 comprising the receiver 12 which receives, per frame of length N of the audio signal, N spectral coefficients 28, the grabber 14 which grabs-out for each frame, a low-frequency fraction of length N/F out of the N spectral coefficients 28, a spectral-to-time modulator 16 configured to subject, for each frame 36, the low-frequency fraction to an inverse transform having modulation functions of length 2·N/F temporally extending over the respective frame and a previous frame so as to obtain a temporal portion of length 2·N/F, and a windower 18 which windows, for each frame 36, the temporal portion xk,n according to zk,n=ωn·xk,n for n=0, . . . , 2M−1 so as to obtain a windowed temporal portion zk,n with n=0 . . . 2M−1. The time domain aliasing canceler 20 generates intermediate temporal portions mk(0), . . . mk(M−1) according to mk,n=zk,n+zk−1,n+M for n=0, . . . , M−1. Finally, the lifter 80 computes frames uk,n of the audio signal with n=0 . . . M−1 according to uk,n=mk,n+In−M/2·mk−1,M−1−n for n=M/2, . . . , M−1, and uk,n=mk,n+IM−1−n·outk−1,M−1−n for n=0, . . . , M/2−1, wherein In with n=0 . . . M−1 are lifting coefficients, wherein the inverse transform is an inverse MDCT or inverse MDST, and wherein In with n=0 . . . M−1 and ωn with n=0, . . . , 2M−1 depend on coefficients wn with n=0 . . . (E+2)M−1 of a synthesis window, and the synthesis window is a downsampled version of a reference synthesis window of length 4·N, downsampled by a factor of F by a segmental interpolation in segments of length ¼·N.
It already turned out from the above discussion of a proposal for an extension of AAC-ELD with respect to a downscaled decoding mode that the audio decoder of
In
Please note that the standard operation of SBR utilizes a 32-band CLDFB. The interpolation algorithm for the 32-band CLDFB window coefficients ci32 is already given in 4.6.19.4.1 in [1].
Thus, above examples provided some missing definitions for the AAC-ELD codec in order to adapt the codec to systems with lower sample rates. These definitions may be included in the ISO/IEC 14496-3:2009 standard.
Thus, in the above discussion it has, inter alia, been described:
An audio decoder may be configured to decode an audio signal at a first sampling rate from a data stream into which the audio signal is transform coded at a second sampling rate, the first sampling rate being 1/Fth of the second sampling rate, the audio decoder comprising: a receiver configured to receive, per frame of length N of the audio signal, N spectral coefficients; a grabber configured to grab-out for each frame, a low-frequency fraction of length N/F out of the N spectral coefficients; a spectral-to-time modulator configured to subject, for each frame, the low-frequency fraction to an inverse transform having modulation functions of length (E+2)·N/F temporally extending over the respective frame and E+1 previous frames so as to obtain a temporal portion of length (E+2)·N/F; a windower configured to window, for each frame, the temporal portion using a unimodal synthesis window of length (E+2)·N/F comprising a zero-portion of length ¼·N/F at a leading end thereof and having a peak within a temporal interval of the unimodal synthesis window, the temporal interval succeeding the zero-portion and having length 7/4·N/F so that the windower obtains a windowed temporal portion of length (E+2)·N/F; and a time domain aliasing canceler configured to subject the windowed temporal portion of the frames to an overlap-add process so that a trailing-end fraction of length (E+1)/(E+2) of the windowed temporal portion of a current frame overlaps a leading end of length (E+1)/(E+2) of the windowed temporal portion of a preceding frame, wherein the inverse transform is an inverse MDCT or inverse MDST, and wherein the unimodal synthesis window is a downsampled version of a reference unimodal synthesis window of length (E+2)·N, downsampled by a factor of F by a segmental interpolation in segments of length ¼·N.
Audio decoder according to an embodiment, wherein the unimodal synthesis window is a concatenation of spline functions of length ¼ ·N/F.
Audio decoder according to an embodiment, wherein the unimodal synthesis window is a concatenation of cubic spline functions of length ¼ ·N/F.
Audio decoder according to any of the previous embodiments, wherein E=2.
Audio decoder according to any of the previous embodiments, wherein the inverse transform is an inverse MDCT.
Audio decoder according to any of the previous embodiments, wherein more than 80% of a mass of the unimodal synthesis window is comprised within the temporal interval succeeding the zero-portion and having length 7/4·N/F.
Audio decoder according to any of the previous embodiments, wherein the audio decoder is configured to perform the interpolation or to derive the unimodal synthesis window from a storage.
Audio decoder according to any of the previous embodiments, wherein the audio decoder is configured to support different values for F.
Audio decoder according to any of the previous embodiments, wherein F is between 1.5 and 10, both inclusively.
A method performed by an audio decoder according to any of the previous embodiments.
A computer program having a program code for performing, when running on a computer, a method according to an embodiment.
As far as the term “of . . . length” is concerned it should be noted that this term is to be interpreted as measuring the length in samples. As far as the length of the zero portion and the segments is concerned it should be noted that same may be integer valued. Alternatively, same may be non-integer valued.
As to the temporal interval within which the peak is positioned it is noted that
As to the term “downsampled version” it is noted that in the above specification, instead of this term, “downscaled version” has synonymously been used.
As to the term “mass of a function within a certain interval” it is noted that same shall denote the definite integral of the respective function within the respective interval.
In case of the audio decoder supporting different values for F, same may comprise a storage having accordingly segmentally interpolated versions of the reference unimodal synthesis window or may perform the segmental interpolation for a currently active value of F. The different segmentally interpolated versions have in common that the interpolation does not negatively affect the discontinuities at the segment boundaries. They may, as described above, be spline functions.
By deriving the unimodal synthesis window by a segmental interpolation from the reference unimodal synthesis window such as the one shown in
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which will be apparent to others skilled in the art and which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
This application is a continuation of copending U.S. patent application Ser. No. 17/367,037 filed Jul. 2, 2021, which is a continuation of U.S. patent application Ser. No. 16/549,914 filed Aug. 23, 2019, which in turn is a continuation of U.S. patent application Ser. No. 15/843,358 filed Dec. 15, 2017, which is a continuation of International Application No. PCT/EP2016/063371, filed Jun. 10, 2016, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP15172282.4, filed Jun. 16, 2015, and from European Application No. 15189398.9, filed Oct. 12, 2015, which are also incorporated herein by reference in their entirety.
Other Publications:
Chen, Juin-Hwey, "A High-Fidelity Speech and Audio Codec with Low Delay and Low Complexity", IEEE International Conference on Acoustics, Speech and Signal Processing, May 26-31, 2013, vol. 2, XP055294519.
Deng, Feng, et al., "Audio Enhancement in Compressed Domain Based on MPEG-AAC Codec" (Part 1 of 2), Acta Electronica Sinica, vol. 42, No. 7, Jul. 2014, pp. 1410-1418.
Deng, Feng, et al., "Audio Enhancement in Compressed Domain Based on MPEG-AAC Codec" (Part 2 of 2), Acta Electronica Sinica, vol. 42, No. 7, Jul. 2014, pp. 1410-1418.
ETSI TS 126 445 V12.0.0, "EVS Codec Detailed Algorithmic Description (Part 1 of 3)" (3GPP TS 26.445 version 12.0.0 Release 12), Nov. 2014.
ETSI TS 126 445 V12.0.0, "EVS Codec Detailed Algorithmic Description (Part 2 of 3)" (3GPP TS 26.445 version 12.0.0 Release 12), Nov. 2014.
ETSI TS 126 445 V12.0.0, "EVS Codec Detailed Algorithmic Description (Part 3 of 3)" (3GPP TS 26.445 version 12.0.0 Release 12), Nov. 2014.
ISO/IEC 14496-3:2009(E), "Information technology—Coding of audio-visual objects—Part 3: Audio", International Standard ISO/IEC 14496-3, Subpart 4, 2009.
M13958, "Proposal for an Enhanced Low Delay Coding Mode", International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, Hangzhou, China, Oct. 2006, 41 pp.
Schnell, Markus, et al., "Delay-reduced mode of MPEG-4 Enhanced Low Delay AAC (AAC-ELD)", Audio Engineering Society Convention 140, Convention Paper 9488, pp. 1-8.