The present examples relate to encoders and decoders and methods for these apparatus, in particular for information signals, such as audio signals.
General audio codecs need to transmit music and speech signals in a very good quality. Such audio codecs are for instance used in Bluetooth where the audio signals are transmitted from the mobile phone to a headset or headphone and vice versa.
Quantizing parts of a spectrum to zeros often leads to a perceptual degradation. Therefore, it is possible to replace zero-quantized spectral lines with noise using a noise filler tool operating in the frequency domain (FD).
Temporal noise shaping (TNS) uses open-loop linear prediction in the frequency domain (FD). This predictive encoding/decoding process over frequency effectively adapts the temporal structure of the quantization noise to that of the time signal, thereby efficiently using the signal to mask the effects of noise. In the MPEG-2 Advanced Audio Coding (AAC) standard, TNS is currently implemented by defining one filter for a given frequency band, and then switching to another filter for the adjacent frequency band when the signal structure in the adjacent band differs from that in the previous band.
Especially for speech signals, the audio content may be bandlimited, meaning the active audio bandwidth covers only 4 kHz (narrow band, NB), 8 kHz (wide band, WB) or 16 kHz (super wide band, SWB). Audio codecs need to detect the active audio bandwidth and control the coding tools accordingly. As the detection of the bandwidth is not 100% reliable, technical issues may arise.
Some audio coding tools, e.g. Temporal Noise Shaping (TNS) or noise filling (NF), may cause annoying artefacts when operating on bandlimited audio files, e.g., if the tool is not aware of the active signal part. Assuming that a WB signal is coded at 32 kHz, the tools might fill the upper spectrum (8-16 kHz) with artificial noise.
Therefore, the tools need to be restricted to operate only on the active frequency regions.
Some codecs like AAC are configured so as to send the information on active spectrum per scale factor band. This information is also used to control the coding tools. This provides precise results but involves a significant amount of side information to be transmitted. As speech is usually just transmitted in NB, WB, SWB and FB, this limited set of possible active bandwidths is advantageously used to limit the side information.
It is unavoidable that a bandwidth detector returns wrong results from time to time. For instance, a detector may see the fade out of a music signal and interpret this as a low bandwidth case. For codecs which switch between the different bandwidth modes (NB, WB, SWB, FB) in a hard manner, e.g. the 3GPP EVS codec [1], this results in a rectangular spectral hole. Hard manner means that the complete coding operation is limited to the detected bandwidth. Such a hard switch can result in audible artefacts.
It is requested to overcome or reduce impairments such as those identified above.
According to an embodiment, an encoder apparatus may have: a plurality of frequency domain, FD, encoder tools for encoding an information signal, the information signal presenting a plurality of frames; and an encoder bandwidth detector and controller configured to select a bandwidth for at least a subgroup of the plurality of FD encoder tools, the subgroup including fewer FD encoder tools than the plurality of FD encoder tools, on the basis of information signal characteristics, so that at least one of the FD encoder tools of the subgroup has a different bandwidth with respect to at least one of the FD encoder tools which are not in the subgroup.
According to another embodiment, a decoder apparatus may have: a plurality of FD decoder tools for decoding an information signal encoded in a bitstream, wherein: the FD decoder tools are divided:
According to another embodiment, a system may have: an inventive encoder apparatus and an inventive decoder apparatus.
According to another embodiment, a method for encoding an information signal according to at least a plurality of operations in the frequency domain, FD, may have the steps of: selecting a bandwidth for a subgroup of FD operations; performing first signal processing operations at the bandwidth selected for the subgroup of FD operations; performing second signal processing operations at a different bandwidth for FD operations which are not in the subgroup.
According to yet another embodiment, a method for decoding a bitstream with an information signal and control data, the method including a plurality of signal processing operations in the frequency domain, FD, may have the steps of: choosing a bandwidth selection for a subgroup of FD operations on the basis of the control data; performing first signal processing operations at the bandwidth chosen for the subgroup of FD operations; performing second signal processing operations at a different bandwidth for FD operations which are not in the subgroup.
In accordance with examples, there is provided an encoder apparatus comprising:
Accordingly, it is possible to avoid spectral holes, even in case of wrong detection of the bandwidth.
In accordance with examples, at least one FD encoder tool of the subgroup may be a temporal noise shaping, TNS, tool and/or a noise level estimator tool.
In accordance with examples, at least one FD encoder tool which is not in the subgroup is chosen among at least one of a linear predictive coding, LPC, based spectral shaper, a spectral noise shaper, SNS, tool, a spectral quantizer, and a residual coder.
In accordance with examples, the encoder bandwidth detector and controller is configured to select the bandwidth of the at least one FD encoder tool of the subgroup between at least a first bandwidth common to at least one of the FD encoder tools which are not in the subgroup and a second bandwidth different from the bandwidth of the at least one of the FD encoder tools which are not in the subgroup.
In accordance with examples, the encoder bandwidth detector and controller is configured to select the bandwidth of the at least one of the plurality of FD encoder tools on the basis of at least one energy estimate on the information signal.
In accordance with examples, the encoder bandwidth detector and controller is configured to compare at least one energy estimate associated with a bandwidth of the information signal to a respective threshold, to control the bandwidth for the at least one of the plurality of FD encoder tools.
In accordance with examples, the at least one of the plurality of FD encoder tools of the subgroup comprises a TNS tool configured to autocorrelate a TNS input signal within the bandwidth chosen by the encoder bandwidth detector and controller.
In accordance with examples, the at least one of the FD encoder tools which are not in the subgroup is configured to operate at a full bandwidth.
Therefore, the bandwidth selection operates only for the tools of the subgroup (e.g., TNS, noise estimator tool).
In accordance with examples, the encoder bandwidth detector and controller is configured to select at least one bandwidth which is within the full bandwidth at which the at least one of the FD encoder tools which are not in the subgroup is configured to operate.
In accordance with examples, the at least one of the remaining FD encoder tools of the plurality of FD encoder tools is configured to operate in open chain with respect to the bandwidth chosen by the encoder bandwidth detector and controller.
In accordance with examples, the encoder bandwidth detector and controller is configured to select a bandwidth among a finite number of bandwidths and/or among a set of pre-defined bandwidths.
Therefore, the choice is limited and there is no need to encode overly complicated and/or long parameters. In examples, only one single parameter (e.g., encoded in 0-3 bits) may be used for the bitstream.
In accordance with examples, the encoder bandwidth detector and controller is configured to perform a selection among at least one or a combination of: 8 kHz, 16 kHz, 24 kHz, 32 kHz, and 48 kHz, and/or NB, WB, SSWB, SWB, FB, etc.
In accordance with examples, the encoder bandwidth detector and controller is configured to control the signalling of the bandwidth to a decoder.
Therefore, also the bandwidth of signals processed by some tools at the decoder may be controlled (e.g., using the same bandwidth).
In accordance with examples, the encoder apparatus is configured to encode a control data field including information regarding the chosen bandwidth.
In accordance with examples, the encoder apparatus is configured to define a control data field including:
In accordance with examples, in the encoder apparatus at least one energy estimation is performed by:
where X(k) are MDCT (or MDST . . . ) coefficients, NB is the number of bands and If
In accordance with examples, the encoder apparatus comprises a TNS tool which may be configured to perform a filtering operation including the calculation of an autocorrelation function. One of the possible autocorrelation functions may be in the following form:
where X(k) are MDCT coefficients, sub_start(f,s) and sub_stop(f,s) are associated to the particular bandwidth as detected by the encoder bandwidth detector and controller.
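A minimal Python sketch of such a per-sub-band autocorrelation follows. The sub_start/sub_stop layout, the per-sub-band energy normalization and the lag range are illustrative assumptions, not the exact procedure of any particular codec:

```python
import numpy as np

def tns_autocorrelation(X, sub_start, sub_stop, max_lag=8):
    """Sketch of a per-sub-band autocorrelation for TNS.

    X         : MDCT coefficients (1-D array)
    sub_start : sub_start[s] = first bin of sub-band s (assumed layout)
    sub_stop  : sub_stop[s]  = one past the last bin of sub-band s
    max_lag   : highest autocorrelation lag (LP filter order)
    """
    r = np.zeros(max_lag + 1)
    for s in range(len(sub_start)):
        seg = X[sub_start[s]:sub_stop[s]]
        e = np.dot(seg, seg)  # sub-band energy, used for normalization
        if e == 0.0:
            continue  # skip quiet sub-bands entirely
        for lag in range(max_lag + 1):
            r[lag] += np.dot(seg[:len(seg) - lag], seg[lag:]) / e
    return r
```

Restricting sub_start/sub_stop to the detected bandwidth is exactly how the encoder bandwidth detector and controller keeps the TNS analysis inside the active spectrum.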
In accordance with examples, the encoder apparatus may comprise a noise estimator tool which may be configured to estimate a noise level. One of the procedures used for such an estimation may be in the form of
where gg refers to the global gain, INF(k) to the identification of the spectral lines on which the noise level is to be estimated, and Xf(k) is the signal (e.g., the MDCT or MDST or another FD spectrum after TNS).
In examples, INF(k) may be obtained with:
where bwstop depends on the bandwidth detected by the encoder bandwidth detector and controller.
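As a hedged illustration, a noise level estimate of this kind could look as follows. Here I_NF(k) is simplified to "all bins between an assumed nf_start and bwstop"; a real codec would additionally require the quantized lines (and possibly their neighbours) to be zero:

```python
import numpy as np

def estimate_noise_level(Xf, gg, bw_stop, nf_start=24):
    """Sketch of a noise-level estimate on noise-filling candidate lines.

    Xf       : FD spectrum after TNS (e.g. MDCT coefficients)
    gg       : global gain of the quantizer
    bw_stop  : last active bin, from the bandwidth detector (assumed)
    nf_start : first bin considered for noise filling (assumed constant)
    """
    # Simplified I_NF(k): every line in [nf_start, bw_stop) counts.
    idx = np.arange(nf_start, bw_stop)
    if len(idx) == 0:
        return 0.0
    # Average magnitude of the candidate lines, normalized by the gain.
    return float(np.mean(np.abs(Xf[idx]) / gg))
```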
In accordance with examples, there may be provided a decoder apparatus comprising a plurality of FD decoder tools for decoding an information signal encoded in a bitstream, wherein:
the FD decoder tools are subdivided:
wherein the decoder apparatus is configured so that the at least one of the plurality of decoder tools of the subgroup performs signal processing at a different bandwidth with respect to at least one of the remaining FD decoder tools of the plurality of decoder tools.
In accordance with examples, the decoder apparatus may comprise a bandwidth controller configured to choose the bandwidth on the basis of the bandwidth information.
In accordance with examples, the decoder apparatus may be such that the subgroup comprises at least one of a decoder noise estimator tool and/or a temporal noise shaping, TNS, decoder.
In accordance with examples, the at least one of the remaining FD decoder tools is at least one of a linear predictive coding, LPC, decoder tool, a spectral noise shaper, SNS, decoder tool, a decoder global gain tool, and an MDCT or MDST shaping tool.
In accordance with examples, the decoder apparatus may be configured to control the bandwidth of the at least one of the plurality of decoder tools in the subgroup between:
In accordance with examples, the at least one of the remaining FD decoder tools is configured to operate at a full bandwidth.
In accordance with examples, the at least one of the remaining FD decoder tools is configured to operate in open chain with respect to the bandwidth (e.g., chosen by the bandwidth controller).
In accordance with examples, the bandwidth controller is configured to choose a bandwidth among a finite number of bandwidths and/or among a set of pre-defined bandwidths.
In accordance with examples, the bandwidth controller is configured to perform a choice among at least one or a combination of: 8 kHz, 16 kHz, 24 kHz, 32 kHz, and 48 kHz and/or NB, WB, SSWB, SWB, FB.
In accordance with examples, the decoder may further comprise a noise filling tool (43) configured to apply a noise level using indices. A technique for obtaining the indices may provide, for example:
where bwstop is obtained on the basis of bandwidth information in the bitstream.
In accordance with examples, the decoder apparatus may comprise a TNS decoder tool configured to perform at least some of the following operations:
s0(start_freq(0)−1)=s1(start_freq(0)−1)= . . . =s7(start_freq(0)−1)=0
for f=0 to num_tns_filters−1 do
Coding tools like TNS or noise filling can create unwanted artificial noise in the silent sections of band limited signals. Therefore, bandwidth detectors are usually incorporated to control the bandwidth all coding tools should work on. As bandwidth detection might lead to uncertain results, a wrong detection might lead to audible artefacts such as a sudden limitation of the audio bandwidth.
To overcome the problem, in some examples some tools, e.g., the quantizer, are not controlled by the bandwidth detector. In case of misdetection, the quantizer can still code the upper spectrum, even though in low quality, to compensate for the problem.
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
The invention described in this document makes it possible to avoid the occurrence of spectral holes even when the bandwidth detector returns a wrong result. In particular, soft band switching for audio coding applications may be obtained.
A key aspect is that parametric coding tools, e.g. TNS and NF, may be strictly controlled by the bandwidth detector and controller 39, while the remaining coding tools, i.e. the LPC based spectral shaper or spectral noise shaper, SNS, the spectral quantizer and the residual coder, still work on the full audio bandwidth up to the Nyquist frequency.
On the decoder side (
As a result, artificially generated noise in non-active spectral regions is avoided due to the bandwidth parameter used to control the TNS and NF coding tools (unguided tools). These tools then work only on the active audio part and therefore do not generate any artificial noise.
On the other side, the audible effect of wrong detections (false bandwidth detection) can be reduced significantly, as the remaining coding tools, e.g. the spectral quantizer, the LPC shaper or SNS (spectral noise shaper) and the residual coder, still work up to the Nyquist frequency. In case of wrong detections, these tools can still code the upper frequencies, at least with some more distortion compared to regular coding, and therefore avoid the more severe impression that the audio bandwidth suddenly drops.
In case the region outlined in the figure above contains mostly zero values, the arithmetic coder does not need to code those as the information on the last non-zero spectral tuple is transmitted as side information for the arithmetic coder. This means there is no overhead involved for the arithmetic coder.
The side information that may be used for the transmitted bandwidth is also minimized. Due to the robust switching behavior, a signaling of the typically used communication audio bandwidths, i.e. NB, WB, SSWB and SWB, is appropriate.
This technique also makes it possible to build less complex bandwidth detectors which do not use frame dependencies and long history memories to get stable decisions, see the EVS codec [1], Section 5.1.6. This means the new technique allows the bandwidth detector and controller 39 to react very quickly to any audio bandwidth change.
Accordingly, bandwidth information is used to control only specific tools of a codec (e.g., an audio codec), while keeping the remaining tools in another operation mode (e.g., full bandwidth).
An information signal (e.g., an audio signal) may be described in the time domain, TD, as a succession of samples (e.g., x(n)) acquired at different discrete time instants (n). The TD representation may be made of a plurality of frames, each associated to a plurality of samples (e.g., 2048 samples per frame). In the frequency domain, FD, a frame may be represented as a succession of bins (e.g., X(k)), each associated to a particular frequency (each frequency being associated to an index k).
Each of the encoder apparatus 30 and 30a may comprise a low delay modified discrete cosine transform, MDCT, tool 31 or low delay modified discrete sine transform, MDST, tool 31 (or a tool based on another transformation, such as a lapped transformation) which may convert an information signal (e.g., an audio signal) from a time domain, TD, representation to a frequency domain, FD, representation (e.g., to obtain MDCT, MDST, or, more in general, FD coefficients).
The encoder apparatus 30 may comprise a linear predictive coding, LPC, tool 32 for performing an LPC analysis in the FD.
The encoder apparatus 30a may comprise an SNS tool 32a for performing an SNS analysis in the FD.
Each of the encoder apparatus 30 and 30a may comprise a temporal noise shaping, TNS, tool 33, to control the temporal shape of noise within each window of the information signal (e.g., as output by the MDCT or MDST tool) in the FD.
Each of the encoder apparatus 30 and 30a may comprise a spectral quantizer 34 processing signals in the FD. The signal as output by the TNS tool 33 may be quantized, e.g., using dead-zone plus uniform thresholds scalar quantization. A gain index may be chosen so that the number of bits needed to encode the quantized FD signal is as close as possible to an available bit budget.
Each of the encoder apparatus 30 and 30a may comprise a coder 35 processing signals in the FD, for example, to perform entropy coding, e.g., to compress a bitstream. The coder 35 may, for example, perform residual coding and/or arithmetic coding.
Each of the encoder apparatus 30 and 30a may comprise, for example, a noise level estimator tool 36, processing signals in the FD, to estimate the noise, quantize it, and/or transmit it in a bitstream.
In examples, the level estimator tool 36 may be placed upstream or downstream to the coder 35.
Each of the encoder apparatus 30 and 30a may comprise tools which process signals in the time domain, TD. For example, the encoder apparatus 30 or 30a may comprise a re-sampling tool 38a (e.g., a downsampler) and/or a long term postfiltering, LTPF, tool 38b, for controlling an LTPF active in TD at the decoder.
Each of the encoder apparatus 30 and 30a may comprise a bitstream multiplexer tool 37 to prepare a bitstream with data obtained from TD and/or FD tools placed upstream. The bitstream may comprise a digital representation of an information signal together with control data (including, for example, a bandwidth information for selecting the bandwidth at some tools of the decoder) to be used at the decoder. The bitstream may be compressed or include portions which are compressed.
Therefore, each of the encoder apparatus 30 and 30a may comprise FD tools (e.g., 31-36) and, where applicable, TD tools (e.g., 38a, 38b).
The encoder bandwidth detector and controller 39 may control the bandwidth of FD tools forming a first group (subgroup), such as the temporal noise shaping, TNS, tool 33, and/or the noise estimator tool 36. The TNS tool 33 may be used to control the quantization noise. The bandwidth at which FD tools which are not in the subgroup (such as at least one of the LPC tool 32 and/or the SNS tool 32a, the spectrum quantizer 34, and the coder 35) perform signal processing may therefore be different from the bandwidth at which the tools of the subgroup (e.g., 33, 36) perform signal processing. For example, the bandwidth for the FD tools which are not in the subgroup may be greater, e.g., may be a full bandwidth.
In examples, the encoder bandwidth detector and controller 39 may be a part of a digital signal processor which, for example, implements also other tools of the encoder apparatus.
Each of the decoder apparatus 40 and 40a may comprise a bitstream multiplex tool 41 to obtain a bitstream (e.g., by transmission) from an encoder apparatus (e.g., the apparatus 30 or 30a). For example, an output from the encoder apparatus 30 or 30a may be provided as an input signal to the decoder apparatus 40 or 40a.
Each of the decoder apparatus 40 and 40a may comprise a decoder 42 which may, for example, decompress data in the bitstream. Arithmetic decoding may be performed. Residual decoding may be performed.
Each of the decoder apparatus 40 and 40a may comprise a noise filling tool 43 processing signals in the FD.
Each of the decoder apparatus 40 and 40a may comprise a global gain tool 44 processing signals in the FD.
Each of the decoder apparatus 40 and 40a may comprise a TNS decoder tool 45 processing signals in the FD. TNS can be briefly described as follows. At the encoder-side and before quantization, a signal is filtered in the frequency domain (FD) using linear prediction, LP, in order to flatten the signal in the time-domain. At the decoder-side and after inverse quantization, the signal is filtered back in the frequency-domain using the inverse prediction filter, in order to shape the quantization noise in the time-domain such that it is masked by the signal.
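The encoder-side flattening and the decoder-side inverse filtering described above can be sketched as an FIR filter A(z) and its all-pole inverse 1/A(z), both running across frequency bins. The coefficient vector a and the absence of quantization between the two stages are simplifications for illustration:

```python
import numpy as np

def tns_analysis(X, a):
    """Encoder side: filter the spectrum across frequency with the FIR
    prediction-error filter A(z), flattening the time envelope.
    a = [1, a1, ..., aK] (assumed coefficient layout)."""
    Y = np.zeros_like(X)
    for k in range(len(X)):
        for i, ai in enumerate(a):
            if k - i >= 0:
                Y[k] += ai * X[k - i]
    return Y

def tns_synthesis(Y, a):
    """Decoder side: inverse (all-pole) filter 1/A(z) across frequency,
    shaping the quantization noise like the time signal."""
    X = np.zeros_like(Y)
    for k in range(len(Y)):
        acc = Y[k]
        for i in range(1, len(a)):
            if k - i >= 0:
                acc -= a[i] * X[k - i]
        X[k] = acc
    return X
```

Without quantization in between, analysis followed by synthesis reconstructs the spectrum exactly, which is the defining property of this filter pair.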
Each of the decoder apparatus 40 and 40a may comprise an MDCT or MDST shaping tool 46 (other kinds of shaping tools may be used). Notably, the MDCT or MDST shaping tool 46 may process signals by applying scale factors (or quantized scale factors) obtained from the encoder SNS tool 32a or gain factors computed from decoded LP filter coefficients (obtained from an LPC decoding tool 47) transformed to the MDCT or MDST spectrum.
Each of the decoder apparatus 40 and 40a may comprise a low delay inverse MDCT or MDST tool 48a to transform signal representations from FD to TD (tools based on other kinds of inverse transform may be used).
Each of the decoder apparatus 40 and 40a may comprise an LTPF tool 48b for performing a postfilter in the TD, e.g., on the basis of the parameters provided by the encoder component 38b.
Each of the decoder apparatus 40 and 40a may comprise a decoder bandwidth controller 49 configured to select the bandwidth of at least one of the FD tools. In particular, the bandwidth of a subgroup (e.g., formed by the tools 43 and 45) may be controlled so as to be different from the bandwidth at which other FD tools (42, 44, 46, 47) process signals. The bandwidth controller 49 may be input with a signal 39a which has been prepared at the encoder side (e.g., by the bandwidth detector and controller 39) to indicate the selected bandwidth for at least one of the subgroups (33, 36, 43, 45).
In examples, the decoder bandwidth controller 49 may perform operations similar to those processed by the encoder bandwidth detector and controller 39. However, in some examples, the decoder bandwidth controller 49 may be intended as a component which obtains control data (e.g., encoded in a bitstream) from the encoder bandwidth detector and controller 39 and provides the control data (e.g., bandwidth information) to the tools of the subgroup (e.g., decoder noise filling tool 43 and/or TNS decoder tool 45). In examples, the controller 39 is a master and the controller 49 is a slave. In examples, the decoder bandwidth controller 49 may be a part or a section of a digital signal processor which, for example, implements also other tools of the decoder.
In general, the bandwidth controllers 39 and 49 may operate so that the FD tools of the subgroups (e.g., 33 and 36 for the encoder apparatus and/or 43 and 45 for the decoder apparatus) have a same frequency band, while the other FD tools of the decoder and/or encoder have another frequency band (e.g., a broader band).
It has in fact been noted that it is accordingly possible to reduce impairments of conventional technology. While for some FD tools (e.g., TNS tools, noise filling tools) it may be advantageous to actually perform a band selection, for other FD tools (e.g., 32, 34, 35, 42, 44, 46, 47) it may be advantageous to process signals at a broader band (e.g., full band). Accordingly, it is possible to avoid spectral holes that would be present in case of hard selection of the bandwidth for all the tools (in particular when a wrong band is selected).
In examples, the bandwidth that is selected by the decoder bandwidth controller 49 may be one of a finite number of choices (e.g., a finite number of bandwidths). In examples, it is possible to choose among narrow band NB (e.g., 4 kHz), wide band WB (e.g., 8 kHz), semi-super wide band SSWB (e.g., 12 kHz), super wide band SWB (e.g., 16 kHz) or full band FB (e.g., 20 kHz).
The selection may be encoded in a data field by the encoder apparatus, so that the decoder apparatus knows which bandwidths have been selected (e.g., according to a selection performed by the encoder bandwidth detector and controller 39).
At step S61, an energy per band may be estimated (e.g., by the bandwidth detector and controller 39).
At step S62, the bandwidth may be detected (e.g., by the bandwidth detector and controller 39).
At step S63, the detected bandwidth may be selected for at least one of the TNS tool 33 and noise estimation tool 36: these tools will perform their processes at the bandwidth detected at S62.
In addition or in alternative, at step S64 parameters may be defined (and/or encoded) in the bitstream to be stored and/or transmitted and to be used by a decoder. Among the parameters, a bandwidth selection information (e.g., 39a) may be encoded, so that the decoder will know the detected and selected bandwidth for the subgroup (e.g., TNS and noise filling/estimation).
Then, a new frame of the information signal may be examined. Method 60 may therefore cycle by moving to S61. Therefore, a decision may be carried out frame by frame.
Notably, in accordance with the detected bandwidth, a different number of bits may be encoded in the bitstream. In examples, if the sampling rate is 8 kHz (for which only NB is possible), no bits will be encoded in the bitstream. However, the decoder will understand that the bandwidth is NB.
Each of the encoder apparatus 30 and 30a of
In particular, the encoder bandwidth detector and controller 39 may be configured to select the bandwidth of the at least one FD encoder tool of the subgroup (33, 36) between at least a first bandwidth (e.g., Nyquist frequency) common to at least one (or more) of the FD encoder tools which are not in the subgroup and a second bandwidth (e.g., NB, WB, SSWB, SWB) different from the bandwidth of the at least one (or more) of the FD encoder tools which are not in the subgroup.
Therefore, some tools may operate at bandwidths different from each other and/or perform signal processing using bandwidths different from each other.
The tools which are not in the subgroup (e.g., global gain, spectral noise shaping, and so on) may operate in open chain with respect to the bandwidth selection.
In examples, the encoder bandwidth detector and controller 39 is configured to select (e.g., at S62) the bandwidth of the at least one of the plurality of FD encoder tools (31-36) on the basis of at least one energy estimation (e.g., at S61) on the information signal.
The decoder apparatus 40 of
It is not necessary, e.g., to perform the steps S61b and S62b in this temporal order. For example, S62b may be performed before S61b. S61b and S62b may also be performed in parallel (e.g., using time-sharing techniques or similar).
It is not necessary, e.g., to perform the steps S61c and S62c in this temporal order. For example, S62c may be performed before S61c. S61c and S62c may also be performed in parallel (e.g., using time-sharing techniques or similar).
According to an example, the encoder bandwidth detector and controller 39 may detect the energy per band, e.g., using an equation such as:
where X(k) are the MDCT or MDST coefficients (or any other representation of the signal in the FD), NB (e.g., 64) is the number of bands and If
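Under the assumption of a band boundary table I, in which bins I[b]..I[b+1]-1 belong to band b, the energy estimate can be sketched as:

```python
import numpy as np

def energy_per_band(X, I):
    """Sketch of a per-band energy estimate.

    X : MDCT (or MDST) coefficients of one frame
    I : band boundaries; bins I[b]..I[b+1]-1 form band b
        (assumed layout; len(I) == NB + 1)
    """
    NB = len(I) - 1
    EB = np.zeros(NB)
    for b in range(NB):
        seg = X[I[b]:I[b + 1]]
        EB[b] = np.dot(seg, seg) / len(seg)  # mean squared amplitude
    return EB
```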
It is therefore possible to detect (e.g., at S62) the bandwidth (e.g., among a finite number of bandwidths). The encoder bandwidth detector and controller 39 may be able to detect the bandwidths commonly used in speech communication, i.e. 4 kHz, 8 kHz, 12 kHz and 16 kHz. For example, it is possible to detect the quietness of each bandwidth. In case of a positive detection of quietness for a bandwidth, a dedicated cut-off characteristic of the spectrum is further detected. For example, a flag (or, in any case, data) regarding the detection of quietness may be obtained as:
FQ(bw) is a binary value which is 1 if the summation is less than TQ(bw), and 0 if the summation is greater than TQ(bw). FQ(bw), associated with a particular bandwidth bw, indicates quietness (e.g., with logical value "1") when the summation of the energy values is less than a threshold for the particular bandwidth bw (and "0" otherwise). The summation relates to the sum of energy values at different indices (e.g., energy per bin or band), e.g., for n from a first index of the bandwidth, associated with the index Ibw start(bw), to a last index of the bandwidth, associated with the index Ibw stop(bw). The number of the examined bandwidths is Nbw.
The procedure may stop when FQ(bw)==0 (energy greater than the threshold for the bandwidth bw). In case FQ(bw+1)==1, the flags FC(b) indicating the cut-off characteristic of the spectrum may be detected by
FC(b)=[10 log10(Eb(b−D))−10 log10(Eb(b))]<TC(bw)
where D defines the distance between the bands where the cut-off characteristic should be checked, i.e. D(bw).
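A sketch of both flags, following the formulas above; the inclusive start/stop band-index convention and the thresholds TQ and TC are assumptions for illustration:

```python
import math

def quietness_flag(EB, start, stop, TQ):
    """FQ(bw): 1 if the summed energy of bands start..stop (inclusive,
    assumed convention) is below the quietness threshold TQ."""
    return 1 if sum(EB[start:stop + 1]) < TQ else 0

def cutoff_flag(EB, b, D, TC):
    """FC(b), following the formula in the text: compares the log-energy
    drop between band b-D and band b against the threshold TC."""
    drop = 10 * math.log10(EB[b - D]) - 10 * math.log10(EB[b])
    return 1 if drop < TC else 0
```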
Then, it is possible to define a final information (bandwidth information or bandwidth selection information) to be used to control a subgroup (e.g., TNS tool 33 and/or noise level estimation tool 36 and/or the TNS decoder tool 45 and/or noise filling tool 43). The final information may be, for example, encoded in some bits and may take the form of such as
The parameter bandwidth Pbw (bandwidth selection information) may be used to control the TNS and the noise filling tool, e.g., at the decoder, and may embody the signal 39a. The parameter Pbw may be stored and/or transmitted in a bitstream using the number of bits nbitsbw. Notably, the number of bits is not necessarily constant and may vary according to the chosen sample rate fs, hence reducing the payload for the bitstream where not necessary.
A table such as the following one may be used:
(Table 1: for each sampling rate fs, the supported bandwidths bw and the associated parameters, e.g. Nbw, Ibw start, Ibw stop and nbitsbw.)
fs is a given sampling rate (e.g., 8 kHz, 16 kHz, 24 kHz, 32 kHz, and/or 48 kHz) and, for each fs, the number of possible modes is Nbw+1.
Therefore, it is possible to encode a control data field including:
An electronic version of at least some portions of Table 1 may be stored in the encoder and/or decoder. Accordingly, from the parameter bandwidth Pbw it is possible to automatically know control information for the TNS and noise filling operations. For example, Ibw start may refer to the start index associated with the lower end of the bandwidth, and Ibw stop may refer to the final index associated with the higher end of the bandwidth. The bandwidth choice and parameters based on this choice may, therefore, be derived from a table such as Table 1.
In examples, when fs=8000, the bandwidth detector is not needed and we have Pbw=0 and nbitsbw=0, i.e. the parameter Pbw is not placed in the bitstream. However, the decoder will understand that the chosen bandwidth is NB (e.g., on the basis of electronic instruments such as an electronic version of Table 1).
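The variable-length signalling of Pbw can be illustrated as follows; the mapping from sampling rate to the number of supported modes is a hypothetical stand-in for Table 1:

```python
import math

# Hypothetical stand-in for Table 1: number of supported bandwidth
# modes per sampling rate (fs=8000 supports only NB; fs=48000 supports
# all five of NB, WB, SSWB, SWB, FB).
MODES = {8000: 1, 16000: 2, 24000: 3, 32000: 4, 48000: 5}

def nbits_bw(fs):
    """Bits needed to signal P_bw: ceil(log2(number of modes));
    0 bits when only one mode is possible, so nothing is transmitted."""
    n = MODES[fs]
    return 0 if n == 1 else math.ceil(math.log2(n))
```

This reproduces the behaviour described in the text: at fs=8000 no bits are spent and the decoder implicitly assumes NB, while higher sampling rates spend only as many bits as their mode count requires.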
Other methods may be used. One of the bandwidths NB, WB, SSWB, SWB, FB may be identified and transmitted to the FD tools of the encoder subgroup, such as the TNS shaping tool 33 and the noise estimator tool 36. Information such as the parameter Pbw (39a) may be encoded and transmitted to the decoder apparatus 40 or 40a, so that the decoder noise estimator tool 43 and the TNS decoder tool 45 make use of the information regarding the selected bandwidth.
In general terms, the information signal characteristics which constitute the basis for the selection of the bandwidth may comprise, inter alia, one or more of the signal bandwidth, at least one energy estimation of the information signal, cut-off characteristics on the spectrum, information on the detection of quietness in some particular bands, FQ(bw), etc.
The examples above permit obtaining a soft bandwidth switching.
A modified discrete cosine transform (MDCT) or modified discrete sine transform (MDST) (or another modulated lapped transform) tool 31 may convert a digital representation in the TD into a digital representation in the FD. Other examples (e.g., based on other transformations, such as lapped transformations) may notwithstanding be used. An example is provided here.
The input signal x(n) of a current frame b in the TD may consist of NF audio samples, where the newest one is located at x(NF−1). Audio samples of past frames are accessed by negative indexing, e.g. x(−1) is the newest of the previous frame.
The time input buffer for the MDCT t may be updated according to
A block of NF time samples may be transformed to the frequency coefficients X(k) using the following equation:
where wN is the Low Delay MDCT window according to the used frame size. The window may be optimized for NF=480 and other versions for different frame sizes may be generated by means of interpolation. The window shape may be the result of an optimization procedure and may be provided point by point.
It is also possible to apply MDST or other transformations.
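The forward transform referenced above may be sketched as follows. This is an illustrative direct implementation of a generic MDCT; the scaling factor sqrt(2/NF) and the all-ones test window are assumptions, and the elided equation of the text (with its Low Delay window wN) may differ in detail.

```python
import numpy as np

def mdct(t: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Direct (O(N^2)) MDCT of a 2*NF-sample time buffer t with window w.

    Returns NF frequency coefficients X(k). Real codecs implement this
    with a faster FFT-based factorization.
    """
    NF = len(t) // 2
    n = np.arange(2 * NF)
    k = np.arange(NF)[:, None]
    # Standard MDCT cosine kernel with the usual (n + 1/2 + NF/2) phase term.
    cos_kernel = np.cos(np.pi / NF * (n + 0.5 + NF / 2) * (k + 0.5))
    return np.sqrt(2.0 / NF) * (cos_kernel @ (w * t))
```

An MDST variant would simply use a sine kernel in place of the cosine, as the text notes for the inverse transform further below.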
5.3.1 LPC at the Encoder
A linear predictive coding (LPC) analysis may be performed by an LPC tool 32. LPC is a method used for representing the spectral envelope of a digital signal in compressed form, using the information of a linear predictive model.
An LPC filter may be derived in a warped frequency domain and therefore psychoacoustically optimized. To obtain the autocorrelation function, the Energy EB(b), as defined above, may be pre-emphasized by
where
and transformed to time domain using, for example, an inverse odd DFT
In case RPre(0)=0, set RPre(0)=1 and RPre(1 . . . NB−1)=0. The first NL samples are extracted into the vector RL=RPre(0 . . . NL−1), where NL stands for the LP filter order, i.e. NL=16.
The LP filter coefficients may be calculated, for example, based on the vector RL through the Levinson-Durbin procedure. This procedure may be described by the following pseudo code:
with a(k)=aN
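The Levinson-Durbin procedure referenced above may be sketched as follows. This is an illustrative implementation of the standard recursion on the autocorrelation vector RL; the exact elided pseudo code of the text may differ in details.

```python
def levinson_durbin(R, order):
    """Levinson-Durbin recursion: autocorrelation R[0..order] -> LPC a[0..order].

    Returns the coefficient vector (with a[0] = 1) and the final
    prediction error energy.
    """
    a = [1.0] + [0.0] * order
    err = R[0]
    for k in range(1, order + 1):
        # Reflection coefficient from the current prediction residual.
        acc = sum(a[i] * R[k - i] for i in range(k))
        rc = -acc / err
        # Update the coefficient vector in place (symmetric update).
        a_new = a[:]
        for i in range(1, k):
            a_new[i] = a[i] + rc * a[k - i]
        a_new[k] = rc
        a = a_new
        err *= (1.0 - rc * rc)
    return a, err
```

For the filter order NL=16 given in the text, the call would be levinson_durbin(RL, 16).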
The LPC coefficients may be weighted, in examples, by equation such as:
aw(k)=a(k)·0.94k for k=0 . . . NL
The LPC coefficients may be quantized.
For example, the weighted LPC coefficients aw(k) are first convolved with the coefficients b(i) using
The coefficients ac(k) may then be transformed to the frequency domain using
where NT=256 is the transform length. Note that this transform can be efficiently implemented using a pruned FFT. The real and imaginary parts of A(k) are then extracted
LSFs may be obtained by a zero-crossing search of Ar(k) and Ai(k) that can be described with the following pseudo-code
If less than 16 LSFs are found, the LSFs are set according to
An LPC shaping may be performed in the MDCT or MDST (FD) domain by applying gain factors computed from the weighted and quantized LP filter coefficients transformed to the MDCT or MDST spectrum.
To compute NB=64 LPC shaping gains, weighted LP filter coefficients a are first transformed into the frequency domain using an odd DFT.
LPC shaping gains gLPC(b) may then be obtained as the absolute values of GLPC(b).
gLPC(b)=|GLPC(b)| for b=0 . . . NB−1
The LPC shaping gains gLPC(b) may be applied on the MDCT or MDST frequency lines for each band separately in order to generate the shaped spectrum Xs(k) as outlined by the following code.
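The elided shaping code may be sketched as follows. The band boundary list band_index is an assumption for illustration; the codec's actual non-uniform band table is not reproduced here.

```python
import numpy as np

def shape_spectrum(X, g_lpc, band_index):
    """Apply one shaping gain per band to the spectral lines.

    Lines band_index[b] .. band_index[b+1]-1 belong to band b and are
    all multiplied by the same gain g_lpc[b], yielding Xs(k).
    """
    Xs = np.empty_like(X)
    NB = len(band_index) - 1
    for b in range(NB):
        lo, hi = band_index[b], band_index[b + 1]
        Xs[lo:hi] = X[lo:hi] * g_lpc[b]
    return Xs
```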
As can be seen from above, the LPC tool, for performing the LPC analysis, is not controlled by the controller 39: for example, there is no selection of a particular bandwidth.
5.3.2 SNS at the Encoder
With reference to
Spectral noise shaping (SNS) shapes the quantization noise in the frequency domain such that it is minimally perceived by the human ear, maximizing the perceptual quality of the decoded output.
Spectral noise shaping may be performed using, for example, 16 scaling parameters. These parameters may be obtained in the encoder by first computing the energy of the MDCT (or MDST, or another transform) spectrum in 64 non-uniform bands, then by applying some processing to the 64 energies (smoothing, pre-emphasis, noise-floor, log-conversion), then by downsampling the 64 processed energies by a factor of 4 to obtain 16 parameters which are finally normalized and scaled. These 16 parameters may then be quantized using vector quantization. The quantized parameters may then be interpolated to obtain 64 interpolated scaling parameters. These 64 scaling parameters are then used to directly shape the MDCT (or MDST . . . ) spectrum in the 64 non-uniform bands. The scaled MDCT (or MDST . . . ) coefficients may then be quantized using a scalar quantizer with a step size controlled by a global gain. At the decoder, inverse scaling is performed in each of the 64 bands, shaping the quantization noise introduced by the scalar quantizer. An SNS technique here disclosed may use, for example, only 16+1 parameters as side-information, and the parameters can be efficiently encoded with a low number of bits using vector quantization. Consequently, the number of side-information bits is reduced, which may lead to a significant advantage at low bitrate and/or low delay. A non-linear frequency scaling may be used. In these examples, none of the LPC-related functions are used, which reduces complexity. The processing functions involved (smoothing, pre-emphasis, noise-floor, log-conversion, normalization, scaling, interpolation) need very small complexity in comparison. Only the vector quantization still has relatively high complexity. However, some low complexity vector quantization techniques can be used with small loss in performance (multi-split/multi-stage approaches). This SNS technique does not rely on an LPC-based perceptual filter.
It uses 16 scaling parameters which can be computed with a lot of freedom. Flexibility is therefore increased.
At the encoder 30a, the SNS tool 32a may perform at least one of the following steps:
Step 1: Energy Per Band
The energy per band EB(n) may be computed as follows
where X(k) are the MDCT (or MDST, or another transform) coefficients, NB=64 is the number of bands and If
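The per-band energy computation of Step 1 may be sketched as follows. The band boundaries I_fs passed in are illustrative; the codec's actual non-uniform boundary table is not reproduced here.

```python
import numpy as np

def energy_per_band(X, I_fs):
    """Mean energy of the spectral lines in each of the NB bands.

    I_fs holds NB+1 band boundaries; band b covers lines
    I_fs[b] .. I_fs[b+1]-1, and its energy is normalized by the
    band width.
    """
    NB = len(I_fs) - 1
    EB = np.zeros(NB)
    for b in range(NB):
        lo, hi = I_fs[b], I_fs[b + 1]
        EB[b] = np.sum(X[lo:hi] ** 2) / (hi - lo)
    return EB
```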
Step 2: Smoothing
The energy per band EB(b) is smoothed using
This step may be mainly used to smooth the possible instabilities that can appear in the vector EB(b). If not smoothed, these instabilities are amplified when converted to log-domain (see step 5), especially in the valleys where the energy is close to 0.
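The smoothing of Step 2 may be sketched as follows. The 3-tap kernel (0.25, 0.5, 0.25) and the edge replication are assumptions for illustration; the text's elided formula may handle the band borders differently.

```python
import numpy as np

def smooth_energies(EB):
    """3-tap low-pass smoothing of the per-band energies EB(b).

    Each band is averaged with its two neighbours; the first and last
    bands are replicated at the edges (an assumed edge handling).
    """
    padded = np.concatenate(([EB[0]], EB, [EB[-1]]))
    return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]
```

A constant energy vector passes through unchanged, while isolated valleys and peaks are attenuated, which is exactly the instability damping described above.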
Step 3: Pre-Emphasis
The smoothed energy per band ES(b) is then pre-emphasized using
where gtilt controls the pre-emphasis tilt and depends on the sampling frequency (e.g., 18 at 16 kHz and 30 at 48 kHz). The pre-emphasis used in this step has the same purpose as the pre-emphasis used in the LPC-based perceptual filter of conventional technology: it increases the amplitude of the shaped spectrum in the low frequencies, resulting in reduced quantization noise in the low frequencies.
Step 4: Noise Floor
A noise floor at −40 dB is added to EP(b) using
EP(b)=max(EP(b),noiseFloor) for b=0 . . . 63
with the noise floor being calculated by
This step improves quality of signals containing very high spectral dynamics such as e.g. glockenspiel, by limiting the amplitude amplification of the shaped spectrum in the valleys, which has the indirect effect of reducing the quantization noise in the peaks (an increase of quantization noise in the valleys is not perceptible).
Step 5: Logarithm
A transformation into the logarithm domain is then performed using
Step 6: Downsampling
The vector EL (b) is then downsampled by a factor of 4 using
This step applies a low-pass filter (w(k)) on the vector EL(b) before decimation. This low-pass filter has a similar effect as the spreading function used in psychoacoustic models: it reduces the quantization noise at the peaks, at the cost of an increase of quantization noise around the peaks where it is anyway perceptually masked.
Step 7: Mean Removal and Scaling
The final scale factors are obtained after mean removal and scaling by a factor of 0.85
Since the codec has an additional global gain, the mean can be removed without any loss of information. Removing the mean also allows more efficient vector quantization. The scaling of 0.85 slightly compresses the amplitude of the noise shaping curve. It has a similar perceptual effect as the spreading function mentioned in Step 6: reduced quantization noise at the peaks and increased quantization noise in the valleys.
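The mean removal and scaling of Step 7 may be sketched as follows, directly applying the 0.85 factor given in the text to the 16 downsampled log-domain energies.

```python
import numpy as np

def scale_factors(E4):
    """Step 7: remove the mean from the downsampled log-energies and
    compress the noise shaping curve by the factor 0.85."""
    return 0.85 * (E4 - np.mean(E4))
```

The result is zero-mean by construction, which is what makes the subsequent vector quantization more efficient.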
Step 8: Quantization
The scale factors are quantized using vector quantization, producing indices which are then packed into the bitstream and sent to the decoder, as well as quantized scale factors scfQ(n).
Step 9: Interpolation
The quantized scale factors scfQ(n) are interpolated using:
scfQint(0)=scfQ(0)
scfQint(1)=scfQ(0)
scfQint(4n+2)=scfQ(n)+⅛(scfQ(n+1)−scfQ(n)) for n=0 . . . 14
scfQint(4n+3)=scfQ(n)+⅜(scfQ(n+1)−scfQ(n)) for n=0 . . . 14
scfQint(4n+4)=scfQ(n)+⅝(scfQ(n+1)−scfQ(n)) for n=0 . . . 14
scfQint(4n+5)=scfQ(n)+⅞(scfQ(n+1)−scfQ(n)) for n=0 . . . 14
scfQint(62)=scfQ(15)+⅛(scfQ(15)−scfQ(14))
scfQint(63)=scfQ(15)+⅜(scfQ(15)−scfQ(14))
and transformed back into linear domain using
gSNS(b)=2scfQint(b) for b=0 . . . 63
Interpolation may be used to get a smooth noise shaping curve and thus to avoid any big amplitude jumps between adjacent bands.
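The interpolation equations of Step 9 may be implemented directly as follows; this sketch reproduces the fractional weights 1/8, 3/8, 5/8, 7/8 and the linear-domain conversion gSNS(b)=2^scfQint(b) given above.

```python
def interpolate_scf(scfQ):
    """Interpolate 16 quantized scale factors scfQ(n) to 64 values
    scfQint(b), then convert to linear-domain gains gSNS(b)."""
    assert len(scfQ) == 16
    out = [0.0] * 64
    out[0] = scfQ[0]
    out[1] = scfQ[0]
    for n in range(15):                       # n = 0 .. 14
        d = scfQ[n + 1] - scfQ[n]
        out[4 * n + 2] = scfQ[n] + 0.125 * d  # 1/8
        out[4 * n + 3] = scfQ[n] + 0.375 * d  # 3/8
        out[4 * n + 4] = scfQ[n] + 0.625 * d  # 5/8
        out[4 * n + 5] = scfQ[n] + 0.875 * d  # 7/8
    # Extrapolate the last two values from the final slope.
    d = scfQ[15] - scfQ[14]
    out[62] = scfQ[15] + 0.125 * d
    out[63] = scfQ[15] + 0.375 * d
    g_sns = [2.0 ** s for s in out]
    return out, g_sns
```

For a constant scale factor vector the interpolated curve is flat, i.e. no spurious amplitude jumps are introduced between adjacent bands.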
Step 10: Spectral Shaping
The SNS scale factors gSNS(b) are applied on the MDCT (or MDST, or another transform) frequency lines for each band separately in order to generate the shaped spectrum Xs(k)
At step S71, selection information regarding the selected bandwidth (e.g., parameter Pbw) may be obtained from the encoder bandwidth detector and controller 39, for example.
According to the selection information (bandwidth information), the behaviour of the TNS is different for different bandwidths (NB, WB, SSWB, SWB, FB). An example is provided by the following table:
For example, when the selection information is SWB, the TNS will perform a filtering twice (see num_tns_filters). As can be seen from the tables, different indexes are associated with different bandwidths (e.g., for NB the stop frequency is different than for WB, and so on).
Therefore, as can be seen, the TNS tool 33 may operate at a different bandwidth on the basis of the selection set out by the controller 39. Notably, other FD tools of the same encoder apparatus 40 or 40a may continue to perform processes at a different frequency.
The TNS encoding steps are described below. First, an analysis estimates a set of reflection coefficients for each TNS filter (step S72). Then, these reflection coefficients are quantized (step S73). And finally, the MDCT or MDST spectrum is filtered using the quantized reflection coefficients (step S74).
With reference to the step S72, a complete TNS analysis described below may be repeated for every TNS filter f, with f=0 . . . num_tns_filters−1 (num_tns_filters is given in Table 2). Other TNS analysis operations may be performed, which provide reflection coefficients.
The TNS tool may be configured to perform an autocorrelation on a TNS input value. A normalized autocorrelation function may be calculated as follows, for each k=0 . . . 8 (for example)
with sub_start(f,s) and sub_stop(f,s) given in Table 2. e(s) is an energy sum over a spectral subsection (a normalization factor between the start and the stop frequency of each filter).
The normalized autocorrelation function may be lag-windowed using, for example:
In some examples, the decision to turn the TNS filter f on or off in the current frame may be based on the prediction gain:
If predGain>thresh, then turn on the TNS filter f
with thresh=1.5 and the prediction gain may be computed by
The additional steps described below are performed only if the TNS filter f is turned on (or in the examples which do not use the turning on/off).
In some examples, a weighting factor may be computed by
with thresh2=2, γmin=0.85 and
The LPC coefficients may be weighted using the factor γ
aw(k)=γka(k) for k=0 . . . 8
The weighted LPC coefficients may be converted to reflection coefficients using the following procedure:
wherein rc(k,f)=rc(k) are the final estimated reflection coefficients for the TNS filter f.
If the TNS filter f is turned off, then the reflection coefficients may be simply set to 0: rc(k,f)=0, k=0 . . . 8.
At step S73, a quantization step may be performed. For example, for each TNS filter f, the reflection coefficients (e.g., as obtained at step S72) may be quantized. For example, scalar uniform quantization in the arcsine domain may be used:
and/or
rcq(k,f)=sin[Δ(rci(k,f)−8)] for k=0 . . . 8
with
and nint(.) being the rounding-to-nearest-integer function, for example; rci(k,f) the quantizer output indices; and rcq(k,f) the quantized reflection coefficients.
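The arcsine-domain quantization may be sketched as follows. The step size DELTA used here is an assumed example value (the text's actual value for Δ is elided); the index offset 8 matches the dequantization formula rcq(k,f)=sin[Δ(rci(k,f)−8)] above.

```python
import math

DELTA = math.pi / 17  # assumed quantizer step size; the text's Δ is elided

def quantize_rc(rc: float):
    """Scalar uniform quantization of one reflection coefficient in the
    arcsine domain; returns (quantizer index, quantized coefficient)."""
    idx = round(math.asin(rc) / DELTA) + 8   # nint(.) + offset 8
    idx = min(max(idx, 0), 16)               # keep the index in range
    rcq = math.sin(DELTA * (idx - 8))        # dequantized value
    return idx, rcq
```

A coefficient of 0 maps to the centre index 8 and back to exactly 0, so a turned-off filter (all rc=0) is represented losslessly.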
An order of the quantized reflection coefficients may be calculated using
k=7
while k≥0 and rcq(k,f)=0 do
k=k−1
rcorder(f)=k+1
A total number of bits consumed by TNS in the current frame may be computed as follows
⌈ . . . ⌉ denotes a rounding operation to the next higher integer (ceiling).
The tables tab_nbits_TNS_order and tab_nbits_TNS_coef may be pre-defined.
At step S74, a digital representation of an information signal in the FD (e.g., as provided by the LPC tool 32 or SNS tool 32a) may be filtered. This representation may be, in examples, in the form of a modified discrete cosine or sine transform (MDCT or MDST). The MDCT spectrum Xs(n) may be filtered using the following algorithm, for example:
s0(start_freq(0)−1)=s1(start_freq(0)−1)= . . . =s7(start_freq(0)−1)=0
for f=0 to num_tns_filters−1 do
where Xf(n) is the TNS filtered MDCT or MDST spectrum.
Other filtering techniques may be used. However, it may be seen that the TNS is applied to the particular bandwidth (e.g., NB, WB, SSWB, SWB, FB) chosen by the controller 39 on the basis of the signal characteristics.
A spectrum quantizer tool 34 is here discussed. The MDCT or MDST spectrum after TNS filtering (Xf(n)) may be quantized using dead-zone plus uniform threshold scalar quantization and the quantized MDCT or MDST spectrum Xq(n) may then be encoded using arithmetic encoding. A global gain gg may control the step size of the quantizer. This global gain is quantized with 7 bits and the quantized global gain index ggind is then an integer, for example, between 0 and 127. The global gain index may be chosen such that the number of bits needed to encode the quantized MDCT or MDST spectrum is as close as possible to the available bit budget.
In one example, a number of bits available for coding the spectrum may be given by
with nbits being the number of bits available in one TD frame for the original information signal, nbitsbw provided in Table 1, nbitsTNS provided by the TNS (total number of bits consumed by TNS in a current frame), nbitsLTPF being associated with the LTPF 38b (number of bits consumed by LTPF), nbitsLPC/SNS=38, nbitsgain=7 and nbitsnf=3, for example. In examples, protection bits (e.g., cyclic redundancy check, CRC, bits) may also be taken into consideration.
An offset may first be computed using
nbitsoffset=0.8*nbitsoffsetold+0.2*min(40,max(−40,nbitsoffsetold+nbitsspecold−nbitsestold))
where nbitsoffsetold is the value of nbitsoffset in the previous frame, nbitsspecold the value of nbitsspec in the previous frame, and nbitsestold the value of nbitsest in the previous frame.
This offset may then be used to adjust the number of bits available for coding the spectrum
nbitsspec=nint(nbitsspec+nbitsoffset)
A global gain index may then be estimated such that the number of bits needed to encode the quantized MDCT or MDST spectrum is as close as possible to the available bit budget. This estimation is based on a low-complexity bisection search which coarsely approximates the number of bits needed to encode the quantized spectrum. The algorithm can be described as follows
with E[k] being the energy (in dB) of blocks of 4 MDCT or MDST coefficients given by
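The low-complexity bisection search described above may be sketched as follows. This is a hedged illustration: bit_demand stands in for the codec's elided bit estimator, and is only assumed to be non-increasing in the gain index (a larger gain means a coarser step size and fewer bits).

```python
def estimate_gain_index(bit_demand, budget, lo=0, hi=127):
    """Bisection over the 7-bit global gain index range.

    bit_demand(idx) estimates the bits needed at gain index idx.
    Returns the smallest index whose estimated demand fits the budget,
    i.e. the finest quantization that stays within nbits_spec.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if bit_demand(mid) > budget:
            lo = mid + 1   # too many bits: move to a larger gain
        else:
            hi = mid       # fits: try a finer (smaller) gain
    return lo
```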
The global gain index above is first unquantized using
The spectrum Xf may then be quantized using, for example:
The number of bits nbitsest needed to encode the quantized MDCT or MDST (or, anyway, FD) spectrum Xq(n) can be accurately estimated using the algorithm below.
A bitrate flag is first computed using, for example:
Then the index of the last non-zeroed 2-tuple is obtained by
The number of bits nbitsest may be then computed as follows
where ac_lookup and ac_bits are tables which may be predefined.
The number of bits nbitsest may be compared with the available bit budget nbitsspec. If they are far from each other, then the quantized global gain index ggind is adjusted and the spectrum is requantized. A procedure used to adjust the quantized global gain index ggind is given below
As can be seen from above, the spectral quantization is not controlled by the controller 39: there is no restriction to a particular band.
All or part of the encoded data (TNS data, LTPF data, global gain, quantized spectrum . . . ) may be entropy coded, e.g., by compression according to any algorithm.
A portion of this data may be composed by pure bits which are directly put in the bitstream starting from the end of the bitstream and going backward.
The rest of data may be encoded using arithmetic encoding starting from the beginning of the bitstream and going forward.
The two data fields above may also exchange their starting points and directions of reading/writing in the bitstream.
An example in pseudo code may be:
A noise estimation tool 36 (noise level estimator) may control the noise filling on the decoder side. At the encoder side, the noise level parameter may be estimated, quantized and transmitted or stored in a bitstream.
The noise level may be estimated based on the spectral coefficients which have been quantized to zero, i.e. Xq(k)==0. The indices for the relevant spectral coefficients are given by
where bwstop may depend on the bandwidth detected at step S62 and/or by the bandwidth detector and controller 39 as defined, for example, in the following table:
For the identified indices, the mean level of missing coefficients is estimated based on the spectrum after TNS filtering (Xf(k)), for example, and normalized by the global gain.
The final noise level may be quantized to eight steps:
FNF=min(max(└8−16·LNF┘,0),7)
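The eight-step quantization above and its decoder-side inverse may be sketched as follows, directly implementing FNF=min(max(⌊8−16·LNF⌋,0),7) and the corresponding dequantization (8−FNF)/16.

```python
import math

def quantize_noise_level(LNF: float) -> int:
    """Quantize the noise level LNF to eight steps, F_NF in 0..7."""
    return min(max(math.floor(8 - 16 * LNF), 0), 7)

def dequantize_noise_level(FNF: int) -> float:
    """Decoder-side inverse used by the noise filling tool."""
    return (8 - FNF) / 16
```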
Therefore, the noise level estimator tool 36 may be controlled by the controller 39, e.g., on the basis of bandwidth information 39a.
For example, an electronic version of Table 3 may be stored in a storage unit so that, when the bandwidth selection for a particular bandwidth is obtained, the parameter bwstop is easily derived.
All the encoded data (TNS data, LTPF data, global gain, quantized spectrum . . . ) may be entropy decoded at the decoder side, e.g., using the decoder tool 42. A bitstream provided by an encoder may, therefore, be decompressed according to any algorithm.
A decoder noise filling tool 43 is here discussed. The decoder noise filling tool 43 may be controlled, inter alia, by the decoder bandwidth controller 49 (and/or by the controller 39 via information 39a encoded in the bitstream, such as the control data field Nbw and/or Pbw of Table 1).
The indices for the relevant spectral coefficients may be given by
where bwstop may be given in Table 3.
The noise filling may be applied on the identified relevant spectral lines INF(k) using a transmitted noise factor FNF obtained from the encoder. FNF may be calculated at the noise estimator on encoder side. FNF may be a 3 bit value coded as side information in the bit stream. FNF may be obtained, for example, using the following procedure:
A procedure is here provided:
LNF = (8−FNF)/16;
How to obtain the nf_seed may be described, for example, by the following pseudocode:
As can be seen from above, the decoder noise filter tool 43 may make use of the parameter bwstop.
In some examples, the parameter bwstop is explicitly obtained as a value in the bitstream. In other examples, the parameter bwstop is obtained by the controller 49 on the basis of the bandwidth information 39a (Pbw) in a control field of the bitstream encoded by the encoder. The decoder may have an electronic version of Table 3 stored in a non-transitory storage unit. Accordingly, the bitstream length is reduced.
Therefore, the bandwidth controller 49 (and/or the bandwidth detector and controller 39 of the decoder via the control data 39a) may control the decoder noise filling tool 43.
A global gain may be applied on the spectrum after the noise filling has been applied using, for example, a formula such as
where ggind is a global gain index, e.g., obtained from the encoder.
A TNS decoder tool 45 is here discussed. The quantized reflection coefficients may be obtained for each TNS filter f using
rcq(k,f)=sin[Δ(rci(k,f)−8)] k=0 . . . 8
where rci(k,f) are the quantizer output indices.
The MDCT or MDST spectrum (n) (e.g., as generated by the global gain tool) may then be filtered using a procedure such as:
where (n) is the output of the TNS decoder.
The parameters num_tns_filters, start_freq and stop_freq may be provided on the basis of control information provided by the encoder.
In some examples num_tns_filters, start_freq and/or stop_freq are not explicitly provided in the bitstream. In examples, num_tns_filters, start_freq and stop_freq are derived on the basis of the Nbw value in a control field of the bitstream encoded by the encoder. For example, the decoder may have an electronic version of Table 2 (or at least a portion thereof) stored therein. Accordingly, the bitstream length is reduced.
Therefore, the TNS decoder tool 45 may be controlled by the bandwidth detected at the encoder side.
5.11.1 MDCT or MDST Shaping at the Decoder
An MDCT or MDST shaping tool 46 is here discussed. The LPC or SNS shaping may be performed in the MDCT (FD) domain by applying gain factors computed from the decoded LP filter coefficients transformed to the MDCT or MDST spectrum.
To compute the NB LPC shaping gains, the decoded LP filter coefficients ã may be first transformed into the frequency domain using an odd DFT.
The LPC shaping gains gLPC(b) may then be computed as the reciprocal absolute values of GLPC(b).
The LPC shaping gains gLPC(b) may be applied on the TNS filtered MDCT frequency lines for each band separately in order to generate the shaped spectrum {circumflex over (X)}(k), as outlined, for example, by the following code:
As can be seen above, the MDCT or MDST shaping tool 46 does not need to be restricted to a particular bandwidth and, therefore, does not need to be controlled by the controller 49 or 39.
5.11.2 SNS at the Decoder
The following steps may be performed at the SNS decoder tool 46a:
Step 1: Dequantization
The vector quantizer indices produced in encoder step 8 (see section 5.3.2) are read from the bitstream and used to decode the quantized scale factors scfQ (n).
Step 2: Interpolation
Same as Step 9 at section 5.3.2.
Step 3: Spectral Shaping
The SNS scale factors gSNS(b) are applied on the quantized MDCT (or MDST, or another transform) frequency lines for each band separately in order to generate the decoded spectrum {circumflex over (X)}(k) as outlined by the following code.
{circumflex over (X)}(k)=(k)·gSNS(b) for k=If
An inverse MDCT or MDST tool 48a is here discussed (other tools based on other transformations, such as lapped transformations, may be used).
A reconstructed spectrum {circumflex over (X)}(k) may be transformed to time domain by the following steps:
1. Generation of time domain aliasing buffer {circumflex over (t)}(n)
2. Windowing of time-aliased buffer
{circumflex over (t)}(n)=wN(2NF−1−n)·{circumflex over (t)}(n) for n=0 . . . 2NF−1
3. Conduct overlap-add operation to get reconstructed time samples {circumflex over (x)}(n)
{circumflex over (x)}(n)=mem_ola_add(n)+{circumflex over (t)}(Z+n) for n=0 . . . NF−Z−1
{circumflex over (x)}(n)={circumflex over (t)}(Z+n) for n=NF−Z . . . NF−1
mem_ola_add(n)={circumflex over (t)}(NF+Z+n) for n=0 . . . NF−Z−1
with mem_ola_add(n) initialized to 0 before decoding the first frame.
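The overlap-add equations of step 3 may be implemented directly as follows, reproducing the three index ranges given above and updating the mem_ola_add memory for the next frame.

```python
import numpy as np

def overlap_add(t_hat, mem_ola_add, NF, Z):
    """Reconstruct NF time samples from the 2*NF windowed time-aliasing
    buffer t_hat, combining with the previous frame's overlap memory.

    Returns (x_hat, new_mem_ola_add); new_mem_ola_add holds the
    NF - Z trailing samples t_hat(NF+Z .. 2NF-1) for the next frame.
    """
    x_hat = np.empty(NF)
    # x_hat(n) = mem_ola_add(n) + t_hat(Z+n)  for n = 0 .. NF-Z-1
    x_hat[:NF - Z] = mem_ola_add[:NF - Z] + t_hat[Z:NF]
    # x_hat(n) = t_hat(Z+n)                    for n = NF-Z .. NF-1
    x_hat[NF - Z:] = t_hat[NF:NF + Z]
    # mem_ola_add(n) = t_hat(NF+Z+n)           for n = 0 .. NF-Z-1
    new_mem = t_hat[NF + Z:2 * NF].copy()
    return x_hat, new_mem
```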
With reference to step 1, an MDST may be performed by exchanging the cos function by a sine function, e.g., to have:
As can be seen above, the inverse MDCT or MDST tool 48a is not controlled on the basis of the bandwidth determined at the encoder side.
The apparatus 120 may comprise an output unit 127 for transmitting data, e.g., wirelessly, e.g., using a particular protocol, such as Bluetooth. The apparatus 120 may also comprise an input unit 126 for obtaining data, e.g., wirelessly, e.g., using a particular protocol, such as Bluetooth. For example, the apparatus 120 may obtain, by executing the instructions stored in the non-transitory memory unit 122, a bitstream transmitted by an encoder.
In examples, the apparatus 110 and 120 may be the same device. In examples, the composition of the different apparatus 110 and 120 forms a system.
Depending on certain implementation requirements, examples may be implemented in hardware. The implementation may be performed using a digital storage medium, for example a floppy disk, a Digital Versatile Disc (DVD), a Blu-Ray Disc, a Compact Disc (CD), a Read-only Memory (ROM), a Programmable Read-only Memory (PROM), an Erasable and Programmable Read-only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM) or a flash memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Generally, examples may be implemented as a computer program product with program instructions, the program instructions being operative for performing one of the methods when the computer program product runs on a computer. The program instructions may for example be stored on a machine readable medium.
Other examples comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier. In other words, an example of a method is, therefore, a computer program having program instructions for performing one of the methods described herein, when the computer program runs on a computer.
A further example of the methods is, therefore, a data carrier medium (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier medium, the digital storage medium or the recorded medium are tangible and/or non-transitory, rather than signals which are intangible and transitory.
A further example comprises a processing unit, for example a computer, or a programmable logic device performing one of the methods described herein.
A further example comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further example comprises an apparatus or a system transferring (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some examples, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some examples, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any appropriate hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
This application is a continuation of copending International Application No. PCT/EP2018/080335, filed Nov. 6, 2018, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP 17201082.9, filed Nov. 10, 2017, which is incorporated herein by reference in its entirety.
20150302859 | Aguilar et al. | Oct 2015 | A1 |
20150302861 | Salami et al. | Oct 2015 | A1 |
20150325246 | Philip et al. | Nov 2015 | A1 |
20150371647 | Faure et al. | Dec 2015 | A1 |
20160019898 | Schreiner et al. | Jan 2016 | A1 |
20160027450 | Gao | Jan 2016 | A1 |
20160078878 | Ravelli et al. | Mar 2016 | A1 |
20160111094 | Martin et al. | Apr 2016 | A1 |
20160163326 | Resch et al. | Jun 2016 | A1 |
20160189721 | Johnston et al. | Jun 2016 | A1 |
20160225384 | Kristofer et al. | Aug 2016 | A1 |
20160285718 | Bruhn | Sep 2016 | A1 |
20160293174 | Atti | Oct 2016 | A1 |
20160293175 | Atti et al. | Oct 2016 | A1 |
20160307576 | Stefan et al. | Oct 2016 | A1 |
20160365097 | Guan et al. | Dec 2016 | A1 |
20160372125 | Atti et al. | Dec 2016 | A1 |
20160372126 | Atti et al. | Dec 2016 | A1 |
20160379649 | Lecomte et al. | Dec 2016 | A1 |
20160379655 | Truman et al. | Dec 2016 | A1 |
20170011747 | Faure et al. | Jan 2017 | A1 |
20170053658 | Atti et al. | Feb 2017 | A1 |
20170078794 | Bongiovi et al. | Mar 2017 | A1 |
20170103769 | Laaksonen | Apr 2017 | A1 |
20170110135 | Disch et al. | Apr 2017 | A1 |
20170133029 | Markovic et al. | May 2017 | A1 |
20170140769 | Ravelli et al. | May 2017 | A1 |
20170154631 | Bayer et al. | Jun 2017 | A1 |
20170154635 | Doehla et al. | Jun 2017 | A1 |
20170221495 | Sung | Aug 2017 | A1 |
20170236521 | Venkatraman et al. | Aug 2017 | A1 |
20170249387 | Hatami-Hanza | Aug 2017 | A1 |
20170256266 | Sung | Sep 2017 | A1 |
20170294196 | Bradley et al. | Oct 2017 | A1 |
20170303114 | Johansson | Oct 2017 | A1 |
20190027156 | Sung | Jan 2019 | A1 |
Number | Date | Country |
---|---|---|
101140759 | Mar 2008 | CN |
102779526 | Nov 2012 | CN |
107103908 | Aug 2017 | CN |
0716787 | Jun 1996 | EP |
0732687 | Sep 1996 | EP |
1791115 | May 2007 | EP |
2676266 | Dec 2013 | EP |
2980796 | Feb 2016 | EP |
2980799 | Feb 2016 | EP |
3111624 | Jan 2017 | EP |
2944664 | Oct 2010 | FR |
H05-281996 | Oct 1993 | JP |
H07-28499 | Jan 1995 | JP |
H0811644 | Jan 1996 | JP |
H9-204197 | Aug 1997 | JP |
H10-51313 | Feb 1998 | JP |
H1091194 | Apr 1998 | JP |
H11-330977 | Nov 1999 | JP |
2004-138756 | May 2004 | JP |
2006-527864 | Dec 2006 | JP |
2007519014 | Jul 2007 | JP |
2007-525718 | Sep 2007 | JP |
2009-003387 | Jan 2009 | JP |
2009-008836 | Jan 2009 | JP |
2009-538460 | Nov 2009 | JP |
2010-500631 | Jan 2010 | JP |
2010-501955 | Jan 2010 | JP |
2012-533094 | Dec 2012 | JP |
2016-523380 | Aug 2016 | JP |
2016-200750 | Dec 2016 | JP |
2017-522604 | Aug 2017 | JP |
2017-528752 | Sep 2017 | JP |
100261253 | Jul 2000 | KR |
20030031936 | Apr 2003 | KR |
1020050007853 | Jan 2005 | KR |
1020090077951 | Jul 2009 | KR |
10-2010-0136890 | Dec 2010 | KR |
20130019004 | Feb 2013 | KR |
10-2016-0079056 | Jul 2016 | KR |
1020160144978 | Dec 2016 | KR |
20170000933 | Jan 2017 | KR |
2337414 | Oct 2008 | RU |
2376657 | Dec 2009 | RU |
2413312 | Feb 2011 | RU |
2419891 | May 2011 | RU |
2439718 | Jan 2012 | RU |
2483365 | May 2013 | RU |
2520402 | Jun 2014 | RU |
2568381 | Nov 2015 | RU |
2596594 | Sep 2016 | RU |
2596596 | Sep 2016 | RU |
2015136540 | Mar 2017 | RU |
2628162 | Aug 2017 | RU |
2016105619 | Aug 2017 | RU |
200809770 | Feb 2008 | TW |
201005730 | Feb 2010 | TW |
201126510 | Aug 2011 | TW |
201131550 | Sep 2011 | TW |
201207839 | Feb 2012 | TW |
201243832 | Nov 2012 | TW |
201612896 | Apr 2016 | TW |
201618080 | May 2016 | TW |
201618086 | May 2016 | TW |
201642246 | Dec 2016 | TW |
201642247 | Dec 2016 | TW |
201705126 | Feb 2017 | TW |
201711021 | Mar 2017 | TW |
201713061 | Apr 2017 | TW |
201724085 | Jul 2017 | TW |
201732779 | Sep 2017 | TW |
9916050 | Apr 1999 | WO |
2004072951 | Aug 2004 | WO |
2005086138 | Sep 2005 | WO |
2005086139 | Sep 2005 | WO |
2007073604 | Jul 2007 | WO |
2007138511 | Dec 2007 | WO |
2008025918 | Mar 2008 | WO |
2008046505 | Apr 2008 | WO |
2009066869 | May 2009 | WO |
2011048118 | Apr 2011 | WO |
2011086066 | Jul 2011 | WO |
2011086067 | Jul 2011 | WO |
2012000882 | Jan 2012 | WO |
2012126893 | Sep 2012 | WO |
2014165668 | Oct 2014 | WO |
2014202535 | Dec 2014 | WO |
2015063045 | May 2015 | WO |
2015063227 | May 2015 | WO |
2015071173 | May 2015 | WO |
2015174911 | Nov 2015 | WO |
2016016121 | Feb 2016 | WO |
2016142002 | Sep 2016 | WO |
2016142337 | Sep 2016 | WO |
Entry |
---|
Dietz, Martin, et al. “Overview of the EVS codec architecture.” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. (Year: 2015). |
Tetsuyuki Okumachi, “Office Action for JP Application 2020-118837”, dated Jul. 16, 2021, JPO, Japan. |
Tetsuyuki Okumachi, “Office Action for JP Application 2020-118838”, dated Jul. 16, 2021, JPO, Japan. |
John Tan, “Office Action for SG Application 11202004173P”, dated Jul. 23, 2021, IPOS, Singapore. |
Guojun Lu et al., “A Technique towards Automatic Audio Classification and Retrieval”, Fourth International Conference on Signal Processing, 1998, IEEE, Oct. 12, 1998, pp. 1142-1145. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526135”, dated May 21, 2021, JPO, Japan. |
“Decision on Grant Patent for Invention for RU Application No. 2020118949”, dated Nov. 11, 2020, Rospatent, Russia. |
Takeshi Yamashita, “Office Action for JP Application 2020-524877”, dated Jun. 24, 2021, JPO, Japan. |
P.A. Volkov, “Office Action for RU Application No. 2020120251”, dated Oct. 28, 2020, Rospatent, Russia. |
P.A. Volkov, “Office Action for RU Application No. 2020120256”, dated Oct. 28, 2020, Rospatent, Russia. |
D.V.Travnikov, “Decision on Grant for RU Application No. 2020118969”, dated Nov. 2, 2020, Rospatent, Russia. |
Lakshmi Narayana Chinta, “Office Action for IN Application No. 202037018098”, dated Jul. 13, 2021, Intellectual Property India, India. |
ETSI TS 126 445 V13.2.0 (Aug. 2016), Universal Mobile Telecommunications System (UMTS); LTE; Codec for Enhanced Voice Services (EVS); Detailed algorithmic description (3GPP TS 26.445 version 13.2.0 Release 13) [Online]. Available: http://www.3gpp.org/ftp/Specs/archive/26_series/26.445/26445-d00.zip. |
Geiger, “Audio Coding based on integer transform”, Ilmenau: https://www.db-thueringen.de/receive/dbt_mods_00010054, 2004. |
Henrique S Malvar, “Biorthogonal and Nonuniform Lapped Transforms for Transform Coding with Reduced Blocking and Ringing Artifacts”, IEEE Transactions on Signal Processing, IEEE Service Center, New York, NY, US, (Apr. 1998), vol. 46, No. 4, ISSN 1053-587X, XP011058114. |
Anonymous, “ISO/IEC 14496-3:2005/FDAM 9, AAC-ELD”, 82. MPEG Meeting; Oct. 22, 2007-Oct. 26, 2007; Shenzhen; (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11), (Feb. 21, 2008), No. N9499, XP030015994. |
Virette, “Low Delay Transform for High Quality Low Delay Audio Coding”, Université de Rennes 1, (Dec. 10, 2012), pp. 1-195, URL: https://hal.inria.fr/tel-01205574/document, (Mar. 30, 2016), XP055261425. |
ISO/IEC 14496-3:2001; Information technology—Coding of audio-visual objects—Part 3: Audio. |
3GPP TS 26.403 v14.0.0 (Mar. 2017); General audio codec audio processing functions; Enhanced aacPlus general audio codec; Encoder specification; Advanced Audio Coding (AAC) part; (Release 14). |
ISO/IEC 23003-3; Information technology—MPEG audio technologies—Part 3: Unified speech and audio coding, 2011. |
3GPP TS 26.445 V14.1.0 (Jun. 2017), 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Detailed Algorithmic Description (Release 14), http://www.3gpp.org/ftp//Specs/archive/26_series/26.445/26445-e10.zip, Section 5.1.6 “Bandwidth detection”. |
Eksler Vaclav et al, “Audio bandwidth detection in the EVS codec”, 2015 IEEE Global Conference on Signal and Information Processing (GLOBALSIP), IEEE, (Dec. 14, 2015), doi:10.1109/GLOBALSIP.2015.7418243, pp. 488-492, XP032871707. |
Oger M et al, “Transform Audio Coding with Arithmetic-Coded Scalar Quantization and Model-Based Bit Allocation”, International Conference on Acoustics, Speech, and Signal Processing, IEEE, XX, Apr. 15, 2007 (Apr. 15, 2007), p. IV-545, XP002464925. |
Asad et al., “An enhanced least significant bit modification technique for audio steganography”, International Conference on Computer Networks and Information Technology, Jul. 11-13, 2011. |
Makandar et al, “Least Significant Bit Coding Analysis for Audio Steganography”, Journal of Future Generation Computing, vol. 2, No. 3, Mar. 2018. |
ISO/IEC 23008-3:2015; Information technology—High efficiency coding and media delivery in heterogeneous environments—Part 3: 3D audio. |
ITU-T G.718 (Jun. 2008): Series G: Transmission Systems and Media, Digital Systems and Networks, Digital terminal equipments—Coding of voice and audio signals, Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s. |
3GPP TS 26.447 V14.1.0 (Jun. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Error Concealment of Lost Packets (Release 14). |
DVB Organization, “ISO-IEC 23008-3_A3_(E)_(H 3DA FDAM3).docx”, DVB, Digital Video Broadcasting, C/O EBU, 17A Ancienne Route, CH-1218 Grand Saconnex, Geneva, Switzerland, (Jun. 13, 2016), XP017851888. |
Hill et al., “Exponential stability of time-varying linear systems,” IMA J Numer Anal, pp. 865-885, 2011. |
3GPP TS 26.090 V14.0.0 (Mar. 2017), 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mandatory Speech Codec speech processing functions; Adaptive Multi-Rate (AMR) speech codec; Transcoding functions (Release 14). |
3GPP TS 26.190 V14.0.0 (Mar. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Speech codec speech processing functions; Adaptive Multi-Rate—Wideband (AMR-WB) speech codec; Transcoding functions (Release 14). |
3GPP TS 26.290 V14.0.0 (Mar. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Audio codec processing functions; Extended Adaptive Multi-Rate—Wideband (AMR-WB+) codec; Transcoding functions (Release 14). |
Edler et al., “Perceptual Audio Coding Using a Time-Varying Linear Pre- and Post-Filter,” in AES 109th Convention, Los Angeles, 2000. |
Gray et al., “Digital lattice and ladder filter synthesis,” IEEE Transactions on Audio and Electroacoustics, vol. vol. 21, No. No. 6, pp. 491-500, 1973. |
Lamoureux et al., “Stability of time variant filters,” CREWES Research Report—vol. 19, 2007. |
Herre et al., “Enhancing the performance of perceptual audio coders by using temporal noise shaping (TNS).” Audio Engineering Society Convention 101. Audio Engineering Society, 1996. |
Herre et al., “Continuously signal-adaptive filterbank for high-quality perceptual audio coding.” Applications of Signal Processing to Audio and Acoustics, 1997. 1997 IEEE ASSP Workshop on. IEEE, 1997. |
Herre, “Temporal noise shaping, quantization and coding methods in perceptual audio coding: A tutorial introduction.” Audio Engineering Society Conference: 17th International Conference: High-Quality Audio Coding. Audio Engineering Society, 1999. |
Fuchs Guillaume et al, “Low delay LPC and MDCT-based audio coding in the EVS codec”, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, (Apr. 19, 2015), doi: 10.1109/ICASSP.2015.7179068, pp. 5723-5727, XP033187858. |
Niamut et al, “RD Optimal Temporal Noise Shaping for Transform Audio Coding”, Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on Toulouse, France May 14-19, 2006, IEEE, Piscataway, NJ, USA, (Jan. 1, 2006), doi:10.1109/ICASSP.2006.1661244, ISBN 978-1-4244-0469-8, pages V-V, XP031015996. |
ITU-T G.711 (Sep. 1999): Series G: Transmission Systems and Media, Digital Systems and Networks, Digital transmission systems—Terminal equipments—Coding of analogue signals by pulse code modulation, Pulse code modulation (PCM) of voice frequencies, Appendix I: A high quality low-complexity algorithm for packet loss concealment with G.711. |
Cheveigne et al.,“YIN, a fundamental frequency estimator for speech and music.” The Journal of the Acoustical Society of America 111.4 (2002): 1917-1930. |
Ojala P et al, “A novel pitch-lag search method using adaptive weighting and median filtering”, Speech Coding Proceedings, 1999 IEEE Workshop on Porvoo, Finland Jun. 20-23, 1999, Piscataway, NJ, USA, IEEE, US, (Jun. 20, 1999), doi:10.1109/SCFT.1999.781502, ISBN 978-0-7803-5651-1, pp. 114-116, XP010345546. |
“5 Functional description of the encoder”, Dec. 10, 2014 (Dec. 10, 2014), 3GPP Standard; 26445-C10_1_S05_S0501, 3rd Generation Partnership Project (3GPP), Mobile Competence Centre; 650, Route Des Lucioles; F-06921 Sophia-Antipolis Cedex; France. Retrieved from the Internet: URL: http://www.3gpp.org/ftp/Specs/2014-12/Rel-12/26_series/ XP050907035. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526081”, dated Jun. 22, 2021, JPO, Japan. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526084”, dated Jun. 23, 2021, JPO, Japan. |
Tomonori Kikuchi, “Office Action for JP Application No. 2020-524874”, dated Jun. 2, 2021, JPO, Japan. |
O.E. Groshev, “Office Action for RU Application No. 2020118947”, dated Dec. 1, 2020, Rospatent, Russia. |
O.I. Starukhina, “Office Action for RU Application No. 2020118968”, dated Dec. 23, 2020, Rospatent, Russia. |
Sujoy Sarkar, “Examination Report for IN Application No. 202037018091”, dated Jun. 1, 2021, Intellectual Property India, India. |
Miao Xiaohong, “Examination Report for SG Application No. 11202004228V”, dated Sep. 2, 2021, IPOS, Singapore. |
Miao Xiaohong, “Search Report for SG Application No. 11202004228V”, dated Sep. 3, 2021, IPOS, Singapore. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7015512”, dated Sep. 9, 2021, KIPO, Republic of Korea. |
Santosh Mehtry, “Office Action for IN Application No. 202037019203”, dated Mar. 19, 2021, Intellectual Property India, India. |
Khalid Sayood, “Introduction to Data Compression”, Elsevier Science & Technology, 2005, Section 16.4, Figure 16.13, p. 526. |
Patterson et al., “Computer Organization and Design”, The hardware/software Interface, Revised Fourth Edition, Elsevier, 2012. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016424”, dated Feb. 9, 2022, KIPO, Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016503”, dated Feb. 9, 2022, KIPO, Korea. |
International Telecommunication Union, “G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729”, ITU-T Recommendation G.729.1, May 2006. |
3GPP TS 26.445, “Universal Mobile Telecommunications System (UMTS); LTE; Codec for Enhanced Voice Services (EVS); Detailed algorithmic description (3GPP TS 26.445 version 13.4.1 Release 13)”, ETSI TS 126 445 V13.4.1, Apr. 2017. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016100”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016224”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7015835”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Kazunori Mochimura, “Decision to Grant a Patent for JP application No. 2020-524579”, dated Nov. 29, 2021, JPO, Japan. |
ETSI TS 126 445 V12.0.0, “Universal Mobile Telecommunications System (UMTS); LTE; EVS Codec Detailed Algorithmic Description (3GPP TS 26.445 version 12.0.0 Release 12)”, Nov. 2014. |
ETSI TS 126 403 V6.0.0, “Universal Mobile Telecommunications System (UMTS); General audio codec audio processing functions; Enhanced aacPlus general audio codec; Encoder specification; Advanced Audio Coding (AAC) part (3GPP TS 26.403 version 6.0.0 Release 6)”, Sep. 2004. |
ETSI TS 126 401 V6.2.0, “Universal Mobile Telecommunications System (UMTS); General audio codec audio processing functions; Enhanced aacPlus general audio codec; General description (3GPP TS 26.401 version 6.2.0 Release 6)”, Mar. 2005. |
3GPP TS 26.405, “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; General audio codec audio processing functions; Enhanced aacPlus general audio codec; Encoder specification; parametric stereo part (Release 6)”, Sep. 2004. |
3GPP TS 26.447 V12.0.0, “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Error Concealment of Lost Packets (Release 12)”, Sep. 2014. |
ISO/IEC Fdis 23003-3:2011 (E), “Information technology—MPEG audio technologies—Part 3: Unified speech and audio coding”, ISO/IEC JTC 1/SC 29/WG 11, Sep. 20, 2011. |
Valin et al., “Definition of the Opus Audio Codec”, Internet Engineering Task Force (IETF) RFC 6716, Sep. 2012. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015511”, dated Apr. 19, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7016100”, dated Apr. 21, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015836”, dated Apr. 28, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015512”, dated Apr. 20, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015835”, dated Apr. 22, 2022, KIPO, Republic of Korea. |
Xiong-Malvar, “A Nonuniform Modulated Complex Lapped Transform”, IEEE Signal Processing Letters, vol. 8, No. 9, Sep. 2001. (Year: 2001). |
Raj et al., “An Overview of MDCT for Time Domain Aliasing Cancellation”, 2014 International Conference on Communication and Network Technologies (ICCNT). (Year: 2014). |
Malvar, “Biorthogonal and Nonuniform Lapped Transforms for Transform Coding with Reduced Blocking and Ringing Artifacts”, IEEE Transactions on Signal Processing, vol. 46, No. 4, Apr. 1998. (Year: 1998). |
Malvar, “Lapped Transforms for Efficient Transform/Subband Coding”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, No. 6, Jun. 1990. (Year: 1990). |
Malvar, “Fast Algorithms for Orthogonal and Biorthogonal Modulated Lapped Transforms”, Microsoft Research, 1998. (Year: 1998). |
Princen-Bradley, “Analysis/Synthesis Filter Bank Design Based on Time Domain Aliasing Cancellation”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-34, No. 5, Oct. 1986. (Year: 1986). |
Shlien, “The Modulated Lapped Transform, Its Time-Varying Forms, and Its Applications to Audio Coding Standards”, IEEE Transactions on Speech and Audio Processing, vol. 5, No. 4, Jul. 1997. (Year: 1997). |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7016224”, dated Jul. 25, 2022, KIPO, Republic of Korea. |
Number | Date | Country | |
---|---|---|---|
20200265852 A1 | Aug 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2018/080335 | Nov 2018 | US |
Child | 16866280 | US |