Examples of methods and apparatus are here provided which are capable of performing a low complexity pitch detection procedure, e.g., for long term postfiltering, LTPF, encoding.
For example, examples are capable of selecting a pitch lag for an information signal, e.g. audio signal, e.g., for performing LTPF.
Transform-based audio codecs generally introduce inter-harmonic noise when processing harmonic audio signals, particularly at low delay and low bitrate. This inter-harmonic noise is generally perceived as a very annoying artefact, significantly reducing the performance of the transform-based audio codec when subjectively evaluated on highly tonal audio material.
Long Term Post Filtering (LTPF) is a tool for transform-based audio coding that helps reduce this inter-harmonic noise. It relies on a post-filter that is applied to the time-domain signal after transform decoding. This post-filter is essentially an infinite impulse response (IIR) filter with a comb-like frequency response controlled by two parameters: a pitch lag and a gain.
For better robustness, the post-filter parameters (a pitch lag and/or a gain per frame) are estimated at the encoder-side and encoded in a bitstream when the gain is non-zero. The case of the zero gain is signalled with one bit and corresponds to an inactive post-filter, used when the signal does not contain a harmonic part.
LTPF was first introduced in the 3GPP EVS standard [1] and later integrated into the MPEG-H 3D-audio standard [2]. Corresponding patents are [3] and [4].
A pitch detection algorithm estimates one pitch lag per frame. It is usually performed at a low sampling rate (e.g. 6.4 kHz) in order to reduce the complexity. It should ideally provide an accurate, stable and continuous estimation.
When used for LTPF encoding, it is most important to have a continuous pitch contour; otherwise, some instability artefacts could be heard in the LTPF-filtered output signal. Not finding the true fundamental frequency F0 (for example, finding a multiple of it) is of less importance, because it does not result in severe artefacts but only in a slight degradation of the LTPF performance.
Another important characteristic of a pitch detection algorithm is its computational complexity. When implemented in an audio codec targeting low power devices or even ultra-low power devices, its computational complexity should be as low as possible.
An example of an LTPF encoder can be found in the public domain; it is described in the 3GPP EVS standard [1]. This implementation uses a pitch detection algorithm described in Sec. 5.1.10 of the standard specification. This pitch detection algorithm performs well and works nicely with LTPF because it gives a very stable and continuous pitch contour. Its main drawback, however, is its relatively high complexity.
Even though they were never used for LTPF encoding, other existing pitch detection algorithms could in theory be used for LTPF. One example is YIN [6], a pitch detection algorithm often recognized as one of the most accurate. YIN is, however, very complex, even significantly more so than the one in [1].
Another example worth mentioning is the pitch detection algorithm used in the 3GPP AMR-WB standard [7], which has a significantly lower complexity than the one in [1], but also worse performance: in particular, it gives a less stable and continuous pitch contour. Conventional technology comprises the following disclosures:
There are some cases, however, for which the pitch lag estimation should be improved. Current low-complexity pitch detection algorithms (like the one in [7]) have a performance which is not satisfactory for LTPF, particularly for complex signals like polyphonic music. The pitch contour can be very unstable, even during stationary tones. This is due to jumps between the local maxima of the weighted autocorrelation function.
Therefore, there is a need to obtain pitch lag estimations which better adapt to complex signals, at the same or lower complexity than conventional technology.
According to an embodiment, an apparatus for encoding an information signal including a plurality of frames may have: a first estimator configured to obtain a first estimate, the first estimate being an estimate of a pitch lag for a current frame, wherein the first estimate is obtained as the lag that maximizes a first correlation function associated to the current frame; a second estimator configured to obtain a second estimate, the second estimate being another estimate of a pitch lag for the current frame, wherein the second estimator is conditioned by the pitch lag selected at the previous frame so as to obtain the second estimate for the current frame, wherein the second estimator is configured to obtain the second estimate by searching the lag which maximizes a second correlation function in a second subinterval which contains the pitch lag selected for the previous frame; and a selector configured to choose a selected value by performing a selection between the first estimate and the second estimate on the basis of first and second correlation measurements, wherein the selector is configured to perform a comparison between: a downscaled version of a first normalized autocorrelation measurement associated to the current frame and obtained at a lag corresponding to the first estimate; and a second normalized autocorrelation measurement associated to the current frame and obtained at a lag corresponding to the second estimate, so as to select the first estimate when the second normalized autocorrelation measurement is less than the downscaled version of the first normalized autocorrelation measurement, and/or to select the second estimate when the second normalized autocorrelation measurement is greater than the downscaled version of the first normalized autocorrelation measurement.
According to another embodiment, a system may have an encoder side and a decoder side, the encoder side including the inventive apparatus, the decoder side including a long term postfiltering tool controlled on the basis of the pitch lag estimate selected by the selector.
According to another embodiment, a method for determining a pitch lag for a signal divided into frames may have the steps of: performing a first estimation for a current frame to obtain a first estimate as the lag that maximizes a first correlation function associated to the current frame; performing a second estimation for the current frame obtained by searching for the lag which maximizes a second correlation function in a second subinterval which contains the pitch lag selected for the previous frame, wherein performing the second estimation is obtained on the basis of the result of a selecting step performed at the previous frame; and selecting between the first estimate obtained at the first estimation and the second estimate obtained at the second estimation on the basis of first and second normalized autocorrelation measurements, wherein selecting includes performing a comparison between: a downscaled version of the first normalized autocorrelation measurement, associated to the current frame and obtained at a lag corresponding to the first estimate; and the second normalized autocorrelation measurement, associated to the current frame and obtained at a lag corresponding to the second estimate; and selecting the first estimate when the second normalized autocorrelation measurement is less than the downscaled version of the first normalized autocorrelation measurement, and/or selecting the second estimate when the second normalized autocorrelation measurement is greater than the downscaled version of the first normalized autocorrelation measurement.
According to another embodiment, a method for encoding a bitstream for a signal divided into frames may have the steps of: performing the inventive method for determining a pitch lag; and encoding data useful for performing LTPF at the decoder, the data useful for performing LTPF including the selected value.
Another embodiment may have a non-transitory digital storage medium having a computer program stored thereon to perform any of the inventive methods when said computer program is run by a computer.
In accordance with examples, there is provided an apparatus for encoding an information signal including a plurality of frames, the apparatus comprising:
characterized in that the selector is configured to:
In accordance with examples, there is provided an apparatus for encoding an information signal into a bitstream (63) including a plurality of frames, the apparatus (60a) comprising:
In accordance with examples, there is provided an apparatus for encoding an information signal including a plurality of frames, the apparatus comprising:
In accordance with examples, the selector is configured to perform a comparison between:
In accordance with examples, the selector is configured to perform a comparison between:
In accordance with examples, the selector is configured to:
so as to select the first estimate when the second correlation measurement is less than the downscaled version of the first correlation measurement, and/or
to select the second estimate when the second correlation measurement is greater than the downscaled version of the first correlation measurement.
In accordance with examples, at least one of the first and second correlation measurements is an autocorrelation measurement and/or a normalized autocorrelation measurement.
A transform coder to generate a representation of the information signal or a processed version thereof may be implemented.
In accordance with examples, the second estimator is configured to:
In accordance with examples, the second subinterval contains lags within a distance less than a pre-defined lag number threshold from the pitch lag selected for the previous frame.
In accordance with examples, the second estimator is configured to:
In accordance with examples, the first estimator is configured to:
In accordance with examples, the first correlation function is restricted to lags in a first subinterval.
In accordance with examples, the first subinterval contains a number of lags greater than that of the second subinterval, and/or at least some of the lags in the second subinterval are comprised in the first subinterval.
In accordance with examples, the first estimator is configured to:
In accordance with examples, at least one of the second and first correlation functions is an autocorrelation function and/or a normalized autocorrelation function.
In accordance with examples, the first estimator is configured to obtain the first estimate T1 by performing at least some of the following operations:
w(k) being a weighting function, kmin and kmax being associated to a minimum lag and a maximum lag, R being an autocorrelation measurement value estimated on the basis of the information signal or a processed version thereof and N being the frame length.
In accordance with examples, the second estimator is configured to obtain the second estimate T2 by performing:
with k′min=max (kmin, Tprev−δ), k′max=min (kmax, Tprev+δ), Tprev being the selected estimate in the preceding frame, δ being a distance from Tprev, and kmin and kmax being associated to a minimum lag and a maximum lag.
In accordance with examples, the selector is configured to perform a selection of the pitch lag estimate Tcurr in terms of
with T1 being the first estimate, T2 being the second estimate, x being a value of the information signal or a processed version thereof, normcorr(x, N, T) being the normalized correlation measurement of the signal x of length N at lag T, α being a downscaling coefficient.
In accordance with examples, there is provided, downstream of the selector, a long term postfiltering, LTPF, tool for controlling a long term postfilter at a decoder apparatus.
In accordance with examples, the information signal is an audio signal.
In accordance with examples, the apparatus is configured to obtain the first correlation measurement as a measurement of harmonicity of the current frame and the second correlation measurement as a measurement of harmonicity of the current frame restricted to a subinterval defined for the previous frame.
In accordance with examples, the apparatus is configured to obtain the first and second correlation measurements using the same correlation function up to a weighting function.
In accordance with examples, the apparatus is configured to obtain the first correlation measurement as the normalized version of the first estimate up to a weighting function.
In accordance with examples, the apparatus is configured to obtain the second correlation measurement as the normalized version of the second estimate.
In accordance with examples, there is provided a system comprising an encoder side and a decoder side, the encoder side being as above, the decoder side comprising a long term postfiltering tool controlled on the basis of the pitch lag estimate selected by the selector.
In accordance with examples, there is provided a method for determining a pitch lag for a signal divided into frames, comprising:
In accordance with examples, the method may comprise using the selected lag for long term postfiltering, LTPF.
In accordance with examples, the method may comprise using the selected lag for packet loss concealment, PLC.
In accordance with examples, there is provided a method for determining a pitch lag for a signal divided into frames, comprising:
characterized in that selecting includes performing a comparison between:
wherein at least one of the first and second correlation measurements is an autocorrelation measurement and/or a normalized autocorrelation measurement.
In accordance with examples, there is provided a method for encoding a bitstream for a signal divided into frames, comprising:
wherein selecting includes performing a comparison between:
the method further comprising encoding data useful for performing LTPF at the decoder, the data including the selected value.
In accordance with examples, there is provided a program comprising instructions which, when executed by a processor, cause the processor to perform any of the methods above or below.
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
5a-5d show diagrams of correlation functions.
Examples of low-complexity pitch detection procedures, systems, and apparatus, e.g., for LTPF encoding and/or decoding, are disclosed.
An information signal may be described in the time domain, TD, as a succession of samples (e.g., x(n)) acquired at different discrete time instants (n). The TD representation may comprise a plurality of frames, each associated to a plurality of samples. The frames may be seen in a sequence one after the other, so that a current frame is temporally before a subsequent frame and temporally after a previous frame. It is possible to operate iteratively, so that operations performed on the previous frame are repeated for the current frame.
During an iteration associated to a current frame, it is possible to perform at least some operations (e.g., obtaining a second estimate) which are conditioned by the selection performed at the previous iteration associated to the previous frame. Therefore, the history of the signal at the previous frame is taken into account, e.g., for selecting the pitch lag to be used by the decoder for performing long term postfiltering (LTPF).
The final estimate (selected value) 19 may also be input to a register 19′ and be used, when performing an iteration on the subsequent frame, as an input 19″ (Tprev) to the second estimator 12 regarding the previously performed selection. For each frame 13, the second estimator 12 obtains the second estimate 16 on the basis of the final estimate 19″ selected for the previous frame.
Subsequently, at step S104, the frames are updated: the frame that was the “current frame” becomes the “previous frame”, while a new (subsequent) frame becomes the new “current frame”. After the update, the method may be iterated.
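The per-frame iteration described above may be sketched as follows. This is a hedged illustration: the function names first_estimate, second_estimate and select are placeholders for the operations of the first estimator 11, the second estimator 12 and the selector 17, not names from any specification.

```python
def process_frames(frames, first_estimate, second_estimate, select):
    """Run the estimate/select loop frame by frame (illustrative sketch)."""
    t_prev = None          # register (19') holding the previously selected lag
    selected = []
    for frame in frames:
        t1 = first_estimate(frame)                          # global search
        # The second estimate is conditioned by the previous selection;
        # for the very first frame there is no history, so fall back to t1.
        t2 = t1 if t_prev is None else second_estimate(frame, t_prev)
        t_best = select(frame, t1, t2)
        selected.append(t_best)
        t_prev = t_best    # the "current frame" becomes the "previous frame"
    return selected
```

The estimators and the selector can be plugged in as ordinary functions, which keeps the loop itself independent of the correlation details.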
Operations of the first estimator 11 which may be used, in examples, for providing a first estimate 14 on the basis of the current frame 13 are here discussed. The method 30 is shown in
An input signal x(n) at sampling rate F is resampled to a lower sampling rate F1 (e.g. F1=12.8 kHz). The resampling can be implemented using e.g. a classic upsampling+low-pass+downsampling approach. The present step is optional in some examples.
The resampled signal is then high-pass filtered using e.g. a second-order IIR filter with a 3 dB cutoff at 50 Hz. The resulting signal is denoted x1(n). The present step is optional in some examples.
The signal x1(n) is further downsampled by a factor of 2 using e.g. a 4th-order FIR low-pass filter followed by a decimator. The resulting signal at sampling rate F2=F1/2 (e.g. F2=6.4 kHz) is denoted x2(n). The present step is optional in some examples.
An autocorrelation process may be performed. For example, an autocorrelation may be processed on x2 (n) by
where N is the frame size. Tmin and Tmax are the minimum and maximum values for retrieving the pitch lag (e.g. Tmin=32 and Tmax=228). Tmin and Tmax may therefore constitute the extremities of a first interval where the first estimate (pitch lag of the current frame) is to be found.
The autocorrelation may be weighted in order to emphasize the lower pitch lags
Rw(T)=R(T)w(T),T=Tmin, . . . ,Tmax
where w(T) is a decreasing function (e.g., a monotonically decreasing function), given e.g. by
The first estimate T1 is the value that maximizes the weighted autocorrelation:
The first estimate T1 may be provided as output 14 of the first estimator 11. This may be an estimate of pitch lag for the present frame.
R(T) (or its weighted version Rw(T)) is an example of a first correlation function whose maximum value is associated to the first pitch lag estimate 14 (T1).
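A minimal Python sketch of this first estimation is given below. The forward correlation convention x[k]*x[k+T] and the 1/T-style weight are assumptions for illustration; the actual summation bounds and weighting function are those given in the formulas above.

```python
def autocorr(x, n, t):
    # R(T) over a frame of n samples, forward convention x[k]*x[k+t]
    # (an assumption -- the exact summation bounds may differ).
    return sum(x[k] * x[k + t] for k in range(n))

def first_estimate(x, n, t_min, t_max, w):
    # T1: the lag in [t_min, t_max] that maximizes the weighted
    # autocorrelation Rw(T) = R(T) * w(T).
    return max(range(t_min, t_max + 1),
               key=lambda t: autocorr(x, n, t) * w(t))
```

For a periodic signal with period P, a monotonically decreasing weight keeps the estimate at P rather than at one of its multiples, as discussed below.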
Operations of the second estimator 12 (and/or step S102) which may be used, in examples, for providing a second estimate 16 on the basis of the current frame 13 and the previously selected (output) estimate 19″ (pitch lag obtained for the previous frame) are here discussed. The method 40 is shown in
With reference to
According to examples, at step S42, autocorrelation values within the subinterval 52 are calculated, e.g., by the second measurer 22.
According to examples, at step S42, the maximum value among the results of the autocorrelation is retrieved. The second estimate T2 is the value that maximizes the autocorrelation among the lags within the second subinterval, which is centered at the previously selected value 19″ (the pitch lag of the previous frame), e.g.:
where Tprev is the final pitch lag 51 (19″) as previously selected (by the selector 17) and δ is a constant (e.g. δ=4) which defines the subinterval 52. The value T2 may be provided as output 16 of the second estimator 12.
Notably, the first estimate 14 and the second estimate 16 may be significantly different from each other.
R(T) (whose domain is here restricted between Tprev−δ and Tprev+δ) is an example of a second correlation function whose maximum value is associated to the second pitch lag estimate 16 (T2).
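The restricted search may be sketched as follows (again a hedged illustration; t_prev is the lag selected for the previous frame, and the forward correlation convention is an assumption):

```python
def autocorr(x, n, t):
    # Non-weighted autocorrelation, forward convention (an assumption).
    return sum(x[k] * x[k + t] for k in range(n))

def second_estimate(x, n, t_min, t_max, t_prev, delta=4):
    # Search only the subinterval [t_prev - delta, t_prev + delta],
    # clipped to the admissible lag range [t_min, t_max].
    lo = max(t_min, t_prev - delta)
    hi = min(t_max, t_prev + delta)
    return max(range(lo, hi + 1), key=lambda t: autocorr(x, n, t))
```

Because the subinterval contains at most 2δ+1 lags, this second search adds very little complexity on top of the first estimation.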
The first measurer 21 and/or the second measurer 22 may perform correlation measurements. The first measurer 21 and/or the second measurer 22 may perform autocorrelation measurements. The correlation and/or autocorrelation measurements may be normalized. An example is here provided: normcorr(T) may be the normalized correlation of the signal x at pitch lag T
Therefore, the first correlation measurement 23 may be normcorr(T1), where T1 is the first estimate 14, and the second correlation measurement 25 may be normcorr(T2), where T2 is the second estimate 16.
Notably, first correlation measurement 23 is the normalized value of R(T1) (or Rw(T1)), while the second correlation measurement 25 is the normalized value of R(T2).
It is now possible to give an example of how to compare the correlation measurements for performing the selection. An example is provided by the following formula:
αnormcorr(T1) may be seen as a pitch lag selection threshold 24: if normcorr(T2)≤αnormcorr(T1), the selector chooses T1; otherwise, the selector chooses T2. The value Tbest (or information associated thereto) may therefore be the selected output value 19 (either T1 or T2), provided to the decoder (e.g., for LTPF) and used, as 19″, by the second estimator 12 for obtaining the second estimate 16 in the subsequent frame.
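This selection may be sketched as below. The normalized-correlation formula is one common definition and α=0.85 an illustrative value; the exact expressions used by a given codec may differ in windowing details.

```python
def normcorr(x, n, t):
    # Normalized correlation at lag t: 1 for a perfectly periodic
    # signal, near 0 for an aperiodic one (one common definition).
    num = sum(x[k] * x[k + t] for k in range(n))
    den = (sum(x[k] ** 2 for k in range(n)) *
           sum(x[k + t] ** 2 for k in range(n))) ** 0.5
    return num / den if den > 0 else 0.0

def select_lag(x, n, t1, t2, alpha=0.85):
    # Keep the continuity candidate T2 unless its periodicity measure
    # falls to or below the downscaled measure of the global candidate T1.
    if normcorr(x, n, t2) <= alpha * normcorr(x, n, t1):
        return t1
    return t2
```

With α<1, T2 is retained even when its normalized correlation is slightly below that of T1, which smooths the pitch contour across frames.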
The method 40, associated to the method 30, improves performance with respect to a technique based only on the method 30.
With small additional complexity, it is possible to significantly improve the performance by making the pitch contour more stable and continuous.
The method 40 finds a second maximum of the autocorrelation function. It is not the global maximum as in the method 30, but a local maximum in the neighbourhood of the pitch lag of the previous frame. This second pitch lag, if selected, produces a smooth and continuous pitch contour. We do not, however, select this second pitch lag in all cases: if there is an expected change in the fundamental frequency, for example, it is better to keep the global maximum.
The final decision is whether to select the first pitch lag T1 (14) found with the method 30 or the second pitch lag T2 (16) found with the method 40. This decision is based on a measure of periodicity. We choose the normalized correlation as the measure of periodicity: it is 1 if the signal is perfectly periodic and 0 if it is aperiodic. The second pitch lag T2 is then chosen if its corresponding normalized correlation is higher than the normalized correlation of the first pitch lag T1, scaled by a parameter α. This parameter α<1 makes the decision smoother by selecting T2 (16) even when its normalized correlation is slightly below that of the first pitch lag T1 (14).
Reference is made to
An example of first estimation is shown in
It is based on the fact that the auto-correlation of a harmonic signal (with some given pitch) contains peaks at the position of the pitch-lag and all multiples of this pitch-lag.
To avoid selecting a peak corresponding to a multiple of the pitch-lag, the auto-correlation function is weighted, as in
The global maximum of the weighted autocorrelation is then assumed to correspond to the pitch-lag of the signal.
In general, the first estimation taken alone works satisfactorily: it gives the correct pitch in the great majority of frames.
The first estimation has also the advantage of a relatively low complexity if the number of lags of the autocorrelation function (first subinterval) is relatively low.
There are five peaks: the first peak 53 corresponds to the pitch-lag, and the other ones correspond to multiples 53′ of this pitch-lag.
Taking the global maximum of the (non-weighted) autocorrelation would give in this case the wrong pitch-lag: it would choose a multiple of it, in this case 4 times the correct pitch-lag.
However, the global maximum of the weighted autocorrelation (
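The effect of the weighting can be illustrated with a small numeric sketch. The peak amplitudes and the 1/T weight below are illustrative choices, not values from the description; they mirror the case above where the dominant unweighted peak sits at four times the correct pitch-lag.

```python
# Autocorrelation peaks of a hypothetical harmonic signal: the true
# pitch-lag is 50, and the peak at the multiple 200 is slightly larger.
peaks = {50: 0.95, 100: 0.96, 150: 0.97, 200: 0.98}

# Global maximum of the non-weighted autocorrelation: a multiple (200).
unweighted_best = max(peaks, key=lambda t: peaks[t])

# With a monotonically decreasing weight, e.g. w(T) = 1/T, the true
# pitch-lag (50) wins despite its slightly smaller raw peak.
weighted_best = max(peaks, key=lambda t: peaks[t] / t)
```

The decreasing weight penalizes the larger lags just enough to tip the maximum back to the fundamental.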
The first estimation works in several cases. However, there are some cases where it produces an unstable estimate.
One of these cases is a polyphonic music signal, which contains a mix of several tones with different pitches. In this case, it is difficult to extract a single pitch from a multi-pitch signal. The first estimator 11 could in that case estimate in one frame the pitch of one of the tones (or possibly a multiple of it), and in the next frame the pitch of another tone (or a multiple of it). So even if the signal is stable (the pitch of the different tones does not change from one frame to the next), the pitch detected by the first estimation can be unstable (the detected pitch changes significantly from one frame to the next).
This unstable behaviour is a major problem for LTPF. When the pitch is used for LTPF, it is most important to have a continuous pitch contour, otherwise some artefacts could be heard in the LTPF filtered output signal.
In this case, the first three peaks 54′, 54″, and 54′″ have a very close amplitude. So very slight changes between the two consecutive frames can significantly change the global maximum and the estimated pitch-lag.
The solution adopted in the present invention solves these instability problems.
The present solution considers, besides the pitch lag associated to the global peak in the frame, a candidate pitch-lag which is close to the pitch-lag of the previous frame.
For example,
To do so, a second estimation is performed (e.g., by the second estimator 12) by estimating a second pitch-lag T2 which maximizes the autocorrelation function within a subinterval 52 around the pitch-lag of the previous frame (Tprev−δ, Tprev+δ). In the case of
However, we do not want to select this second pitch-lag T2 in all cases. We want to select either the first pitch-lag T1 or the second pitch-lag T2 based on some criterion. This criterion is based on the normalized correlation (NC), e.g., as measured by the selector 17, which is generally considered a good measure of how periodic a signal is at some particular pitch-lag (an NC of 0 means not periodic at all, an NC of 1 means perfectly periodic).
There are then several cases:
The additional steps provided on top of the first estimation (second estimation and selection) have a very low complexity. Therefore, the proposed invention has low complexity.
The encoder 60a may generate, using a transform coder 62, a frequency domain representation 63a (or a processed version thereof) of the information signal 61 and provide it to the decoder 60b in the bitstream 63. The decoder 60b may comprise a transform decoder for obtaining the output signal 64a.
The encoder 60a may generate, using a detection unit 65, data useful for performing LTPF at the decoder 60b. These data may comprise a pitch lag estimate (e.g., 19) and/or a gain information. These data may be encoded in the bitstream 63 as data 63b in control fields. The data 63b (which may comprise the final estimate 19 of the pitch lag) may be prepared by an LTPF coder 66 (which, in some examples, may decide whether to encode the data 63b). These data may be used by an LTPF decoder 67, which may apply them to the output signal 64a from the transform decoder 64 to obtain the output signal 68.
Examples of the calculations of the LTPF parameters (or other types of parameters) are here provided.
An example of preparing the information for the LTPF is provided in the next subsections.
An example of (optional) resampling technique is here discussed (other techniques may be used).
The input signal at sampling rate fs may be resampled to a fixed sampling rate of 12.8 kHz. The resampling is performed using an upsampling+low-pass-filtering+downsampling approach that can be formulated as follows
where └ ┘ indicates truncation (rounding down to the integer below), x(n) is the input signal, x12.8(n) is the resampled signal at 12.8 kHz,
is the upsampling factor and h6.4 is the impulse response of a FIR low-pass filter given by
An example of tab_resamp_filter is provided in the following table:
An example of (optional) high-pass filter technique is here discussed (other techniques may be used).
The resampled signal may be high-pass filtered using a second-order IIR filter whose transfer function may be given by
An example of pitch detection technique is here discussed (other techniques may be used).
The signal x12.8(n) may be (optionally) downsampled by a factor of 2 using
with h2={0.1236796411180537, 0.2353512128364889, 0.2819382920909148, 0.2353512128364889, 0.1236796411180537}.
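The downsampling step can be sketched with the h2 coefficients given above. The zero-padding at the frame edges is an assumption for illustration; an actual implementation would typically use filter memory from the previous frame.

```python
# 5-tap FIR low-pass coefficients taken from the text above.
h2 = [0.1236796411180537, 0.2353512128364889, 0.2819382920909148,
      0.2353512128364889, 0.1236796411180537]

def downsample_by_2(x12k8):
    # FIR low-pass filtering followed by a decimator keeping every
    # second sample; out-of-range input samples are treated as zero.
    y = []
    for n in range(0, len(x12k8), 2):
        acc = 0.0
        for i, h in enumerate(h2):
            k = n - i
            if 0 <= k < len(x12k8):
                acc += h * x12k8[k]
        y.append(acc)
    return y
```

The coefficients sum to 1, so a constant input is passed through unchanged (away from the edges), as expected of a low-pass decimation filter.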
The autocorrelation of x6.4(n) may be computed by
where kmin=17 and kmax=114 are the minimum and maximum lags which define the first subinterval (other values for kmin and kmax may be provided).
The autocorrelation may be weighted using
R6.4w(k)=R6.4(k)w(k) for k=kmin . . . kmax
where w(k) is defined as follows
The first estimate 14 of the pitch lag T1 may be the lag that maximizes the weighted autocorrelation
The second estimate 16 of the pitch lag T2 may be the lag that maximizes the non-weighted autocorrelation in the neighborhood of the pitch lag (19″) estimated in the previous frame
where k′min=max (kmin, Tprev−4), k′max=min (kmax, Tprev+4) and Tprev is the final pitch lag estimated in the previous frame (the second estimate is therefore conditioned by the previously selected pitch lag).
The final estimate 19 of the pitch lag in the current frame 13 may then be given by
where normcorr(x, L, T) is the normalized correlation of the signal x of length L at lag T
Each normalized correlation 23 or 25 may be at least one of the measurements obtained by the first or second measurer 21 or 22, respectively.
In some examples, the first bit of the LTPF bitstream signals the presence of the pitch-lag parameter in the bitstream. It is obtained by
(Instead of 0.6, a different threshold, e.g., between 0.4 and 0.8, or 0.5 and 0.7, or 0.55 and 0.65 could be used, for example.)
If pitch_present is 0, no more bits are encoded, resulting in a LTPF bitstream of only one bit.
If pitch_present is 1, two more parameters are encoded: one pitch-lag parameter encoded on 9 bits, and one bit to signal the activation of LTPF. In that case, the LTPF bitstream is composed of 11 bits.
An example for obtaining the LTPF pitch lag parameters is here discussed (other techniques may be used).
The integer part of the LTPF pitch lag parameter may be given by
The fractional part of the LTPF pitch lag may then be given by
and h4 is the impulse response of a FIR low-pass filter given by
tab_ltpf_interp_R may be, for example:
If pitch_fr<0 then both pitch_int and pitch_fr are modified according to
pitch_int=pitch_int−1
pitch_fr=pitch_fr+4
Finally, the pitch lag parameter index is given by
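The adjustment above simply borrows one integer lag when the quarter-sample fractional part is negative, which can be written directly from the two assignments given in the text:

```python
def wrap_pitch(pitch_int, pitch_fr):
    # If the fractional part is negative, borrow one integer lag:
    # pitch_fr is in quarter-sample units, so add 4 to compensate.
    if pitch_fr < 0:
        pitch_int -= 1
        pitch_fr += 4
    return pitch_int, pitch_fr
```

After this step the fractional part is guaranteed to lie in {0, 1, 2, 3}, which is what the pitch lag parameter index computation expects.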
A normalized correlation is first computed as follows
and hi is the impulse response of a FIR low-pass filter given by
with tab_ltpf_interp_x12k8 is given by:
The LTPF activation bit is then set according to:
where mem_ltpf_active is the value of ltpf_active in the previous frame (it is 0 if pitch_present=0 in the previous frame), mem_nc is the value of nc in the previous frame (it is 0 if pitch_present=0 in the previous frame), pitch=pitch_int+pitch_fr/4, and mem_pitch is the value of pitch in the previous frame (it is 0 if pitch_present=0 in the previous frame).
The decoded signal in the frequency domain (FD), e.g., after MDCT (Modified Discrete Cosine Transformation) synthesis, MDST (Modified Discrete Sine Transformation) synthesis, or a synthesis based on another transformation, may be postfiltered in the time-domain using an IIR filter whose parameters may depend on the LTPF bitstream data "pitch_index" and "ltpf_active". To avoid discontinuity when the parameters change from one frame to the next, a transition mechanism may be applied on the first quarter of the current frame.
In examples, an LTPF IIR filter can be implemented using
where {circumflex over (x)}(n) is the filter input signal (i.e. the decoded signal after MDCT synthesis) and (n) is the filter output signal.
The integer part pint and the fractional part pfr of the LTPF pitch lag may be computed as follows. First the pitch lag at 12.8 kHz is recovered using
The pitch lag may then be scaled to the output sampling rate fs and converted to integer and fractional parts using
where fs is the sampling rate.
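A hypothetical sketch of this conversion is given below. The quarter-sample precision follows the pitch_fr convention used earlier; the exact rounding used by the specification may differ.

```python
def scale_pitch_lag(pitch_12k8, fs):
    # Scale the lag from the 12.8 kHz internal rate to the output
    # sampling rate fs, then split into integer and quarter-sample
    # fractional parts (illustrative rounding choice).
    p = pitch_12k8 * fs / 12800.0
    p4 = round(p * 4)           # quantize to quarter samples
    return p4 // 4, p4 % 4      # (p_int, p_fr)
```

For example, a lag of 65 samples at 12.8 kHz maps to 243 full samples plus 3 quarter samples at 48 kHz under this rounding.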
The filter coefficients cnum(k) and cden (k, pfr) may be computed as follows
and gain_ltpf and gain_ind may be obtained according to
and the tables tab_ltpf_num_fs[gain_ind][k] and tab_ltpf_den_fs[pfr][k] are predetermined.
Examples of tab_ltpf_num_fs[gain_ind][k] are here provided (instead of "fs", the sampling rate is indicated):
Examples of tab_ltpf_den_fs[pfr][k] are here provided (instead of "fs", the sampling rate is indicated):
With reference to the transition handling, five different cases are considered.
First case: ltpf_active=0 and mem_ltpf_active=0
Second case: ltpf_active=1 and mem_ltpf_active=0
Third case: ltpf_active=0 and mem_ltpf_active=1
where cnummem, cdenmem, pintmem and pfrmem are the filter parameters computed in the previous frame.
Fourth case: ltpf_active=1 and mem_ltpf_active=1 and pint=pintmem and pfr=pfrmem
Fifth case: ltpf_active=1 and mem_ltpf_active=1 and (pint≠pintmem or pfr≠pfrmem)
with Nf being the number of samples in one frame.
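The five cases above can be summarized in a small dispatch routine (a sketch; the per-case filtering operations themselves are those described above):

```python
def transition_case(ltpf_active, mem_ltpf_active, p_int, p_fr,
                    p_int_mem, p_fr_mem):
    """Map the current/previous LTPF state to one of the five
    transition-handling cases described above (1-based index)."""
    if not ltpf_active and not mem_ltpf_active:
        return 1  # inactive in both frames: pass-through
    if ltpf_active and not mem_ltpf_active:
        return 2  # filter starts: transition over the first quarter frame
    if not ltpf_active and mem_ltpf_active:
        return 3  # filter stops: transition using the previous parameters
    if (p_int, p_fr) == (p_int_mem, p_fr_mem):
        return 4  # active with unchanged pitch lag
    return 5      # active, pitch lag changed: cross over old and new filters
```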
As may be understood, the solution according to the examples above is transparent to the decoder. There is no need for signalling to the decoder, for example, that the first estimate or the second estimate has been selected.
Accordingly, there is no increased payload in the bitstream 63.
Further, there is no need for modifying the decoders to adapt to the new processing performed at the encoder. The decoder does not need to know that the present invention has been implemented. Therefore, the invention permits increased compatibility with legacy systems.
The pitch lag Tbest (19) as obtained by the apparatus 10, 60a, or 110 above may be used, at the decoder (e.g., 60b), for implementing a packet loss concealment (PLC) (also known as error concealment). PLC is used in audio codecs to conceal lost or corrupted packets during the transmission from the encoder to the decoder. In conventional technology, PLC may be performed at the decoder side by extrapolating the decoded signal either in the transform domain or in the time domain.
The pitch lag may be the main parameter used in pitch-based PLC. This parameter can be estimated at the encoder-side and encoded into the bitstream. In this case, the pitch lag of the last good frame is used to conceal the current lost frame.
A corrupted frame does not provide a correct audible output and shall be discarded.
For each decoded frame at the decoder, its validity may be verified. For example, each frame may have a field carrying a cyclic redundancy check (CRC) value, which is verified by performing predetermined operations provided by a predetermined algorithm and checking whether the calculated result corresponds to the value in the CRC field. If a frame has not been properly decoded (e.g., in view of interference in the transmission), it is assumed that some errors have affected the frame. Therefore, if the verification indicates incorrect decoding, the frame is held non-properly decoded (invalid, corrupted).
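As a sketch of such a validity check, the comparison can be illustrated with the standard-library CRC-32 routine; the actual codec defines its own polynomial, field width and placement, so `frame_is_valid` and the payload layout here are illustrative assumptions:

```python
import zlib

def frame_is_valid(payload: bytes, crc_field: int) -> bool:
    """Recompute the checksum over the received payload and compare it
    with the transmitted CRC field; a mismatch marks the frame as
    non-properly decoded (invalid, corrupted)."""
    return (zlib.crc32(payload) & 0xFFFFFFFF) == crc_field
```

A frame whose payload was altered in transmission fails the comparison and would then be handed to the concealment procedure.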
When a frame is acknowledged as non-properly decoded, a concealment strategy may be used to provide an audible output; otherwise, an annoying audible hole could be heard. Therefore, some form of frame is needed which “fills the gap” left open by the non-properly decoded frame. The purpose of the frame loss concealment procedure is to conceal the effect of any unavailable or corrupted frame for decoding.
A frame loss concealment procedure may comprise concealment methods for the various signal types. Best possible codec performance in error-prone situations with frame losses may be obtained through selecting the most suitable method. One of the packet loss concealment methods may be, for example, TCX Time Domain Concealment.
The TCX Time Domain Concealment method is a pitch-based PLC technique operating in the time domain. It is best suited for signals with a dominant harmonic structure. An example of the procedure is as follows: the synthesized signal of the last decoded frames is inverse filtered with the LP filter as described in Section 8.2.1 to obtain the periodic signal as described in Section 8.2.2. The random signal is generated by a random generator with approximately uniform distribution as described in Section 8.2.3. The two excitation signals are summed up to form the total excitation signal as described in Section 8.2.4, which is adaptively faded out with the attenuation factor described in Section 8.2.6 and finally filtered with the LP filter to obtain the synthesized concealed time signal. If LTPF has been used in the last good frame, the LTPF may also be applied on the synthesized concealed time signal as described in Section 8.3. To get a proper overlap with the first good frame after a lost frame, the time domain alias cancelation signal is generated as described in Section 8.2.5.
The TCX Time Domain Concealment method operates in the excitation domain. An autocorrelation function may be calculated on 80 equidistant frequency domain bands. The energy is pre-emphasized with the fixed pre-emphasis factor μ
The autocorrelation function is lag windowed using the following window
before it is transformed to time domain using an inverse evenly stacked DFT. Finally a Levinson Durbin operation may be used to obtain the LP filter, ac(k), for the concealed frame. An example is provided below:
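A generic textbook formulation of the Levinson-Durbin recursion — not the exact fixed-point routine of any particular codec — that turns an autocorrelation sequence r(0..M) into LP coefficients might look as follows:

```python
def levinson_durbin(r, order):
    """Solve the LP normal equations via the Levinson-Durbin recursion.
    Returns (a, e): coefficients a[0..order] with a[0] = 1 for the
    analysis filter A(z) = 1 + a[1] z^-1 + ..., and the final
    prediction error energy e."""
    a = [1.0] + [0.0] * order
    e = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current prediction error
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / e
        # symmetric coefficient update
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        e *= (1.0 - k * k)
    return a, e
```

For an AR(1)-like autocorrelation r = [1, 0.5, 0.25] the recursion recovers the predictor coefficient 0.5 and a residual energy of 0.75.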
The LP filter may be calculated only in the first lost frame after a good frame and is retained in subsequent lost frames.
The last decoded time samples are first pre-emphasized with the pre-emphasis factor from Section 8.2.1 using the filter
Hpre-emph(z)=1−μz−1
to obtain the signal xpre(k), where Tc is the pitch lag value pitch_int or pitch_int+1 if pitch_fr>0. The values pitch_int and pitch_fr are the pitch lag values transmitted in the bitstream.
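Applied as a difference equation, the pre-emphasis filter above is simply y(k)=x(k)−μ·x(k−1), which can be sketched as:

```python
def pre_emphasize(x, mu):
    """Apply the pre-emphasis filter H(z) = 1 - mu * z^-1,
    i.e. y(k) = x(k) - mu * x(k - 1), with x(-1) taken as 0."""
    return [x[k] - (mu * x[k - 1] if k > 0 else 0.0)
            for k in range(len(x))]
```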
The pre-emphasized signal, xpre(k), is further filtered with the calculated inverse LP filter to obtain the prior excitation signal exc′p(k). To construct the excitation signal, excp(k), for the current lost frame, exc′p(k) is repeatedly copied with Tc as follows
excp(k)=exc′p(E−Tc+k), for k=0 . . . N−1
where E corresponds to the last sample in exc′p(k). If the stability factor θ is lower than 1, the first pitch cycle of exc′p(k) is first low pass filtered with an 11-tap linear phase FIR (finite impulse response) filter described in the table below
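The copy step excp(k)=exc′p(E−Tc+k) amounts to repeating the last pitch cycle of the previous excitation. A sketch (omitting the optional low-pass filtering of the first pitch cycle; the function name and the exact indexing convention are illustrative):

```python
def build_periodic_excitation(exc_prev, t_c, n):
    """Construct the periodic excitation for a lost frame by repeating
    the last pitch cycle (the final T_c samples) of the previous
    excitation signal, wrapping around when n > t_c."""
    e = len(exc_prev)
    cycle = exc_prev[e - t_c:]           # last pitch cycle, T_c samples
    return [cycle[k % t_c] for k in range(n)]
```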
The gain of pitch, g′p, may be calculated as follows
If pitch_fr=0 then gp=g′p. Otherwise, a second gain of pitch, g″p, may be calculated as follows
and gp=max (g′p, g″p). If g″p>g′p then Tc is reduced by one for further processing. Finally, gp is bounded by 0≤gp≤1.
The formed periodic excitation, excp(k), is attenuated sample-by-sample throughout the frame, starting with one and ending with an attenuation factor, α, to obtain the attenuated periodic excitation. The gain of pitch is calculated only in the first lost frame after a good frame and is set to α for further consecutive frame losses.
The random part of the excitation may be generated with a random generator with approximately uniform distribution as follows
excn,FB(k)=extract(excn,FB(k−1)·12821+16831), for k=0 . . . N−1
where excn,FB(−1) is initialized with 24607 for the very first frame concealed with this method and extract( )extracts the 16 LSB of the value. For further frames, excn,FB(−1) is stored and used as next excn,FB(−1).
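The recursion above can be reproduced directly; the values are shown here as unsigned 16-bit integers, while a codec implementation may reinterpret them as signed:

```python
def random_excitation(n, seed=24607):
    """Generate the random part of the excitation with the linear
    congruential recursion from the text: each value is the 16 least
    significant bits of (previous * 12821 + 16831); the state of the
    last call would be stored as the next seed."""
    out = []
    state = seed
    for _ in range(n):
        state = (state * 12821 + 16831) & 0xFFFF   # extract 16 LSBs
        out.append(state)
    return out
```

The sequence is fully deterministic given the seed, so encoder-free concealment at the decoder always produces the same noise for the same frame loss pattern.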
To shift the noise more to higher frequencies, the excitation signal is high pass filtered with an 11-tap linear phase FIR filter described in the table below to get excn,HP(k).
To ensure that the noise may fade to full band noise with a fading speed dependent on the attenuation factor α, the random part of the excitation, excn(k), is composed via a linear interpolation between the full band, excn,FB(k), and the high pass filtered version, excn,HP(k), as
excn(k)=(1−β)·excn,FB(k)+β·excn,HP(k), for k=0 . . . N−1
where β=1 for the first lost frame after a good frame and
β=β−1·α
for the second and further consecutive frame losses, where β−1 is the β of the previous concealed frame.
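The interpolation and the frame-to-frame update of β can be sketched as (names are illustrative):

```python
def fade_noise(exc_fb, exc_hp, beta):
    """Linear interpolation between full-band and high-pass filtered
    noise excitation: exc_n(k) = (1 - beta) * exc_fb(k) + beta * exc_hp(k).
    beta = 1 for the first lost frame after a good frame."""
    return [(1.0 - beta) * f + beta * h for f, h in zip(exc_fb, exc_hp)]

def next_beta(beta_prev, alpha):
    """beta for the second and further consecutive lost frames."""
    return beta_prev * alpha
```

With β=1 only the high-pass noise is used; as β decays by α per lost frame, the noise progressively widens to full band.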
For adjusting the noise level, the gain of noise, g′n, is calculated as
If Tc=pitch_int after Section 8.2.2, then gn=g′n. Otherwise, a second gain of noise, g″n, is calculated as in the equation above, but with Tc being pitch_int. Then, gn=min(g′n, g″n).
For further processing, gn is first normalized and then multiplied by (1.1−0.75 gp) to get the scaled gain of noise.
The formed random excitation, excn(k), is attenuated uniformly with the scaled gain of noise from the first sample to sample five, and then sample-by-sample throughout the frame, starting with the scaled gain of noise and ending with the scaled gain of noise multiplied by α, to obtain the attenuated random excitation. The gain of noise, gn, is calculated only in the first lost frame after a good frame and is set to gn·α for further consecutive frame losses.
The attenuated random excitation is added to the attenuated periodic excitation to form the total excitation signal exct(k). The final synthesized signal for the concealed frame is obtained by filtering the total excitation with the LP filter from Section 8.2.1 and post-processed with the de-emphasis filter.
To get a proper overlap-add in the case the next frame is a good frame, the time domain alias cancelation part, xTDAC(k), may be generated. For that, N−Z additional samples are created in the same way as described above to obtain the signal x(k) for k=0 . . . 2N−Z. On that basis, the time domain alias cancelation part is created by the following steps:
Zero filling the synthesized time domain buffer x(k)
Windowing {circumflex over (x)}(k) with the MDCT window wN(k)
{circumflex over (x)}w(k)=wN(k)·{circumflex over (x)}(k), 0≤k<2N
Reshaping from 2N to N
Reshaping from N to 2N
Windowing ŷ(k) with the flipped MDCT (Modified Discrete Cosine Transformation) (or MDST, Modified Discrete Sine Transformation, in other examples) window wN(k)
xTDAC(k)=wN(2N−1−k)·{circumflex over (y)}(k),0≤k<2N
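The two reshaping steps can be sketched as a folding/unfolding pair. The code below uses one common MDCT folding convention; the exact signs and index placement depend on the transform definition and window alignment actually used, so this is an illustration rather than the codec's exact reshaping:

```python
def tdac_fold(x, n):
    """Reshape a (windowed) buffer of length 2N down to N samples,
    introducing time-domain aliasing (one common MDCT convention)."""
    half = n // 2
    u = [0.0] * n
    for k in range(half):
        u[k] = -x[n + half + k] - x[n + half - 1 - k]
        u[half + k] = x[k] - x[n - 1 - k]
    return u

def tdac_unfold(u, n):
    """Reshape N folded samples back to a 2N buffer carrying the
    time-domain alias symmetry (inverse of the folding above, up to a
    factor of 2 with this convention)."""
    half = n // 2
    y = [0.0] * (2 * n)
    for k in range(half):
        y[k] = u[half + k]
        y[n - 1 - k] = -u[half + k]
        y[n + half + k] = -u[k]
        y[n + half - 1 - k] = -u[k]
    return y
```

Folding an unfolded buffer reproduces the folded samples scaled by 2, which is the self-consistency property that makes the generated xTDAC(k) cancel the aliasing of the first good frame's inverse transform.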
The constructed signal fades out to zero. The fade out speed is controlled by an attenuation factor, α, which is dependent on the previous attenuation factor, α−1, the gain of pitch, gp, calculated on the last correctly received frame, the number of consecutive erased frames, nbLostCmpt, and the stability, θ. The following procedure may be used to compute the attenuation factor, α
The factor θ (stability of the last two adjacent scalefactor vectors scf−1(k) and scf−2(k)) may be obtained, for example, as:
where scf−2(k) and scf−1(k) are the scalefactor vectors of the last two adjacent frames. The factor θ is bounded by 0≤θ≤1, with larger values of θ corresponding to more stable signals. This limits energy and spectral envelope fluctuations. If there are no two adjacent scalefactor vectors present, the factor θ is set to 0.8.
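A sketch of such a stability measure follows. The mapping used here (one minus the mean squared scalefactor difference, clamped to [0, 1]) is an illustrative assumption, not the exact formula of the codec; only the bounding behaviour and the 0.8 fallback are taken from the text above:

```python
def stability_factor(scf_prev1, scf_prev2, default=0.8):
    """Stability of the last two adjacent scalefactor vectors, bounded
    to [0, 1]; larger values indicate a more stable spectral envelope.
    Falls back to `default` when two adjacent vectors are unavailable."""
    if scf_prev1 is None or scf_prev2 is None:
        return default
    m = len(scf_prev1)
    d = sum((a - b) ** 2 for a, b in zip(scf_prev1, scf_prev2)) / m
    return max(0.0, min(1.0, 1.0 - d))
```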
To prevent rapid high energy increase, the spectrum is low pass filtered with Xs(0)=Xs(0)·0.2 and Xs(1)=Xs(1)·0.5.
At step S102′, the validity of the frame is checked (for example with CRC, parity, etc.). If the frame is acknowledged as invalid, concealment is performed (see below).
Otherwise, if the frame is held valid, at step S103′ it is checked whether pitch information is encoded in the frame. In some examples, the pitch information is encoded only if the harmonicity has been acknowledged as being over a particular threshold (which may indicate, for example, harmonicity sufficiently high for performing LTPF and/or PLC).
If at S103′ it is acknowledged that the pitch information is actually encoded, then the pitch information is decoded and stored at step S104′. Otherwise, the cycle ends and a new frame may be decoded at S101′.
Subsequently, at step S105′, it is checked whether the LTPF is enabled. If it is verified that the LTPF is enabled, then LTPF is performed at step S106′. Otherwise, the LTPF is skipped; the cycle ends; and a new frame may be decoded at S101′.
With reference to the concealment, the latter may be subdivided into steps. At step S107′, it is verified whether the pitch information of the previous frame (or a pitch information of one of the previous frames) is stored in the memory (i.e., it is available).
If it is verified that the searched pitch information is stored, then error concealment may be performed at step S108′. MDCT (or MDST) frame resolution repetition with signal scrambling, and/or TCX time domain concealment, and/or phase ECU may be performed.
Otherwise, if at S107′ it is verified that no fresh pitch information is stored (for example because the encoder had not transmitted the pitch lag), a different concealment technique, known per se and not relying on pitch information provided by the encoder, may be used at step S109′. Some of these techniques may be based on estimating the pitch information and/or other harmonicity information at the decoder. In some examples, no concealment technique may be performed in this case.
After having performed the concealment, the cycle ends and a new frame may be decoded at S101′.
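The decoder cycle S101′–S109′ described above can be sketched as a small dispatch routine; the `Frame` and `DecoderState` structures and the returned action labels are illustrative, not part of any bitstream definition:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    valid: bool                      # result of the S102' validity check
    has_pitch_info: bool = False     # S103': pitch encoded in this frame?
    pitch: Optional[float] = None
    ltpf_active: bool = False        # S105': LTPF enabled?

@dataclass
class DecoderState:
    last_pitch: Optional[float] = None   # pitch stored at S104'

def decode_cycle(frame: Frame, state: DecoderState) -> str:
    """One pass of the decoder cycle; returns which action was taken."""
    if not frame.valid:                       # S102' failed
        if state.last_pitch is not None:      # S107': stored pitch available?
            return "pitch-based concealment"  # S108'
        return "fallback concealment"         # S109'
    if frame.has_pitch_info:                  # S103'
        state.last_pitch = frame.pitch        # S104': store for later PLC
    if frame.ltpf_active:                     # S105'
        return "ltpf applied"                 # S106'
    return "ltpf skipped"
```

Note how a good frame with pitch information updates the state that a later lost frame falls back on: this is exactly why the pitch contour stored at S104′ determines which concealment branch is available at S107′.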
It is to be noted that the pitch lag used by the PLC is the value Tbest (19) prepared by the apparatus 10 and/or 60b, on the basis of the selection between the estimations T1 and T2, as discussed above.
In examples, the systems 110 and 120 may be the same device.
Depending on certain implementation requirements, examples may be implemented in hardware. The implementation may be performed using a digital storage medium, for example a floppy disk, a Digital Versatile Disc (DVD), a Blu-Ray Disc, a Compact Disc (CD), a Read-only Memory (ROM), a Programmable Read-only Memory (PROM), an Erasable and Programmable Read-only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM) or a flash memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Generally, examples may be implemented as a computer program product with program instructions, the program instructions being operative for performing one of the methods when the computer program product runs on a computer. The program instructions may for example be stored on a machine readable medium.
Other examples comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier. In other words, an example of the method is, therefore, a computer program having program instructions for performing one of the methods described herein, when the computer program runs on a computer.
A further example of the methods is, therefore, a data carrier medium (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier medium, the digital storage medium or the recorded medium are tangible and/or non-transitory, rather than signals which are intangible and transitory.
A further example comprises a processing unit, for example a computer, or a programmable logic device performing one of the methods described herein.
A further example comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further example comprises an apparatus or a system transferring (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some examples, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some examples, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any appropriate hardware apparatus.
While this invention has been described in terms of several advantageous embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
This application is a continuation of copending International Application No. PCT/EP2018/080195, filed Nov. 5, 2018, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP 17201091.0, filed Nov. 10, 2017, which is also incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4972484 | Link et al. | Nov 1990 | A |
5012517 | Chhatwal et al. | Apr 1991 | A |
5581653 | Todd | Dec 1996 | A |
5651091 | Chen | Jul 1997 | A |
5781888 | Herre | Jul 1998 | A |
5812971 | Herre | Sep 1998 | A |
5819209 | Inoue | Oct 1998 | A |
5909663 | Iijima et al. | Jun 1999 | A |
5999899 | Robinson | Dec 1999 | A |
6018706 | Huang et al. | Jan 2000 | A |
6148288 | Park | Nov 2000 | A |
6167093 | Tsutsui et al. | Dec 2000 | A |
6507814 | Gao | Jan 2003 | B1 |
6570991 | Scheirer et al. | May 2003 | B1 |
6665638 | Kang | Dec 2003 | B1 |
6735561 | Johnston et al. | May 2004 | B1 |
7009533 | Wegener | Mar 2006 | B1 |
7353168 | Chen et al. | Apr 2008 | B2 |
7395209 | Miroslav et al. | Jul 2008 | B1 |
7539612 | Wei-Ge et al. | May 2009 | B2 |
7546240 | Wei-Ge et al. | Jun 2009 | B2 |
8015000 | Juin-Hwey et al. | Sep 2011 | B2 |
8095359 | Boehm et al. | Jan 2012 | B2 |
8280538 | Kim et al. | Oct 2012 | B2 |
8473301 | Chen et al. | Jun 2013 | B2 |
8543389 | Ragot et al. | Sep 2013 | B2 |
8554549 | Oshikiri et al. | Oct 2013 | B2 |
8612240 | Fuchs et al. | Dec 2013 | B2 |
8682681 | Fuchs et al. | Mar 2014 | B2 |
8738385 | Chen | May 2014 | B2 |
8751246 | Stefan et al. | Jun 2014 | B2 |
8847795 | Faure et al. | Sep 2014 | B2 |
8891775 | Mundt et al. | Nov 2014 | B2 |
8898068 | Fuchs et al. | Nov 2014 | B2 |
9026451 | Kleijn et al. | May 2015 | B1 |
9123350 | Zhao et al. | Sep 2015 | B2 |
9489961 | Balazs et al. | Nov 2016 | B2 |
9595262 | Fuchs et al. | Mar 2017 | B2 |
9978381 | Venkatraman et al. | May 2018 | B2 |
10242688 | Martin et al. | Mar 2019 | B2 |
10296959 | Chernikhova et al. | May 2019 | B1 |
10726854 | Ghido et al. | Jul 2020 | B2 |
20010026327 | Schreiber et al. | Oct 2001 | A1 |
20030101050 | Vladimir et al. | May 2003 | A1 |
20040158462 | Rutledge et al. | Aug 2004 | A1 |
20040162866 | Malvar et al. | Aug 2004 | A1 |
20050010395 | Chiu et al. | Jan 2005 | A1 |
20050015249 | Wei-Ge et al. | Jan 2005 | A1 |
20050192799 | Kim et al. | Sep 2005 | A1 |
20050246178 | Fejzo | Nov 2005 | A1 |
20060288851 | Naoki et al. | Dec 2006 | A1 |
20070033056 | Alexander et al. | Feb 2007 | A1 |
20070078646 | Lei et al. | Apr 2007 | A1 |
20070118361 | Sinha et al. | May 2007 | A1 |
20070118369 | Chen | May 2007 | A1 |
20070124136 | Den Brinker et al. | May 2007 | A1 |
20070127729 | Breebaart et al. | Jun 2007 | A1 |
20070129940 | Geyersberger et al. | Jun 2007 | A1 |
20070154031 | Avendano et al. | Jul 2007 | A1 |
20070276656 | Solbach et al. | Nov 2007 | A1 |
20080033718 | Zopf et al. | Feb 2008 | A1 |
20080091418 | Laaksonen et al. | Apr 2008 | A1 |
20080126086 | Kandhadai et al. | May 2008 | A1 |
20080126096 | Choo et al. | May 2008 | A1 |
20090076805 | Zhengzhong et al. | Mar 2009 | A1 |
20090076830 | Taleb | Mar 2009 | A1 |
20090089050 | Mo | Apr 2009 | A1 |
20090138267 | Davidson et al. | May 2009 | A1 |
20090248424 | Koishida et al. | Oct 2009 | A1 |
20090254352 | Zhao | Oct 2009 | A1 |
20100010810 | Morii | Jan 2010 | A1 |
20100070270 | Gao | Mar 2010 | A1 |
20100094637 | Stuart | Apr 2010 | A1 |
20100115370 | Sakari et al. | May 2010 | A1 |
20100198588 | Osada et al. | Aug 2010 | A1 |
20100223061 | Ojanpera | Sep 2010 | A1 |
20100312552 | Kandhadai et al. | Dec 2010 | A1 |
20100312553 | Fang et al. | Dec 2010 | A1 |
20100324912 | Mi et al. | Dec 2010 | A1 |
20110015768 | Soo et al. | Jan 2011 | A1 |
20110022924 | Malenovsky et al. | Jan 2011 | A1 |
20110035212 | Briand et al. | Feb 2011 | A1 |
20110060597 | Wei-Ge et al. | Mar 2011 | A1 |
20110071839 | Budnikov et al. | Mar 2011 | A1 |
20110095920 | Ashley et al. | Apr 2011 | A1 |
20110096830 | Ashley et al. | Apr 2011 | A1 |
20110116542 | Marc et al. | May 2011 | A1 |
20110125505 | Philleppe et al. | May 2011 | A1 |
20110145003 | Bruno | Jun 2011 | A1 |
20110196673 | Jin et al. | Aug 2011 | A1 |
20110200198 | Stefan et al. | Aug 2011 | A1 |
20110238425 | Jeremie et al. | Sep 2011 | A1 |
20110238426 | Borsum et al. | Sep 2011 | A1 |
20120010879 | Kei et al. | Jan 2012 | A1 |
20120022881 | Geiger et al. | Jan 2012 | A1 |
20120072209 | Krishnan | Mar 2012 | A1 |
20120109659 | Guoming et al. | May 2012 | A1 |
20120214544 | Rodriguez et al. | Aug 2012 | A1 |
20120245947 | Neuendorf et al. | Sep 2012 | A1 |
20120265540 | Fuchs et al. | Oct 2012 | A1 |
20120265541 | Geiger et al. | Oct 2012 | A1 |
20130030819 | Pontus et al. | Jan 2013 | A1 |
20130096912 | Resch et al. | Apr 2013 | A1 |
20130226594 | Fuchs et al. | Aug 2013 | A1 |
20130282369 | Sang-Ut et al. | Oct 2013 | A1 |
20140052439 | Tejaswi et al. | Feb 2014 | A1 |
20140067404 | Baumgarte | Mar 2014 | A1 |
20140074486 | Dietz et al. | Mar 2014 | A1 |
20140108020 | Yang et al. | Apr 2014 | A1 |
20140142957 | Nam-Suk et al. | May 2014 | A1 |
20140172141 | Mangold | Jun 2014 | A1 |
20140223029 | Bhaskar et al. | Aug 2014 | A1 |
20140358531 | Vos | Dec 2014 | A1 |
20150010155 | Yue et al. | Jan 2015 | A1 |
20150081312 | Fuchs et al. | Mar 2015 | A1 |
20150142452 | Nam-Suk et al. | May 2015 | A1 |
20150154969 | Craven et al. | Jun 2015 | A1 |
20150170668 | Kovesi et al. | Jun 2015 | A1 |
20150221311 | Jeon et al. | Aug 2015 | A1 |
20150228287 | Bruhn et al. | Aug 2015 | A1 |
20150255079 | Huang et al. | Sep 2015 | A1 |
20150302859 | Aguilar et al. | Oct 2015 | A1 |
20150302861 | Salami et al. | Oct 2015 | A1 |
20150325246 | Philip et al. | Nov 2015 | A1 |
20150371647 | Faure et al. | Dec 2015 | A1 |
20160019898 | Schreiner et al. | Jan 2016 | A1 |
20160027450 | Gao | Jan 2016 | A1 |
20160078878 | Ravelli | Mar 2016 | A1 |
20160111094 | Martin et al. | Apr 2016 | A1 |
20160189721 | Johnston et al. | Jun 2016 | A1 |
20160225384 | Kjörling et al. | Aug 2016 | A1 |
20160285718 | Bruhn | Sep 2016 | A1 |
20160293174 | Atti et al. | Oct 2016 | A1 |
20160293175 | Atti et al. | Oct 2016 | A1 |
20160307576 | Doehla et al. | Oct 2016 | A1 |
20160365097 | Guan et al. | Dec 2016 | A1 |
20160372125 | Atti et al. | Dec 2016 | A1 |
20160372126 | Atti et al. | Dec 2016 | A1 |
20160379655 | Truman et al. | Dec 2016 | A1 |
20170011747 | Faure et al. | Jan 2017 | A1 |
20170053658 | Atti et al. | Feb 2017 | A1 |
20170078794 | Bongiovi et al. | Mar 2017 | A1 |
20170294196 | Bradley et al. | Mar 2017 | A1 |
20170103769 | Laaksonen et al. | Apr 2017 | A1 |
20170110135 | Disch et al. | Apr 2017 | A1 |
20170133029 | Markovic et al. | May 2017 | A1 |
20170140769 | Ravelli | May 2017 | A1 |
20170154631 | Stefan et al. | Jun 2017 | A1 |
20170154635 | Doehla et al. | Jun 2017 | A1 |
20170221495 | Sung et al. | Aug 2017 | A1 |
20170236521 | Venkatraman et al. | Aug 2017 | A1 |
20170249387 | Hatami-Hanza | Aug 2017 | A1 |
20170256266 | Sung et al. | Sep 2017 | A1 |
20170303114 | Johansson et al. | Oct 2017 | A1 |
20190027156 | Sung et al. | Jan 2019 | A1 |
Number | Date | Country |
---|---|---|
101140759 | Mar 2008 | CN |
102779526 | Nov 2012 | CN |
107103908 | Aug 2017 | CN |
0716787 | Jun 1996 | EP |
0732687 | Sep 1996 | EP |
1791115 | May 2007 | EP |
2676266 | Dec 2013 | EP |
2980796 | Feb 2016 | EP |
2980799 | Feb 2016 | EP |
3111624 | Jan 2017 | EP |
2944664 | Oct 2010 | FR |
H05-281996 | Oct 1993 | JP |
H07-28499 | Jan 1995 | JP |
H0811644 | Jan 1996 | JP |
H9-204197 | Aug 1997 | JP |
H10-51313 | Feb 1998 | JP |
H1091194 | Apr 1998 | JP |
H11-330977 | Nov 1999 | JP |
2004-138756 | May 2004 | JP |
2006-527864 | Dec 2006 | JP |
2007519014 | Jul 2007 | JP |
2007-525718 | Sep 2007 | JP |
2009-003387 | Jan 2009 | JP |
2009-008836 | Jan 2009 | JP |
2009-538460 | Nov 2009 | JP |
2010-500631 | Jan 2010 | JP |
2010-501955 | Jan 2010 | JP |
2012-533094 | Dec 2012 | JP |
2016-523380 | Aug 2016 | JP |
2016-200750 | Dec 2016 | JP |
2017-522604 | Aug 2017 | JP |
2017-528752 | Sep 2017 | JP |
100261253 | Jul 2000 | KR |
20030031936 | Apr 2003 | KR |
1020050007853 | Jan 2005 | KR |
1020090077951 | Jul 2009 | KR |
10-2010-0136890 | Dec 2010 | KR |
20130019004 | Feb 2013 | KR |
1020160144978 | Dec 2016 | KR |
20170000933 | Jan 2017 | KR |
2337414 | Oct 2008 | RU |
2376657 | Dec 2009 | RU |
2413312 | Feb 2011 | RU |
2419891 | May 2011 | RU |
2439718 | Jan 2012 | RU |
2483365 | May 2013 | RU |
2520402 | Jun 2014 | RU |
2568381 | Nov 2015 | RU |
2596594 | Sep 2016 | RU |
2596596 | Sep 2016 | RU |
2015136540 | Mar 2017 | RU |
2628162 | Aug 2017 | RU |
2016105619 | Aug 2017 | RU |
200809770 | Feb 2008 | TW |
201005730 | Feb 2010 | TW |
201126510 | Aug 2011 | TW |
201131550 | Sep 2011 | TW |
201207839 | Feb 2012 | TW |
201243832 | Nov 2012 | TW |
201612896 | Apr 2016 | TW |
201618080 | May 2016 | TW |
201618086 | May 2016 | TW |
201642246 | Dec 2016 | TW |
201642247 | Dec 2016 | TW |
201705126 | Feb 2017 | TW |
201711021 | Mar 2017 | TW |
201713061 | Apr 2017 | TW |
201724085 | Jul 2017 | TW |
201732779 | Sep 2017 | TW |
9916050 | Apr 1999 | WO |
2004072951 | Aug 2004 | WO |
2005086138 | Sep 2005 | WO |
2005086139 | Sep 2005 | WO |
2007073604 | Jul 2007 | WO |
2007138511 | Dec 2007 | WO |
2008025918 | Mar 2008 | WO |
2008046505 | Apr 2008 | WO |
2009066869 | May 2009 | WO |
2011048118 | Apr 2011 | WO |
2011086066 | Jul 2011 | WO |
2011086067 | Jul 2011 | WO |
2012000882 | Jan 2012 | WO |
2012000882 | Jan 2012 | WO |
2012126893 | Sep 2012 | WO |
2014072951 | May 2014 | WO |
2014165668 | Oct 2014 | WO |
2014202535 | Dec 2014 | WO |
2014202535 | Dec 2014 | WO |
2015063045 | May 2015 | WO |
2015063227 | May 2015 | WO |
2015071173 | May 2015 | WO |
2015174911 | Nov 2015 | WO |
2016016121 | Feb 2016 | WO |
2016142002 | Sep 2016 | WO |
2016142337 | Sep 2016 | WO |
Entry |
---|
Sujoy Sarkar, “Examination Report for IN Application No. 202037018091”, dated Jun. 1, 2021, Intellectual Property India, India. |
ITU-T G.718: Frame error robust narrow-band and wideband embedded variable bitrate coding of speech and audio from 8-32 kbit/s. |
Alain De Cheveignéet al.: “YIN, a fundamental frequency estimator for speech and music.” The Journal of the Acoustical Society of America 111.4 (2002): 1917-1930. |
3GPP TS 26.190; Speech codec speech processing functions; Adaptive Multi-Rate -Wideband (AMR-WB) speech codec; Transcoding functions. |
“5 Functional description of the encoder”; 3GPP Standard; 26445-C10 1 S05 S0501, 3rd Generation Partnership PROJECT-(3GPP)?, Mobile Competence Centre; 650, Route Des Lucioles; F-06921 Sophiaantipoli S Cedex; France; Dec. 10, 2014 (Dec. 10, 2014), XP050907035, Retrieved from the Internet: URL:http://www.3gpp.org/ftp/Specs/2014-12/Rel-12/26 series/ [retrieved on Dec. 10, 2014]. |
Ojala P et al: “A novel pitch-lag search method using adaptive weighting and median filtering”; Speech Coding Proceedings, 1999 IEEE Workshop on PORVOO, Finland Jun. 20-23, 1999, Piscataway, NJ, USA, IEEE, US, Jun. 20, 1999 (Jun. 20, 1999), pp. 114-116, XP010345546. |
3GPP TS 26.445; Codec for Enhanced Voice Services (EVS); Detailed algorithmic description. |
ISO/IEC 23008-3:2015; Information technology—High efficiency coding and mediadelivery in heterogeneous environments—Part 3: 3D audio. |
O.E. Groshev, “Office Action for RU Application No. 2020118947”, dated Dec. 1, 2020, ROSPATENT, Russia. |
O.I. Starukhina, “Office Action for RU Application No. 2020118968”, dated Dec. 23, 2020, ROSPATENT, Russia. |
P.A. Volkov, “Office Action for RU Application No. 2020120251”, dated Oct. 28, 2020, ROSPATENT, Russia. |
P.A. Volkov, “Office Action for RU Application No. 2020120256”, dated Oct. 28, 2020, ROSPATENT, Russia. |
D.V.TRAVNIKOV, “Decision on Grant for RU Application No. 2020118969”, dated Nov. 2, 2020, ROSPATENT, Russia. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526081”, dated Jun. 22, 2021, JPO, Japan. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526084”, dated Jun. 23, 2021, JPO, Japan. |
ETSI TS 126 445 V13.2.0 (Aug. 2016), Universal Mobile Telecommunications System (UMTS); LTE; Codec for Enhanced Voice Services (EVS); Detailed algorithmic description (3GPP TS 26.445 version 13.2.0 Release 13) [Online]. Available: http://www.3gpp.org/ftp/Specs/archive/26_series/26.445/26445-d00.zip. |
Geiger, “Audio Coding based on integer transform”, Ilmenau: https://www.db-thueringen.de/receive/dbt_mods_00010054, 2004. |
Henrique S Malvar, “Biorthogonal and Nonuniform Lapped Transforms for Transform Coding with Reduced Blocking and Ringing Artifacts”, IEEE Transactions on Signal Processing, IEEE Service Center, New York, NY, US, (Apr. 1998), vol. 46, No. 4, ISSN 1053-587X, XP011058114. |
Anonymous, “ISO/IEC 14496-3:2005/FDAM 9, AAC-ELD”, 82. MPEG MEETING;Oct. 22, 2007-Oct. 26, 2007; Shenzhen; (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11),, (Feb. 21, 2008), No. N9499, XP030015994. |
Virette, “Low Delay Transform for High Quality Low Delay Audio Coding”, Universite de Rennes 1, (Dec. 10, 2012), pp. 1-195, URL: https://hal.inria.fr/tel-01205574/document, (Mar. 30, 2016), XP055261425. |
ISO/IEC 14496-3:2001; Information technology—Coding of audio-visual objects—Part 3: Audio. |
3GPP TS 26.403 v14.0.0 (Mar. 2017); General audio codec audio processing functions; Enhanced acPIus general audio codec; Encoder specification; Advanced Audio Coding (AAC) part; (Release 14). |
ISO/IEC 23003-3; Information technology—MPEG audio technologies—Part 3: Unified speech and audio coding, 2011. |
3GPP TS 26.445 V14.1.0 (Jun. 2017), 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Detailed Algorithmic Description (Release 14), http://www.3gpp.org/ftp//Specs/archive/26_series/26.445/26445-e10.zip, Section 5.1.6 “Bandwidth detection”. |
Eksler Vaclav et al., “Audio bandwidth detection in the EVS codec”, 2015 IEEE Global Conference on Signal and Information Processing (GLOBALSIP), IEEE, (Dec. 14, 2015), doi:10.1109/GLOBALSIP.2015.7418243, pp. 488-492, XP032871707. |
Oger M et al, “Transform Audio Coding with Arithmetic-Coded Scalar Quantization and Model-Based Bit Allocation”, International Conference on Acoustics, Speech, and Signalprocessing, IEEE, XX, Apr. 15, 2007 (Apr. 15, 2007), p. IV-545, XP002464925. |
Asad et al., “An enhanced least significant bit modification technique for audio steganography”, International Conference on Computer Networks and Information Technology, Jul. 11-13, 2011. |
Makandar et al, “Least Significant Bit Coding Analysis for Audio Steganography”, Journal of Future Generation Computing, vol. 2, No. 3, Mar. 2018. |
ITU-T G.718 (Jun. 2008): Series G: Transmission Systems and Media, Digital Systems and Networks, Digital terminal equipments—Coding of voice and audio signals, Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s. |
3GPP TS 26.447 V14.1.0 (Jun. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Error Concealment of Lost Packets (Release 14). |
DVB Organization, “ISO-IEC_23008-3_A3_(E)_(H 3DA FDAM3).docx”, DVB, Digital Video Broadcasting, C/O EBU—17A Ancienne Route-CH-1218 Grand Saconnex, Geneva—SWITZERLAND, (Jun. 13, 2016), XP017851888. |
Hill et al., “Exponential stability of time-varying linear systems,” IMA J Numer Anal, pp. 865-885, 2011. |
3GPP TS 26.090 V14.0.0 (Mar. 2017), 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mandatory Speech Codec speech processing functions; Adaptive Multi-Rate (AMR) speech codec; Transcoding functions (Release 14). |
3GPP TS 26.190 V14.0.0 (Mar. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Speech codec speech processing functions; Adaptive Multi-Rate-Wideband (AMR-WB) speech codec; Transcoding functions (Release 14). |
3GPP TS 26.290 V14.0.0 (Mar. 2017), Technical Specification, 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Audio codec processing functions; Extended Adaptive Multi-Rate-Wideband (AMR-WB+) codec; Transcoding functions (Release 14). |
Edler et al., “Perceptual Audio Coding Using a Time-Varying Linear Pre- and Post-Filter,” in AES 109th Convention, Los Angeles, 2000. |
Gray et al., “Digital lattice and ladder filter synthesis,” IEEE Transactions on Audio and Electroacoustics, vol. 21, No. 6, pp. 491-500, 1973. |
Lamoureux et al., “Stability of time variant filters,” CREWES Research Report—vol. 19, 2007. |
Herre et al., “Enhancing the performance of perceptual audio coders by using temporal noise shaping (TNS).” Audio Engineering Society Convention 101. Audio Engineering Society, 1996. |
Herre et al., “Continuously signal-adaptive filterbank for high-quality perceptual audio coding.” Applications of Signal Processing to Audio and Acoustics, 1997. 1997 IEEE ASSP Workshop on. IEEE, 1997. |
Herre, “Temporal noise shaping, quantization and coding methods in perceptual audio coding: A tutorial introduction.” Audio Engineering Society Conference: 17th International Conference: High-Quality Audio Coding. Audio Engineering Society, 1999. |
Fuchs Guillaume et al., “Low delay LPC and MDCT-based audio coding in the EVS codec”, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, (Apr. 19, 2015), doi: 10.1109/ICASSP.2015.7179068, pp. 5723-5727, XP033187858. |
Niamut et al., “RD Optimal Temporal Noise Shaping for Transform Audio Coding”, Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on Toulouse, France May 14-19, 2006, Piscataway, NJ, USA, IEEE, (Jan. 1, 2006), doi:10.1109/ICASSP.2006.1661244, ISBN 978-1-4244-0469-8, pp. V-V, XP031015996. |
ITU-T G.711 (Sep. 1999): Series G: Transmission Systems and Media, Digital Systems and Networks, Digital transmission systems—Terminal equipments—Coding of analogue signals by pulse code modulation, Pulse code modulation (PCM) of voice frequencies, Appendix I: A high quality low-complexity algorithm for packet loss concealment with G.711. |
Cheveigne et al., “YIN, a fundamental frequency estimator for speech and music.” The Journal of the Acoustical Society of America 111.4 (2002): 1917-1930. |
Ojala P et al., “A novel pitch-lag search method using adaptive weighting and median filtering”, Speech Coding Proceedings, 1999 IEEE Workshop on Porvoo, Finland Jun. 20-23, 1999, Piscataway, NJ, USA, IEEE, US, (Jun. 20, 1999), doi:10.1109/SCFT.1999.781502, ISBN 978-0-7803-5651-1, pp. 114-116, XP010345546. |
“5 Functional description of the encoder”, Dec. 10, 2014 (Dec. 10, 2014), 3GPP Standard; 26445-C10_1_S05_S0501, 3rd Generation Partnership Project (3GPP), Mobile Competence Centre; 650, Route Des Lucioles; F-06921 Sophia-Antipolis Cedex; France, Retrieved from the Internet: URL: http://www.3gpp.org/ftp/Specs/2014-12/Rel-12/26_series/ XP050907035. |
Mao Xiaohong, “Examination Report for SG Application No. 11202004228V”, dated Sep. 2, 2021, IPOS, Singapore. |
Mao Xiaohong, “Search Report for SG Application No. 11202004228V”, Sep. 3, 2021, IPOS, Singapore. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7015512”, Sep. 9, 2021, KIPO, Republic of Korea. |
John Tan, “Office Action for SG Application 11202004173P”, Jul. 23, 2021, IPOS, Singapore. |
“Decision on Grant Patent for Invention for RU Application No. 2020118949”, Nov. 11, 2020, ROSPATENT, Russia. |
Tetsuyuki Okumachi, “Office Action for JP Application 2020-118837”, dated Jul. 16, 2021, JPO, Japan. |
Tetsuyuki Okumachi, “Office Action for JP Application 2020-118838”, dated Jul. 16, 2021, JPO, Japan. |
Takeshi Yamashita, “Office Action for JP Application 2020-524877”, dated Jun. 24, 2021, JPO, Japan. |
Tomonori Kikuchi, “Office Action for JP Application No. 2020-524874”, dated Jun. 2, 2021, JPO Japan. |
Guojun Lu et al., “A Technique towards Automatic Audio Classification and Retrieval”, Fourth International Conference on Signal Processing, 1998, IEEE, Oct. 12, 1998, pp. 1142-1145. |
Hiroshi Ono, “Office Action for JP Application No. 2020-526135”, dated May 21, 2021, JPO Japan. |
Santosh Mehtry, “Office Action for IN Application No. 202037019203”, dated Mar. 19, 2021, Intellectual Property India, India. |
Khalid Sayood, “Introduction to Data Compression”, Elsevier Science & Technology, 2005, Section 16.4, Figure 16.13, p. 526. |
Patterson et al., “Computer Organization and Design”, The hardware/software Interface, Revised Fourth Edition, Elsevier, 2012. |
International Telecommunication Union, “G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729”, ITU-T Recommendation G.729.1, May 2006. |
3GPP TS 26.445, “Universal Mobile Telecommunications System (UMTS); LTE; Codec for Enhanced Voice Services (EVS); Detailed algorithmic description (3GPP TS 26.445 version 13.4.1 Release 13)”, ETSI TS 126 445 V13.4.1, Apr. 2017. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016100”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016224”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7015835”, dated Jan. 13, 2022, KIPO, Republic of Korea. |
Kazunori Mochimura, “Decision to Grant a Patent for JP application No. 2020-524579”, Nov. 29, 2021, JPO, Japan. |
Dietz, Martin et al., “Overview of the EVS codec architecture.” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2015. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016424”, dated Feb. 9, 2022, KIPO, Korea. |
Nam Sook Lee, “Office Action for KR Application No. 10-2020-7016503”, dated Feb. 9, 2022, KIPO, Korea. |
ETSI TS 126 445 V12.0.0, “Universal Mobile Telecommunications System (UMTS); LTE; EVS Codec Detailed Algorithmic Description (3GPP TS 26.445 version 12.0.0 Release 12)”, Nov. 2014. |
ETSI TS 126 403 V6.0.0, “Universal Mobile Telecommunications System (UMTS); General audio codec audio processing functions; Enhanced aacPlus general audio codec; Encoder specification; Advanced Audio Coding (AAC) part (3GPP TS 26.403 version 6.0.0 Release 6)”, Sep. 2004. |
ETSI TS 126 401 V6.2.0, “Universal Mobile Telecommunications System (UMTS); General audio codec audio processing functions; Enhanced aacPlus general audio codec; General description (3GPP TS 26.401 version 6.2.0 Release 6)”, Mar. 2005. |
3GPP TS 26.405, “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects General audio codec audio processing functions; Enhanced aacPlus general audio codec; Encoder specification parametric stereo part (Release 6)”, Sep. 2004. |
3GPP TS 26.447 V12.0.0, “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Codec for Enhanced Voice Services (EVS); Error Concealment of Lost Packets (Release 12)”, Sep. 2014. |
ISO/IEC FDIS 23003-3:2011(E), “Information technology—MPEG audio technologies—Part 3: Unified speech and audio coding”, ISO/IEC JTC 1/SC 29/WG 11, Sep. 20, 2011. |
Valin et al., “Definition of the Opus Audio Codec”, Internet Engineering Task Force (IETF) RFC 6716, Sep. 2012. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015511”, dated Apr. 19, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7016100”, dated Apr. 21, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015836”, dated Apr. 28, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015512”, dated Apr. 20, 2022, KIPO, Republic of Korea. |
Nam Sook Lee, “Decision to Grant a Patent for KR Application No. 10-2020-7015835”, dated Apr. 22, 2022, KIPO, Republic of Korea. |
Xiong-Malvar, “A Nonuniform Modulated Complex Lapped Transform”, IEEE Signal Processing Letters, vol. 8, No. 9, Sep. 2001. (Year: 2001). |
Raj et al., “An Overview of MDCT for Time Domain Aliasing Cancellation”, 2014 International Conference on Communication and Network Technologies (ICCNT). (Year: 2014). |
Malvar, “Biorthogonal and Nonuniform Lapped Transforms for Transform Coding with Reduced Blocking and Ringing Artifacts”, IEEE Transactions on Signal Processing, vol. 46, No. 4, Apr. 1998. (Year: 1998). |
Malvar, “Lapped Transforms for Efficient Transform/Subband Coding”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, No. 6, Jun. 1990. (Year: 1990). |
Malvar, “Fast Algorithms for Orthogonal and Biorthogonal Modulated Lapped Transforms”, Microsoft Research, 1998. (Year: 1998). |
Princen-Bradley, “Analysis/Synthesis Filter Bank Design Based on Time Domain Aliasing Cancellation”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-34, No. 5, Oct. 1986. (Year: 1986). |
Shlien, “The Modulated Lapped Transform, Its Time-Varying Forms, and Its Applications to Audio Coding Standards”, IEEE Transactions on Speech and Audio Processing, vol. 5, No. 4, Jul. 1997. (Year: 1997). |
Number | Date | Country
---|---|---
20200273475 A1 | Aug 2020 | US
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/EP2018/080195 | Nov 2018 | US
Child | 16869000 | | US