The present invention relates to audio signal processing, in particular to speech processing, and, more particularly, to an apparatus and a method for improved concealment of the adaptive codebook in ACELP-like concealment (ACELP=Algebraic Code Excited Linear Prediction).
Audio signal processing becomes more and more important. In the field of audio signal processing, concealment techniques play an important role. When a frame gets lost or is corrupted, the lost information from the lost or corrupted frame has to be replaced. In speech signal processing, in particular, when considering ACELP- or ACELP-like-speech codecs, pitch information is very important. Pitch prediction techniques and pulse resynchronization techniques are needed.
Regarding pitch reconstruction, different pitch extrapolation techniques exist in conventional technology.
One of these techniques is a repetition based technique. Most state of the art codecs apply a simple repetition based concealment approach, which means that the last correctly received pitch period before the packet loss is repeated, until a good frame arrives and new pitch information can be decoded from the bitstream. Alternatively, a pitch stability logic is applied, according to which a pitch value is chosen that was received some time before the packet loss. Codecs following the repetition based approach are, for example, G.719 (see G.719: Low-complexity, full-band audio coding for high-quality, conversational applications, Recommendation ITU-T G.719, Telecommunication Standardization Sector of ITU, June 2008, 8.6), G.729 (see G.729: Coding of speech at 8 kbit/s using conjugate-structure algebraic-code-excited linear prediction (CS-ACELP), Recommendation ITU-T G.729, Telecommunication Standardization Sector of ITU, June 2012, 4.4), AMR (see Adaptive multi-rate (AMR) speech codec; error concealment of lost frames (release 11), 3GPP TS 26.091, 3rd Generation Partnership Project, September 2012, 6.2.3.1), AMR-WB (see Speech codec speech processing functions; adaptive multi-rate-wideband (AMR-WB) speech codec; error concealment of erroneous or lost frames, 3GPP TS 26.191, 3rd Generation Partnership Project, September 2012, 6.2.3.4.2; ITU-T, Wideband coding of speech at around 16 kbit/s using adaptive multi-rate wideband (AMR-WB), Recommendation ITU-T G.722.2, Telecommunication Standardization Sector of ITU, July 2003) and AMR-WB+ (ACELP and TCX20 (ACELP-like) concealment) (see 3GPP; Technical Specification Group Services and System Aspects; Extended adaptive multi-rate-wideband (AMR-WB+) codec, 3GPP TS 26.290, 3rd Generation Partnership Project, 2009); (AMR=Adaptive Multi-Rate; AMR-WB=Adaptive Multi-Rate-Wideband).
Another pitch reconstruction technique of conventional technology is pitch derivation from the time domain. For some codecs, the pitch may be used for concealment, but is not embedded in the bitstream. Therefore, the pitch period is calculated based on the time domain signal of the previous frame and is then kept constant during concealment. A codec following this approach is, for example, G.722, see, in particular, G.722 Appendix 3 (see G.722 Appendix III: A high-complexity algorithm for packet loss concealment for G.722, ITU-T Recommendation, ITU-T, November 2006, III.6.6 and III.6.7) and G.722 Appendix 4 (see G.722 Appendix IV: A low-complexity algorithm for packet loss concealment with G.722, ITU-T Recommendation, ITU-T, August 2007, IV.6.1.2.5).
A further pitch reconstruction technique of conventional technology is extrapolation based. Some state of the art codecs apply pitch extrapolation approaches and execute specific algorithms to change the pitch accordingly to the extrapolated pitch estimates during the packet loss. These approaches will be described in more detail as follows with reference to G.718 and G.729.1.
At first, G.718 is considered (see G.718: Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008). An estimation of the future pitch is conducted by extrapolation to support the glottal pulse resynchronization module. This information on the possible future pitch value is used to synchronize the glottal pulses of the concealed excitation.
The pitch extrapolation is conducted only if the last good frame was not UNVOICED. The pitch extrapolation of G.718 is based on the assumption that the encoder has a smooth pitch contour. Said extrapolation is conducted based on the pitch lags dfr[i] of the last seven subframes before the erasure.
In G.718, a history update of the floating pitch values is conducted after every correctly received frame. For this purpose, the pitch values are updated only if the core mode is other than UNVOICED. In the case of a lost frame, the difference Δdfr[i] between the floating pitch lags is computed according to the formula
Δdfr[i]=dfr[i]−dfr[i−1] for i=−1, . . . ,−6  (1)
In formula (1), dfr[−1] denotes the pitch lag of the last (i.e. 4th) subframe of the previous frame; dfr[−2] denotes the pitch lag of the 3rd subframe of the previous frame; etc.
According to G.718, the sum of the differences Δdfr[i] for i=−1, . . . , −6 is computed.
As the values Δdfr[i] can be positive or negative, the number of sign inversions of Δdfr[i] is summed and the position of the first inversion is indicated by a parameter being kept in memory.
The parameter fcorr is found by
where dmax=231 is the maximum considered pitch lag.
In G.718, a position imax, indicating the maximum absolute difference is found according to the definition
imax={max i=−1, . . . ,−6 (abs(Δdfr[i]))}
and a ratio rmax for this maximum difference is computed as follows:
If this ratio rmax is greater than or equal to 5, the algorithm is not sure enough to extrapolate the pitch: the pitch of the 4th subframe of the last correctly received frame is then used for all subframes to be concealed, and the glottal pulse resynchronization will not be done.
If rmax is less than 5, then additional processing is conducted to achieve the best possible extrapolation. Three different methods are used to extrapolate the future pitch. To choose between the possible pitch extrapolation algorithms, a deviation parameter fcorr2 is computed, which depends on the factor fcorr and on the position of the maximum pitch variation imax. However, at first, the mean floating pitch difference is modified to remove too large pitch differences from the mean:
If fcorr<0.98 and if imax=3, then the mean fractional pitch difference is computed so as to remove the pitch differences related to the transition between two frames.
If fcorr≥0.98 or if imax≠3, the mean fractional pitch difference is computed, and the maximum floating pitch difference Δdfr[imax] is replaced with this new mean value.
With this new mean of the floating pitch differences, the normalized deviation fcorr2 is computed as:
wherein Isf is equal to 4 in the first case and is equal to 6 in the second case.
Depending on this new parameter, a choice is made between the three methods of extrapolating the future pitch:
dext=round(dfr[−1]+4·Δd̄fr)

wherein Δd̄fr denotes the mean of the floating pitch differences as computed above; the three methods differ in how this mean is obtained.
After this processing, the pitch lag is limited between 34 and 231 (values denote the minimum and the maximum allowed pitch lags).
Now, to illustrate another example of extrapolation based pitch reconstruction techniques, G.729.1 is considered (see G.729.1: G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729, Recommendation ITU-T G.729.1, Telecommunication Standardization Sector of ITU, May 2006).
G.729.1 features a pitch extrapolation approach (see European Patent No. 2 002 427 B1, Yang Gao, “Pitch prediction for packet loss concealment”), in case no forward error concealment information (e.g., phase information) is decodable. This happens, for example, if two consecutive frames get lost (one superframe consists of four frames which can be either ACELP or TCX20). TCX40 or TCX80 frames are also possible, as are almost all combinations thereof.
When one or more frames are lost in a voiced region, previous pitch information is used to reconstruct the current lost frame. The precision of the current estimated pitch may directly influence the phase alignment to the original signal, and it is critical for the reconstruction quality of the current lost frame and the received frame after the lost frame. Using several past pitch lags instead of just copying the previous pitch lag would result in statistically better pitch estimation. In the G.729.1 coder, pitch extrapolation for FEC (FEC=forward error correction) consists of linear extrapolation based on the past five pitch values. The past five pitch values are P(i), for i=0, 1, 2, 3, 4, wherein P(4) is the latest pitch value. The extrapolation model is defined according to:
P′(i)=a+i·b (9)
The extrapolated pitch value for the first subframe in a lost frame is then defined as:
P′(5)=a+5·b (10)
In order to determine the coefficients a and b, an error E is minimized, wherein the error E is defined according to:

E=Σ(P′(i)−P(i))², i=0, . . . , 4  (11)

By setting the partial derivatives of E with respect to a and b to zero, a and b result to:

b=(Σ i·P(i)−2·Σ P(i))/10, a=(Σ P(i))/5−2·b, with the sums taken over i=0, . . . , 4  (12)
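For illustration, the linear extrapolation of formulas (9) and (10) may, e.g., be sketched as follows; the function name is chosen for illustration only, and the closed-form coefficients follow from the least-squares derivation above:

```python
# Linear pitch extrapolation as used for FEC in G.729.1-style concealment:
# fit P'(i) = a + i*b to the past five pitch values P(0)..P(4) by least
# squares, then extrapolate P'(5) for the first subframe of the lost frame.

def extrapolate_pitch(P):
    """P: list of the past five pitch values, P[4] being the latest."""
    n = len(P)                             # n = 5 in G.729.1
    sum_p = sum(P)
    sum_ip = sum(i * p for i, p in enumerate(P))
    mean_i = (n - 1) / 2.0                 # = 2 for i = 0..4
    sum_i2 = sum(i * i for i in range(n))  # = 30 for n = 5
    b = (sum_ip - mean_i * sum_p) / (sum_i2 - n * mean_i * mean_i)
    a = sum_p / n - mean_i * b
    return a + n * b                       # P'(5) = a + 5*b
```

For a perfectly linear pitch track, e.g. P=[50, 52, 54, 56, 58], the fit recovers a=50, b=2 and extrapolates P′(5)=60.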
In the following, a frame erasure concealment concept of conventional technology for the AMR-WB codec as presented in Xinwen Mu, Hexin Chen, and Yan Zhao, A frame erasure concealment method based on pitch and gain linear prediction for AMR-WB codec, 2011 IEEE International Conference on Consumer Electronics (ICCE), January 2011, pp. 815-816, is described. This frame erasure concealment concept is based on pitch and gain linear prediction. Said paper proposes a linear pitch inter/extrapolation approach in case of a frame loss, based on a Minimum Mean Square Error Criterion.
According to this frame erasure concealment concept, at the decoder, when the type of the last valid frame before the erased frame (the past frame) is the same as that of the earliest one after the erased frame (the future frame), the pitch P(i) is defined, where i=−N, −N+1, . . . , N+4, N+5, and where N is the number of past and future subframes of the erased frame. P(1), P(2), P(3), P(4) are the four pitches of the four subframes in the erased frame, P(0), P(−1), . . . , P(−N) are the pitches of the past subframes, and P(5), P(6), . . . , P(N+5) are the pitches of the future subframes. A linear prediction model P′(i)=a+b·i is employed. For i=1, 2, 3, 4, P′(1), P′(2), P′(3), P′(4) are the predicted pitches for the erased frame. The MMS Criterion (MMS=Minimum Mean Square) is taken into account to derive the values of the two predicted coefficients a and b according to an interpolation approach. According to this approach, the error E is defined as:
Then, the coefficients a and b can be obtained by setting the partial derivatives of E with respect to a and b to zero and solving the resulting system of equations.
The pitch lags for the four subframes of the erased frame can be calculated according to:
P′(1)=a+b·1; P′(2)=a+b·2
P′(3)=a+b·3; P′(4)=a+b·4 (14e)
It is found that N=4 provides the best result. N=4 means that five past subframes and five future subframes are used for the interpolation.
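The inter-/extrapolation described above may, e.g., be sketched as follows; the function name, the argument layout, and the tolerance of this sketch are illustrative assumptions, while the least-squares solution itself follows the linear model P′(i)=a+b·i over the past and future subframe indices:

```python
# MMSE linear inter-/extrapolation of the four erased-frame pitches from
# N+1 past and N+1 future subframe pitches (indices -N..0 and 5..N+5),
# following the linear model P'(i) = a + b*i.

def interpolate_erased_pitches(past, future):
    """past:   pitches P(-N)..P(0), oldest first.
       future: pitches P(5)..P(N+5)."""
    N = len(past) - 1
    idx = list(range(-N, 1)) + list(range(5, N + 6))
    val = past + future
    n = len(idx)
    si = sum(idx)
    si2 = sum(i * i for i in idx)
    sp = sum(val)
    sip = sum(i * p for i, p in zip(idx, val))
    # least-squares solution of the normal equations for E = sum (a+b*i-P(i))^2
    b = (n * sip - si * sp) / (n * si2 - si * si)
    a = (sp - b * si) / n
    return [a + b * i for i in (1, 2, 3, 4)]   # P'(1)..P'(4), formula (14e)
```

For pitches lying exactly on a line, e.g. P(i)=100+3·i with N=4, the erased-frame pitches P′(1), . . . , P′(4) are recovered as 103, 106, 109, 112.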
However, when the type of the past frames is different from the type of the future frames, for example, when the past frame is voiced but the future frame is unvoiced, just the voiced pitches of the past or the future frames are used to predict the pitches of the erased frame using the above extrapolation approach.
Now, pulse resynchronization in conventional technology is considered, in particular with reference to G.718 and G.729.1. An approach for pulse resynchronization is described in U.S. Pat. No. 8,255,207 B2, Tommy Vaillancourt, Milan Jelinek, Philippe Gournay, and Redwan Salami, “Method and device for efficient frame erasure concealment in speech codecs,” 2012.
At first, constructing the periodic part of the excitation is described.
For a concealment of erased frames following a correctly received frame other than UNVOICED, the periodic part of the excitation is constructed by repeating the low pass filtered last pitch period of the previous frame.
The construction of the periodic part is done using a simple copy of a low pass filtered segment of the excitation signal from the end of the previous frame.
The pitch period length is rounded to the closest integer:

Tc=round(last_pitch)  (15a)
If the last pitch period length is Tp, then the length of the segment that is copied, Tr, may, e.g., be defined according to:

Tr=⌊Tp+0.5⌋  (15b)
The periodic part is constructed for one frame and one additional subframe.
For example, with M subframes in a frame, the subframe length is L/M, wherein L is the frame length, also denoted as Lframe: L=Lframe.
T[0] is the location of the first maximum pulse in the constructed periodic part of the excitation. The positions of the other pulses are given by:
T[i]=T[0]+iTc (16a)
corresponding to
T[i]=T[0]+iTr (16b)
After the construction of the periodic part of the excitation, the glottal pulse resynchronization is performed to correct the difference between the estimated target position of the last pulse in the lost frame (P) and its actual position in the constructed periodic part of the excitation (T[k]).
The pitch lag evolution is extrapolated based on the pitch lags of the last seven subframes before the lost frame. The evolving pitch lags in each subframe are:
p[i]=round(Tc+(i+1)δ),0≤i<M (17a)
where δ=(Text−Tc)/M and Text (also denoted as dext) is the extrapolated pitch as described above for dext.
The difference, denoted as d, between the sum of the total number of samples within pitch cycles with the constant pitch Tc and the sum of the total number of samples within pitch cycles with the evolving pitch p[i] is found within a frame length. The documentation does not describe how d is found.
In the source code of G.718 (see G.718: Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008), d is found using the following algorithm (where M is the number of subframes in a frame):
The number of pulses in the constructed periodic part within a frame length, plus the first pulse in the future frame, is N. The documentation does not describe how N is found.
In the source code of G.718 (see G.718: Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008), N is found according to:
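While the exact G.718 source code is not reproduced here, d and N may, e.g., be obtained from their definitions above in the following way; the helper names, the loop bounds, and the cycle-to-subframe mapping of this sketch are assumptions, not the literal G.718 implementation:

```python
# Hedged sketch (not the literal G.718 source code): one plausible way to
# obtain d and N from the definitions given above.

def find_d(L, M, Tc, p):
    # Walk full pitch cycles of the constant pitch Tc through one frame and
    # accumulate, for the same cycles, the evolving pitch lags p[i]; d is
    # the difference between the two sample counts.
    sum_const = 0
    sum_evol = 0
    pos = 0
    cycle = 0
    while pos + Tc < L:
        sum_const += Tc
        sum_evol += p[min(cycle, M - 1)]  # assumed cycle-to-subframe mapping
        pos += Tc
        cycle += 1
    return sum_const - sum_evol

def find_N(L, T0, Tc):
    # Pulses at T[i] = T[0] + i*Tc that fall inside the frame, plus the
    # first pulse of the future frame.
    return (L - T0) // Tc + 2
```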
The position of the last pulse T[n] in the constructed periodic part of the excitation that belongs to the lost frame is determined by:
The estimated last pulse position P is:
P=T[n]+d (19a)
The actual position of the last pulse position T[k] is the position of the pulse in the constructed periodic part of the excitation (including in the search the first pulse after the current frame) closest to the estimated target position P:
|T[k]−P|≤|T[i]−P| for all i, 0≤i<N  (19b)
The glottal pulse resynchronization is conducted by adding or removing samples in the minimum energy regions of the full pitch cycles. The number of samples to be added or removed is determined by the difference:
diff=P−T[k] (19c)
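The bookkeeping of formulas (19a)-(19c) may, e.g., be sketched as follows; the function and argument names are illustrative assumptions, while the steps mirror the description above (last in-frame pulse, estimated target position, closest actual pulse, sample difference):

```python
# Sketch of the glottal pulse resynchronization bookkeeping, assuming pulse
# positions T[i] = T[0] + i*Tc have already been constructed.

def resync_offset(T, d, L, N):
    """T: pulse positions, d: sample-count difference, L: frame length,
       N: number of pulses considered (incl. the first future pulse)."""
    # last pulse of the constructed excitation inside the lost frame
    n = max(i for i in range(N) if T[i] < L)
    P = T[n] + d                            # estimated last pulse position (19a)
    # actual pulse closest to the estimated target position (19b)
    k = min(range(N), key=lambda i: abs(T[i] - P))
    diff = P - T[k]                         # samples to add or remove (19c)
    return k, diff
```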
The minimum energy regions are determined using a sliding 5-sample window. The minimum energy position is set at the middle of the window at which the energy is at a minimum. The search is performed between two pitch pulses, from T[i]+Tc/8 to T[i+1]−Tc/4. There are Nmin=n−1 minimum energy regions.
If Nmin=1, then there is only one minimum energy region and diff samples are inserted or deleted at that position.
For Nmin>1, fewer samples are added or removed at the beginning and more towards the end of the frame. The number of samples to be removed or added between pulses T[i] and T[i+1] is found using the following recursive relation:
If R[i]<R[i−1], then the values of R[i] and R[i−1] are interchanged.
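The sliding-window minimum energy search described above may, e.g., be sketched as follows; the function and variable names are illustrative assumptions, not the G.718 source, and the recursive distribution of samples over the regions is omitted since its exact relation is not reproduced here:

```python
# Hedged sketch of the minimum-energy search: slide a 5-sample energy window
# between two pitch pulses and return the center of the window with minimum
# energy, where samples may then be added or removed.

def min_energy_position(exc, start, end, win=5):
    """Search exc[start:end] (e.g. start = T[i] + Tc//8,
       end = T[i+1] - Tc//4) for the minimum-energy window center."""
    best_pos = None
    best_energy = None
    for pos in range(start, end - win + 1):
        energy = sum(x * x for x in exc[pos:pos + win])
        if best_energy is None or energy < best_energy:
            best_energy = energy
            best_pos = pos + win // 2   # middle of the window
    return best_pos
```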
According to an embodiment, an apparatus for determining an estimated pitch lag may have: an input interface for receiving a plurality of original pitch lag values, and a pitch lag estimator for estimating the estimated pitch lag, wherein the pitch lag estimator is configured to estimate the estimated pitch lag depending on a plurality of original pitch lag values and depending on a plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, an information value of the plurality of information values is assigned to said original pitch lag value.
According to another embodiment, a method for determining an estimated pitch lag may have the steps of: receiving a plurality of original pitch lag values, and estimating the estimated pitch lag, wherein estimating the estimated pitch lag is conducted depending on a plurality of original pitch lag values and depending on a plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, an information value of the plurality of information values is assigned to said original pitch lag value.
Another embodiment may have a computer program for implementing a method for determining an estimated pitch lag when being executed on a computer or signal processor.
According to an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag depending on the plurality of original pitch lag values and depending on a plurality of pitch gain values as the plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, a pitch gain value of the plurality of pitch gain values is assigned to said original pitch lag value.
In a particular embodiment, each of the plurality of pitch gain values may, e.g., be an adaptive codebook gain.
In an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag by minimizing an error function.
According to an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein k is an integer with k≥2, and wherein P(i) is the i-th original pitch lag value, wherein gp(i) is the i-th pitch gain value being assigned to the i-th pitch lag value P(i).
In an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein P(i) is the i-th original pitch lag value, wherein gp(i) is the i-th pitch gain value being assigned to the i-th pitch lag value P(i).
According to an embodiment, the pitch lag estimator may, e.g., be configured to determine the estimated pitch lag p according to p=a·i+b.
In an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag depending on the plurality of original pitch lag values and depending on a plurality of time values as the plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, a time value of the plurality of time values is assigned to said original pitch lag value.
According to an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag by minimizing an error function.
In an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein k is an integer with k≥2, and wherein P(i) is the i-th original pitch lag value, wherein timepassed(i) is the i-th time value being assigned to the i-th pitch lag value P(i).
According to an embodiment, the pitch lag estimator may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein P(i) is the i-th original pitch lag value, wherein timepassed(i) is the i-th time value being assigned to the i-th pitch lag value P(i).
In an embodiment, the pitch lag estimator is configured to determine the estimated pitch lag ρ according to ρ=a·i+b.
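As an illustration of the weighted estimation described in the embodiments above, the following sketch fits P(i)≈a·i+b by weighted least squares, where the weights are the information values, e.g., the pitch gains gp(i), or values derived from timepassed(i) such as their reciprocals so that older lags count less; the function name, the default extrapolation index, and these weighting choices are assumptions for illustration, not the definitive error functions of the embodiments:

```python
# Weighted least-squares fit of P(i) ~ a*i + b, minimizing
# sum_i w[i] * (a*i + b - P(i))^2, then evaluating the line at a new index.

def estimate_pitch_lag(P, w, i_next=None):
    """P: original pitch lag values P(0)..P(k-1),
       w: information values used as weights (e.g. pitch gains gp(i), or
          1/timepassed(i) so that older lags count less).
       Returns the estimated pitch lag a*i_next + b."""
    k = len(P)
    if i_next is None:
        i_next = k                       # extrapolate to the next subframe
    sw = sum(w)
    si = sum(wi * i for i, wi in enumerate(w))
    si2 = sum(wi * i * i for i, wi in enumerate(w))
    sp = sum(wi * p for wi, p in zip(w, P))
    sip = sum(wi * i * p for wi, (i, p) in zip(w, enumerate(P)))
    # solve the normal equations of the weighted error
    denom = sw * si2 - si * si
    a = (sw * sip - si * sp) / denom
    b = (si2 * sp - si * sip) / denom
    return a * i_next + b
```

With uniform weights this reduces to the unweighted fit; with gain-based weights, pitch lags observed with low adaptive codebook gain contribute less to the estimate.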
Moreover, a method for determining an estimated pitch lag is provided. The method comprises:
Receiving a plurality of original pitch lag values, and
Estimating the estimated pitch lag.
Estimating the estimated pitch lag is conducted depending on a plurality of original pitch lag values and depending on a plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, an information value of the plurality of information values is assigned to said original pitch lag value.
Furthermore, a computer program for implementing the above-described method when being executed on a computer or signal processor is provided.
Moreover, an apparatus for reconstructing a frame comprising a speech signal as a reconstructed frame is provided, said reconstructed frame being associated with one or more available frames, said one or more available frames being at least one of one or more preceding frames of the reconstructed frame and one or more succeeding frames of the reconstructed frame, wherein the one or more available frames comprise one or more pitch cycles as one or more available pitch cycles. The apparatus comprises a determination unit for determining a sample number difference indicating a difference between a number of samples of one of the one or more available pitch cycles and a number of samples of a first pitch cycle to be reconstructed. Moreover, the apparatus comprises a frame reconstructor for reconstructing the reconstructed frame by reconstructing, depending on the sample number difference and depending on the samples of said one of the one or more available pitch cycles, the first pitch cycle to be reconstructed as a first reconstructed pitch cycle. The frame reconstructor is configured to reconstruct the reconstructed frame, such that the reconstructed frame completely or partially comprises the first reconstructed pitch cycle, such that the reconstructed frame completely or partially comprises a second reconstructed pitch cycle, and such that the number of samples of the first reconstructed pitch cycle differs from a number of samples of the second reconstructed pitch cycle.
According to an embodiment, the determination unit may, e.g., be configured to determine a sample number difference for each of a plurality of pitch cycles to be reconstructed, such that the sample number difference of each of the pitch cycles indicates a difference between the number of samples of said one of the one or more available pitch cycles and a number of samples of said pitch cycle to be reconstructed. The frame reconstructor may, e.g., be configured to reconstruct each pitch cycle of the plurality of pitch cycles to be reconstructed depending on the sample number difference of said pitch cycle to be reconstructed and depending on the samples of said one of the one or more available pitch cycles, to reconstruct the reconstructed frame.
In an embodiment, the frame reconstructor may, e.g., be configured to generate an intermediate frame depending on said one of the one or more available pitch cycles. The frame reconstructor may, e.g., be configured to modify the intermediate frame to obtain the reconstructed frame.
According to an embodiment, the determination unit may, e.g., be configured to determine a frame difference value (d; s) indicating how many samples are to be removed from the intermediate frame or how many samples are to be added to the intermediate frame. Moreover, the frame reconstructor may, e.g., be configured to remove first samples from the intermediate frame to obtain the reconstructed frame, when the frame difference value indicates that the first samples shall be removed from the frame. Furthermore, the frame reconstructor may, e.g., be configured to add second samples to the intermediate frame to obtain the reconstructed frame, when the frame difference value (d; s) indicates that the second samples shall be added to the frame.
In an embodiment, the frame reconstructor may, e.g., be configured to remove the first samples from the intermediate frame when the frame difference value indicates that the first samples shall be removed from the frame, so that the number of first samples that are removed from the intermediate frame is indicated by the frame difference value. Moreover, the frame reconstructor may, e.g., be configured to add the second samples to the intermediate frame when the frame difference value indicates that the second samples shall be added to the frame, so that the number of second samples that are added to the intermediate frame is indicated by the frame difference value.
According to an embodiment, the determination unit may, e.g., be configured to determine the frame difference number s so that the formula:
holds true, wherein L indicates a number of samples of the reconstructed frame, wherein M indicates a number of subframes of the reconstructed frame, wherein Tr indicates a rounded pitch period length of said one of the one or more available pitch cycles, and wherein p[i] indicates a pitch period length of a reconstructed pitch cycle of the i-th subframe of the reconstructed frame.
In an embodiment, the frame reconstructor may, e.g., be adapted to generate an intermediate frame depending on said one of the one or more available pitch cycles. Moreover, the frame reconstructor may, e.g., be adapted to generate the intermediate frame so that the intermediate frame comprises a first partial intermediate pitch cycle, one or more further intermediate pitch cycles, and a second partial intermediate pitch cycle. Furthermore, the first partial intermediate pitch cycle may, e.g., depend on one or more of the samples of said one of the one or more available pitch cycles, wherein each of the one or more further intermediate pitch cycles depends on all of the samples of said one of the one or more available pitch cycles, and wherein the second partial intermediate pitch cycle depends on one or more of the samples of said one of the one or more available pitch cycles. Moreover, the determination unit may, e.g., be configured to determine a start portion difference number indicating how many samples are to be removed from or added to the first partial intermediate pitch cycle, and wherein the frame reconstructor is configured to remove one or more first samples from the first partial intermediate pitch cycle, or is configured to add one or more first samples to the first partial intermediate pitch cycle, depending on the start portion difference number. Furthermore, the determination unit may, e.g., be configured to determine for each of the further intermediate pitch cycles a pitch cycle difference number indicating how many samples are to be removed from or added to said one of the further intermediate pitch cycles. Moreover, the frame reconstructor may, e.g., be configured to remove one or more second samples from said one of the further intermediate pitch cycles, or is configured to add one or more second samples to said one of the further intermediate pitch cycles, depending on said pitch cycle difference number.
Furthermore, the determination unit may, e.g., be configured to determine an end portion difference number indicating how many samples are to be removed from or added to the second partial intermediate pitch cycle, and wherein the frame reconstructor is configured to remove one or more third samples from the second partial intermediate pitch cycle, or is configured to add one or more third samples to the second partial intermediate pitch cycle, depending on the end portion difference number.
According to an embodiment, the frame reconstructor may, e.g., be configured to generate an intermediate frame depending on said one of the one or more available pitch cycles. Moreover, the determination unit may, e.g., be adapted to determine one or more low energy signal portions of the speech signal comprised by the intermediate frame, wherein each of the one or more low energy signal portions is a first signal portion of the speech signal within the intermediate frame, where the energy of the speech signal is lower than in a second signal portion of the speech signal comprised by the intermediate frame. Furthermore, the frame reconstructor may, e.g., be configured to remove one or more samples from at least one of the one or more low energy signal portions of the speech signal, or to add one or more samples to at least one of the one or more low energy signal portions of the speech signal, to obtain the reconstructed frame.
In a particular embodiment, the frame reconstructor may, e.g., be configured to generate the intermediate frame, such that the intermediate frame comprises one or more reconstructed pitch cycles, such that each of the one or more reconstructed pitch cycles depends on said one of the one or more available pitch cycles. Moreover, the determination unit may, e.g., be configured to determine a number of samples that shall be removed from each of the one or more reconstructed pitch cycles. Furthermore, the determination unit may, e.g., be configured to determine each of the one or more low energy signal portions such that for each of the one or more low energy signal portions a number of samples of said low energy signal portion depends on the number of samples that shall be removed from one of the one or more reconstructed pitch cycles, wherein said low energy signal portion is located within said one of the one or more reconstructed pitch cycles.
In an embodiment, the determination unit may, e.g., be configured to determine a position of one or more pulses of the speech signal of the frame to be reconstructed as reconstructed frame. Moreover, the frame reconstructor may, e.g., be configured to reconstruct the reconstructed frame depending on the position of the one or more pulses of the speech signal.
According to an embodiment, the determination unit may, e.g., be configured to determine a position of two or more pulses of the speech signal of the frame to be reconstructed as reconstructed frame, wherein T[0] is the position of one of the two or more pulses of the speech signal of the frame to be reconstructed as reconstructed frame, and wherein the determination unit is configured to determine the position (T[i]) of further pulses of the two or more pulses of the speech signal according to the formula:
T[i]=T[0]+iTr
wherein Tr indicates a rounded length of said one of the one or more available pitch cycles, and wherein i is an integer.
According to an embodiment, the determination unit may, e.g., be configured to determine an index k of the last pulse of the speech signal of the frame to be reconstructed as the reconstructed frame such that
wherein L indicates a number of samples of the reconstructed frame, wherein s indicates the frame difference value, wherein T[0] indicates a position of a pulse of the speech signal of the frame to be reconstructed as the reconstructed frame, being different from the last pulse of the speech signal, and wherein Tr indicates a rounded length of said one of the one or more available pitch cycles.
In an embodiment, the determination unit may, e.g., be configured to reconstruct the frame to be reconstructed as the reconstructed frame by determining a parameter δ, wherein δ is defined according to the formula:
wherein the frame to be reconstructed as the reconstructed frame comprises M subframes, wherein Tp indicates the length of said one of the one or more available pitch cycles, and wherein Text indicates a length of one of the pitch cycles to be reconstructed of the frame to be reconstructed as the reconstructed frame.
According to an embodiment, the determination unit may, e.g., be configured to reconstruct the reconstructed frame by determining a rounded length Tr of said one of the one or more available pitch cycles based on formula:
Tr = ⌊Tp + 0.5⌋
wherein Tp indicates the length of said one of the one or more available pitch cycles.
In an embodiment, the determination unit may, e.g., be configured to reconstruct the reconstructed frame by applying the formula:
wherein Tp indicates the length of said one of the one or more available pitch cycles, wherein Tr indicates a rounded length of said one of the one or more available pitch cycles, wherein the frame to be reconstructed as the reconstructed frame comprises M subframes, wherein the frame to be reconstructed as the reconstructed frame comprises L samples, and wherein δ is a real number indicating a difference between a number of samples of said one of the one or more available pitch cycles and a number of samples of one of one or more pitch cycles to be reconstructed.
Moreover, a method for reconstructing a frame comprising a speech signal as a reconstructed frame is provided, said reconstructed frame being associated with one or more available frames, said one or more available frames being at least one of one or more preceding frames of the reconstructed frame and one or more succeeding frames of the reconstructed frame, wherein the one or more available frames comprise one or more pitch cycles as one or more available pitch cycles. The method comprises:
Reconstructing the reconstructed frame is conducted, such that the reconstructed frame completely or partially comprises the first reconstructed pitch cycle, such that the reconstructed frame completely or partially comprises a second reconstructed pitch cycle, and such that the number of samples of the first reconstructed pitch cycle differs from a number of samples of the second reconstructed pitch cycle.
Furthermore, a computer program for implementing the above-described method when being executed on a computer or signal processor is provided.
Moreover, a system for reconstructing a frame comprising a speech signal is provided. The system comprises an apparatus for determining an estimated pitch lag according to one of the above-described or below-described embodiments, and an apparatus for reconstructing the frame, wherein the apparatus for reconstructing the frame is configured to reconstruct the frame depending on the estimated pitch lag. The estimated pitch lag is a pitch lag of the speech signal.
In an embodiment, the reconstructed frame may, e.g., be associated with one or more available frames, said one or more available frames being at least one of one or more preceding frames of the reconstructed frame and one or more succeeding frames of the reconstructed frame, wherein the one or more available frames comprise one or more pitch cycles as one or more available pitch cycles. The apparatus for reconstructing the frame may, e.g., be an apparatus for reconstructing a frame according to one of the above-described or below-described embodiments.
The present invention is based on the finding that conventional technology has significant drawbacks. Both G.718 (see G.718: Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008) and G.729.1 (see G.722 Appendix III: A high-complexity algorithm for packet loss concealment for G.722, ITU-T Recommendation, ITU-T, November 2006) use pitch extrapolation in case of a frame loss. This is useful because in case of a frame loss, also the pitch lags are lost. According to G.718 and G.729.1, the pitch is extrapolated by taking the pitch evolution during the last two frames into account. However, the pitch lag being reconstructed by G.718 and G.729.1 is not very accurate and, e.g., often results in a reconstructed pitch lag that differs significantly from the real pitch lag.
Embodiments of the present invention provide a more accurate pitch lag reconstruction. For this purpose, in contrast to G.718 and G.729.1, some embodiments take information on the reliability of the pitch information into account.
According to conventional technology, the pitch information on which the extrapolation is based comprises the last eight correctly received pitch lags, for which the coding mode was different from UNVOICED. However, in conventional technology, the voicing characteristic might be quite weak, indicated by a low pitch gain (which corresponds to a low prediction gain). In conventional technology, in case the extrapolation is based on pitch lags which have different pitch gains, the extrapolation may not be able to output reasonable results, or may even fail completely and fall back to a simple pitch lag repetition approach.
Embodiments are based on the finding that the reason for these shortcomings of conventional technology is that, on the encoder side, the pitch lag is chosen so as to maximize the pitch gain in order to maximize the coding gain of the adaptive codebook, but that, in case the speech characteristic is weak, the pitch lag might not indicate the fundamental frequency precisely, since the noise in the speech signal causes the pitch lag estimation to become imprecise.
Therefore, during concealment, according to embodiments, the application of the pitch lag extrapolation is weighted depending on the reliability of the previously received lags used for this extrapolation.
According to some embodiments, the past adaptive codebook gains (pitch gains) may be employed as a reliability measure.
According to some further embodiments of the present invention, weighting according to how far in the past the pitch lags were received is used as a reliability measure. For example, higher weights are put on more recent lags and lower weights on lags received longer ago.
According to embodiments, weighted pitch prediction concepts are provided. In contrast to conventional technology, the provided pitch prediction of embodiments of the present invention uses a reliability measure for each of the pitch lags it is based on, making the prediction result much more valid and stable. Particularly, the pitch gain can be used as an indicator for the reliability. Alternatively or additionally, according to some embodiments, the time that has been passed after the correct reception of the pitch lag may, for example, be used as an indicator.
Regarding pulse resynchronization, the present invention is based on the finding that one of the shortcomings of conventional technology regarding the glottal pulse resynchronization is that the pitch extrapolation does not take into account how many pulses (pitch cycles) should be constructed in the concealed frame.
According to conventional technology, the pitch extrapolation is conducted such that changes in the pitch are only expected at the borders of the subframes.
According to embodiments, when conducting glottal pulse resynchronization, pitch changes which are different from continuous pitch changes can be taken into account.
Embodiments of the present invention are based on the finding that G.718 and G.729.1 have the following drawbacks.
At first, in conventional technology, when calculating d, it is assumed that there is an integer number of pitch cycles within the frame. Since d defines the location of the last pulse in the concealed frame, the position of the last pulse will not be correct, when there is a non-integer number of the pitch cycles within the frame. This is depicted in
Moreover, the calculation of conventional technology uses the number of pulses N in the constructed periodic part of the excitation. This adds unneeded computational complexity.
Furthermore, in conventional technology, the calculation of the number of pulses N in the constructed periodic part of the excitation does not take the location of the first pulse into account.
The signals presented in
In contrast,
These examples illustrated by
Moreover, according to conventional technology, it is checked whether T[N−1], the location of the N-th pulse in the constructed periodic part of the excitation, is within the frame length, even though N is defined to include the first pulse in the following frame.
Furthermore, according to conventional technology, no samples are added or removed before the first and after the last pulse. Embodiments of the present invention are based on the finding that this leads to two drawbacks: there could be a sudden change in the length of the first full pitch cycle, and the length of the pitch cycle after the last pulse could be greater than the length of the last full pitch cycle before the last pulse, even when the pitch lag is decreasing (see
Embodiments are based on the finding that the pulses T[k] = P − diff and T[n] = P − d are not equal when:
In this case diff = Tc − d, and the number of removed samples will be diff instead of d. This leads to wrong positions of the pulses in the concealed frame.
Moreover, embodiments are based on the finding that in conventional technology, the maximum value of d is limited to the minimum allowed value for the coded pitch lag. This is a constraint that limits the occurrences of other problems, but it also limits the possible change in the pitch and thus limits the pulse resynchronization.
Furthermore, embodiments are based on the finding that in conventional technology, the periodic part is constructed using integer pitch lag, and that this creates a frequency shift of the harmonics and significant degradation in concealment of tonal signals with a constant pitch. This degradation can be seen in
Embodiments are moreover based on the finding that most of the problems of conventional technology occur in situations as illustrated by the examples depicted in
According to embodiments, improved pulse resynchronization concepts are provided. Embodiments provide improved concealment of monophonic signals, including speech, which is advantageous compared to the existing techniques described in the standards G.718 (see Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008) and G.729.1 (see G.722 Appendix III: A high-complexity algorithm for packet loss concealment for G.722, ITU-T Recommendation, ITU-T, November 2006). The provided embodiments are suitable for signals with a constant pitch, as well as for signals with a changing pitch.
Inter alia, according to embodiments, three techniques are provided.
According to a first technique provided by an embodiment, a search concept for the pulses is provided that, in contrast to G.718 and G.729.1, takes into account the location of the first pulse in the calculation of the number of pulses in the constructed periodic part, denoted as N.
According to a second technique provided by another embodiment, an algorithm for searching for pulses is provided that, in contrast to G.718 and G.729.1, does not need the number of pulses in the constructed periodic part, denoted as N, that takes the location of the first pulse into account, and that directly calculates the last pulse index in the concealed frame, denoted as k.
According to a third technique provided by a further embodiment, a pulse search is not needed. According to this third technique, the construction of the periodic part is combined with the removal or addition of the samples, thus achieving lower complexity than the previous techniques.
Additionally or alternatively, some embodiments provide the following changes for the above techniques as well as for the techniques of G.718 and G.729.1:
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
According to an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag depending on the plurality of original pitch lag values and depending on a plurality of pitch gain values as the plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, a pitch gain value of the plurality of pitch gain values is assigned to said original pitch lag value.
In a particular embodiment, each of the plurality of pitch gain values may, e.g., be an adaptive codebook gain.
In an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag by minimizing an error function.
According to an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein k is an integer with k≥2, and wherein P(i) is the i-th original pitch lag value, wherein gp(i) is the i-th pitch gain value being assigned to the i-th pitch lag value P(i).
In an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein P(i) is the i-th original pitch lag value, wherein gp(i) is the i-th pitch gain value being assigned to the i-th pitch lag value P(i).
According to an embodiment, the pitch lag estimator 120 may, e.g., be configured to determine the estimated pitch lag p according to p=a·i+b.
In an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag depending on the plurality of original pitch lag values and depending on a plurality of time values as the plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, a time value of the plurality of time values is assigned to said original pitch lag value.
According to an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag by minimizing an error function.
In an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein k is an integer with k≥2, and wherein P(i) is the i-th original pitch lag value, wherein timepassed(i) is the i-th time value being assigned to the i-th pitch lag value P(i).
According to an embodiment, the pitch lag estimator 120 may, e.g., be configured to estimate the estimated pitch lag by determining two parameters a, b, by minimizing the error function
wherein a is a real number, wherein b is a real number, wherein P(i) is the i-th original pitch lag value, wherein timepassed(i) is the i-th time value being assigned to the i-th pitch lag value P(i).
In an embodiment, the pitch lag estimator 120 is configured to determine the estimated pitch lag p according to p=a·i+b.
In the following, embodiments providing weighted pitch prediction are described with respect to formulae (20)-(24b).
At first, weighted pitch prediction embodiments employing weighting according to the pitch gain are described with reference to formulae (20)-(22c). According to some of these embodiments, to overcome the drawback of conventional technology, the pitch lags are weighted with the pitch gain to perform the pitch prediction.
In some embodiments, the pitch gain may be the adaptive-codebook gain gp as defined in the standard G.729 (see G.719: Low-complexity, full-band audio coding for high-quality, conversational applications, Recommendation ITU-T G.719, Telecommunication Standardization Sector of ITU, June 2008, in particular chapter 3.7.3, more particularly formula (43)). In G.729, the adaptive-codebook gain is determined according to:
gp = Σ x(n)·y(n) / Σ y(n)·y(n), with the sums running over n = 0, …, 39
There, x(n) is the target signal and y(n) is obtained by convolving v(n) with h(n) according to:
y(n) = Σ_{i=0..n} v(i)·h(n−i), for n = 0, …, 39
wherein v(n) is the adaptive-codebook vector, wherein y(n) the filtered adaptive-codebook vector, and wherein h(n−i) is an impulse response of a weighted synthesis filter, as defined in G.729 (see G.719: Low-complexity, full-band audio coding for high-quality, conversational applications, Recommendation ITU-T G.719, Telecommunication Standardization Sector of ITU, June 2008).
Similarly, in some embodiments, the pitch gain may be the adaptive-codebook gain gp as defined in the standard G.718 (see G.718: Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008, in particular chapter 6.8.4.1.4.1, more particularly formula (170)). In G.718, the adaptive-codebook gain is determined according to:
gp = Σ x(n)·yk(n) / Σ yk(n)·yk(n)
wherein x(n) is the target signal and yk(n) is the past filtered excitation at delay k.
For example, see G.718: Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008, chapter 6.8.4.1.4.1, formula (171), for a definition, how yk(n) could be defined.
Similarly, in some embodiments, the pitch gain may be the adaptive-codebook gain gp as defined in the AMR standard (see Speech codec speech processing functions; adaptive multi-rate-wideband (AMRWB) speech codec; error concealment of erroneous or lost frames, 3GPP TS 26.191, 3rd Generation Partnership Project, September 2012), wherein the adaptive-codebook gain gp as the pitch gain is defined according to:
gp = Σ x(n)·y(n) / Σ y(n)·y(n)
wherein y(n) is a filtered adaptive codebook vector.
In some particular embodiments, the pitch lags may, e.g., be weighted with the pitch gain, for example, prior to performing the pitch prediction.
For this purpose, according to an embodiment, a second buffer of length 8 may, for example, be introduced holding the pitch gains, which are taken at the same subframes as the pitch lags. In an embodiment, the buffer may, e.g., be updated using the exact same rules as the update of the pitch lags. One possible realization is to update both buffers (holding pitch lags and pitch gains of the last eight subframes) at the end of each frame, regardless of whether this frame was error free or error prone.
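A minimal sketch of the two paired buffers described above (the class and method names are illustrative assumptions): both buffers are updated with the same rule at the end of every frame, whether the frame was error free or error prone.

```python
from collections import deque

# Illustrative sketch of the paired length-8 buffers for pitch lags and
# pitch gains; the oldest entries are discarded automatically.

class PitchHistory:
    def __init__(self, size=8):
        self.lags = deque(maxlen=size)   # pitch lags of the last subframes
        self.gains = deque(maxlen=size)  # pitch gains at the same subframes

    def update(self, subframe_lags, subframe_gains):
        # Apply the exact same update rule to both buffers.
        for lag, gain in zip(subframe_lags, subframe_gains):
            self.lags.append(lag)
            self.gains.append(gain)
```

Using a bounded deque keeps the lag at index i and the gain at index i referring to the same subframe, which is what the element-wise weighting below relies on.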
There are two different prediction strategies known from conventional technology, which can be enhanced to use weighted pitch prediction.
Some embodiments provide significant inventive improvements of the prediction strategy of the G.718 standard. In G.718, in case of a packet loss, the buffers may be multiplied with each other element wise, in order to weight the pitch lag with a high factor if the associated pitch gain is high, and to weight it with a low factor if the associated pitch gain is low. After that, according to G.718, the pitch prediction is performed as usual (see G.718: Frame error robust narrow-band and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbit/s, Recommendation ITU-T G.718, Telecommunication Standardization Sector of ITU, June 2008, section 7.11.1.3 for details on G.718).
Some embodiments provide significant inventive improvements of the prediction strategy of the G.729.1 standard. The algorithm used in G.729.1 to predict the pitch (see G.722 Appendix III: A high-complexity algorithm for packet loss concealment for G.722, ITU-T Recommendation, ITU-T, November 2006, for details on G.729.1) is modified according to embodiments in order to use weighted prediction.
According to some embodiments, the goal is to minimize the error function:
err = Σ_{i=0..4} gp(i)·(a + i·b − P(i))²  (20)
wherein gp(i) is holding the pitch gains from the past subframes and P(i) is holding the corresponding pitch lags.
In the inventive formula (20), gp(i) represents the weighting factor. In the above example, each gp(i) represents a pitch gain from one of the past subframes.
Below, equations according to embodiments are provided, which describe how to derive the factors a and b, which could be used to predict the pitch lag according to: a+i·b, where i is the subframe number of the subframe to be predicted.
For example, to obtain the first predicted subframe, the prediction may be based on the last five subframes P(0), …, P(4); the predicted pitch value P(5) would then be:
P(5)=a+5·b.
In order to derive the coefficients a and b, the error function may, for example, be differentiated with respect to a and b, and the derivatives set to zero:
Conventional technology does not disclose employing the inventive weighting provided by embodiments. In particular, conventional technology does not employ the weighting factor gp(i).
Thus, in conventional technology, which does not employ a weighting factor gp(i), differentiating the error function and setting the derivative of the error function to zero would result in:
(see G.722 Appendix III: A high-complexity algorithm for packet loss concealment for G.722, ITU-T Recommendation, ITU-T, November 2006, 7.6.5).
In contrast, when using the weighted prediction approach of the provided embodiments, e.g., the weighted prediction approach of formula (20) with weighting factor gp(i), a and b result in:
According to a particular embodiment, A, B, C, D, E, F, G, H, I, J, and K may, e.g., have the following values:
A=(3gp3−4gp2−3gp1)gp4·P(4)
B=((2gp2+2gp1)gp3−4gp3gp4)·P(3)
C=(−8gp2gp4−3gp2gp3−gp1gp2)·P(2)
D=(−12gp1gp4−6gp1gp3−2gp1gp2)·P(1)
E=(−16gp0gp4−9gp0gp3−4gp0gp2−gp0gp1)·P(0)
F=(gp3+2gp2+3gp1+4gp0)gp4·P(4)
G=((gp2+2gp1+3gp0)gp3−gp3gp4)·P(3)
H=(−2gp2gp4−gp2gp3+(gp1+2gp0)gp2)·P(2)
I=(−3gp1gp4−2gp1gp3−gp1gp2+gp0gp1)·P(1)
J=(−4gp0gp4−3gp0gp3−2gp0gp2−gp0gp1)·P(0)
K=(gp3+4gp2+9gp1+16gp0)gp4+(gp2+4gp1+9gp0)gp3+(gp1+4gp0)gp2+gp0gp1 (22c)
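Instead of the closed-form coefficients above, the weighted minimization of formula (20) can also be sketched by solving the 2×2 weighted normal equations numerically. The following is only an illustrative sketch (the function name and the plain-Python solver are assumptions, not part of the embodiments):

```python
# Illustrative sketch: minimize sum_i gp[i]*(a + i*b - P[i])**2 over a, b
# and extrapolate the pitch lag to subframe `idx` as a + idx*b.

def weighted_pitch_predict(lags, gains, idx):
    s_w = sum(gains)
    s_wi = sum(g * i for i, g in enumerate(gains))
    s_wii = sum(g * i * i for i, g in enumerate(gains))
    s_wp = sum(g * p for g, p in zip(gains, lags))
    s_wip = sum(g * i * p for i, (g, p) in enumerate(zip(gains, lags)))
    # solve the weighted normal equations for intercept a and slope b
    det = s_w * s_wii - s_wi * s_wi
    a = (s_wp * s_wii - s_wip * s_wi) / det
    b = (s_w * s_wip - s_wi * s_wp) / det
    return a + idx * b
```

With P(0), …, P(4) lying exactly on a line, the prediction reproduces that line regardless of the gains; with noisy lags, subframes with a low pitch gain contribute less to the fit, which is the intended reliability weighting.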
There,
In particular,
The abscissa axis denotes the subframe number. The left ordinate axis represents a pitch lag axis; the right ordinate axis represents a pitch gain axis. The continuous line 1010 shows the encoder pitch lag which is embedded in the bitstream, and which is lost in the area of the grey segment 1030, while the dashed lines 1021, 1022, 1023 illustrate the pitch gain.
The grey rectangle 1030 denotes the frame loss. Because of the frame loss that occurred in the area of the grey segment 1030, information on the pitch lag and pitch gain in this area is not available at the decoder side and has to be reconstructed.
In
In the following, embodiments employing weighting depending on passed time are described with reference to formulae (23a)-(24b).
To overcome the drawbacks of conventional technology, some embodiments apply a time weighting on the pitch lags prior to performing the pitch prediction. Applying a time weighting can be achieved by minimizing this error function:
err = Σ_{i=0..4} timepassed(i)·(a + i·b − P(i))²  (23a)
where timepassed(i) represents the inverse of the amount of time that has passed after correctly receiving the pitch lag, and P(i) holds the corresponding pitch lags.
Some embodiments may, e.g., put higher weights on more recent lags and lower weights on lags received longer ago.
According to some embodiments, formula (21a) may then be employed to derive a and b.
To obtain the first predicted subframe, some embodiments may, e.g., conduct the prediction based on the last five subframes P(0), …, P(4). For example, the predicted pitch value P(5) may then be obtained according to
P(5)=a+5·b (23b)
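The minimization of formula (23a) can be sketched numerically as a plain weighted least-squares fit; the helper below and its names are illustrative assumptions, not part of the embodiments.

```python
# Illustrative sketch: fit P(i) ~ a + i*b with weights timepassed(i), then
# predict P(5) = a + 5*b as in formula (23b).

def fit_line(lags, weights):
    s_w = sum(weights)
    s_wi = sum(w * i for i, w in enumerate(weights))
    s_wii = sum(w * i * i for i, w in enumerate(weights))
    s_wp = sum(w * p for w, p in zip(weights, lags))
    s_wip = sum(w * i * p for i, (w, p) in enumerate(zip(weights, lags)))
    det = s_w * s_wii - s_wi * s_wi
    a = (s_wp * s_wii - s_wip * s_wi) / det
    b = (s_w * s_wip - s_wi * s_wp) / det
    return a, b

timepassed = [1 / 5, 1 / 4, 1 / 3, 1 / 2, 1]  # inverse of the elapsed time
a, b = fit_line([100, 102, 104, 106, 108], timepassed)
prediction = a + 5 * b
```

When the past lags lie exactly on a line, as in this example, the fit recovers that line exactly; the time weighting only changes how disagreements between the past lags are resolved.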
For example, if timepassed = [⅕ ¼ ⅓ ½ 1] (time weighting according to subframe delay), this would result in:
In the following, embodiments providing pulse resynchronization are described.
The apparatus comprises a determination unit 210 for determining a sample number difference (Δ0p; Δi; Δk+1p) indicating a difference between a number of samples of one of the one or more available pitch cycles and a number of samples of a first pitch cycle to be reconstructed.
Moreover, the apparatus comprises a frame reconstructor for reconstructing the reconstructed frame by reconstructing, depending on the sample number difference (Δ0p; Δi; Δk+1p) and depending on the samples of said one of the one or more available pitch cycles, the first pitch cycle to be reconstructed as a first reconstructed pitch cycle.
The frame reconstructor 220 is configured to reconstruct the reconstructed frame, such that the reconstructed frame completely or partially comprises the first reconstructed pitch cycle, such that the reconstructed frame completely or partially comprises a second reconstructed pitch cycle, and such that the number of samples of the first reconstructed pitch cycle differs from a number of samples of the second reconstructed pitch cycle.
Reconstructing a pitch cycle is conducted by reconstructing some or all of the samples of the pitch cycle that shall be reconstructed. If the pitch cycle to be reconstructed is completely comprised by a frame that is lost, then all of the samples of the pitch cycle may, e.g., have to be reconstructed. If the pitch cycle to be reconstructed is only partially comprised by the frame that is lost, and if some of the samples of the pitch cycle are available, e.g., because they are comprised by another frame, then it may, e.g., be sufficient to reconstruct only the samples of the pitch cycle that are comprised by the frame that is lost.
A first portion of the speech signal 222 is comprised by a frame n−1. A second portion of the speech signal 222 is comprised by a frame n. A third portion of the speech signal 222 is comprised by a frame n+1.
In
In the example of
A pitch cycle may, for example, be defined as follows: a pitch cycle starts with one of the pulses 211, 212, 213, etc., and ends with the immediately succeeding pulse in the speech signal. For example, pulses 211 and 212 define the pitch cycle 201, pulses 212 and 213 define the pitch cycle 202, and pulses 213 and 214 define the pitch cycle 203, etc.
Other definitions of the pitch cycle, well known to a person skilled in the art, which employ, for example, other start and end points of the pitch cycle, may alternatively be considered.
In the example of
According to some embodiments, frame n may be reconstructed depending on the samples of at least one pitch cycle (“available pitch cycles”) of the available frames (e.g., preceding frame n−1 or succeeding frame n+1). For example, the samples of the pitch cycle 201 of frame n−1 may, e.g., be cyclically repeatedly copied to reconstruct the samples of the lost or corrupted frame. By cyclically repeatedly copying the samples of the pitch cycle, the pitch cycle itself is copied; e.g., if the pitch cycle length is c, then
sample(x+i·c)=sample(x); with i being an integer.
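The cyclic copying rule sample(x + i·c) = sample(x) can be sketched as follows; the function name and parameters are illustrative assumptions, and a real codec would additionally modify the copied samples, as noted below.

```python
# Illustrative sketch: fill a lost frame by cyclically repeating the last
# pitch cycle (length c) of the previous frame, so sample(x + i*c) = sample(x).

def conceal_by_repetition(prev_frame, c, frame_len):
    cycle = prev_frame[-c:]          # last pitch cycle of frame n-1
    return [cycle[i % c] for i in range(frame_len)]
```

Because the copy is purely periodic with period c, every pulse of the concealed frame lands exactly c samples after the previous one, which is precisely what causes the pulse drift discussed next when the true pitch cycle length differs from c.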
In embodiments, samples from the end of the frame n−1 are copied. The length of the portion of the frame n−1 that is copied is equal (or almost equal) to the length of the pitch cycle 201, but samples from both pitch cycles 201 and 202 are used for copying. Special care may be taken when there is just one pulse in the frame n−1.
In some embodiments, the copied samples are modified.
The present invention is moreover based on the finding that by cyclically repeatedly copying the samples of a pitch cycle, the pulses 213, 214, 215 of the lost frame n move to wrong positions, when the size of the pitch cycles that are (completely or partially) comprised by the lost frame (n) (pitch cycles 202, 203, 204 and 205) differs from the size of the copied available pitch cycle (here: pitch cycle 201).
E.g., in
In
Based on these findings of the present invention, according to embodiments, the frame reconstructor 220 is configured to reconstruct the reconstructed frame such that the number of samples of the first reconstructed pitch cycle differs from a number of samples of a second reconstructed pitch cycle being partially or completely comprised by the reconstructed frame.
E.g., according to some embodiments, the reconstruction of the frame depends on a sample number difference indicating a difference between a number of samples of one of the one or more available pitch cycles (e.g., pitch cycle 201) and a number of samples of a first pitch cycle (e.g., pitch cycle 202, 203, 204, 205) that shall be reconstructed.
For example, according to an embodiment, the samples of pitch cycle 201 may, e.g., be cyclically repeatedly copied.
Then, the sample number difference indicates how many samples shall be deleted from the cyclically repeated copy corresponding to the first pitch cycle to be reconstructed, or how many samples shall be added to the cyclically repeated copy corresponding to the first pitch cycle to be reconstructed.
In
While above, embodiments have been described where samples of a pitch cycle of a frame preceding the lost or corrupted frame have been cyclically repeatedly copied, in other embodiments, samples of a pitch cycle of a frame succeeding the lost or corrupted frame are cyclically repeatedly copied to reconstruct the lost frame. The same principles described above and below apply analogously.
Such a sample number difference may be determined for each pitch cycle to be reconstructed. Then, the sample number difference of each pitch cycle indicates how many samples shall be deleted from the cyclically repeated copy corresponding to the corresponding pitch cycle to be reconstructed, or how many samples shall be added to the cyclically repeated copy corresponding to the corresponding pitch cycle to be reconstructed.
According to an embodiment, the determination unit 210 may, e.g., be configured to determine a sample number difference for each of a plurality of pitch cycles to be reconstructed, such that the sample number difference of each of the pitch cycles indicates a difference between the number of samples of said one of the one or more available pitch cycles and a number of samples of said pitch cycle to be reconstructed. The frame reconstructor 220 may, e.g., be configured to reconstruct each pitch cycle of the plurality of pitch cycles to be reconstructed depending on the sample number difference of said pitch cycle to be reconstructed and depending on the samples of said one of the one or more available pitch cycles, to reconstruct the reconstructed frame.
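One simple way to realize such a per-cycle sample number difference is sketched below. This is an illustrative nearest-neighbour shortening/lengthening only: the embodiments do not prescribe where within a cycle the samples are removed or added, and practical codecs prefer low-energy regions.

```python
# Illustrative sketch: remove (positive diff) or add (negative diff)
# abs(diff) samples from one copied pitch cycle by nearest-neighbour mapping.

def adjust_cycle(cycle, diff):
    n = len(cycle)
    m = n - diff  # target number of samples for this reconstructed cycle
    # map each target index j back to a source index, spreading the
    # removed or inserted samples evenly across the cycle
    return [cycle[min(n - 1, (j * n) // m)] for j in range(m)]
```

Applying this per cycle with the sample number difference of each pitch cycle to be reconstructed yields reconstructed cycles of the desired, possibly differing, lengths.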
In an embodiment, the frame reconstructor 220 may, e.g., be configured to generate an intermediate frame depending on said one of the one or more available pitch cycles. The frame reconstructor 220 may, e.g., be configured to modify the intermediate frame to obtain the reconstructed frame.
According to an embodiment, the determination unit 210 may, e.g., be configured to determine a frame difference value (d; s) indicating how many samples are to be removed from the intermediate frame or how many samples are to be added to the intermediate frame. Moreover, the frame reconstructor 220 may, e.g., be configured to remove first samples from the intermediate frame to obtain the reconstructed frame, when the frame difference value indicates that the first samples shall be removed from the frame. Furthermore, the frame reconstructor 220 may, e.g., be configured to add second samples to the intermediate frame to obtain the reconstructed frame, when the frame difference value (d; s) indicates that the second samples shall be added to the frame.
In an embodiment, the frame reconstructor 220 may, e.g., be configured to remove the first samples from the intermediate frame when the frame difference value indicates that the first samples shall be removed from the frame, so that the number of first samples that are removed from the intermediate frame is indicated by the frame difference value. Moreover, the frame reconstructor 220 may, e.g., be configured to add the second samples to the intermediate frame when the frame difference value indicates that the second samples shall be added to the frame, so that the number of second samples that are added to the intermediate frame is indicated by the frame difference value.
According to an embodiment, the determination unit 210 may, e.g., be configured to determine the frame difference number s so that the formula:
holds true, wherein L indicates a number of samples of the reconstructed frame, wherein M indicates a number of subframes of the reconstructed frame, wherein Tr indicates a rounded pitch period length of said one of the one or more available pitch cycles, and wherein p[i] indicates a pitch period length of a reconstructed pitch cycle of the i-th subframe of the reconstructed frame.
In an embodiment, the frame reconstructor 220 may, e.g., be adapted to generate an intermediate frame depending on said one of the one or more available pitch cycles. Moreover, the frame reconstructor 220 may, e.g., be adapted to generate the intermediate frame so that the intermediate frame comprises a first partial intermediate pitch cycle, one or more further intermediate pitch cycles, and a second partial intermediate pitch cycle. Furthermore, the first partial intermediate pitch cycle may, e.g., depend on one or more of the samples of said one of the one or more available pitch cycles, wherein each of the one or more further intermediate pitch cycles depends on all of the samples of said one of the one or more available pitch cycles, and wherein the second partial intermediate pitch cycle depends on one or more of the samples of said one of the one or more available pitch cycles. Moreover, the determination unit 210 may, e.g., be configured to determine a start portion difference number indicating how many samples are to be removed from or added to the first partial intermediate pitch cycle, wherein the frame reconstructor 220 is configured to remove one or more first samples from the first partial intermediate pitch cycle, or to add one or more first samples to the first partial intermediate pitch cycle, depending on the start portion difference number. Furthermore, the determination unit 210 may, e.g., be configured to determine for each of the further intermediate pitch cycles a pitch cycle difference number indicating how many samples are to be removed from or added to said one of the further intermediate pitch cycles. Moreover, the frame reconstructor 220 may, e.g., be configured to remove one or more second samples from said one of the further intermediate pitch cycles, or to add one or more second samples to said one of the further intermediate pitch cycles, depending on said pitch cycle difference number.
Furthermore, the determination unit 210 may, e.g., be configured to determine an end portion difference number indicating how many samples are to be removed from or added to the second partial intermediate pitch cycle, wherein the frame reconstructor 220 is configured to remove one or more third samples from the second partial intermediate pitch cycle, or to add one or more third samples to the second partial intermediate pitch cycle, depending on the end portion difference number.
According to an embodiment, the frame reconstructor 220 may, e.g., be configured to generate an intermediate frame depending on said one of the one or more available pitch cycles. Moreover, the determination unit 210 may, e.g., be adapted to determine one or more low energy signal portions of the speech signal comprised by the intermediate frame, wherein each of the one or more low energy signal portions is a first signal portion of the speech signal within the intermediate frame, where the energy of the speech signal is lower than in a second signal portion of the speech signal comprised by the intermediate frame. Furthermore, the frame reconstructor 220 may, e.g., be configured to remove one or more samples from at least one of the one or more low energy signal portions of the speech signal, or to add one or more samples to at least one of the one or more low energy signal portions of the speech signal, to obtain the reconstructed frame.
In a particular embodiment, the frame reconstructor 220 may, e.g., be configured to generate the intermediate frame, such that the intermediate frame comprises one or more reconstructed pitch cycles, such that each of the one or more reconstructed pitch cycles depends on said one of the one or more available pitch cycles. Moreover, the determination unit 210 may, e.g., be configured to determine a number of samples that shall be removed from each of the one or more reconstructed pitch cycles. Furthermore, the determination unit 210 may, e.g., be configured to determine each of the one or more low energy signal portions such that for each of the one or more low energy signal portions a number of samples of said low energy signal portion depends on the number of samples that shall be removed from one of the one or more reconstructed pitch cycles, wherein said low energy signal portion is located within said one of the one or more reconstructed pitch cycles.
In an embodiment, the determination unit 210 may, e.g., be configured to determine a position of one or more pulses of the speech signal of the frame to be reconstructed as reconstructed frame. Moreover, the frame reconstructor 220 may, e.g., be configured to reconstruct the reconstructed frame depending on the position of the one or more pulses of the speech signal.
According to an embodiment, the determination unit 210 may, e.g., be configured to determine a position of two or more pulses of the speech signal of the frame to be reconstructed as reconstructed frame, wherein T [0] is the position of one of the two or more pulses of the speech signal of the frame to be reconstructed as reconstructed frame, and wherein the determination unit 210 is configured to determine the position (T [i]) of further pulses of the two or more pulses of the speech signal according to the formula:
T[i]=T[0]+iTr
wherein Tr indicates a rounded length of said one of the one or more available pitch cycles, and wherein i is an integer.
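The pulse positions T[i] = T[0] + i Tr within a frame can, for example, be enumerated as in the following sketch (the function and parameter names are illustrative, not from the source):

```python
def pulse_positions(t0, t_r, frame_len):
    """Enumerate the pulse positions T[i] = T[0] + i*Tr that lie
    inside a frame of frame_len samples."""
    positions = []
    i = 0
    while t0 + i * t_r < frame_len:
        positions.append(t0 + i * t_r)
        i += 1
    return positions
```

For instance, with T[0] = 10, Tr = 40 and a 160-sample frame, the pulses fall at 10, 50, 90 and 130.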
According to an embodiment, the determination unit 210 may, e.g., be configured to determine an index k of the last pulse of the speech signal of the frame to be reconstructed as the reconstructed frame such that
wherein L indicates a number of samples of the reconstructed frame, wherein s indicates the frame difference value, wherein T [0] indicates a position of a pulse of the speech signal of the frame to be reconstructed as the reconstructed frame, being different from the last pulse of the speech signal, and wherein Tr indicates a rounded length of said one of the one or more available pitch cycles.
In an embodiment, the determination unit 210 may, e.g., be configured to reconstruct the frame to be reconstructed as the reconstructed frame by determining a parameter δ, wherein δ is defined according to the formula:
wherein the frame to be reconstructed as the reconstructed frame comprises M subframes, wherein Tp indicates the length of said one of the one or more available pitch cycles, and wherein Text indicates a length of one of the pitch cycles to be reconstructed of the frame to be reconstructed as the reconstructed frame.
According to an embodiment, the determination unit 210 may, e.g., be configured to reconstruct the reconstructed frame by determining a rounded length Tr of said one of the one or more available pitch cycles based on formula:
Tr=└Tp+0.5┘
wherein Tp indicates the length of said one of the one or more available pitch cycles.
In an embodiment, the determination unit 210 may, e.g., be configured to reconstruct the reconstructed frame by applying the formula:
wherein Tp indicates the length of said one of the one or more available pitch cycles, wherein Tr indicates a rounded length of said one of the one or more available pitch cycles, wherein the frame to be reconstructed as the reconstructed frame comprises M subframes, wherein the frame to be reconstructed as the reconstructed frame comprises L samples, and wherein δ is a real number indicating a difference between a number of samples of said one of the one or more available pitch cycles and a number of samples of one of one or more pitch cycles to be reconstructed.
Now, embodiments are described in more detail.
In the following, a first group of pulse resynchronization embodiments is described with reference to formulae (25)-(63).
In such embodiments, if there is no pitch change, the last pitch lag is used without rounding, preserving the fractional part. The periodic part is constructed using the non-integer pitch and interpolation as for example in J. S. Marques, I. Trancoso, J. M. Tribolet, and L. B. Almeida, Improved pitch prediction with fractional delays in celp coding, 1990 International Conference on Acoustics, Speech, and Signal Processing, 1990. ICASSP-90, 1990, pp. 665-668 vol. 2. This will reduce the frequency shift of the harmonics, compared to using the rounded pitch lag and thus significantly improve concealment of tonal or voiced signals with constant pitch.
The advantage is illustrated by
Using the fractional part of the pitch increases the computational complexity. This should not, however, influence the worst-case complexity, as there is no need for the glottal pulse resynchronization in this case.
If there is no predicted pitch change then there is no need for the processing explained below.
If a pitch change is predicted, the embodiments described with reference to formulae (25)-(63) provide concepts for determining d, the difference between the sum of the total number of samples within pitch cycles with the constant pitch (Tc) and the sum of the total number of samples within pitch cycles with the evolving pitch p[i].
In the following, Tc is defined as in formula (15a): Tc = round(last_pitch).
According to embodiments, the difference d may be determined using a faster and more precise algorithm (the "fast algorithm for determining d" approach), as described in the following.
Such an algorithm may, e.g., be based on the following principles: the number of pitch cycles in each subframe is determined, and, for each pitch cycle, the number of samples that should be removed is determined.
According to some embodiments, no rounding is conducted and a fractional pitch is used. Then the evolving pitch is p[i]=Tc+(i+1)δ, 0≤i<M (where M is the number of subframes in a frame), and samples should be removed if δ<0 (or added if δ>0).
According to some other embodiments, rounding is conducted. For the integer pitch (M is the number of subframes in a frame), d is defined as follows:
According to an embodiment, an algorithm is provided for calculating d accordingly:
In another embodiment, the last line of the algorithm is replaced by:
d=(short)floor(L_frame−ftmp*(float)L_subfr/T_c+0.5);
According to embodiments the last pulse T[n] is found according to:
n = i | T[0]+iTc < L_frame ∧ T[0]+(i+1)Tc ≥ L_frame (26)
According to an embodiment, a formula to calculate N is employed. This formula is obtained from formula (26) according to:
and the last pulse has then the index N−1.
According to this formula, N may be calculated for the examples illustrated by
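Formula (26) states that the index n of the last pulse satisfies T[0] + nTc < L_frame ≤ T[0] + (n+1)Tc, so the number of pulses is N = ⌈(L_frame − T[0]) / Tc⌉ and the last pulse has index N−1. A sketch (illustrative naming):

```python
import math

def pulse_count(t0, t_c, frame_len):
    """Number N of pulses in the constructed periodic part.

    From formula (26): the last pulse index n satisfies
    T[0] + n*Tc < L_frame <= T[0] + (n+1)*Tc, hence
    N = n + 1 = ceil((L_frame - T[0]) / Tc).
    """
    return math.ceil((frame_len - t0) / t_c)
```

For example, with T[0] = 10, Tc = 40 and L_frame = 160, there are N = 4 pulses (at 10, 50, 90, 130) and the last pulse has index 3.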
In the following, a concept without an explicit search for the last pulse, but taking pulse positions into account, is described. Such a concept does not need N, the last pulse index in the constructed periodic part.
The actual last pulse position in the constructed periodic part of the excitation (T[k]) determines the number k of full pitch cycles in which samples are removed (or added).
In the example of
After removing d samples from the signal of length L_frame+d, there are no samples from the original signal beyond L_frame+d samples. Thus T[k] is within L_frame+d samples and k is thus determined by
k=i|T[i]<Lframe+d≤T[i+1] (28)
From formula (17) and formula (28), it follows that
T[0]+kTc<Lframe+d≤T[0]+(k+1)Tc (29)
That is
From formula (30) it follows that
In a codec that, e.g., uses frames of at least 20 ms and where the lowest fundamental frequency of speech is, e.g., at least 40 Hz, in most cases at least one pulse exists in the concealed frame, unless the frame is classified as UNVOICED.
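From formula (29), T[0] + kTc < L_frame + d ≤ T[0] + (k+1)Tc, so k has the closed form ⌈(L_frame + d − T[0]) / Tc⌉ − 1. A sketch (illustrative naming):

```python
import math

def last_full_cycle_index(t0, t_c, frame_len, d):
    """Index k of the last pulse within L_frame + d samples.

    From formula (29): T[0] + k*Tc < L_frame + d <= T[0] + (k+1)*Tc,
    hence k = ceil((L_frame + d - T[0]) / Tc) - 1.
    """
    return math.ceil((frame_len + d - t0) / t_c) - 1
```

For example, with T[0] = 10, Tc = 40, L_frame = 160 and d = −6, the last pulse within 154 samples is T[3] = 130, i.e. k = 3.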
In the following, a case with at least two pulses (k≥1) is described with reference to formulae (32)-(46).
Assume that in each full ith pitch cycle between pulses, Δi samples shall be removed, wherein Δi is defined as:
Δi=Δ+(i−1)a, 1≤i≤k, (32)
where a is an unknown variable that needs to be expressed in terms of the known variables.
Assume that Δ0 samples shall be removed before the first pulse, wherein Δ0 is defined as:
Assume that Δk+1 samples shall be removed after the last pulse, wherein Δk+1 is defined as:
The last two assumptions are in line with formula (32) taking into account the length of the partial first and last pitch cycles.
Each of the values Δi is a sample number difference; likewise, Δ0 and Δk+1 are sample number differences.
The total number of samples to be removed, d, is then related to Δi as:
From formulae (32)-(35), d can be obtained as:
Formula (36) is equivalent to:
Assume that the last full pitch cycle in a concealed frame has length p[M−1], that is:
Δk=Tc−p[M−1] (38)
From formula (32) and formula (38) it follows that:
Δ=Tc−p[M−1]−(k−1)a (39)
Moreover, from formula (37) and formula (39), it follows that:
Formula (40) is equivalent to:
From formula (17) and formula (41), it follows that:
Formula (42) is equivalent to:
Furthermore, from formula (43), it follows that:
Formula (44) is equivalent to:
Moreover, formula (45) is equivalent to:
According to embodiments, it is now calculated based on formulae (32)-(34), (39) and (46), how many samples are to be removed or added before the first pulse, and/or between pulses and/or after the last pulse.
In an embodiment, the samples are removed or added in the minimum energy regions.
According to embodiments, the number of samples to be removed may, for example, be rounded using:
In the following, a case with one pulse (k=0) is described with reference to formulae (47)-(55).
If there is just one pulse in the concealed frame, then Δ0 samples are to be removed before the pulse:
wherein Δ and a are unknown variables that need to be expressed in terms of the known variables. Δ1 samples are to be removed after the pulse, where:
Then the total number of samples to be removed is given by:
d=Δ0+Δ1 (49)
From formulae (47)-(49), it follows that:
Formula (50) is equivalent to:
dTc=Δ(L+d)−aT[0] (51)
It is assumed that the ratio of the pitch cycle before the pulse to the pitch cycle after the pulse is the same as the ratio between the pitch lag in the last subframe and the first subframe in the previously received frame:
From formula (52), it follows that:
Moreover, from formula (51) and formula (53), it follows that:
Formula (54) is equivalent to:
There are └Δ−a┘ samples to be removed or added in the minimum energy region before the pulse and d−└Δ−a┘ samples after the pulse.
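The split stated above, └Δ−a┘ samples before the pulse and d−└Δ−a┘ samples after it, can be sketched as follows (the function name is illustrative):

```python
import math

def one_pulse_split(d, delta, a):
    """Split the total of d samples between the regions before and
    after the single pulse (k = 0 case): floor(delta - a) samples
    before the pulse, the remaining d - floor(delta - a) after it.
    """
    before = math.floor(delta - a)
    after = d - before
    return before, after
```

For example, with d = 7, Δ = 3.6 and a = 1.2, two samples are handled before the pulse and five after it.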
In the following, a simplified concept according to embodiments, which does not require a search for (the location of) pulses, is described with reference to formulae (56)-(63).
t[i] denotes the length of the ith pitch cycle. After removing d samples from the signal, k full pitch cycles and one partial (up to full) pitch cycle are obtained.
Thus:
As pitch cycles of length t[i] are obtained from the pitch cycle of length Tc after removing some samples, and as the total number of removed samples is d, it follows that
kTc<L+d≤(k+1)Tc (57)
It follows that:
Moreover, it follows that
According to embodiments, a linear change in the pitch lag may be assumed:
t[i]=Tc−(i+1)Δ, 0≤i≤k
In embodiments, (k+1) Δ samples are removed in the kth pitch cycle.
According to embodiments, in the part of the kth pitch cycle that stays in the frame after removing the samples,
are removed.
Thus, the total number of the removed samples is:
Formula (60) is equivalent to:
Moreover, formula (61) is equivalent to:
Furthermore, formula (62) is equivalent to:
According to embodiments, (i+1) Δ samples are removed at the position of the minimum energy. There is no need to know the location of pulses, as the search for the minimum energy position is done in the circular buffer that holds one pitch cycle.
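The search for the minimum energy position in a circular buffer holding one pitch cycle can be sketched as follows. This is an illustrative sketch; a codec implementation would update the windowed energy recursively rather than recompute it for every position:

```python
def min_energy_position(cycle, win_len):
    """Start index of the minimum-energy region of one pitch cycle.

    The cycle is treated as a circular buffer: a window of win_len
    samples is slid over it (wrapping around) and the position with
    the smallest sum of squared samples is returned.
    """
    n = len(cycle)
    best_pos, best_energy = 0, float("inf")
    for start in range(n):
        energy = sum(cycle[(start + j) % n] ** 2 for j in range(win_len))
        if energy < best_energy:
            best_pos, best_energy = start, energy
    return best_pos
```

For example, in the cycle `[5, 4, 0, 0, 0, 4, 5, 6]` with a window of three samples, the minimum energy region starts at index 2.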
If the minimum energy position is after the first pulse and if samples before the first pulse are not removed, then a situation could occur where the pitch lag evolves as (Tc+Δ), Tc, Tc, (Tc−Δ), (Tc−2Δ) (two pitch cycles in the last received frame and three pitch cycles in the concealed frame). Thus, there would be a discontinuity. A similar discontinuity may arise after the last pulse, but not at the same time as it occurs before the first pulse.
On the other hand, the minimum energy region is more likely to appear after the first pulse if the pulse is close to the beginning of the concealed frame. If the first pulse is closer to the beginning of the concealed frame, it is also more likely that the last pitch cycle in the last received frame is larger than Tc. To reduce the possibility of a discontinuity in the pitch change, weighting should be used to favor minimum energy regions closer to the beginning or the end of the pitch cycle.
According to embodiments, an implementation of the provided concepts is described, which implements one or more or all of the following method steps:
If samples have to be added, the equivalent procedure can be used, taking into account that d<0 and Δ<0 and that in total |d| samples are added, that is, (k+1)|Δ| samples are added in the kth cycle at the position of the minimum energy.
The fractional pitch can be used at the subframe level to derive d as described above with respect to the “fast algorithm for determining d approach”, as anyhow the approximated pitch cycle lengths are used.
In the following, a second group of pulse resynchronization embodiments is described with reference to formulae (64)-(113). These embodiments of the second group employ the definition of formula (15b):
Tr=└Tp+0.5┘
wherein the last pitch period length is Tp, and the length of the segment that is copied is Tr.
If some parameters used by the second group of pulse resynchronization embodiments are not defined below, embodiments of the present invention may employ the definitions provided for these parameters with respect to the first group of pulse resynchronization embodiments defined above (see formulae (25)-(63)).
Some of the formulae (64)-(113) of the second group of pulse resynchronization embodiments may redefine some of the parameters already used with respect to the first group of pulse resynchronization embodiments. In this case, the redefined definitions apply for the second group of pulse resynchronization embodiments.
As described above, according to some embodiments, the periodic part may, e.g., be constructed for one frame and one additional subframe, wherein the frame length is denoted as L = L_frame.
For example, with M subframes in a frame, the subframe length is L_subfr = L/M.
As already described, T [0] is the location of the first maximum pulse in the constructed periodic part of the excitation. The positions of the other pulses are given by:
T[i]=T[0]+iTr.
According to embodiments, depending on the construction of the periodic part of the excitation, for example, after the construction of the periodic part of the excitation, the glottal pulse resynchronization is performed to correct the difference between the estimated target position of the last pulse in the lost frame (P), and its actual position in the constructed periodic part of the excitation (T[k]).
The estimated target position of the last pulse in the lost frame (P) may, for example, be determined indirectly by the estimation of the pitch lag evolution. The pitch lag evolution is, for example, extrapolated based on the pitch lags of the last seven subframes before the lost frame. The evolving pitch lags in each subframe are:
p[i]=Tp+(i+1)δ, 0≤i<M (64)
where
and Text is the extrapolated pitch and i is the subframe index. The pitch extrapolation can be done, for example, using weighted linear fitting or the method from G.718 or the method from G.729.1 or any other method for the pitch interpolation that, e.g., takes one or more pitches from future frames into account. The pitch extrapolation can also be non-linear. In an embodiment, Text may be determined in the same way as described above.
The difference within a frame length between the sum of the total number of samples within pitch cycles with the evolving pitch (p[i]) and the sum of the total number of samples within pitch cycles with the constant pitch (Tp) is denoted as s.
According to embodiments, if Text>Tp then s samples should be added to a frame, and if Text<Tp then −s samples should be removed from a frame. After adding or removing |s| samples, the last pulse in the concealed frame will be at the estimated target position (P).
If Text=Tp, there is no need for an addition or a removal of samples within a frame.
According to some embodiments, the glottal pulse resynchronization is done by adding or removing samples in the minimum energy regions of all of the pitch cycles.
In the following, calculating parameter s according to embodiments is described with reference to formulae (66)-(69).
According to some embodiments, the difference s may, for example, be calculated based on the following principles: the number of pitch cycles in each subframe is determined, and, for each pitch cycle, the number of samples that should be removed (or added) is determined.
Therefore, in line with formula (64), according to an embodiment, s may, e.g., be calculated according to formula (66):
Formula (66) is equivalent to:
wherein formula (67) is equivalent to:
and wherein formula (68) is equivalent to:
Note that s is positive if Text>Tp and samples should be added, and that s is negative if Text<Tp and samples should be removed. Thus, the number of samples to be removed or added can be denoted as |s|.
In the following, calculating the index of the last pulse according to embodiments is described with reference to formulae (70)-(73).
The actual last pulse position in the constructed periodic part of the excitation (T[k]) determines the number k of full pitch cycles where samples are removed (or added).
In the example illustrated by
After removing |s| samples from the signal of length L−s, where L=L_frame, or after adding |s| samples to the signal of length L−s, there are no samples from the original signal beyond L−s samples. It should be noted that s is positive if samples are added and negative if samples are removed. Thus L−s<L if samples are added and L−s>L if samples are removed. Thus T[k] is within L−s samples, and k is determined by:
k=i|T[i]<L−s≤T[i+1] (70)
From formula (15b) and formula (70), it follows that
T[0]+kTr<L−s≤T[0]+(k+1)Tr (71)
According to an embodiment, k may, e.g., be determined based on formula (72) as:
For example, in a codec employing frames of, for example, at least 20 ms, and employing a lowest fundamental frequency of speech of at least 40 Hz, in most cases at least one pulse exists in the concealed frame, unless the frame is classified as UNVOICED.
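From formula (71), T[0] + kTr < L − s ≤ T[0] + (k+1)Tr, so k = ⌈(L − s − T[0]) / Tr⌉ − 1. The following sketch computes k in closed form and cross-checks it against an explicit search over the pulse positions of formula (70); all names are illustrative:

```python
import math

def last_pulse_index(t0, t_r, frame_len, s):
    """k in closed form: ceil((L - s - T[0]) / Tr) - 1, from formula (71)."""
    return math.ceil((frame_len - s - t0) / t_r) - 1

def last_pulse_index_search(t0, t_r, frame_len, s):
    """Reference: largest i with T[i] = T[0] + i*Tr < L - s (formula (70))."""
    i = 0
    while t0 + (i + 1) * t_r < frame_len - s:
        i += 1
    return i
```

For example, with T[0] = 17, Tr = 57, L = 320 and s = 12, both computations give k = 5.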
In the following, calculating the number of samples to be removed in minimum regions according to embodiments is described with reference to formulae (74)-(99).
It may, e.g., be assumed that Δi samples shall be removed (or added) in each full ith pitch cycle between pulses, where Δi is defined as:
Δi=Δ+(i−1)a, 1≤i≤k (74)
and where a is an unknown variable that may, e.g., be expressed in terms of the known variables.
Moreover, it may, e.g., be assumed that Δ0p samples shall be removed (or added) before the first pulse, where Δ0p is defined as:
Furthermore, it may, e.g., be assumed that Δk+1p samples shall be removed (or added) after the last pulse, where Δk+1p is defined as:
The last two assumptions are in line with formula (74) taking the length of the partial first and last pitch cycles into account.
The number of samples to be removed (or added) in each pitch cycle is schematically presented in the example in
The total number of samples to be removed (or added), s, is related to Δi according to:
From formulae (74)-(77) it follows that:
Formula (78) is equivalent to:
Moreover, formula (79) is equivalent to:
Furthermore, formula (80) is equivalent to:
Moreover, taking formula (16b) into account, formula (81) is equivalent to:
According to embodiments, it may be assumed that the number of samples to be removed (or added) in the complete pitch cycle after the last pulse is given by:
Δk+1=|Tr−p[M−1]|=|Tr−Text| (83)
From formula (74) and formula (83), it follows that:
Δ=|Tr−Text|−ka (84)
From formula (82) and formula (84), it follows that:
Formula (85) is equivalent to:
Moreover, formula (86) is equivalent to:
Furthermore, formula (87) is equivalent to:
From formula (16b) and formula (88), it follows that:
Formula (89) is equivalent to:
Moreover, formula (90) is equivalent to:
Furthermore, formula (91) is equivalent to:
Moreover, formula (92) is equivalent to:
From formula (93), it follows that:
Thus, e.g., based on formula (94), according to embodiments:
According to some embodiments, the samples may, e.g., be removed or added in the minimum energy regions.
From formula (85) and formula (94) follows that:
Formula (95) is equivalent to:
Moreover, from formula (84) and formula (94), it follows that:
Δi=Δ+(i−1)a=|Tr−Text|−ka+(i−1)a, 1≤i≤k (97)
Formula (97) is equivalent to:
Δi=|Tr−Text|−(k+1−i)a, 1≤i≤k (98)
According to an embodiment, the number of samples to be removed after the last pulse can be calculated based on formula (97) according to:
It should be noted that according to embodiments, Δ0p, Δi and Δk+1p are positive and that the sign of s determines whether the samples are to be added or removed.
For complexity reasons, in some embodiments, it is desired to add or remove an integer number of samples, and thus, in such embodiments, Δ0p, Δi and Δk+1p may, e.g., be rounded. In other embodiments, concepts using waveform interpolation may, e.g., alternatively or additionally be used to avoid the rounding, at the cost of increased complexity.
In the following, an algorithm for pulse resynchronization according to embodiments is described with reference to formulae (100)-(113).
According to embodiments, input parameters of such an algorithm may, for example, be:
According to embodiments, such an algorithm may comprise one or more or all of the following steps:
Tr=└Tp+0.5┘ (101)
Δ′0=└Δ0p┘ (106)
F=Δ0p−Δ′0 (107)
Δi=|Tr−Text|−(k+1−i)a, 1≤i≤k (108)
Δ′i=└Δi+F┘ (109)
F=Δi−Δ′i (110)
Pmin[i]=Pmin[1]+(i−1)Tr, 1<i≤k (113)
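The rounding steps (106)-(110) can be read as carrying a rounding error F forward so that the integer per-cycle sample counts Δ′i track the running real-valued sum; the sketch below implements that reading (taking the carried remainder as Δi + F − Δ′i, which is an interpretation of step (110)), together with the minimum-energy positions of formula (113). All names are illustrative:

```python
import math

def round_with_feedback(deltas):
    """Round real-valued per-cycle sample counts to integers while
    carrying the rounding error F forward, in the spirit of steps
    (106)-(110): each integer count is floor(delta + F), and F keeps
    the fractional remainder so that the integer counts track the
    running real-valued sum."""
    rounded = []
    f = 0.0
    for delta in deltas:
        d_int = math.floor(delta + f)
        f = delta + f - d_int
        rounded.append(d_int)
    return rounded

def min_region_positions(p_min1, t_r, k):
    """Minimum-energy positions per formula (113):
    Pmin[i] = Pmin[1] + (i-1)*Tr for 1 < i <= k."""
    return [p_min1 + (i - 1) * t_r for i in range(2, k + 1)]
```

For example, four per-cycle counts of 1.25 samples round to 1, 1, 1 and 2, preserving the total of 5 samples.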
In an embodiment, the reconstructed frame may, e.g., be associated with one or more available frames, said one or more available frames being at least one of one or more preceding frames of the reconstructed frame and one or more succeeding frames of the reconstructed frame, wherein the one or more available frames comprise one or more pitch cycles as one or more available pitch cycles. The apparatus 200 for reconstructing the frame may, e.g., be an apparatus for reconstructing a frame according to one of the above-described embodiments.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
The inventive decomposed signal can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a non-transitory data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
13173157.2 | Jun 2013 | EP | regional |
14166990.3 | May 2014 | EP | regional |
This application is a continuation of co-pending U.S. patent application Ser. No. 14/977,224 filed Dec. 21, 2015 which is a continuation of International Application No. PCT/EP2014/062589, filed Jun. 16, 2014, which are incorporated herein by reference in their entirety, and additionally claims priority from European Applications Nos. EP13173157.2, filed Jun. 21, 2013, and EP14166990.3, filed May 5, 2014, all of which are incorporated herein by reference in their entirety.
Number | Date | Country | |
---|---|---|---|
Parent | 14977224 | Dec 2015 | US |
Child | 16445052 | US | |
Parent | PCT/EP2014/062589 | Jun 2014 | US |
Child | 14977224 | US |