This Application is a Section 371 National Stage Application of International Application No. PCT/FR2015/051864, filed Jul. 6, 2015, the content of which is incorporated herein by reference in its entirety, and published as WO 2016/005690 on Jan. 14, 2016, not in English.
The present invention relates to the processing of an audio frequency signal for transmitting or storing it. More particularly, the invention relates to an update of the post-processing states of a decoded audio frequency signal, when the sampling frequency varies from one signal frame to the other.
The invention applies more particularly to the case of a decoding by linear prediction like CELP (“Code-Excited Linear Prediction”) type decoding. Linear prediction codecs, such as ACELP (“Algebraic Code-Excited Linear Prediction”) type codecs, are considered suitable for speech signals, the production of which they model well.
The sampling frequency at which the CELP coding algorithm operates is generally predetermined and identical in each encoded frame; examples of sampling frequencies are:
It will further be noted that in the case of a codec as described in ITU-T Recommendation G.718, a processing module is present for improving the decoded signal by low-frequency noise reduction. This module is termed "bass post-filter" (BPF) or "low-frequency post-filter". It applies at the same sampling frequency as the CELP decoding. The purpose of this post-processing is to eliminate the low-frequency noise between the first harmonics of a voiced speech signal. This post-processing is especially important for high-pitched women's voices, where the distance between the harmonics is greater and the noise is less masked.
Despite the fact that the common term for this post-processing in the field of coding is “low-frequency post-filtering”, it is not, in fact, a simple filtering but rather a fairly complex post-processing that generally contains “Pitch Tracking”, “Pitch Enhancer”, “Low-pass filtering” or “LP-filtering” modules and addition modules. This type of post-processing is described in detail, for example, in Recommendation G.718 (06/2008) “Frame error robust narrowband and wideband embedded variable bit-rate coding of speech and audio from 8-32 kbits/s”, chapter 7.14.1. The block diagram of this post-processing is illustrated in
Here we recall only the principles and elements necessary for understanding the present document. The technique described uses a division into two frequency bands, a low band and a high band. An adaptive filtering is applied to the low band, designed to cover the frequencies below the first harmonics of the synthesized signal. This adaptive filtering is thus parameterized by the period T of the speech signal, termed the "pitch". Indeed, the operation performed by the "pitch enhancer" module is the following: the pitch-enhanced signal ŝf(n) is obtained as
ŝf(n)=(1−α)ŝ(n)+αsp(n)
where
sp(n)=0.5ŝ(n−T)+0.5ŝ(n+T)
and ŝ(n) is the decoded signal.
This processing requires a memory of the past signal whose size must cover the various possible values of the pitch T (so that the value ŝ(n−T) can be found). The value of the pitch T is not known for the next frame; thus, generally, to cover the worst possible case, MAXPITCH+1 samples of the past decoded signal are stored for the post-processing. MAXPITCH denotes the maximum pitch lag at the given sampling frequency; typically this value is 289 at 16 kHz or 231 at 12.8 kHz. An additional sample is often stored for subsequently performing an order 1 de-emphasis filtering. This de-emphasis filtering will not be described here in detail as it does not form the subject of the present invention.
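By way of illustration, a minimal C sketch of the core "pitch enhancer" operation described above is given below. The function and buffer names are hypothetical; the adaptive computation of the factor α and the handling of the frame end (where ŝ(n+T) may fall beyond the decoded samples) are part of G.718 and are not reproduced here.

/*
 * Minimal sketch of the pitch enhancement:
 *   s_f(n) = (1 - alpha) * s(n) + alpha * 0.5 * (s(n - T) + s(n + T))
 *
 * 'syn' points to the first sample of the current decoded frame inside a
 * larger buffer whose preceding MAXPITCH + 1 entries hold the past decoded
 * signal, so that syn[n - T] is always available for T <= MAXPITCH
 * (e.g. MAXPITCH = 289 at 16 kHz, as indicated in the text).
 * 'len' is assumed small enough for syn[n + T] to stay within the decoded
 * samples; the real post-processing handles the frame end differently.
 */
void pitch_enhance(const float *syn, float *out, int len, int T, float alpha)
{
    for (int n = 0; n < len; n++) {
        float sp = 0.5f * (syn[n - T] + syn[n + T]);   /* two-tap comb on the pitch lag */
        out[n]   = (1.0f - alpha) * syn[n] + alpha * sp;
    }
}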
When the sampling frequency of the signal at the input or output of the codec is not identical to the CELP coding internal frequency, a resampling is implemented. For example:
Interest is focused here on a category of codecs supporting at least two internal sampling frequencies, the sampling frequency being selectable adaptively over time and able to vary from one frame to the next. Generally, for a range of "low" bitrates, the CELP coder will operate at a lower sampling frequency, e.g. fs1=12.8 kHz, and for a higher range of bitrates, the coder will operate at a higher frequency, e.g. fs2=16 kHz. A change of bitrate over time, from one frame to another, may in this case cause switching between these two frequencies (fs1 and fs2) according to the range of bitrates covered. This switching of frequencies between two frames may cause audible and troublesome artifacts, for several reasons.
One of the reasons causing these artifacts is that switching internal decoding frequencies prevents the low-frequency post-filtering from operating, at least in the first frame after switching, since the memory of the post-processing (i.e. the past synthesized signal) is at a sampling frequency different from that of the newly synthesized signal.
To remedy this problem, one option consists in deactivating the post-processing for the duration of the transition frame (the frame following the change in internal sampling frequency). This option generally does not produce a desirable result, since the noise that was previously removed by the post-filtering reappears abruptly in the transition frame.
Another option is to leave the post-processing active but to set its memories to zero. With this method, the quality obtained is very mediocre.
Another possibility is to treat a memory at 16 kHz as if it were at 12.8 kHz by keeping only the most recent 4/5 of the samples of this memory or, conversely, to treat a memory at 12.8 kHz as if it were at 16 kHz, either by adding 1/5 of zeros at the start (toward the past) of this memory in order to reach the correct length, or by storing 20% more samples at 12.8 kHz in order to have enough of them in case of a change in internal sampling frequency. Listening tests show that these solutions do not give a satisfactory quality.
There is therefore a need to find a better quality solution for avoiding a break in the post-processing in case of a change in sampling frequency from one frame to the other.
The present invention will improve the situation.
For this purpose, it provides a method of updating post-processing states applied to a decoded audio frequency signal. The method is such that, for a current decoded signal frame, sampled at a different sampling frequency from the preceding frame, it comprises the following steps:
Thus, the post-processing memory is adapted to the sampling frequency of the current frame which is post-processed. This technique allows improvement in the quality of post-processing in the transition frames between two sampling frequencies while minimizing the increase in complexity (calculation load, ROM, RAM and PROM memory).
The various particular embodiments mentioned below may be added, independently or in combination with one another, to the steps of the method of updating states defined above.
In a particular embodiment, when the sampling frequency of the preceding frame is higher than the sampling frequency of the current frame, the interpolation is performed starting from the most recent sample of the past decoded signal and proceeds in reverse chronological order; when the sampling frequency of the preceding frame is lower than the sampling frequency of the current frame, the interpolation is performed starting from the oldest sample of the past decoded signal and proceeds in chronological order.
This mode of interpolation makes it possible to use only a single storage array (of a length corresponding to the maximum signal period at the higher sampling frequency) for storing the past decoded signal before and after resampling. Indeed, in both resampling directions, the interpolation exploits the fact that once a sample of the past signal has been used for an interpolation, it is no longer needed for the following interpolations. It may thus be replaced in the storage array by the interpolated sample.
Thus, in an advantageous embodiment, the resampled past decoded signal is stored in the same buffer memory as the past decoded signal before resampling.
Thus the use of the RAM memory of the device is optimized by implementing this method.
In a particular embodiment the interpolation is of the linear type.
This type of interpolation is of low complexity.
For an effective implementation, the past decoded signal has a fixed length determined by the maximum possible speech signal period.
The method of updating states is particularly suited to the case where post-processing is applied to the decoded signal on a low frequency band for reducing low-frequency noise.
The invention also relates to a method of decoding a current frame of an audio frequency signal comprising a step of selecting a decoding sampling frequency and a step of post-processing. The method is such that, in the case where the preceding frame is sampled at a first sampling frequency different from a second sampling frequency of the current frame, it comprises an update of the post-processing states according to a method as described.
The low-frequency processing of the decoded signal is therefore adapted to the internal sampling frequency of the decoder, the quality of this post-processing then being improved.
The invention relates to a device for processing a decoded audio frequency signal, characterized in that it comprises, for a current frame of decoded signal, sampled at a different sampling frequency from the preceding frame:
The present invention is also aimed at an audio frequency signal decoder comprising a module for selecting a decoding sampling frequency and at least one processing device as described.
The invention is aimed at a computer program comprising code instructions for implementing the steps of the method of updating states as described, when these instructions are executed by a processor.
Finally, the invention relates to a storage medium, readable by a processor, whether or not integrated into the processing device, optionally removable, storing a computer program implementing a method of updating states as previously described.
Other features and advantages of the invention will appear more clearly on reading the following description, given solely by way of non-restrictive example, and referring to the attached drawings, in which:
In the embodiment described here, the CELP coder or decoder has two internal sampling frequencies: 12.8 kHz for low bitrates and 16 kHz for high bitrates. Of course, other internal sampling frequencies may be provided within the scope of the invention.
The method of updating post-processing states implemented on a decoded audio frequency signal comprises a first step E101 of retrieving, from a buffer memory, a past decoded signal stored during the decoding of the preceding frame. As previously mentioned, this decoded signal of the preceding frame (Mem. fs1) is at a first internal sampling frequency fs1.
The stored decoded signal length is a function, for example, of the maximum value of the speech signal period (or “pitch”).
For example, at 16 kHz sampling frequency the maximum value of the coded pitch is 289. The length of the stored decoded signal is then len_mem_16=290 samples.
For an internal frequency at 12.8 kHz the stored decoded signal has a length of len_mem_12=(290/5)*4=232 samples.
To optimize the RAM memory, the same buffer of 290 samples is used here for both cases: at 16 kHz all the indices from 0 to 289 are needed, whereas at 12.8 kHz only the indices 58 to 289 are used. The last sample of the memory (at index 289) therefore always contains the most recent sample of the past decoded signal, regardless of the sampling frequency. It should be noted that at both sampling frequencies (12.8 kHz and 16 kHz) the memory covers the same temporal support of 18.125 ms.
It should also be noted that at 12.8 kHz it is also possible to use the indices from 0 to 231 and ignore the samples from 232 to 289. Intermediate positions are also possible, but these solutions are not practical from a programming point of view. In the preferred implementation of the invention the first solution is used (indices 58 to 289).
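For illustration, the buffer dimensions and the index offset discussed above can be written as follows (a minimal sketch; the constant names are chosen for this example only and are not taken from any particular codebase):

#define LEN_MEM_16  290                          /* MAXPITCH + 1 at 16 kHz (289 + 1)         */
#define LEN_MEM_12  ((LEN_MEM_16 / 5) * 4)       /* = 232 samples at 12.8 kHz                */
#define OFF_12      (LEN_MEM_16 - LEN_MEM_12)    /* = 58: first index used at 12.8 kHz       */

/* Single buffer shared by both internal sampling frequencies:
 *  - at 16 kHz   the past decoded signal occupies mem[0]      .. mem[289]
 *  - at 12.8 kHz it occupies                      mem[OFF_12] .. mem[289]
 * In both cases mem[289] holds the most recent past sample and the buffer
 * covers the same temporal support of 18.125 ms.                            */
static float mem[LEN_MEM_16];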
In step E102, this past decoded signal is resampled at the internal sampling frequency of the current frame, fs2. This resampling is performed, for example, by a linear interpolation method of low complexity. Other types of interpolation may be used, such as cubic or spline interpolation, for example.
In a particularly advantageous embodiment, the interpolation used makes it possible to use only a single RAM storage array (a single buffer memory).
The case of a change in the internal sampling frequency from 16 kHz to 12.8 kHz is illustrated in
The figure also illustrates how these signals are stored in the buffer memory. In part a.), the samples stored at 12.8 kHz are aligned with the end of the buffer "mem" (according to the preferred implementation). The numbers give the index of the location in the storage array. The empty dotted circle markers at indices 0 to 3 correspond to the locations not used at 12.8 kHz.
It may be observed that, by starting from the most recent sample (i.e. the one at index 19 in the figure) and interpolating in reverse chronological order, the result may be written into the same array, since the old value at each location is no longer needed for the following interpolations. The solid arrow depicts the interpolation direction; the numbers written in the arrow correspond to the order in which the output samples are interpolated.
It is also seen that the interpolation weights are repeated periodically, in steps of 5 input samples or 4 output samples. Thus, in a particular embodiment, interpolation may take place in blocks of 5 input samples and 4 output samples. There are thus nb_bloc=len_mem_16/5=len_mem_12/4 blocks to be processed.
As an illustration, an example of C language style code instructions is given in Annex 1 for performing this interpolation,
where pf5 is an array (addressing) pointer for the input signal at 16 kHz, pf4 is an array pointer for the output signal at 12.8 kHz. At the start both point to the same place, at the end of the array mem of length len_mem_16 (the indices used are from 0 to len_mem_16-1). nb_bloc contains the number of blocks to be processed in the for loop. pf4[0] is the value of the array pointed to by the pointer pf4, pf4[−1] is the preceding value and so on. The same applies to pf5. At the end of each iteration the pointers pf5 and pf4 move back in steps of 5 and 4 samples respectively.
With this solution the increase in complexity (number of operations, PROM, ROM) is very small and the allocation of a new RAM array is not necessary.
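Since the listing of Annex 1 is not reproduced in the body of the text, the following sketch gives one possible form of this interpolation, consistent with the pointer description above; the exact linear-interpolation weights (derived here by placing the most recent memory sample one sampling period before the start of the current frame) are an assumption and may differ from the Annex 1 listing.

#define LEN_MEM_16  290                        /* length of the past signal at 16 kHz      */
#define NB_BLOC     (LEN_MEM_16 / 5)           /* = 58 blocks of 5 input / 4 output samples */

/*
 * In-place downsampling of the past decoded signal from 16 kHz to 12.8 kHz
 * by linear interpolation, processed in reverse chronological order so that
 * each location is overwritten only once its old value is no longer needed.
 * After the call, the 232 resampled samples occupy mem[58] .. mem[289].
 */
void resample_mem_16k_to_12k8(float *mem)
{
    float *pf5 = mem + LEN_MEM_16 - 1;   /* input pointer (16 kHz), starts at the end    */
    float *pf4 = mem + LEN_MEM_16 - 1;   /* output pointer (12.8 kHz), starts at the end */
    int b;

    for (b = 0; b < NB_BLOC; b++) {
        pf4[ 0] = 0.25f * pf5[-1] + 0.75f * pf5[ 0];
        pf4[-1] = 0.50f * pf5[-2] + 0.50f * pf5[-1];
        pf4[-2] = 0.75f * pf5[-3] + 0.25f * pf5[-2];
        pf4[-3] =                           pf5[-4];
        pf4 -= 4;                        /* move back by 4 output samples                */
        pf5 -= 5;                        /* move back by 5 input samples                 */
    }
}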
Part b.) of
The figure also depicts how these signals are stored in the buffer memory; the numbers give the index of the location in the array. In part a.), the samples stored at 12.8 kHz are aligned with the end of the buffer "mem" (according to the preferred implementation). The empty dotted circle markers at indices 0 to 3 correspond to the locations not available (since not used) at 12.8 kHz.
It may be observed that, this time, the interpolation is performed starting from the oldest sample (i.e. the one with index 0 at the output) so that the result of the interpolation can be written into the same memory array, since the old value at these locations is not needed for the following interpolations. The solid arrow depicts the interpolation direction; the numbers written in the arrow correspond to the order in which the output samples are interpolated.
It is also seen that the interpolation weights are repeated periodically, in steps of 4 input samples or 5 output samples. Thus, it is advantageous to perform the interpolation in blocks of 4 input samples and 5 output samples. There are therefore still nb_bloc=len_mem_16/5=len_mem_12/4 blocks to be processed, except that this time the last block is special since it also uses the first value of the current frame. It is also interesting to observe that the index of the first sample at 12.8 kHz in the memory "mem" (4 in
As an illustration, an example of C language style code instructions is given in Annex 2 for performing this interpolation:
The last block is processed separately since it also depends on the first sample of the current frame denoted by syn[0].
By analogy with the preceding case, pf4 is an array pointer for the input signal at 12.8 kHz that points to the start of the filter memory; this memory is stored from the nb_bloc-th sample of the array mem. pf5 is an array pointer for the output signal at 16 kHz; it points to the first element of the array mem. nb_bloc contains the number of blocks to be processed: nb_bloc-1 blocks are processed in the for loop, then the last block is processed separately. pf4[0] is the value of the array element pointed to by pf4, pf4[1] is the next value, and so on. The same applies to pf5. At the end of each iteration the pointers pf5 and pf4 move forward in steps of 5 and 4 samples respectively. The decoded signal of the current frame is stored in the array syn; syn[0] is the first sample of the current frame.
With this solution the increase in complexity (number of operations, PROM, ROM) is very small and the allocation of a new RAM array is not necessary.
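As with Annex 1, the listing of Annex 2 is not reproduced in the body of the text; the sketch below gives one possible form of this interpolation, consistent with the description above. The linear-interpolation weights follow the same alignment assumption as in the previous sketch and may differ from the Annex 2 listing.

#define LEN_MEM_16  290                        /* length of the past signal at 16 kHz      */
#define NB_BLOC     (LEN_MEM_16 / 5)           /* = 58 blocks of 4 input / 5 output samples */

/*
 * In-place upsampling of the past decoded signal from 12.8 kHz to 16 kHz by
 * linear interpolation, processed in chronological order (from the oldest
 * sample) so that each location is overwritten only once its old value is no
 * longer needed. On entry the 232 samples at 12.8 kHz occupy
 * mem[NB_BLOC] .. mem[289]; on exit the 290 resampled samples occupy
 * mem[0] .. mem[289]. syn[0] is the first decoded sample of the current
 * frame, needed only for the very last output sample.
 */
void resample_mem_12k8_to_16k(float *mem, const float *syn)
{
    const float *pf4 = mem + NB_BLOC;    /* input pointer (12.8 kHz), oldest sample      */
    float       *pf5 = mem;              /* output pointer (16 kHz), oldest location     */
    int b;

    for (b = 0; b < NB_BLOC - 1; b++) {
        pf5[0] =                         pf4[0];
        pf5[1] = 0.2f * pf4[0] + 0.8f * pf4[1];
        pf5[2] = 0.4f * pf4[1] + 0.6f * pf4[2];
        pf5[3] = 0.6f * pf4[2] + 0.4f * pf4[3];
        pf5[4] = 0.8f * pf4[3] + 0.2f * pf4[4];
        pf5 += 5;                        /* move forward by 5 output samples             */
        pf4 += 4;                        /* move forward by 4 input samples              */
    }
    /* last block: the final sample also uses the first sample of the current frame */
    pf5[0] =                         pf4[0];
    pf5[1] = 0.2f * pf4[0] + 0.8f * pf4[1];
    pf5[2] = 0.4f * pf4[1] + 0.6f * pf4[2];
    pf5[3] = 0.6f * pf4[2] + 0.4f * pf4[3];
    pf5[4] = 0.8f * pf4[3] + 0.2f * syn[0];
}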
Part b.) of
Now back to
In a particular embodiment, the post-processing is similar to that described in ITU-T Recommendation G.718. The memory of the resampled past decoded signal is used here for finding the values ŝ(n−T) for n=0 . . . T−1 as previously described in recalling the “bass-post-filter” technique in G.718.
For each frame received, the bitstream is demultiplexed in 401 and decoded. In 402 the decoder determines, here according to the bitrate of the current frame, at which frequency, fs1 or fs2, to decode the information originating from a CELP coder. According to the sampling frequency, either the decoding module 403 for the frequency fs1 or the decoding module 404 for the frequency fs2 is used to decode the received signal.
The CELP decoder operating at the frequency fs1=12.8 kHz (block 403) is a multi-bitrate extension of the ITU-T G.718 decoding algorithm initially defined between 8 and 32 kbits/s. In particular it includes the decoding of the CELP excitation and a linear prediction synthesis filtering 1/Â1(z).
The CELP decoder operating at the frequency fs2=16 kHz (block 404) is a multi-bitrate extension at 16 kHz of the ITU-T G.718 decoding algorithm initially defined between 8 and 32 kbits/s at 12.8 kHz.
The implementation of CELP decoding at 16 kHz is not detailed here since it is beyond the scope of the invention.
The problem of updating the states of the CELP decoder itself when switching from the frequency fs1 to the frequency fs2 is not addressed here.
The output of the CELP decoder in the current frame is then post-filtered by the processing device 410 implementing the method of updating post-processing states described with reference to
Conversely, the past decoded signal of the preceding frame (Mem. fs2), sampled at the frequency fs2, is resampled to the frequency fs1 to obtain a resampled past decoded signal (Mem. fs1) used as a post-processing memory for the current frame.
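By way of illustration, the memory update performed before the post-processing of the current frame can be summarized by the following sketch; the function names are hypothetical (the two resampling routines refer to the sketches given earlier and bass_post_filter stands for the G.718-style post-processing, whose exact interface is not specified here).

void resample_mem_16k_to_12k8(float *mem);                   /* sketch given earlier   */
void resample_mem_12k8_to_16k(float *mem, const float *syn); /* sketch given earlier   */
void bass_post_filter(float *syn, float *mem, int len);      /* hypothetical interface */

/*
 * Sketch of the post-processing state update at a change of internal
 * sampling frequency (fs_prev -> fs_cur), called for each decoded frame.
 * 'mem' is the single 290-sample buffer holding the past decoded signal,
 * 'syn' the decoded signal of the current frame of length 'len'.
 */
void update_states_and_post_filter(float *mem, float *syn, int len,
                                   int fs_prev, int fs_cur)
{
    if (fs_prev == 16000 && fs_cur == 12800) {
        /* memory at 16 kHz, current frame at 12.8 kHz: downsample the memory */
        resample_mem_16k_to_12k8(mem);
    } else if (fs_prev == 12800 && fs_cur == 16000) {
        /* memory at 12.8 kHz, current frame at 16 kHz: upsample the memory   */
        resample_mem_12k8_to_16k(mem, syn);
    }
    /* low-frequency post-filtering of the current frame, using 'mem' as the
       memory of the past decoded signal                                      */
    bass_post_filter(syn, mem, len);
}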
The signal post-processed by the processing device 410 is then resampled at the output frequency fsout by the resampling modules 411 and 412, with e.g. fsout=32 kHz. This amounts to performing either a resampling from fs1 to fsout, in 411, or a resampling from fs2 to fsout, in 412.
In variants, other post-processing operations (high-pass filtering, etc.) may be used in addition to or instead of the blocks 420 and 421.
According to the output frequency fsout, a high-band signal (resampled at the frequency fsout) decoded by the decoding module 405 may be added in 406 to the resampled low-band signal.
The decoder also provides for the use of additional decoding modes such as decoding by inverse frequency transform (block 430) in the case where the input signal to be coded has been coded by a transform coder. Indeed the coder analyzes the type of signal to be coded and selects the most suitable coding technique for this signal. Transform coding is used especially for music signals which are generally poorly coded by a CELP type of predictive coder.
This type of device comprises a processor PROC 506 cooperating with a memory block BM comprising a storage and/or work memory MEM. Such a device comprises an input module 501 capable of receiving audio signal frames and notably a stored part (Bufprec) of a preceding frame at a first sampling frequency fs1.
It comprises an output module 502 capable of transmitting a current frame of a post-processed audio frequency signal s′(n).
The processor PROC controls the module 503 for obtaining a past decoded signal stored for the preceding frame. Typically, this past decoded signal is obtained by simply reading it from a buffer memory included in the memory block BM. The processor also controls a resampling module 504 for resampling, by interpolation, the past decoded signal obtained in 503.
It also controls a post-processing module 505 using the resampled past decoded signal as a post-processing memory for performing post-processing of the current frame.
The memory block may advantageously comprise a computer program comprising code instructions for implementing the steps of the method of updating post-processing states within the meaning of the invention, when these instructions are executed by the processor PROC, and notably the steps of obtaining a past decoded signal, stored for the preceding frame, resampling the past decoded signal obtained, by interpolation, and using the resampled past decoded signal as a memory for post-processing the current frame.
Typically, the description of
In a general way the memory MEM stores all the data necessary for implementing the method.
ANNEX 1:
ANNEX 2:
Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
14 56734 | Jul 2014 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/FR2015/051864 | 7/6/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/005690 | 1/14/2016 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5774452 | Wolosewicz | Jun 1998 | A |
8401865 | Ojala | Mar 2013 | B2 |
9489964 | Kovesi | Nov 2016 | B2 |
20070040709 | Sung | Feb 2007 | A1 |
20090002379 | Baeza | Jan 2009 | A1 |
20090323826 | Wu | Dec 2009 | A1 |
20110077945 | Ojala | Mar 2011 | A1 |
20110295598 | Yang | Dec 2011 | A1 |
20150170668 | Kovesi | Jun 2015 | A1 |
20150371647 | Faure | Dec 2015 | A1 |
20160343384 | Ragot | Nov 2016 | A1 |
20170148461 | Daniel | May 2017 | A1 |
Entry |
---|
G.718 (Jun. 2008) “Frame error robust narrowband and wideband, embedded variable bit-rate coding of speech and audio from 8-32 kbits/s”, chapter 7.14.1. |
G. Roy, P. Kabal, “Wideband CELP speech coding at 16 kbits/sec”, ICASSP 1991. |
C. Laflamme et al., “16 kbps wideband speech coding technique based on algebraic CELP”, ICASSP 1991. |
International Search Report dated Sep. 11, 2015 for corresponding International Application No. PCT/FR2015/051864, filed Jul. 6, 2015. |
English translation of the International Written Opinion dated Sep. 11, 2015 for corresponding International Application No. PCT/FR2015/051864, filed Jul. 6, 2015. |
“ISO/IEC 14496-3:2001(E)—Subpart 3: Speech Coding—CELP”, International Standard ISO/IEC, XX, XX, Jan. 1, 2001 (Jan. 1, 2001), pp. 1-172, XP007902532. |
“ITU-T G.718—Frame Error Robust Narrow-Band and Wideband Embedded Variable Bit-Rate Coding of Speech and Audio from 8-32 kbit/s”, Jun. 30, 2008 (Jun. 30, 2008), XP055087883. |
French Search Report and Written Opinion dated Mar. 20, 2015 for corresponding French Application No. 1456734, filed Jul. 11, 2014. |
Number | Date | Country | |
---|---|---|---|
20170148461 A1 | May 2017 | US |