This application is the national phase of International (PCT) Patent Application Serial No. PCT/CA03/00828, filed May 30, 2003, published under PCT Article 21(2) in English, which claims priority to and the benefit of Canadian Patent Application No. 2,388,352, filed May 31, 2002, the disclosures of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a method and device for post-processing a decoded sound signal in order to enhance the perceived quality of this decoded sound signal.
This post-processing method and device can be applied, in particular but not exclusively, to the digital encoding of sound (including speech) signals. They can also be applied to the more general case of signal enhancement, where the noise source may come from any medium or system and is not necessarily related to encoding or quantization noise.
2. Brief Description of the Current Technology
2.1 Speech Encoders
Speech encoders are widely used in digital communication systems to efficiently transmit and/or store speech signals. In digital systems, the analog input speech signal is first sampled at an appropriate sampling rate, and the successive speech samples are further processed in the digital domain. In particular, a speech encoder receives the speech samples as an input, and generates a compressed output bit stream to be transmitted through a channel or stored on an appropriate storage medium. At the receiver, a speech decoder receives the bit stream as an input, and produces an output reconstructed speech signal.
To be useful, a speech encoder must produce a compressed bit stream with a bit rate lower than the bit rate of the digital, sampled input speech signal. State-of-the-art speech encoders typically achieve a compression ratio of at least 16 to 1 and still enable the decoding of high quality speech. Many of these state-of-the-art speech encoders are based on the CELP (Code-Excited Linear Predictive) model, with different variants depending on the algorithm.
In CELP encoding, the digital speech signal is processed in successive blocks of speech samples called frames. For each frame, the encoder extracts from the digital speech samples a number of parameters that are digitally encoded, and then transmitted and/or stored. The decoder is designed to process the received parameters to reconstruct, or synthesize, the given frame of speech signal. Typically, a CELP encoder extracts parameters such as linear prediction (LP) coefficients describing the spectral envelope, adaptive codebook (pitch) parameters, and innovative (fixed) codebook excitation parameters, together with their respective gains.
Several speech encoding standards are based on the Algebraic CELP (ACELP) model, and more precisely on the ACELP algorithm. One of the main features of ACELP is the use of algebraic codebooks to encode the innovative excitation at each subframe. An algebraic codebook divides a subframe into a set of tracks of interleaved pulse positions. Only a few non-zero-amplitude pulses per track are allowed, and each non-zero-amplitude pulse is restricted to the positions of the corresponding track. The encoder uses fast search algorithms to find the optimal pulse positions and amplitudes for the pulses of each subframe. A description of the ACELP algorithm can be found in the article by R. Salami et al., “Design and description of CS-ACELP: a toll quality 8 kb/s speech coder,” IEEE Trans. on Speech and Audio Proc., Vol. 6, No. 2, pp. 116-130, March 1998, herein incorporated by reference, which describes the ITU-T G.729 CS-ACELP narrowband speech encoding algorithm at 8 kbits/second. It should be noted that there are several variations of the ACELP innovation codebook search, depending on the standard of concern. The present invention is not dependent on these variations, since it only applies to post-processing of the decoded (synthesized) speech signal.
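As a simple illustration of such an interleaved track structure, the sketch below divides a subframe of 64 positions into 4 tracks of 16 interleaved positions; these numbers are an assumption patterned after common ACELP configurations and are not necessarily those of the cited standards.

```python
# Illustrative interleaved track structure for an algebraic codebook
# (assumed configuration: 64 positions, 4 tracks of 16 positions each).
subframe_length, n_tracks = 64, 4
tracks = [list(range(t, subframe_length, n_tracks)) for t in range(n_tracks)]
print(tracks[0][:4])  # track 0 starts with positions [0, 4, 8, 12]
print(tracks[1][:4])  # track 1 starts with positions [1, 5, 9, 13]
```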
A recent standard based on the ACELP algorithm is the ETSI/3GPP AMR-WB speech encoding algorithm, which was also adopted by the ITU-T (Telecommunication Standardization Sector of the International Telecommunication Union) as Recommendation G.722.2 [ITU-T Recommendation G.722.2, “Wideband coding of speech at around 16 kbit/s using Adaptive Multi-Rate Wideband (AMR-WB),” Geneva, 2002], [3GPP TS 26.190, “AMR Wideband Speech Codec: Transcoding Functions,” 3GPP Technical Specification]. AMR-WB is a multi-rate algorithm designed to operate at nine different bit rates between 6.6 and 23.85 kbits/second. Those of ordinary skill in the art know that the quality of the decoded speech generally increases with the bit rate. AMR-WB has been designed to allow cellular communication systems to reduce the bit rate of the speech encoder in the case of bad channel conditions; the bits thus saved are used as channel encoding bits to increase the protection of the transmitted bits. In this manner, the overall quality of the transmitted speech can be kept higher than in the case where the speech encoder operates at a single, fixed bit rate.
Whenever a speech encoder is used in a communication system, the synthesized or decoded speech signal is never identical to the original speech signal, even in the absence of transmission errors. The higher the compression ratio, the higher the distortion introduced by the encoder. This distortion can be made subjectively small using different approaches. A first approach is to condition the signal at the encoder to better describe, or encode, subjectively relevant information in the speech signal. The use of a formant weighting filter, often represented as W(z), is a widely used example of this first approach [B. Kleijn and K. Paliwal, editors, “Speech Coding and Synthesis,” Elsevier, 1995]. This filter W(z) is typically made adaptive and is computed in such a way that it reduces the signal energy near the spectral formants, thereby increasing the relative energy of the lower energy bands. The encoder can then better quantize the lower energy bands, which would otherwise be masked by encoding noise, thereby increasing the perceived distortion. Another example of signal conditioning at the encoder is the so-called pitch sharpening filter, which enhances the harmonic structure of the excitation signal at the encoder. Pitch sharpening aims at ensuring that the inter-harmonic noise level is kept low enough in the perceptual sense.
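By way of illustration only, a commonly used form of such a formant weighting filter (given here as a typical example rather than as the filter of any particular standard) is derived from the linear prediction (LP) analysis filter A(z):

$$W(z) = \frac{A(z/\gamma_1)}{A(z/\gamma_2)}, \qquad 0 < \gamma_2 < \gamma_1 \le 1,$$

where $A(z) = 1 + \sum_{i=1}^{p} a_i z^{-i}$ and the factors $\gamma_1$ and $\gamma_2$ (typically on the order of 0.9-1.0 and 0.5-0.7, respectively) control how strongly the formant regions are de-emphasized, so that the coding noise is shaped to be better masked by the speech spectrum.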
A second approach to minimize the perceived distortion introduced by a speech encoder is to apply a so-called post-processing algorithm. Post-processing is applied at the decoder.
The present invention relates to a method for post-processing a decoded sound signal in order to enhance the perceived quality of this decoded sound signal, comprising dividing the decoded sound signal into a plurality of frequency sub-band signals, and applying post-processing to at least one of the frequency sub-band signals, but not all the frequency sub-band signals.
The present invention is also concerned with a device for post-processing a decoded sound signal in order to enhance the perceived quality of this decoded sound signal, comprising means for dividing the decoded sound signal into a plurality of frequency sub-band signals, and means for post-processing at least one of the frequency sub-band signals, but not all the frequency sub-band signals.
According to an illustrative embodiment, after post-processing of the above mentioned at least one frequency sub-band signal, the frequency sub-band signals are summed to produce an output post-processed decoded sound signal.
Accordingly, the post-processing method and device make it possible to localize the post-processing in the desired sub-band(s) and to leave other sub-bands virtually unaltered.
The present invention further relates to a sound signal decoder comprising an input for receiving an encoded sound signal, a parameter decoder supplied with the encoded sound signal for decoding sound signal encoding parameters, a sound signal decoder supplied with the decoded sound signal encoding parameters for producing a decoded sound signal, and a post-processing device as described above for post-processing the decoded sound signal in order to enhance the perceived quality of this decoded sound signal.
The foregoing and other objects, advantages and features of the present invention will become more apparent upon reading of the following non-restrictive description of illustrative embodiments thereof, given by way of example only with reference to the accompanying drawings.
In the appended drawings:
FIG. 6a is a graph illustrating an example of the spectrum of a pre-processed signal;
FIG. 6b is a graph illustrating an example of the spectrum of the post-processed signal obtained when using the described post-processing method;
FIGS. 8a and 8b are graphs showing an example of the frequency response of a pitch enhancer filter as described by Equation (1), with the special case of a pitch period T=10 samples;
a is a graph showing an example of frequency response for the low-pass filter 404 of FIG. 4;
b is a graph showing an example of frequency response for the band-pass filter 407 of FIG. 4;
c is a graph showing an example of combined frequency response for the low-pass filter 404 and band-pass filter 407 of FIG. 4.
In one illustrative embodiment, a two-band decomposition is used and adaptive filtering is applied only to the lower band. This results in an overall post-processing that is mostly targeted at frequencies near the first harmonics of the synthesized speech signal.
In the higher branch 308, the decoded speech signal 112 is filtered by a high-pass filter 301 to produce the higher band signal 310 (sH). In this specific example, no adaptive filter is used in the higher branch. In the lower branch 309, the decoded speech signal 112 is first processed through an adaptive filter 307 comprising an optional low-pass filter 302, a pitch tracking module 303, and a pitch enhancer 304, and then filtered through a low-pass filter 305 to obtain the lower band, post-processed signal 311 (sLEF). The post-processed decoded speech signal 113 is obtained by adding, through an adder 306, the lower 311 and higher 312 band post-processed signals from the output of the low-pass filter 305 and high-pass filter 301, respectively. It should be pointed out that the low-pass 305 and high-pass 301 filters could be of many different types, for example Infinite Impulse Response (IIR) or Finite Impulse Response (FIR). In this illustrative embodiment, linear phase FIR filters are used.
Therefore, the adaptive filter 307 of FIG. 3 comprises the optional low-pass filter 302, the pitch tracking module 303 and the pitch enhancer 304.
The low-pass filter 302 can be omitted, but it is included to allow viewing of the post-processing of FIG. 3 as a sub-band decomposition.
where α is a coefficient that controls the inter-harmonic attenuation, T is the pitch period of the input signal x[n], and y[n] is the output signal of the pitch enhancer. A more general equation could also be used where the filter taps at n−T and n+T could be at different delays (for example n−T1 and n+T2). Parameters T and α vary with time and are given by the pitch tracking module 303. With a value of α=1, the gain of the filter described by Equation (1) is exactly 0 at frequencies 1/(2T), 3/(2T), 5/(2T), etc., i.e. at the mid-points between the harmonic frequencies 1/T, 2/T, 3/T, etc. When α approaches 0, the attenuation between the harmonics produced by the filter of Equation (1) is reduced. With a value of α=0, the filter output is equal to its input.
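By way of illustration, one three-tap filter form consistent with the properties just described (unit gain at the harmonics, zero gain at the mid-points for α=1, identity for α=0) is the following; it is given only as an assumed example, and the exact expression and normalization of Equation (1) may differ:

$$y[n] = \frac{1}{1+\alpha}\left(x[n] + \frac{\alpha}{2}\bigl(x[n-T] + x[n+T]\bigr)\right), \qquad H(f) = \frac{1 + \alpha\cos(2\pi f T)}{1+\alpha},$$

where $f$ is the normalized frequency in cycles per sample. For α=1 this gives $H(f) = (1+\cos(2\pi f T))/2$, which equals 1 at the harmonic frequencies $k/T$ and 0 at the mid-points $(2k+1)/(2T)$; for α=0 it reduces to $y[n]=x[n]$.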
Since the pitch period of a speech signal varies in time, the pitch value T of the pitch enhancer 304 has to vary accordingly. The pitch tracking module 303 is responsible for providing the proper pitch value T to the pitch enhancer 304 for every frame of the decoded speech signal that has to be processed. For that purpose, the pitch tracking module 303 receives as input not only the decoded speech samples but also the decoded parameters 114 from the parameter decoder 106 of FIG. 1.
Since a typical speech encoder extracts, for every speech subframe, a pitch delay (which we call T0) and possibly a fractional pitch value, these decoded pitch parameters can be used by the pitch tracking module 303 as a starting point for determining the pitch value T.
The pitch enhanced signal sLE is then low-pass filtered through the filter 305 to isolate the low frequencies of the pitch enhanced signal sLE, and to remove the high-frequency components that arise when the pitch enhancer filter of Equation (1) is varied in time, according to the pitch delay T, at the decoded speech frame boundaries. This produces the lower band post-processed signal sLEF, which can now be added to the higher band signal sH in the adder 306. The result is the post-processed decoded speech signal 113, with reduced inter-harmonic noise in the lower band. The frequency band where pitch enhancement is applied depends on the cut-off frequency of the low-pass filter 305 (and optionally of the low-pass filter 302).
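As a concrete illustration of the two-band structure of FIG. 3 described above, the sketch below splits one decoded frame into a high band and a pitch-enhanced low band and recombines them. It is only a minimal sketch under stated assumptions: the FIR designs, the cross-over frequency, the frame length, the pitch period T and the factor α are illustrative choices that do not correspond to the actual filters 301 and 305, and the three-tap enhancer is merely one possible realization of a pitch enhancer with the properties described above.

```python
import numpy as np
from scipy import signal

def two_band_pitch_postprocess(x, T, alpha=0.5, fs=12800, fc=1000.0, numtaps=31):
    """Illustrative two-band post-processing of one decoded frame x.

    x     : decoded speech frame (1-D numpy array)
    T     : pitch period of the frame, in samples (assumed already known)
    alpha : inter-harmonic attenuation factor (0 = no enhancement)
    fc    : assumed cross-over frequency (Hz) between the two bands
    """
    # Complementary linear-phase FIR filters standing in for the
    # high-pass filter 301 and the low-pass filter 305 (assumed designs).
    lp = signal.firwin(numtaps, fc, fs=fs)
    hp = signal.firwin(numtaps, fc, fs=fs, pass_zero=False)

    # Higher branch: high-pass filtering only, no adaptive filter.
    s_h = signal.lfilter(hp, [1.0], x)

    # Lower branch: pitch enhancer (one possible three-tap realization with
    # taps at n-T, n and n+T), followed by low-pass filtering.
    x_pad = np.pad(x, (T, T), mode="edge")
    s_le = (x_pad[T:-T] + 0.5 * alpha * (x_pad[:-2 * T] + x_pad[2 * T:])) / (1.0 + alpha)
    s_lef = signal.lfilter(lp, [1.0], s_le)

    # Adder 306: recombine the two bands into the post-processed signal.
    return s_h + s_lef

# Example usage on a synthetic voiced frame with pitch period T = 50 samples.
T = 50
n = np.arange(256)
frame = np.sin(2 * np.pi * n / T) + 0.05 * np.random.randn(n.size)
post = two_band_pitch_postprocess(frame, T, alpha=0.8)
```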
FIGS. 6a and 6b show example signal spectra illustrating the effect of the post-processing of FIG. 3.
The post-processed decoded speech signal 113 at the output of the adder 306 has the spectrum shown in FIG. 6b.
Application to the AMR-WB Speech Decoder
The present invention can be applied to any speech signal synthesized by a speech decoder, or even to any speech signal corrupted by inter-harmonic noise that needs to be reduced. This section describes a specific, exemplary implementation of the present invention applied to an AMR-WB decoded speech signal. The post-processing is applied to the low-band synthesized speech signal 712 of FIG. 7.
The input signal of FIG. 4 (the AMR-WB low-band synthesized speech, sampled at 12.8 kHz) is first supplied to the pitch tracking module 401.
An illustrative embodiment of the pitch tracking algorithm for the module 401 relies on specific thresholds and tracked pitch values that are chosen only by way of example.
It should be noted that the pitch tracking described above for the module 401 is given for the purpose of illustration only. Any other pitch tracking method or device could be implemented in the module 401 (or the modules 303 and 502) to ensure better pitch tracking at the decoder.
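As one example of such an alternative (and not the algorithm of the module 401), a decoder-side pitch tracker can verify the decoded pitch delay T0 against its integer sub-multiples using a normalized correlation computed on the synthesized speech, preferring the smallest lag that correlates almost as well as the best one so as to avoid locking onto a pitch multiple. The search range, the analysis window and the threshold below are assumptions.

```python
import numpy as np

def track_pitch(synth, t0, t_min=34, t_max=231, threshold=0.85):
    """Return a refined pitch period for the latest samples of `synth`,
    starting from the decoded pitch delay `t0`.

    Candidate lags are t0 and its integer sub-multiples; the smallest lag
    whose normalized correlation is close enough to the best one is kept,
    which avoids locking onto a pitch multiple. All constants are
    illustrative assumptions.
    """
    candidates = sorted({lag for lag in (t0, t0 // 2, t0 // 3)
                         if t_min <= lag <= t_max})

    def norm_corr(lag):
        # Normalized correlation between the current window and the
        # window located `lag` samples earlier.
        window = min(2 * lag, len(synth) - lag)
        cur, past = synth[-window:], synth[-window - lag:-lag]
        denom = np.sqrt(np.dot(cur, cur) * np.dot(past, past)) + 1e-12
        return np.dot(cur, past) / denom

    scores = {lag: norm_corr(lag) for lag in candidates}
    best = max(scores.values())
    for lag in candidates:                 # smallest lag first
        if scores[lag] >= threshold * best:
            return lag
    return t0

# Example: a doubled decoded pitch value is corrected on a periodic signal.
true_period = 57
n = np.arange(4 * 231)
speech = np.sin(2 * np.pi * n / true_period) + 0.01 * np.random.randn(n.size)
print(track_pitch(speech, t0=2 * true_period))   # prints 57, not 114
```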
Therefore, the output of the pitch tracking module is the period T to be used in the pitch filter 402 which, in this preferred embodiment, is described by the filter of Equation (1). Again, a value of α=0 implies no filtering (output of the pitch filter 402 is equal to its input), and a value of α=1 corresponds to the highest amount of pitch enhancement.
Once the enhanced signal SE is obtained at the output of the pitch filter 402, it is low-pass filtered through the filter 404.
For completeness, the tables of filter coefficients used in this illustrative embodiment for the filters 404 and 407 are given below. Of course, these tables of filter coefficients are given by way of example only. It should be understood that these filters can be replaced without departing from the scope, spirit and nature of the present invention.
The output of the pitch filter 402 of FIG. 4, after low-pass filtering through the filter 404, is thus combined with the output of the band-pass filter 407 to form the post-processed low-band synthesized speech.
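By way of illustration only, a comparable pair of linear-phase FIR filters can be designed with standard tools: a low-pass filter standing in for the filter 404 and a band-pass filter standing in for the filter 407, chosen so that their summed response stays roughly flat across most of the 0-6.4 kHz band of the 12.8 kHz synthesis. The filter order and the cut-off frequencies below are assumptions and are not the coefficients of the illustrative embodiment.

```python
import numpy as np
from scipy import signal

fs = 12800.0        # sampling rate of the AMR-WB low-band synthesis
numtaps = 101       # odd length gives a symmetric, linear-phase FIR (assumed)
f_split = 1000.0    # assumed cross-over frequency between the two filters

# Stand-in for the low-pass filter 404 (applied to the pitch-enhanced signal).
b_lp = signal.firwin(numtaps, f_split, fs=fs)

# Stand-in for the band-pass filter 407, covering the rest of the band.
b_bp = signal.firwin(numtaps, [f_split, 6300.0], pass_zero=False, fs=fs)

# Because both filters are linear-phase and of the same length, their outputs
# add coherently; the combined response stays close to unity well below the
# upper band edge.
w, h_lp = signal.freqz(b_lp, worN=512, fs=fs)
_, h_bp = signal.freqz(b_bp, worN=512, fs=fs)
combined = np.abs(h_lp + h_bp)
print(np.max(np.abs(combined[w < 6000.0] - 1.0)))  # small deviation from unity
```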
Alternate Implementation of the Proposed Pitch Enhancer
It should be noted that, compared to Equation (1), there is a negative sign in front of the second term on the right-hand side of Equation (2). It should also be noted that the enhancement factor α is not included in Equation (2); rather, it is introduced by means of an adaptive gain applied by the processor 504 of FIG. 5.
The pitch value T for use in the inter-harmonic filter 503 is obtained adaptively by the pitch tracking module 502. The pitch tracking module 502 operates on the decoded speech signal and the decoded parameters, similarly to the previously disclosed pitch tracking modules 303 and 401.
The output 507 of the inter-harmonic filter 503 is then a signal formed essentially of the inter-harmonic portion of the input decoded speech signal 112, with a 180° phase shift at the mid-points between the signal harmonics. This output 507 is multiplied by a gain α (processor 504) and subsequently low-pass filtered (filter 505) to obtain the low-frequency-band modification that is applied to the input decoded speech signal 112.
The final post-processed decoded speech signal 509 is obtained by adding, through an adder 506, the output of the low-pass filter 505 to the input signal (the decoded speech signal 112).
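The sketch below illustrates this alternative structure under stated assumptions: the inter-harmonic filter 503 is modeled by a three-tap filter whose response is zero at the pitch harmonics and 180° out of phase with the input midway between them, which is one possible realization consistent with the description above and not necessarily the exact Equation (2); the gain α, the low-pass design and the frame length are likewise assumptions.

```python
import numpy as np
from scipy import signal

def interharmonic_postprocess(x, T, alpha=0.5, fs=12800, fc=1000.0, numtaps=31):
    """Alternative pitch enhancement: add a scaled, low-pass filtered
    inter-harmonic component back to the decoded signal x.

    The assumed three-tap filter below has frequency response (cos(wT) - 1)/2:
    zero at the pitch harmonics and -1 (a 180 degree phase shift) midway
    between them, so adding its scaled output attenuates inter-harmonic
    noise in the low band.
    """
    x_pad = np.pad(x, (T, T), mode="edge")
    # Assumed stand-in for the inter-harmonic filter 503.
    r = 0.25 * (x_pad[:-2 * T] + x_pad[2 * T:]) - 0.5 * x_pad[T:-T]

    # Gain (processor 504) followed by low-pass filtering (filter 505).
    # Zero-phase filtering keeps the sketch time-aligned with x; a causal
    # implementation would instead compensate the filter delay.
    lp = signal.firwin(numtaps, fc, fs=fs)
    modification = signal.filtfilt(lp, [1.0], alpha * r)

    # Adder 506: apply the low-frequency modification to the input signal.
    return x + modification

# Example usage on a synthetic voiced frame with pitch period T = 64 samples.
T = 64
n = np.arange(512)
frame = np.sin(2 * np.pi * n / T) + 0.05 * np.random.randn(n.size)
enhanced = interharmonic_postprocess(frame, T, alpha=0.8)
```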
One-Band Alternative Using an Adaptive High-Pass Filter
One last alternative for implementing sub-band post-processing for enhancing the synthesis signal at low frequencies is to use an adaptive high-pass filter whose cut-off frequency is varied according to the pitch value of the input signal. Specifically, and without referring to any drawing, the low-frequency enhancement according to this illustrative embodiment would be performed at each input signal frame by tracking the pitch value of the frame and adapting the cut-off frequency of the high-pass filter accordingly.
It should be pointed out that this illustrative embodiment of the present invention is equivalent to using only one processing branch of the sub-band structures described above.
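A minimal sketch of this one-band alternative is given below, under the assumption that varying the cut-off frequency according to the pitch value means placing it just below the fundamental frequency fs/T of the current frame, so that noise below the first harmonic is attenuated while the harmonics themselves are left essentially untouched; the margin factor, the filter order and the per-frame handling are assumptions.

```python
import numpy as np
from scipy import signal

def adaptive_highpass_frame(x, T, fs=12800, numtaps=129, margin=0.8):
    """One-band alternative: high-pass filter a decoded frame with a
    cut-off frequency that tracks the pitch of the frame.

    T      : pitch period of the frame in samples (from the pitch tracker)
    margin : places the cut-off at margin * (fs / T), i.e. just below the
             fundamental frequency (an assumed design choice).
    """
    f0 = fs / float(T)                 # fundamental frequency of the frame
    cutoff = margin * f0               # adapted cut-off frequency in Hz
    b = signal.firwin(numtaps, cutoff, fs=fs, pass_zero=False)
    return signal.lfilter(b, [1.0], x)

# Example: the cut-off follows the pitch period from frame to frame.
for T in (40, 64, 100):
    n = np.arange(256)
    frame = np.sin(2 * np.pi * n / T) + 0.05 * np.random.randn(n.size)
    out = adaptive_highpass_frame(frame, T)
```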
Although the present invention has been described in the foregoing description with reference to illustrative embodiments thereof, these embodiments can be modified at will, within the scope of the appended claims without departing from the spirit and nature of the present invention. For example, although the illustrative embodiments have been described in relation to a decoded speech signal, those of ordinary skill in the art will appreciate that the concepts of the present invention can be applied to other types of decoded signals, in particular but not exclusively to other types of decoded sound signals.
Number | Date | Country | Kind
---|---|---|---
2388352 | May 2002 | CA | national

Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/CA03/00828 | 5/30/2003 | WO | 00 | 11/23/2004

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO03/102923 | 12/11/2003 | WO | A
Number | Date | Country
---|---|---
20050165603 A1 | Jul 2005 | US