The present invention relates to a system and method for enhancing a decoded tonal sound signal, for example an audio signal such as a music signal coded using a speech-specific codec. For that purpose, the system and method reduce a level of quantization noise in regions of the spectrum exhibiting low energy.
The demand for efficient digital speech and audio coding techniques with a good trade-off between subjective quality and bit rate is increasing in various application areas such as teleconferencing, multimedia, and wireless communications.
A speech coder converts a speech signal into a digital bit stream which is transmitted over a communication channel or stored in a storage medium. The speech signal is digitized, that is, sampled and quantized, usually with 16 bits per sample. The speech coder has the role of representing the digital samples with a smaller number of bits while maintaining a good subjective speech quality. The speech decoder or synthesizer operates on the transmitted or stored bit stream and converts it back to a sound signal.
Code-Excited Linear Prediction (CELP) coding is one of the best prior art techniques for achieving a good compromise between subjective quality and bit rate. The CELP coding technique is a basis of several speech coding standards both in wireless and wireline applications. In CELP coding, the sampled speech signal is processed in successive blocks of L samples usually called frames, where L is a predetermined number of samples corresponding typically to 10-30 ms. A linear prediction (LP) filter is computed and transmitted every frame. The computation of the LP filter typically uses a lookahead, for example a 5-15 ms speech segment from the subsequent frame. The L-sample frame is divided into smaller blocks called subframes. Usually the number of subframes is three (3) or four (4) resulting in 4-10 ms subframes. In each subframe, an excitation signal is usually obtained from two components, a past excitation and an innovative, fixed-codebook excitation. The component formed from the past excitation is often referred to as the adaptive-codebook or pitch-codebook excitation. The parameters characterizing the excitation signal are coded and transmitted to the decoder, where the excitation signal is reconstructed and used as the input of the LP filter.
In some applications, such as music-on-hold, low bit rate speech-specific codecs are used to operate on music signals. This usually results in bad music quality due to the use of a speech production model in a low bit rate speech-specific codec.
In some music signals, the spectrum exhibits a tonal structure wherein several tones are present (corresponding to spectral peaks) and are not harmonically related. These music signals are difficult to encode with a low bit rate speech-specific codec using an all-pole synthesis filter and a pitch filter. The pitch filter is capable of modeling voice segments in which the spectrum exhibits a harmonic structure comprising a fundamental frequency and harmonics of this fundamental frequency. However, such a pitch filter fails to properly model tones which are not harmonically related. Furthermore, the all-pole synthesis filter fails to model the spectral valleys between the tones. Thus, when a low bit rate speech-specific codec using a speech production model such as CELP is used, music signals exhibit an audible quantization noise in the low-energy regions of the spectrum (inter-tone regions or spectral valleys).
An objective of the present invention is to enhance a tonal sound signal decoded by a decoder of a speech-specific codec in response to a received coded bit stream, for example an audio signal such as a music signal, by reducing quantization noise in low-energy regions of the spectrum (inter-tone regions or spectral valleys).
More specifically, according to the present invention, there is provided a system for enhancing a tonal sound signal decoded by a decoder of a speech-specific codec in response to a received coded bit stream, comprising: a spectral analyser responsive to the decoded tonal sound signal to produce spectral parameters representative of the decoded tonal sound signal; and a reducer of a quantization noise in low-energy spectral regions of the decoded tonal sound signal in response to the spectral parameters from the spectral analyser.
The present invention also relates to a method for enhancing a tonal sound signal decoded by a decoder of a speech-specific codec in response to a received coded bit stream, comprising: spectrally analysing the decoded tonal sound signal to produce spectral parameters representative of the decoded tonal sound signal; and reducing a quantization noise in low-energy spectral regions of the decoded tonal sound signal in response to the spectral parameters from the spectral analysis.
The present invention further relates to a system for enhancing a decoded tonal sound signal, comprising: a spectral analyser responsive to the decoded tonal sound signal to produce spectral parameters representative of the decoded tonal sound signal, wherein the spectral analyser divides a spectrum resulting from spectral analysis into a set of critical frequency bands, and wherein each critical frequency band comprises a number of frequency bins; and a reducer of a quantization noise in low-energy spectral regions of the decoded tonal sound signal in response to the spectral parameters from the spectral analyser, wherein the reducer of quantization noise comprises a noise attenuator that scales the spectrum of the decoded tonal sound signal per critical frequency band, per frequency bin, or per both critical frequency band and frequency bin.
The present invention still further relates to a method for enhancing a decoded tonal sound signal, comprising: spectrally analysing the decoded tonal sound signal to produce spectral parameters representative of the decoded tonal sound signal, wherein spectrally analysing the decoded tonal sound signal comprises dividing a spectrum resulting from the spectral analysis into a set of critical frequency bands each comprising a number of frequency bins; and reducing a quantization noise in low-energy spectral regions of the decoded tonal sound signal in response to the spectral parameters from the spectral analysis, wherein reducing the quantization noise comprises scaling the spectrum of the decoded tonal sound signal per critical frequency band, per frequency bin, or per both critical frequency band and frequency bin.
The foregoing and other objects, advantages and features of the present invention will become more apparent upon reading of the following non restrictive description of illustrative embodiments thereof, given by way of example only with reference to the accompanying drawings.
In the appended drawings:
In the following detailed description, an inter-tone noise reduction technique is performed within a low bit rate speech-specific codec to reduce a level of inter-tone quantization noise for example in musical content. The inter-tone noise reduction technique can be deployed with either narrowband sound signals sampled at 8000 samples/s or wideband sound signals sampled at 16000 samples/s or at any other sampling frequency. The inter-tone noise reduction technique is applied to a decoded tonal sound signal to reduce the quantization noise in the spectral valleys (low energy regions between tones). In some music signals, the spectrum exhibits a tonal structure wherein several tones are present (corresponding to spectral peaks) and are not harmonically related. These music signals are difficult to encode with a low bit rate speech-specific codec which uses an all-pole LP synthesis filter and a pitch filter. The pitch filter can model voiced speech segments having a spectrum that exhibits a harmonic structure with a fundamental frequency and harmonics of that fundamental frequency. However, the pitch filter fails to properly model tones which are not harmonically related. Further, the all-pole LP synthesis filter fails to model the spectral valleys between the tones. Thus, using a low bit rate speech-specific codec with a speech production model such as CELP, the modeled signals will exhibit an audible quantization noise in the low-energy regions of the spectrum (inter-tone regions or spectral valleys). The inter-tone noise reduction technique is therefore concerned with reducing the quantization noise in low-energy spectral regions to enhance a decoded tonal sound signal, more specifically to enhance quality of the decoded tonal sound signal.
In one embodiment, the low bit rate speech-specific codec is based on a CELP speech production model operating on either narrowband or wideband signals (8 or 16 kHz sampling frequency). Any other sampling frequency could also be used.
An example 600 of the decoder of a low bit rate speech-specific codec using a CELP speech production model will be briefly described with reference to
For example, a codec based on the AMR-WB ([1]—3GPP TS 26.190, “Adaptive Multi-Rate-Wideband (AMR-WB) speech codec; Transcoding functions”) structure can be used. The AMR-WB speech codec uses an internal sampling frequency of 12.8 kHz, and the signal can be re-sampled to either 8 or 16 kHz before performing reduction of the inter-tone quantization noise or, alternatively, noise reduction or audio enhancement can be performed at 12.8 kHz.
Referring to
The decoded sound signal 103 at the output of the sound signal decoder 102 is converted (re-sampled) to a sampling frequency of 8 kHz. However, it should be kept in mind that the inter-tone noise reduction technique disclosed herein can be equally applied to decoded tonal sound signals at other sampling frequencies such as 12.8 kHz or 16 kHz.
Preprocessing may or may not be applied to the decoded sound signal 103. When preprocessing is applied, the decoded sound signal 103 is, for example, pre-emphasized through a preprocessor 104 before spectral analysis in the spectral analyser 105 is performed.
To pre-emphasize the decoded sound signal 103, the preprocessor 104 comprises a first order high-pass filter (not shown). The first order high-pass filter emphasizes higher frequencies of the decoded sound signal 103 and may have, for that purpose, the following transfer function:
Hpre-emph(z) = 1 − 0.68·z^(−1)   (1)
where z represents the Z-transform variable.
Pre-emphasis of the higher frequencies of the decoded sound signal 103 has the property of flattening the spectrum of the decoded sound signal 103, which is useful for inter-tone noise reduction.
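By way of a non-limiting illustration, the pre-emphasis filter of Equation (1) and its inverse, the de-emphasis filter of Equation (24) used after enhancement, can be sketched in Python/NumPy as follows; the function names and the frame-memory handling are illustrative assumptions and not part of the codec specification.

import numpy as np
from scipy.signal import lfilter

PREEMPH = 0.68  # coefficient of Equation (1)

def pre_emphasize(x, last_input=0.0):
    # First-order high-pass pre-emphasis Hpre-emph(z) = 1 - 0.68 z^(-1);
    # last_input is the final sample of the previous frame (filter memory).
    zi = [-PREEMPH * last_input]
    y, _ = lfilter([1.0, -PREEMPH], [1.0], x, zi=zi)
    return y

def de_emphasize(x, last_output=0.0):
    # Inverse filter Hde-emph(z) = 1 / (1 - 0.68 z^(-1)) applied in the postprocessor.
    zi = [PREEMPH * last_output]
    y, _ = lfilter([1.0], [1.0, -PREEMPH], x, zi=zi)
    return y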
Following the pre-emphasis of the higher frequencies of the decoded sound signal 103 in the preprocessor 104:
For example, the speech-specific codec in which the inter-tone noise reduction technique is implemented operates on 20 ms frames containing 160 samples at a sampling frequency of 8 kHz. Also according to this example, the sound signal decoder 102 uses a 10 ms lookahead from the future frame for best frame erasure concealment performance. This lookahead is also used in the inter-tone noise reduction technique for a better frequency resolution. The inter-tone noise reduction technique implemented in the reducer 108 of quantization noise follows the same framing structure as the decoder 102. However, some shift can be introduced between the decoder framing structure and the inter-tone noise reduction framing structure to maximize the use of the lookahead. In the following description, the indices attributed to samples will reflect the inter-tone noise reduction framing structure.
Spectral Analysis
Referring to
The analysis windows are used to weight the pre-emphasized, decoded tonal sound signal 106 for frequency analysis. The analysis windows are flat in the middle with sine-function edges (
where LWindow = 240 samples is the size of the analysis window. Since a 256-point FFT (LFFT = 256) is used, the windowed signal is padded with 16 zero samples.
An alternative analysis window could be used in the case of a wideband signal with only a small lookahead available. This analysis window could have the following shape:
where Lwindow
Let s′(n) denote the decoded tonal sound signal with index 0 corresponding to the first sample in the inter-tone noise reduction frame (As indicated hereinabove, in this embodiment, this corresponds to 80 samples following the beginning of the sound signal decoder frame). The windowed decoded tonal sound signal for the spectral analysis can be obtained using the following relation:
where s′(0) is the first sample in the current inter-tone noise reduction frame.
FFT is performed on the windowed, decoded tonal sound signal to obtain one set of spectral parameters per frame:
The output of the FFT gives the real and imaginary parts of the spectrum, denoted by XR(k), k = 0 to LFFT/2, and XI(k), k = 1 to LFFT/2 − 1. Note that XR(0) corresponds to the spectrum at 0 Hz (DC) and XR(LFFT/2) corresponds to the spectrum at FS/2 Hz, where FS is the sampling frequency. The spectrum at these two (2) points is only real valued and is usually ignored in the subsequent analysis.
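To make the analysis step concrete, the following Python/NumPy sketch builds a 240-sample flat-topped window, zero-pads it to 256 samples and returns the real and imaginary spectra XR(k) and XI(k); the quarter-sine edge length of LWindow/3 = 80 samples is an assumption consistent with the overlap-add of Equation (23).

import numpy as np

L_WINDOW = 240          # analysis window length
L_FFT = 256             # FFT size (windowed signal zero-padded with 16 samples)
EDGE = L_WINDOW // 3    # assumed sine-edge length (80 samples)

def analysis_window():
    # Flat in the middle with sine-function edges; the falling edge mirrors the rising edge.
    n = np.arange(EDGE)
    rise = np.sin(np.pi * (n + 0.5) / (2 * EDGE))
    w = np.ones(L_WINDOW)
    w[:EDGE] = rise
    w[-EDGE:] = rise[::-1]
    return w

def spectral_analysis(frame):
    # frame: 240 pre-emphasized samples (current frame plus lookahead).
    xw = frame * analysis_window()
    X = np.fft.rfft(xw, n=L_FFT)          # zero-pads to 256 points
    return X.real, X.imag                  # XR(k), XI(k), k = 0 .. L_FFT/2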
After the FFT analysis, the resulting spectrum is divided into critical frequency bands using intervals having the following upper limits (17 critical frequency bands in the frequency range 0-4000 Hz and 21 critical frequency bands in the frequency range 0-8000 Hz) (see [2]: J. D. Johnston, "Transform coding of audio signals using perceptual noise criteria," IEEE J. Select. Areas Commun., vol. 6, pp. 314-323, February 1988).
In the case of narrowband coding, the critical frequency bands={100.0, 200.0, 300.0, 400.0, 510.0, 630.0, 770.0, 920.0, 1080.0, 1270.0, 1480.0, 1720.0, 2000.0, 2320.0, 2700.0, 3150.0, 3700.0, 3950.0} Hz.
In the case of wideband coding, the critical frequency bands={100.0, 200.0, 300.0, 400.0, 510.0, 630.0, 770.0, 920.0, 1080.0, 1270.0, 1480.0, 1720.0, 2000.0, 2320.0, 2700.0, 3150.0, 3700.0, 4400.0, 5300.0, 6700.0, 8000.0} Hz.
The 256-point or 512-point FFT results in a frequency resolution of 31.25 Hz (4000/128=8000/256). After ignoring the DC component of the spectrum, the number of frequency bins per critical frequency band in the case of narrowband coding is MCB={3, 3, 3, 3, 3, 4, 5, 4, 5, 6, 7, 7, 9, 10, 12, 14, 17, 12}, respectively, when the resolution is approximated to 32 Hz. In the case of wideband coding MCB={3, 3, 3, 3, 3, 4, 5, 4, 5, 6, 7, 7, 9, 10, 12, 14, 17, 22, 28, 44, 41}.
The average spectral energy per critical frequency band is computed as follows:
where XR(k) and XI(k) are, respectively, the real and imaginary parts of the kth frequency bin and ji is the index of the first bin in the ith critical band given by ji={1, 4, 7, 10, 13, 16, 20, 25, 29, 34, 40, 47, 54, 63, 73, 85, 99, 116} in the case of narrowband coding and ji={1, 4, 7, 10, 13, 16, 20, 25, 29, 34, 40, 47, 54, 63, 73, 85, 99, 116, 138, 166, 210} in the case of wideband coding.
The spectral analyser 105 also computes an energy per frequency bin, EBIN(k), using the following relation:
EBIN(k) = XR^2(k) + XI^2(k),   k = 0, . . . , 114   (5)
Finally, the spectral analyser 105 computes a total frame spectral energy as an average of the spectral energies of the first 17 critical frequency bands calculated by the spectral analyser 105 in a frame, using the following relation:
The spectral parameters 107 from the spectral analyser 105 of
It should be noted that, for a wideband decoded tonal sound signal sampled at 16000 samples/s, up to 21 critical frequency bands could be used but computation of the total frame energy Efrt at time t will still be performed on the first 17 critical bands.
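The energy computations of Equations (4) to (6) can be sketched as follows for the narrowband case; the exact weighting of Equations (4) and (6) is not reproduced above, so plain averages are assumed, and the bin-index tables are taken from the description.

import numpy as np

# First-bin index ji and bin count MCB(i) per critical frequency band (narrowband case)
J_NB = [1, 4, 7, 10, 13, 16, 20, 25, 29, 34, 40, 47, 54, 63, 73, 85, 99, 116]
M_NB = [3, 3, 3, 3, 3, 4, 5, 4, 5, 6, 7, 7, 9, 10, 12, 14, 17, 12]

def bin_energies(XR, XI):
    # EBIN(k) = XR^2(k) + XI^2(k) for the 115 bins of the first 17 bands (Equation (5));
    # skipping the DC bin is an indexing assumption.
    e = XR ** 2 + XI ** 2
    return e[1:116]

def band_energies(XR, XI):
    # Average spectral energy per critical frequency band (Equation (4), assumed plain mean).
    e = XR ** 2 + XI ** 2
    return np.array([e[j:j + m].mean() for j, m in zip(J_NB, M_NB)])

def total_frame_energy(E_CB):
    # Total frame spectral energy: average of the first 17 critical bands (Equation (6) assumed linear).
    return float(np.mean(E_CB[:17]))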
Signal Type Classifier:
The inter-tone noise reduction technique conducted by the system and method 100 enhances a decoded tonal sound signal, such as a music signal, coded by means of a speech-specific codec. Usually, non-tonal sounds such as speech are well coded by a speech-specific codec and do not need this type of frequency based enhancement.
The system and method 100 for enhancing a decoded tonal sound signal further comprises, as illustrated in
The signal type classifier 301 comprises the feature of not only separating the decoded sound signal into sound signal categories, but also of instructing the reducer 108 of quantization noise so as to minimize any possible degradation of speech.
A schematic block diagram of the signal type classifier 301 is illustrated in
First, the signal type classifier 301 comprises a finder 501 that determines a mean of the past forty (40) total frame spectral energy (Et) variations calculated using the following relation:
Then, the finder 501 determines a statistical deviation of the energy variation history σE over the last fifteen (15) frames using the following relation:
The signal type classifier 301 comprises a memory 502 updated with the mean and deviation of the variation of the total frame spectral energy Et as calculated in Equations (7) and (8).
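Since Equations (7) and (8) are not reproduced above, the sketch below assumes that the energy variation is the absolute frame-to-frame difference of the total frame energy Et, that the mean is taken over the last forty variations, and that σE is the root-mean-square deviation of the last fifteen variations around that mean; these forms are assumptions for illustration.

import numpy as np
from collections import deque

class EnergyVariationTracker:
    # Maintains the energy variation history used by the signal type classifier 301.
    def __init__(self):
        self.prev_E = None
        self.history = deque(maxlen=40)      # past 40 total-frame-energy variations

    def update(self, E_t):
        if self.prev_E is not None:
            self.history.append(abs(E_t - self.prev_E))
        self.prev_E = E_t
        if len(self.history) < 15:
            return None                      # not enough history for a decision yet
        mean40 = float(np.mean(self.history))                    # Equation (7), assumed form
        last15 = np.array(list(self.history)[-15:])
        return float(np.sqrt(np.mean((last15 - mean40) ** 2)))   # sigma_E, Equation (8), assumed form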
The resulting deviation σE is compared to four (4) floating thresholds in comparators 503-506 to determine the efficiency of the reducer 108 of quantization noise on the current decoded sound signal. In the example of
The five (5) sound signal categories 0-4 can be determined as indicated in the following Table:
The sound signal category 0 is a non-tonal sound signal category, like speech, which is not modified by the inter-tone noise reduction technique. This category of decoded sound signal has a large statistical deviation of the spectral energy variation history. When detection of categories 1-4 by the comparators 503-506 is negative, a controller 511 instructs the reducer 108 of quantization noise not to reduce inter-tone quantization noise (Reduction=0 dB).
The three in-between sound signal categories include sound signals with different types of statistical deviation of spectral energy variation history.
Sound signal category 1 (biggest variation after "speech type" decoded sound signal) is detected by the comparator 506 when the statistical deviation of spectral energy variation history is lower than a Threshold 1. A controller 510 is responsive to such a detection by the comparator 506 to instruct, when the last detected sound signal category was ≧0, the reducer 108 of quantization noise to enhance the decoded tonal sound signal within the frequency band 2000 Hz to FS/2 Hz by reducing the inter-tone quantization noise by a maximum allowed amplitude of 6 dB.
Sound signal category 2 is detected by the comparator 505 when the statistical deviation of spectral energy variation history is lower than a Threshold 2. A controller 509 is responsive to such a detection by the comparator 505 to instruct, when the last detected sound signal category was ≧1, the reducer 108 of quantization noise to enhance the decoded tonal sound signal within the frequency band 1270 Hz to FS/2 Hz by reducing the inter-tone quantization noise by a maximum allowed amplitude of 9 dB.
Sound signal category 3 is detected by the comparator 504 when the statistical deviation of spectral energy variation history is lower than a Threshold 3. A controller 508 is responsive to such a detection by the comparator 504 to instruct, when the last detected sound signal category was ≧2, the reducer 108 of quantization noise to enhance the decoded tonal sound signal within the frequency band 700 Hz to FS/2 Hz by reducing the inter-tone quantization noise by a maximum allowed amplitude of 12 dB.
Sound signal category 4 is detected by the comparator 503 when the statistical deviation of spectral energy variation history is lower than a Threshold 4. A controller 507 is responsive to such a detection by the comparator 503 to instruct, when the last detected signal type category was ≧3, the reducer 108 of quantization noise to enhance the decoded tonal sound signal within the frequency band 400 Hz to FS/2 Hz by reducing the inter-tone quantization noise by a maximum allowed amplitude of 12 dB.
In the embodiment of
Counters of a series of frames of sound signal category 0 and of a series of frames of sound signal category 3 or 4 are used to respectively decrease or increase thresholds.
For example, if a counter 512 counts a series of more than 30 frames of sound signal category 3 or 4, the floating thresholds 1-4 will be increased by a threshold controller 514 for the purpose of allowing more frames to be considered as sound signal category 4. Each time the count of the counter 512 is incremented, the counter 513 is reset to zero.
The inverse is also true with sound signal category 0. For example, if a counter 513 counts a series of more than 30 frames of sound signal category 0, the threshold controller 514 decreases the floating thresholds 1-4 for the purpose of allowing more frames to be considered as sound signal category 0. The floating thresholds 1-4 are limited to absolute maximum and minimum values to ensure that the signal type classifier 301 is not locked to a fixed category.
The increase and decrease of the thresholds 1-4 can be illustrated by the following relations:
IF (Nbr_cat4_frame > 30)
    Thres(i) = Thres(i) + TH_UP,   i = 1, . . . , 4
ELSE IF (Nbr_cat0_frame > 30)
    Thres(i) = Thres(i) − TH_DWN,   i = 1, . . . , 4
Thres(i) = MIN(Thres(i), MAX_TH),   i = 1, . . . , 4
Thres(i) = MAX(Thres(i), MIN_TH),   i = 1, . . . , 4
In the case of frame erasure, all the thresholds 1-4 are reset to their minimum values and the output of the signal type classifier 301 is considered as non-tonal (sound signal category 0) for three (3) frames including the lost frame.
If information from a Voice Activity Detector (VAD) (not shown) is available and is indicating no voice activity (presence of silence), the decision of the signal type classifier 301 is forced to sound signal category 0.
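The category decision and floating-threshold adaptation described above can be sketched as follows; because the category table is not reproduced here, the initial thresholds, the TH_UP/TH_DWN/MAX_TH/MIN_TH constants and the exact hysteresis rules are placeholders, and frame-erasure handling is omitted.

class SignalTypeClassifier:
    TH_UP, TH_DWN, MAX_TH, MIN_TH = 0.5, 0.5, 10.0, 1.0      # placeholder constants

    def __init__(self):
        self.thres = [8.0, 6.0, 4.0, 2.0]   # floating thresholds 1..4 (placeholder values)
        self.last_cat = 0
        self.nbr_cat4 = 0                   # counter 512: consecutive category 3/4 frames
        self.nbr_cat0 = 0                   # counter 513: consecutive category 0 frames

    def classify(self, sigma_E, vad_active=True):
        if not vad_active:                  # VAD reports silence: force category 0
            cat = 0
        else:
            cat = 0
            for k in (1, 2, 3, 4):          # category k: sigma_E below threshold k and
                if sigma_E < self.thres[k - 1] and self.last_cat >= k - 1:
                    cat = k                 # last category at least k - 1 (hysteresis)
        self._adapt_thresholds(cat)
        self.last_cat = cat
        return cat

    def _adapt_thresholds(self, cat):
        if cat >= 3:
            self.nbr_cat4 += 1
            self.nbr_cat0 = 0
        elif cat == 0:
            self.nbr_cat0 += 1
            self.nbr_cat4 = 0
        if self.nbr_cat4 > 30:              # let more frames reach category 4
            self.thres = [t + self.TH_UP for t in self.thres]
        elif self.nbr_cat0 > 30:            # let more frames fall back to category 0
            self.thres = [t - self.TH_DWN for t in self.thres]
        self.thres = [min(max(t, self.MIN_TH), self.MAX_TH) for t in self.thres]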
According to an alternative of the signal type classifier 301, the frequency band of allowed enhancement and/or the level of maximum inter-tone noise reduction could be made completely dynamic (without hard steps).
In the case of a small lookahead, it could be necessary to introduce a minimum gain reduction smoothing in the first critical bands to further reduce any potential distortion introduced with the inter-tone noise reduction. This smoothing could be performed using the following relation:
where RedGaini is a maximum gain reduction per band, FEhBand is the first band where the inter-tone noise reduction is allowed (typically varying between 400 Hz and 2 kHz, or critical frequency bands 3 to 12), Allow_red is the level of noise reduction allowed per sound signal category presented in the previous table, and max_band is the maximum band for the inter-tone noise reduction (17 for Narrowband (NB) and 20 for Wideband (WB)).
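The smoothing relation itself is not reproduced above; the sketch below shows one plausible form in which the maximum allowed reduction RedGaini grows linearly from the first allowed band FEhBand up to the category-dependent level Allow_red, and should be read as an illustrative assumption only.

import numpy as np

def max_gain_reduction_per_band(allow_red_db, feh_band, max_band=17, ramp_bands=4):
    # RedGain_i in dB: zero below FEhBand, then a linear ramp over `ramp_bands` bands
    # up to Allow_red; the ramp length and shape are assumptions.
    red = np.zeros(max_band + 1)
    for i in range(feh_band, max_band + 1):
        red[i] = allow_red_db * min(1.0, (i - feh_band + 1) / ramp_bands)
    return red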
Inter-Tone Noise Reduction:
Inter-tone noise reduction is applied (see reducer 108 of quantization noise (
Referring to
In an alternative implementation, the noise attenuator 304 performs per bin processing to apply a scaling gain to each frequency bin in the first K voiced bands, and then the noise attenuator 305 performs per band processing to scale the spectrum in each of the remaining critical frequency bands with a scaling gain. If K = 0, then the noise attenuator 305 performs per band processing in all the critical frequency bands.
The minimum scaling gain gmin is derived from the maximum allowed inter-tone noise reduction in dB, NRmax. As described in the foregoing description (see the table above), the signal type classifier 301 sets the maximum allowed noise reduction NRmax to a value between 6 and 12 dB. The minimum scaling gain is thus given by the relation:
gmin = 10^(−NRmax/20)   (9)
In the case of a narrowband tonal frame, the scaling gain can be computed in relation to the SNR per frequency bin then per bin noise reduction is performed. Per bin processing is applied only to the first 17 critical bands corresponding to a maximum frequency of 3700 Hz. The maximum number of frequency bins in which per bin processing can be used is 115 (the number of bins in the first 17 bands at 4 kHz).
In the case of a wideband tonal frame, per bin processing is applied to all the 21 critical frequency bands corresponding to a maximum frequency of 8000 Hz. The maximum number of frequency bins for which per bin processing can be used is 250 (the number of bins in the first 21 bands at 8 kHz).
In the inter-tone noise reduction technique, noise reduction starts at the fourth critical frequency band (no reduction performed before 400 Hz). To reduce any negative impact of the inter-tone quantization noise reduction technique, the signal type classifier 301 could push the starting critical frequency band up to the 12th. This means that the first critical frequency band on which inter-tone noise reduction is performed is somewhere between 400 Hz and 2 kHz and could vary on a frame basis.
The scaling gain for a certain critical frequency band, or for a certain frequency bin, can be computed as a function of the SNR in that frequency band or bin using the following relation:
(gs)^2 = ks·SNR + cs, bounded by gmin ≤ gs ≤ 1   (10)
The values of ks and cs are determined such that gs = gmin for SNR = 1 dB, and gs = 1 for SNR = 45 dB. That is, for SNRs of 1 dB and lower, the scaling gain is limited to gmin, and for SNRs of 45 dB and higher, no inter-tone noise reduction is performed in the given critical frequency band (gs = 1). Thus, given these two end points, the values of ks and cs in Equation (10) can be calculated using the following relations:
ks = (1 − gmin^2)/44 and cs = (45·gmin^2 − 1)/44   (11)
The variable SNR of Equation (10) is either the SNR per critical frequency band, SNRCB(i), or the SNR per frequency bin, SNRBIN(k), depending on the type of per bin or per band processing.
The SNR per critical frequency band is computed as follows:
where ECB(1)(i) and ECB(2)(i) denote the energy per critical frequency band for the past and current frame spectral analyses, respectively (as computed in Equation (4)), and NCB(i) denotes the noise energy estimate per critical frequency band.
The SNR per frequency bin in a certain critical frequency band i is computed using the following relation:
where EBIN(1)(k) and EBIN(2)(k) denote the energy per frequency bin for the past (1) and current (2) frame spectral analyses, respectively (as computed in Equation (5)), NCB(i) denotes the noise energy estimate per critical frequency band, ji is the index of the first frequency bin in the ith critical frequency band and MCB(i) is the number of frequency bins in critical frequency band i as defined herein above.
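A sketch of the gain computation follows. Equations (9) to (11) are reproduced directly; since the exact weighting of the past and current analyses in Equations (12) and (13) is not given above, an equal-weight average divided by the per-band noise estimate is assumed.

import numpy as np

def min_scaling_gain(nr_max_db):
    # gmin = 10^(-NRmax/20), from the maximum allowed noise reduction in dB (Equation (9)).
    return 10.0 ** (-nr_max_db / 20.0)

def scaling_gain(snr, g_min):
    # Equations (10)-(11): gs^2 = ks*SNR + cs, bounded so that gmin <= gs <= 1.
    k_s = (1.0 - g_min ** 2) / 44.0
    c_s = (45.0 * g_min ** 2 - 1.0) / 44.0
    return np.sqrt(np.clip(k_s * snr + c_s, g_min ** 2, 1.0))

def snr_per_band(E_cb_prev, E_cb_curr, N_cb):
    # SNR per critical band (Equation (12), equal weighting assumed).
    return 0.5 * (E_cb_prev + E_cb_curr) / np.maximum(N_cb, 1e-12)

def snr_per_bin(E_bin_prev, E_bin_curr, N_cb_of_bin):
    # SNR per frequency bin against the noise estimate of its band (Equation (13), same assumption).
    return 0.5 * (E_bin_prev + E_bin_curr) / np.maximum(N_cb_of_bin, 1e-12)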
According to another, alternative implementation, the scaling gain could be computed in relation to the SNR per critical frequency band or per frequency bin for the first voiced bands. If KVOIC>0 then per bin processing can be performed in the first KVOIC bands. Per band processing can then be used for the rest of the bands. In the case where KVOIC=0 per band processing can be used over the whole spectrum.
In the case of per band processing for a critical frequency band with index i, after determining the scaling gain using Equation (10) and the SNR as defined in Equation (12) or (13), the actual scaling is performed using a smoothed scaling gain updated in every spectral analysis by means of the following relation:
gCB,LP(i) = αgs·gCB,LP(i) + (1 − αgs)·gs   (14)
According to a feature, the smoothing factor αgs used for smoothing the scaling gain gs can be made adaptive and inversely related to the scaling gain gs itself. For example, the smoothing factor can be given by αgs = 1 − gs. Therefore, the smoothing is stronger for smaller gains gs. This approach prevents distortion in high SNR segments preceded by low SNR frames, as is the case for voiced onsets. In the proposed approach, the smoothing procedure is able to quickly adapt and use lower scaling gains upon occurrence of, for example, a voiced onset.
Scaling in a critical frequency band is performed as follows:
X′R(k + ji) = gCB,LP(i)·XR(k + ji), and
X′I(k + ji) = gCB,LP(i)·XI(k + ji),   k = 0, . . . , MCB(i) − 1   (15)
where ji is the index of the first frequency bin in the critical frequency band i and MCB(i) is the number of frequency bins in that critical frequency band.
In the case of per bin processing in a critical frequency band with index i, after determining the scaling gain using Equation (10) and the SNR as defined in Equation (12) or (13), the actual scaling is performed using a smoothed scaling gain updated in every spectral analysis as follows:
gBIN,LP(k) = αgs·gBIN,LP(k) + (1 − αgs)·gs   (16)
where the smoothing factor αgs=1−gs is similar to Equation (14).
Temporal smoothing of the scaling gains prevents audible energy oscillations, while controlling the smoothing using αgs prevents distortion in high SNR speech segments preceded by low SNR frames, as is the case, for example, for voiced onsets.
Scaling in a critical frequency band i is then performed as follows:
X′R(k + ji) = gBIN,LP(k + ji)·XR(k + ji), and
X′I(k + ji) = gBIN,LP(k + ji)·XI(k + ji),   k = 0, . . . , MCB(i) − 1   (17)
where ji is the index of the first frequency bin in the critical frequency band i and MCB(i) is the number of frequency bins in that critical frequency band.
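The smoothed-gain update and the actual scaling of Equations (14) to (17) can be sketched as follows; the in-place array handling and the function names are illustrative.

def smooth_gain(g_lp, g_s):
    # Equations (14)/(16) with the adaptive smoothing factor a_gs = 1 - g_s:
    # stronger smoothing is applied for smaller scaling gains.
    a_gs = 1.0 - g_s
    return a_gs * g_lp + (1.0 - a_gs) * g_s

def scale_band(XR, XI, j_i, m_cb, g_cb_lp):
    # Per band processing, Equation (15): one smoothed gain scales the whole band.
    XR[j_i:j_i + m_cb] *= g_cb_lp
    XI[j_i:j_i + m_cb] *= g_cb_lp

def scale_bins(XR, XI, j_i, m_cb, g_bin_lp):
    # Per bin processing, Equation (17): each bin uses its own smoothed gain.
    XR[j_i:j_i + m_cb] *= g_bin_lp[j_i:j_i + m_cb]
    XI[j_i:j_i + m_cb] *= g_bin_lp[j_i:j_i + m_cb]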
The smoothed scaling gains gBIN,LP(k) and gCB,LP(i) are initially set to 1.0. Each time a non-tonal sound frame is processed (music_flag = 0), the values of the smoothed scaling gains are reset to 1.0 to limit any possible reduction of these smoothed scaling gains in the next frame.
In every spectral analysis performed by the spectral analyser 105, the smoothed scaling gains gCB,LP(i) are updated for all critical frequency bands (even for voiced critical frequency bands processed through per bin processing—in this case gCB,LP(i) is updated with an average of gBIN,LP(k) belonging to the critical frequency band i). Similarly, the smoothed scaling gains gBIN,LP(k) are updated for all frequency bins in the first 17 critical frequency bands, that is up to frequency bin 115 in the case of narrowband coding (the first 21 critical frequency bands, that is up to frequency bin 250 in the case of wideband coding). For critical frequency bands processed with per band processing, the scaling gains are updated by setting them equal to gCB,LP(i) in the first 17 (narrowband coding) or 21 (wideband coding) critical frequency bands.
In the case of a low-energy decoded tonal sound signal, inter-tone noise reduction is not performed. A low-energy sound signal is detected by finding the maximum noise energy over all the critical frequency bands, max(NCB(i)), i = 0, . . . , 16 in the case of narrowband coding and i = 0, . . . , 20 in the case of wideband coding, and if this value is lower than or equal to a certain value, for example 15 dB, then no inter-tone noise reduction is performed.
In the case of processing of narrowband signals, the inter-tone noise reduction is performed on the first 17 critical frequency bands (up to 3680 Hz). For the remaining 11 frequency bins between 3680 Hz and 4000 Hz, the spectrum is scaled using the last scaling gain gs of the frequency bin corresponding to 3680 Hz.
Spectral Gain Correction
The Parseval theorem states that the energy in the time domain is equal to the energy in the frequency domain. Reduction of the energy of the inter-tone noise therefore results in an overall reduction of energy in the frequency and time domains. An additional feature is that the reducer 108 of quantization noise comprises a per band gain corrector 306 to rescale the energy per critical frequency band in such a manner that the energy in each critical frequency band at the end of the rescaling will be close to the energy before the inter-tone noise reduction.
To achieve such rescaling, it is not necessary to rescale all the frequency bins; only the most energetic bins need to be rescaled. The per band gain corrector 306 comprises an analyser 401 (
The energy removed from inter-tone noise will be moved to the most energetic events (corresponding to the most energetic bins) of the critical frequency band. In this manner, the final music sample will sound clearer than just doing a simple inter-tone noise reduction because the dynamic between energetic events and the noise floor will further increase.
The spectral energy of a critical frequency band after the inter-tone noise reduction is computed in the same manner as the spectral energy before the inter-tone noise reduction:
In this respect, the per band gain corrector 306 comprises an analyser 402 to determine the per band spectral energy prior to inter-tone noise reduction using Equation (18), and an analyser 403 to determine the per band spectral energy after the inter-tone noise reduction using Equation (18).
The per band gain corrector 306 further comprises a calculator 404 to determine a corrective gain as the square root of the ratio of the spectral energy of a critical frequency band before inter-tone noise reduction to the spectral energy of this critical frequency band after inter-tone noise reduction has been applied:
Gcorr(i) = √(ECB(i)/ECB′(i)),   i = 0, . . . , 16   (19)
where ECB is the critical band spectral energy before inter-tone noise reduction and ECB′ is the critical frequency band spectral energy after inter-tone noise reduction. The total number of critical frequency bands covers the entire spectrum from 17 bands in Narrowband coding to 21 bands in Wideband coding.
The rescaling along the critical frequency band i can be performed as follows:
IF (gBIN,LP(k + ji) > 0.8 & i > 4)
    X″R(k + ji) = Gcorr(i)·X′R(k + ji), and
    X″I(k + ji) = Gcorr(i)·X′I(k + ji),   k = 0, . . . , MCB(i) − 1   (20)
ELSE
    X″R(k + ji) = X′R(k + ji), and
    X″I(k + ji) = X′I(k + ji),   k = 0, . . . , MCB(i) − 1
where ji is the index of the first frequency bin in the critical frequency band i and MCB(i) is the number of frequency bins in that critical frequency band. No gain correction is applied under 600 Hz because it is assumed that the spectral energy at very low frequencies has been accurately coded by the low bit rate speech-specific codec and any increase of inter-harmonic tones would be audible.
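The per band energy rescaling of Equations (18) to (20) can be sketched as follows, assuming Equation (18) has the same average form as Equation (4); the arrays are modified in place, and the 0.8 gain test and the i > 4 band test follow the condition above.

import numpy as np

def band_gain_correction(XR_s, XI_s, E_cb_before, j_i, m_cb, g_bin_lp, band_idx):
    # Rescale only the most energetic (barely attenuated) bins of band `band_idx` so that
    # the band energy comes back close to its pre-reduction value E_cb_before.
    sl = slice(j_i, j_i + m_cb)
    E_after = float(np.mean(XR_s[sl] ** 2 + XI_s[sl] ** 2))   # Equation (18), assumed mean form
    if E_after <= 0.0 or band_idx <= 4:                        # no correction at low frequencies
        return
    g_corr = np.sqrt(E_cb_before / E_after)                    # Equation (19)
    idx = np.flatnonzero(g_bin_lp[sl] > 0.8) + j_i             # bins kept nearly intact by the reduction
    XR_s[idx] *= g_corr                                        # Equation (20)
    XI_s[idx] *= g_corr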
Spectral Gain Boost
It is possible to further increase the clarity of a musical sample by further increasing the gain Gcorr in critical frequency bands where few energetic events occur. A calculator 405 of the per band gain corrector 306 determines the ratio of energetic events (ratio of the number of energetic bins to the total number of frequency bins) per critical frequency band as follows:
The calculator 405 then computes an additional correction factor to the corrective gain using the following formula:
IF (NumBinmax > 0)
    CF = −0.2778·REvCB + 1.2778
In a per band gain corrector 406, this new correction factor CF, having a value situated between [1.0, 1.2778], multiplies the corrective gain Gcorr. When this correction factor CF is taken into consideration, the rescaling along the critical frequency band i becomes:
IF (gBIN,LP(k + ji) > 0.8 & i > 4)
    X″R(k + ji) = Gcorr(i)·CF·X′R(k + ji), and
    X″I(k + ji) = Gcorr(i)·CF·X′I(k + ji),   k = 0, . . . , MCB(i) − 1
ELSE
    X″R(k + ji) = X′R(k + ji), and
    X″I(k + ji) = X′I(k + ji),   k = 0, . . . , MCB(i) − 1
In the particular case of Wideband coding, the rescaling is performed only in the frequency bins previously scaled by a scaling gain between [0.96, 1.0] in the inter-tone noise reduction phase. Usually, the higher the bit rate, the closer the energy of the spectrum will be to the desired energy level. For that reason, the second part of the gain correction, the gain correction factor CF, might not always be used. Finally, at very high bit rates, it could be beneficial to perform gain rescaling only in the frequency bins which were previously not modified (having a scaling gain of 1.0).
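The additional correction factor CF can be sketched as below. Because the criterion defining an "energetic" bin (NumBinmax) is not reproduced above, a bin is assumed energetic here when its smoothed scaling gain exceeds 0.8, the same test used in the rescaling condition.

import numpy as np

def correction_factor(g_bin_lp_band):
    # g_bin_lp_band: smoothed per-bin scaling gains of one critical frequency band.
    num_energetic = int(np.count_nonzero(g_bin_lp_band > 0.8))
    if num_energetic == 0:
        return 1.0                                  # no boost when no energetic bin is found
    rev_cb = num_energetic / g_bin_lp_band.size     # ratio of energetic bins to total bins
    return -0.2778 * rev_cb + 1.2778                # CF lies in [1.0, 1.2778]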
Reconstruction of Enhanced, Denoised Sound Signal
After determining the scaled spectral components 308, X′R(k) or X″R(k) and X′I(k) or X″I(k), a calculator 307 of the inverse analyser and overlap-add operator 110 computes an inverse FFT. The inverse FFT is applied to the scaled spectral components 308 to obtain a windowed enhanced decoded sound signal in the time domain given by the following relation:
The signal is then reconstructed in operator 303 using an overlap-add operation for the overlapping portions of the analysis. Since a sine window is used on the original decoded tonal sound signal 103 prior to spectral analysis in the spectral analyser 105, the same windowing is applied to the windowed enhanced decoded tonal sound signal 309 at the output of the inverse FFT calculator prior to the overlap-add operation. Thus, the double-windowed enhanced decoded tonal sound signal is given by the relation:
xww,d(1)(n) = wFFT(n)·xw,d(1)(n),   n = 0, . . . , LFFT − 1   (22)
For the first third of the Narrowband analysis window, the overlap add operation for constructing the enhanced sound signal is performed using the relation:
s(n)=xww,d(0)(n+2·Lwindow/3)+xww,d(1)(n), n=0, . . . , Lwindow/3−1 (23)
and for the first ninth of the Wideband analysis window, the overlap-add operation for constructing the enhanced decoded tonal sound signal is performed as follows:
s(n)=xww,d(0)(n+2·Lwindow
where xww,d(0)(n) is the double windowed enhanced decoded tonal sound signal from the analysis of the previous frame.
Using the overlap-add operation, since there is an 80-sample shift (40 in the case of Wideband coding) between the sound signal decoder frame and the inter-tone noise reduction frame, the enhanced decoded tonal sound signal can be reconstructed up to 80 samples into the lookahead in addition to the present inter-tone noise reduction frame.
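The reconstruction path of Equations (21) to (23) can be sketched as follows for the narrowband case (240-sample window, 160-sample frame advance, 80-sample overlap); the half-spectrum representation matches the analysis sketch given earlier and is an implementation assumption.

import numpy as np

def synthesize_frame(XR_s, XI_s, window, prev_tail, L_fft=256):
    # Inverse FFT of the scaled half-spectrum (Equation (21)), second windowing
    # (Equation (22)) and overlap-add with the previous analysis (Equation (23)).
    L_win = len(window)                                     # 240 samples
    third = L_win // 3                                      # 80-sample overlap region
    xw = np.fft.irfft(XR_s + 1j * XI_s, n=L_fft)[:L_win]    # enhanced windowed signal
    xww = window * xw                                       # double-windowed signal
    out = np.concatenate([prev_tail + xww[:third],          # overlap-add with previous tail
                          xww[third:2 * third]])            # flat part of the window (gain 1)
    return out, xww[2 * third:]                             # 160 output samples, 80-sample tail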
After the overlap-add operation to reconstruct the enhanced decoded tonal sound signal, de-emphasis is performed in the postprocessor 112 on the enhanced decoded sound signal using the inverse of the above described pre-emphasis filter. The postprocessor 112 therefore comprises a de-emphasis filter which, in this embodiment, is given by the relation:
Hde-emph(z) = 1/(1 − 0.68·z^(−1))   (24)
Inter-Tone Noise Energy Update
Inter-tone noise energy estimates per critical frequency band for inter-tone noise reduction can be calculated for each frame in an inter-tone noise energy estimator (not shown), using for example the following formula:
where NCB0 and ECB0 represent the current noise and spectral energies for the specified critical frequency band (i) and NCB1 and ECB1 represent the noise and the spectral energies for the past frame of the same critical frequency band.
This method of calculating inter-tone noise energy estimates per critical frequency band is simple and could introduce some distortions in the enhanced decoded tonal sound signal. However, in low bit rate Narrowband coding, these distortions are largely compensated by the improvement in the clarity of the synthesized sound signal.
In wideband coding, where the inter-tone noise is present but less annoying, the method used to update the inter-tone noise energy has to be more sophisticated to prevent the introduction of annoying distortion. Different techniques could be used, with more or less computational complexity.
Inter-Tone Noise Energy Update Using Weighted Average Per Band Energy:
In accordance with this technique, the second maximum and the minimum energy values of each critical frequency band are used to compute an energy threshold per critical frequency band as follows:
where max2 represents the frequency bin having the second maximum energy value and min the frequency bin having the minimum energy value in the critical frequency band of concern.
The energy threshold (thr_enerCB) is used to compute a first inter-tone noise level estimation per critical band (tmp_enerCB), which corresponds to the mean of the energies (EBIN) of all the frequency bins below the preceding energy threshold inside the critical frequency band, using the following relation:
where mcnt is the number of frequency bins whose energies (EBIN) are included in the summation, with mcnt ≤ MCB(i). Furthermore, the number mcnt of frequency bins whose energy (EBIN) is below the energy threshold is compared to the number of frequency bins (MCB) inside a critical frequency band to evaluate the ratio of frequency bins below the energy threshold. This ratio, accepted_ratioCB, is used to weight the first, previously found inter-tone noise level estimation (tmp_enerCB).
The weighting factor βCB applied to the inter-tone noise level estimation differs depending on the bit rate used and on accepted_ratioCB. A high accepted_ratioCB for a critical frequency band means that it will be difficult to differentiate the noise energy from the signal energy. In that case, it is desirable not to reduce the noise level of that critical frequency band too much, so as not to risk any alteration of the signal energy. A low accepted_ratioCB, on the other hand, indicates a large difference between the noise and signal energy levels, so the estimated noise level could be higher in that critical frequency band without adding distortion. The factor βCB is modified as follows:
Finally, the inter-tone noise estimation per critical frequency band can be smoothed differently depending on whether the inter-tone noise is increasing or decreasing:
where NCB0 represents the current noise energy for the specified critical frequency band (i) and NCB1 represents the noise energy of the past frame of the same critical frequency band.
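Since the relations for the energy threshold, the weighting factor βCB and the asymmetric smoothing are not reproduced above, the following sketch uses assumed forms and placeholder constants to illustrate the principle for one critical frequency band.

import numpy as np

def update_band_noise(E_bin_band, N_prev, alpha_up=0.9, alpha_down=0.5):
    # E_bin_band: per-bin energies EBIN of one critical band; N_prev: previous noise estimate.
    srt = np.sort(np.asarray(E_bin_band, dtype=float))
    thr_ener = 0.5 * (srt[0] + srt[-2])                 # min and second-max bin energies (assumed average)
    below = srt[srt <= thr_ener]                        # bins counted in the first estimate
    if below.size == 0:
        return N_prev
    tmp_ener = float(below.mean())                      # first inter-tone noise level estimate
    accepted_ratio = below.size / srt.size              # fraction of bins under the threshold
    beta = 0.5 if accepted_ratio > 0.8 else 1.0         # be conservative when noise and signal are hard to separate
    est = beta * tmp_ener
    alpha = alpha_up if est > N_prev else alpha_down    # rise slowly, fall faster (assumption)
    return alpha * N_prev + (1.0 - alpha) * est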
Although the present invention has been described in the foregoing description by way of non restrictive illustrative embodiments thereof, many other modifications and variations are possible within the scope of the appended claims without departing from the spirit, nature and scope of the present invention.
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/CA2009/000276 | 3/5/2009 | WO | 00 | 11/8/2010

Number | Date | Country
---|---|---
61064430 | Mar 2008 | US