The present invention relates to voice enhancement, and in particular to a method and an apparatus for enhancing a coded audio signal.
Improved voice quality created by voice processing DSP (Digital Signal Processing) algorithms has been used to differentiate network providers. The transfer to packet networks, or to networks with extended tandem free operation (TFO) or transcoder free operation (TrFO), will diminish this ability to differentiate networks with traditional voice processing algorithms. Therefore, operators, which have generally been responsible for maintaining speech quality for their customers, are asking for voice processing algorithms that can be applied also to coded speech.
TFO is a voice standard to be deployed in the GSM (Global System for Mobile communications) and GSM-evolved 3G (Third Generation) networks. It is intended to avoid the traditional double speech encoding/decoding in mobile-to-mobile call configurations. The key inconvenience of a tandem configuration is the speech quality degradation introduced by the double transcoding. According to the ETSI listening tests, this degradation is usually more noticeable when the speech codecs are operating at low rates. Also, higher background noise level increases the degradation.
When the originating and terminating connections use the same speech codec, it is possible to transmit the speech frames received from the originating MS (Mobile Station) transparently to the terminating MS without activating the transcoding functions in the originating and terminating networks.
The key advantages of Tandem Free Operation are: improvement in speech quality by avoiding the double transcoding in the network; possible savings on the inter-PLMN (Public Land Mobile Network) transmission links, which carry compressed speech compatible with a 16 kbit/s or 8 kbit/s sub-multiplexing scheme, including packet switched transmission; possible savings in processing power in the network equipment, since the transcoding functions in the Transcoder Units are bypassed; and a possible reduction in the end-to-end transmission delay.
In TFO call configuration a transcoder device is physically present in the signal path, but the transcoding functions are bypassed. The transcoding device may perform control and protocol conversion functions. In Transcoder Free Operation (TrFO), on the other hand, no transcoder device is physically present and hence no control or conversion or other functions associated with it are activated.
The level of speech is an important factor affecting the perceived quality of speech. Typically, automatic level control algorithms are used on the network side; they adjust the speech level to a certain desired target level by increasing the level of faint speech and somewhat decreasing the level of very loud voices.
These methods cannot be utilized as such in future packet networks where the speech travels in the coded format end-to-end from the transmitting device to the receiving device.
Currently the coded speech is decoded in the network and speech enhancement is carried out on linear PCM samples using traditional speech enhancement methods. After that the speech is encoded again and transmitted to the receiving party.
However, for example for the AMR speech codec, level control is more difficult in the lower modes because the fixed codebook gain is no longer scalar quantized but is vector-quantized together with the adaptive codebook gain.
It is an object of the invention to provide a method and an apparatus for enhancing a coded audio signal by means of which the above-described problems are overcome and enhancement of a coded audio signal is improved.
According to a first aspect of the invention, this object is achieved by an apparatus and a method of enhancing a coded audio signal comprising indices which represent audio signal parameters which comprise at least a first parameter representing a first characteristic of the audio signal and a second parameter, comprising:
According to a second aspect of the invention, this object is achieved by an apparatus and a method of enhancing a coded audio signal comprising indices which represent audio signal parameters which comprise at least a first parameter representing a first characteristic of the audio signal and a background noise parameter, comprising:
More precisely, according to an embodiment of the invention a method for controlling the level of the AMR coded speech for all the AMR codec modes 12.2 kbit/s, 10.2 kbit/s, 7.95 kbit/s, 7.40 kbit/s, 6.70 kbit/s, 5.90 kbit/s, 5.15 kbit/s and 4.75 kbit/s is described. The level of the coded speech is adjusted by changing one of the coded speech parameters, namely the quantization index of the fixed codebook gain factor in the modes 12.2 kbit/s and 7.95 kbit/s. In the rest of the modes the fixed codebook gain is jointly vector-quantized with the adaptive codebook gain, and therefore adjusting the level of the coded speech requires changing both the fixed codebook gain factor and the adaptive codebook gain (joint index).
According to the invention, a new gain index is found such that the error between the desired gain and the realized effective gain becomes minimized. The proposed level control does not cause audible artifacts.
Therefore, according to the invention, level control is enabled also in lower AMR bit rates (not only 12.2 kbit/s and 7.95 kbit/s). The level control in the AMR mode 12.2 kbit/s can be improved by taking into account the required corresponding level control for the comfort noise level.
In the following, an embodiment of the present invention will be described in connection with an AMR coded audio signal comprising speech and/or noise. However, the invention is not limited to AMR coding and can be applied to any audio signal coding technique employing indices corresponding to audio signal parameters. For example, such audio signal parameters may control the level of the synthesized speech. In other words, the invention can be applied to any audio signal coding technique in which an index indicating a value of an audio signal parameter controlling a first characteristic of the audio signal is transmitted as the coded audio signal, and in which this index may also indicate a value of an audio signal parameter controlling another audio signal characteristic, such as the pitch of the synthesized speech.
The adaptive multi-rate speech codec (AMR) is presented to the extent necessary for illustrating the preferred embodiments. References 3GPP TS 26.090 V4.0.0 (2001-03), “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mandatory Speech Codec speech processing functions; AMR speech codec; Transcoding functions (Release 4)”, and Kondoz A. M. University of Surrey, UK, “Digital speech coding for low bit rate communications systems,” chapter 6: ‘Analysis-by-synthesis coding of speech,’ pages 174-214. John Wiley & Sons, Chichester, 1994 contain further information. The adaptive multi-rate (AMR) speech codec is based on the code-excited linear predictive (CELP) coding model. It consists of eight source codecs, or modes of operation, with bit-rates of 12.2, 10.2, 7.95, 7.40, 6.70, 5.90, 5.15 and 4.75 kbit/s. The basic encoding and decoding principles of the AMR codec are explained briefly below. In addition, the matters relevant to the parameter domain gain control are discussed in more detail.
The AMR encoding process comprises three main steps:
LPC (Linear predictive coding) analysis:
The short-term correlations between speech samples (formants) are modeled and removed by a 10th order filter. In the AMR codec the LP coefficients are calculated using the autocorrelation method. The LP coefficients are further transformed to Line Spectral Pairs (LSPs) for quantization and interpolation purposes, utilizing the property that LSPs have a strong correlation between adjacent subframes.
Pitch analysis (long-term prediction):
The long-term correlations between speech samples (voice periodicity) are modeled and removed by a pitch filter. The pitch lag is estimated from the perceptually weighted input speech signal by first using the computationally less expensive open-loop method. A more accurate pitch lag and pitch gain g_p are then estimated by a closed-loop analysis around the open-loop pitch lag estimate, allowing also fractional pitch lags. The pitch synthesis filter in AMR is implemented as shown in
where b60 is an interpolation filter based on a Hamming windowed sin(x)/x function.
Optimum excitation determination (innovative excitation search):
The CELP model parameters, i.e. the LP filter coefficients, the pitch parameters (the delay and the gain of the pitch filter), and the fixed codebook vector and fixed codebook gain, are encoded for transmission into LSP indices, an adaptive codebook index (pitch index) and adaptive codebook (pitch) gain index, and fixed codebook indices and a fixed codebook gain factor index, respectively.
Next, quantization of the fixed codebook gain is explained.
To make it efficient, the fixed codebook gain quantization is performed using moving-average (MA) prediction with fixed coefficients. The MA prediction is performed on the innovation energy as follows. Let E(n) be the mean-removed innovation energy (in dB) at subframe n, given by:
where N=40 is the subframe size, c(i) is the fixed codebook excitation, and Ē (in dB) is the mean of the innovation energy (a mode-dependent constant). The predicted energy is given by:
where [b_1 b_2 b_3 b_4] = [0.68 0.58 0.34 0.19] are the MA prediction coefficients, and R̂(k) is the quantified prediction error at subframe k:
R̂(k) = E(k) − Ẽ(k). (1.4)
Now, a predicted fixed codebook gain is computed using the predicted energy as in Eq. (1.2) (by substituting E(n) by Ẽ(n) and g_c by g_c′). First, the mean innovation energy E_I is found by:
and then the predicted gain g_c′ is found by:
g_c′ = 10^(0.05(Ẽ(n) + Ē − E_I)).
A correction factor between the gain g_c and the estimated one, g_c′, is given by:
γ_gc = g_c / g_c′. (1.7)
The prediction error and the correction factor are related as:
R(n) = E(n) − Ẽ(n) = 20 log(γ_gc). (1.8)
At the decoder, the transmitted speech parameters are decoded and speech is synthesized.
Decoding of the fixed codebook gain
In case of scalar quantization (in modes 12.2 kbit/s and 7.95 kbit/s), the decoder receives an index to a quantization table that gives the quantified fixed codebook gain correction factor γ̂_gc.
In case of vector quantization (in all the other modes), the index gives both the quantified adaptive codebook gain ĝ_p and the fixed codebook gain correction factor γ̂_gc.
The fixed codebook gain correction factor gives the fixed codebook gain in the same way as described above. First, the predicted energy is found by:
and then the mean innovation energy is found by:
The predicted gain is found by:
g_c′ = 10^(0.05(Ẽ(n) + Ē − E_I)). (1.11)
And finally, the quantified fixed codebook gain is achieved by:
ĝ_c = γ̂_gc · g_c′. (1.12)
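For illustration, the decoder-side gain reconstruction described above can be sketched in Python as follows. This is a minimal sketch, not codec source code: the function and variable names are illustrative, and the quantization-table look-up that produces the correction factor is assumed to have been done already.

```python
import math

# MA prediction coefficients of the AMR fixed codebook gain predictor
B = [0.68, 0.58, 0.34, 0.19]

def decode_fixed_codebook_gain(gamma_hat, c, e_mean, r_hist):
    """Reconstruct the quantified fixed codebook gain from the decoded
    correction factor, following the relations above.

    gamma_hat -- decoded fixed codebook gain correction factor
    c         -- fixed codebook excitation vector of the subframe (length N = 40)
    e_mean    -- mode-dependent mean energy value (E-bar, in dB)
    r_hist    -- the four previous quantified prediction errors R_hat(n-1)..R_hat(n-4), in dB
    """
    n = len(c)
    # Predicted energy: weighted sum of the past quantified prediction errors
    e_pred = sum(b * r for b, r in zip(B, r_hist))
    # Mean innovation energy of the fixed codebook excitation (in dB)
    e_innov = 10.0 * math.log10(sum(x * x for x in c) / n)
    # Predicted gain (Eq. 1.11) and quantified gain (Eq. 1.12)
    g_pred = 10.0 ** (0.05 * (e_pred + e_mean - e_innov))
    g_hat = gamma_hat * g_pred
    # Prediction error to push into the history for the next subframe (Eq. 1.8)
    r_new = 20.0 * math.log10(gamma_hat)
    return g_hat, r_new
```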
There are some differences between the AMR modes that are relevant to the parameter domain gain control, as listed below.
In the 12.2 kbit/s mode, the fixed codebook gain correction factor γ_gc is scalar quantized with 5 bits (32 quantization levels). The correction factor γ_gc is computed using a mean energy value Ē = 36 dB.
In the 10.2 kbit/s mode, the fixed codebook gain correction factor γ_gc and the adaptive codebook gain g_p are jointly vector quantized with 7 bits. The correction factor γ_gc is computed using a mean energy value Ē = 33 dB. Moreover, this mode includes smoothing of the fixed codebook gain: the fixed codebook gain used for synthesis in the decoder is replaced by a smoothed value of the fixed codebook gains of the previous 5 subframes. The smoothing is based on a measure of the stationarity of the short-term spectrum in the LSP (Line Spectral Pair) domain, and is performed to avoid unnatural fluctuations in the energy contour.
In the 7.95 kbit/s mode, the fixed codebook gain correction factor γ_gc is scalar quantized with 5 bits, as in the mode 12.2 kbit/s. The correction factor γ_gc is computed using a mean energy value Ē = 36 dB. This mode includes anti-sparseness processing: an adaptive anti-sparseness post-processing procedure is applied to the fixed codebook vector c(n) in order to reduce perceptual artifacts arising from the sparseness of the algebraic fixed codebook vectors, which have only a few non-zero samples per subframe. The anti-sparseness processing consists of circular convolution of the fixed codebook vector with one of three pre-stored impulse responses, and the impulse response is selected adaptively based on the adaptive and fixed codebook gains.
In the 7.40 kbit/s mode, the fixed codebook gain correction factor γ_gc and the adaptive codebook gain g_p are jointly vector quantized with 7 bits, as in the mode 10.2 kbit/s. The correction factor γ_gc is computed using a mean energy value Ē = 30 dB.
In the 6.70 kbit/s mode, the fixed codebook gain correction factor γ_gc and the adaptive codebook gain g_p are jointly vector quantized with 7 bits, as in the mode 10.2 kbit/s. The correction factor γ_gc is computed using a mean energy value Ē = 28.75 dB. This mode includes smoothing of the fixed codebook gain and anti-sparseness processing.
In the 5.90 and 5.15 kbit/s modes, the fixed codebook gain correction factor γ_gc and the adaptive codebook gain g_p are jointly vector quantized with 6 bits. The correction factor γ_gc is computed using a mean energy value Ē = 33 dB. These modes include smoothing of the fixed codebook gain and anti-sparseness processing.
In the 4.75 kbit/s mode, the fixed codebook gain correction factor γ_gc and the adaptive codebook gain g_p are jointly vector quantized only every 10 ms by a unique method, as described in 3GPP TS 26.090 V4.0.0 (2001-03), “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mandatory Speech Codec speech processing functions; AMR speech codec; Transcoding functions (Release 4)”. This mode includes smoothing of the fixed codebook gain and anti-sparseness processing.
Discontinuous Transmission (DTX)
During discontinuous transmission (DTX), only the average background noise information is transmitted at regular intervals to the decoder when speech is not present, as described in 3GPP TS 26.092 V4.0.0 (2001-03), “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mandatory Speech Codec speech processing functions; AMR speech codec; Comfort noise aspects (Release 4)”. At the far end the decoder reconstructs the background noise according to the transmitted noise parameters, thus avoiding extremely annoying discontinuities in the background noise of the synthesized speech.
The comfort noise parameters, i.e. information on the level and the spectrum of the background noise, are encoded into a special frame called a Silence Descriptor (SID) frame for transmission to the receive side.
For parameter domain gain control purposes, the information on the level of the background noise is of interest. If the gain level were adjusted only during speech frames, the background noise level would change abruptly at the beginning and end of noise-only bursts, as illustrated in
At the transmitting side, the frame energy is computed for each frame marked with VAD (Voice Activity Detection) = 0 according to the equation:
where s(n) is the high-pass filtered input speech signal of the current frame i.
The averaged logarithmic energy is computed by:
The averaged logarithmic frame energy is quantized by means of a 6-bit algorithmic quantizer. These 6 bits for the energy index are transmitted in the SID frame.
In the following, gain control in the parameter domain is described.
The fixed codebook gain g_c adjusts the level of the synthesized speech in the AMR speech codec, as can be noticed by studying the equation (1.1) and the speech synthesis model shown in
The adaptive codebook gain g_p controls the periodicity (pitch) of the synthesized speech, and is limited to the range [0, 1.2]. As shown in
The speed at which a change in the fixed codebook gain propagates to the adaptive codebook branch depends on the pitch delay T and the pitch gain g_p, as illustrated in
For real speech signals, the pitch gain and delay vary. However, the simulation with a fixed pitch delay and pitch gain gives a rough estimate of the limits on the stabilization time of the adaptive codebook after a change in the fixed codebook gain. The pitch delay in AMR is limited to [18, 143] samples, as in the example, corresponding to high child and low male pitches, respectively. The pitch gain, however, may have values between [0, 1.2]. For zero pitch gain, there is naturally no delay at all. On the other hand, the pitch gain takes values at or above 1 only for very short time instants, so that the adaptive codebook does not become unstable. Therefore, the estimated maximum delay is around a few thousand samples, i.e. about half a second.
In the highest bit rate mode, 12.2 kbit/s, the fixed codebook gain correction factor γ_gc is scalar quantized with 5 bits, giving 32 quantization levels, as shown in
The same quantization table is used in the mode 7.95 kbit/s. In all other modes, the fixed codebook gain factor is jointly vector quantized with the adaptive codebook gain. These quantization tables are shown in
The lowest mode, 4.75 kbit/s, uses vector quantization in a unique way. In the mode 4.75 kbit/s the adaptive codebook gains g_p and the correction factors γ̂_gc are jointly vector quantized every 10 ms with 6 bits, i.e. two adaptive codebook gains and two correction factors of two subframes are jointly vector quantized.
As explained above, the speech level control in the parameter domain must take place by adjusting the fixed codebook gain. To be more specific, the quantized fixed codebook gain correction factor γ̂_gc, which is one of the speech parameters transmitted to the far end, is adjusted.
In the following, the relationship between amplification of the fixed codebook gain correction factor and the amplification of the fixed codebook gain is shown. As already shown in Eqs. (1.11) and (1.12), the fixed codebook gain is defined as:
If the fixed codebook gain correction factor γ̂_gc(n) is amplified by β at subframe n, and is kept unchanged at least for the following four subframes, the new quantized fixed codebook gain becomes:
In the next subframe, n+1, the new fixed codebook gain becomes:
In the same way, in the following subframes, n+2, . . . , n+4, the amplified fixed codebook gain becomes:
ĝ_c^new(n+2) = β · β^(b_1+b_2) · ĝ_c^old(n+2)
. . .
ĝ_c^new(n+4) = β^(1+b_1+b_2+b_3+b_4) · ĝ_c^old(n+4)
Since the prediction coefficients were given as
[b_1 b_2 b_3 b_4] = [0.68 0.58 0.34 0.19],
the fixed codebook gain stabilizes after five subframes into a value:
ĝ_c^new(n+4) = β^2.79 · ĝ_c^old(n+4). (2.10)
In other words, multiplying the fixed codebook gain factor by β results in multiplication of the fixed codebook gain (and therefore also of the synthesized speech) by β^2.79, assuming that β is held constant at least during the next four subframes.
Therefore, e.g. in AMR modes 12.2 kbit/s and 7.95 kbit/s, the minimum change for the fixed codebook gain factor (the minimum quantization step), ±1.2 dB, results in a ±3.4 dB change in the fixed codebook gain, and hence in the synthesized speech signal, as shown below.
20 log10(β) = 1.2 dB ⇒ β ≈ 1.15
20 log10(β^2.79) = 2.79 · 1.2 dB ≈ 3.4 dB (2.11)
This ±3.4 dB change in the synthesized speech level takes place gradually, as illustrated in
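The gradual build-up can also be checked numerically. The short Python sketch below (an illustration only, not part of the codec) applies a constant correction factor gain β and accumulates the contribution of the MA prediction memory; it converges to β^2.79, i.e. roughly +3.4 dB for β = 1.15.

```python
B = [0.68, 0.58, 0.34, 0.19]  # MA prediction coefficients b_1..b_4

def effective_gain(beta, subframes=8):
    """Effective multiplication of the fixed codebook gain when the
    correction factor is scaled by a constant beta from subframe 0 onwards."""
    gains = []
    for n in range(subframes):
        # exponent = 1 (direct scaling) + sum of b_i for the already affected past subframes
        exponent = 1.0 + sum(B[i] for i in range(min(n, 4)))
        gains.append(beta ** exponent)
    return gains

# Minimum quantization step of mode 12.2 kbit/s: about 1.2 dB, i.e. beta = 1.15
print(effective_gain(1.15))  # builds up gradually and settles at 1.15**2.79 (about +3.4 dB)
```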
Consequently, the parameter domain gain control of coded speech may be performed by changing the index value of the fixed codebook gain factor. That is, the index value in the bit stream is replaced by a new value that gives the desired amplification/attenuation. The gain values corresponding to the index changes for AMR mode 12.2 kbit/s are listed in the table below.
Next, a search for the correct index for the desired change in the overall gain is described by taking into account the nonlinear nature of the fixed codebook gain factor quantization.
The new fixed codebook gain factor quantization index corresponding to the desired amplification/attenuation of the speech signal is found by minimizing the error:
|β · γ̂_gc^old − γ̂_gc^new|, (2.12)
where γ̂_gc^old and γ̂_gc^new are the old and the new fixed codebook gain correction factors and β is the desired multiplier:
β = Δ^j, j = [. . . −4, −3, . . . 0, . . . +3, +4, . . .], where Δ is the minimum quantization step (1.15 in AMR 12.2 kbit/s). Note that the speech signal becomes amplified/attenuated by β^2.79.
In AMR modes 10.2 kbit/s, 7.40 kbit/s, 6.70 kbit/s, 5.90 kbit/s, 5.15 kbit/s and 4.75 kbit/s, Eq. (2.12) is replaced by:
|β · γ̂_gc^old − γ̂_gc^new| + weight · |g_p^old − g_p^new|, (2.13)
where the weight is ≥ 1, and g_p^old and g_p^new are the old and the new adaptive codebook gains.
In other words, in modes 12.2 kbit/s and 7.95 kbit/s, the new fixed codebook gain factor index is found as the index which minimizes the error given in Eq. (2.12). In modes 10.2 kbit/s, 7.40 kbit/s, 6.70 kbit/s, 5.90 kbit/s, 5.15 kbit/s and 4.75 kbit/s the new joint index of the vector quantized fixed codebook gain factor and adaptive codebook gain is found as the index which minimizes the error given in Eq. (2.13). The rationale behind Eq. (2.13) is to be able to change the fixed codebook gain factor without introducing audible error into the adaptive codebook gain.
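The search itself is a straightforward scan over the quantization table. The following Python sketch illustrates Eqs. (2.12) and (2.13); the function names and the table layout (scalar table: one correction factor per index; joint table: one (adaptive codebook gain, correction factor) pair per index) are assumptions for the illustration, and the table contents themselves come from the AMR specification and are not reproduced here.

```python
def search_scalar_index(beta, gamma_old, table):
    """Scalar case (12.2 and 7.95 kbit/s), Eq. (2.12): pick the index whose
    correction factor is closest to beta * gamma_old."""
    target = beta * gamma_old
    return min(range(len(table)), key=lambda i: abs(target - table[i]))

def search_joint_index(beta, gamma_old, gp_old, table, weight=1.0):
    """Vector-quantized case, Eq. (2.13): minimize the correction factor error
    plus a weighted adaptive codebook (pitch) gain error."""
    target = beta * gamma_old
    def err(i):
        gp_new, gamma_new = table[i]
        return abs(target - gamma_new) + weight * abs(gp_old - gp_new)
    return min(range(len(table)), key=err)
```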
As mentioned above, in the mode 4.75 kbit/s the adaptive codebook gains g_p and the correction factors γ̂_gc are jointly vector quantized every 10 ms with 6 bits, i.e. two adaptive codebook gains and two correction factors of two subframes are jointly vector quantized. The codebook search is done by minimizing a weighted sum of the error criteria of the two subframes. The default values of the weighting factors are 1. If the energy of the second subframe is more than two times the energy of the first subframe, the weight of the first subframe is set to 2. If the energy of the first subframe is more than four times the energy of the second subframe, the weight of the second subframe is set to 2. Despite these differences, the mode 4.75 kbit/s can be processed with the vector quantization scheme described above.
Thus, according to the above-described embodiment, a new gain index (new index value) minimizing the error between the desired gain β · γ̂_gc^old (enhanced first parameter value) and the realized effective gain γ̂_gc^new (new first parameter value) according to Eq. (2.12) or (2.13) is determined from the quantization tables of the respective modes. The new fixed codebook gain correction factor (and the new adaptive codebook gain in case of modes other than 12.2 kbit/s and 7.95 kbit/s) corresponds to the determined new gain index. The old gain index (current index value), representing the old fixed codebook gain correction factor γ̂_gc^old (current first parameter value) and, in case of modes other than 12.2 kbit/s and 7.95 kbit/s, the old adaptive codebook gain g_p^old (current second parameter value), is then replaced in the bit stream by the determined new gain index.
In the following, alternative methods for providing improved gain accuracy are described. First, it is illustrated how the total desired gain is formulated in case the gain is not kept constant during five consecutive subframes.
As described above, in the AMR codec the fixed codebook gain is encoded using the fixed codebook gain correction factor γ_gc. The gain correction factor is used to scale the predicted fixed codebook gain g_c′ to obtain the fixed codebook gain g_c, i.e.
The fixed codebook gain is predicted as follows:
where Ē is a mode-dependent energy value (in dB) and E_I is the fixed codebook excitation energy (in dB).
To obtain a desired overall signal gain α, the quantified fixed codebook correction factor has to be multiplied by a correction factor gain β. Realized correction factor gains are denoted by β̂(n−i), i > 0. By amplifying the fixed codebook correction factor γ̂_gc(n) by β(n) at subframe n, the new quantized fixed codebook gain becomes the following (note that the prediction g_c′ depends on the history of the correction gains, as shown in Equation 2.14):
Therefore, a new prediction, which is obtained using the realized factor gains β̂(n−i), can be written as
Furthermore,
i.e., the target correction factor gain for the present subframe can be written as
If β̂(n) is kept constant, the overall gain stabilizes after five subframes to a value
because the prediction coefficients were given as b = [1, 0.68, 0.58, 0.34, 0.19].
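Written out as a small Python helper (a sketch under the notation above; the names are illustrative), the target correction factor gain for the current subframe follows directly from the desired overall gain α and the four previously realized correction factor gains β̂(n−i):

```python
B = [0.68, 0.58, 0.34, 0.19]  # b_1..b_4 of the MA predictor

def target_correction_gain(alpha, realized):
    """Correction factor gain beta(n) needed to reach the overall gain alpha,
    given the realized gains beta_hat(n-1)..beta_hat(n-4).
    The prediction memory contributes the product of beta_hat(n-i) ** b_i."""
    memory = 1.0
    for b, beta_hat in zip(B, realized):
        memory *= beta_hat ** b
    return alpha / memory

# Consistency check: if beta_hat has been constant, the overall gain is beta ** 2.79
beta = 1.15
print(target_correction_gain(beta ** 2.79, [beta] * 4))  # prints approximately 1.15
```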
Next, a first alternative of the above described gain manipulation is described, which first alternative is referred to as Synthesizing Error Minimization (synthesizing method).
The algorithm according to the synthesizing method follows as closely as possible the original error criterion given for the scalar quantization as
E_SQ = (g_c − ĝ_c)² = (g_c − γ̂_gc · g_c′)²,
where E_SQ is the fixed codebook quantization error and g_c is the target fixed codebook gain. As mentioned before, the goal is to scale the fixed codebook gain with the desired total gain, g_c^new = α · ĝ_c. Therefore, for the CDALC (Coded Domain Automatic Level Control) purposes, the target must be scaled by the desired gain, i.e.
E_SQ = (α · ĝ_c − γ̂_gc^new · g_c′^new)². (3.2)
In the vector quantization, the pitch gain g_p and the fixed codebook correction factor γ̂_gc are jointly quantized. In the AMR encoder, the vector quantization index is found by minimizing the quantization error E_VQ defined as
E_VQ = ‖x − ĝ_p·y − ĝ_c·z‖,
where x, y and z are a target vector, a weighted LP-filtered adaptive codebook vector and a weighted LP-filtered fixed codebook vector, respectively. The error criterion is actually a norm of the perceptually weighted error between the target and the synthesized speech. Following the procedure of the scalar quantization, the target vector is replaced by the scaled version, i.e.
E_VQ = ‖(ĝ_p·y^new + α·ĝ_c·z) − ĝ_p^new·y^new − ĝ_c^new·z‖. (3.3)
In the following, the synthesizing method is described for the scalar quantization.
The derivation of the minimization criterion starts from Equation 3.2 used in the AMR encoder and given as:
E_SQ = (α · g_c − γ̂_gc^new · g_c′^new)².
Unfortunately, there is no direct access to g_c; however, it can be approximated by g_c ≈ γ̂_gc · g_c′, and therefore the first CDALC error criterion for the scalar quantization can be written as
where β̂(n−i) is the realized correction factor gain for the subframe (n−i), i.e.
This error criterion is simple to evaluate and only the fixed codebook correction factor has to be decoded. Furthermore, four previous realized correction factor gains have to be kept in the memory.
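One way of realizing this scalar criterion in code is sketched below in Python. Everything here is an assumption for illustration: the common predicted-gain factor of the current subframe is cancelled out of the comparison, so that only the decoded correction factor, the candidate table entries and the four realized correction factor gains are needed, in line with the remark above.

```python
B = [0.68, 0.58, 0.34, 0.19]  # MA prediction coefficients b_1..b_4

def scalar_cdalc_index(alpha, gamma_old, realized, table):
    """Synthesizing-method index search for the scalar-quantized modes (sketch).

    alpha     -- desired overall signal gain
    gamma_old -- decoded correction factor of the current subframe
    realized  -- realized correction factor gains beta_hat(n-1)..beta_hat(n-4)
    table     -- scalar quantization table of correction factors
    The prediction-memory term prod(beta_hat(n-i) ** b_i) scales the candidate's
    effective gain relative to the unmodified prediction.
    """
    memory = 1.0
    for b, beta_hat in zip(B, realized):
        memory *= beta_hat ** b
    def err(i):
        return (alpha * gamma_old - table[i] * memory) ** 2
    return min(range(len(table)), key=err)
```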
Next, the synthesizing method is described for the vector quantization.
For the vector quantization case the error criterion used in the AMR encoder is more complicated, since the synthesis filters are used. In view of the fact that there is no direct access to the target x, it is approximated by ĝ_p·y + ĝ_c·z. Thus, the error minimization with CDALC becomes:
In addition to decoding the gains, both codebook vectors have to be decoded and filtered with the LP synthesis filter. Therefore, the LP synthesis filter parameters have to be decoded. This means that basically all the parameters have to be decoded. In the AMR encoder the codebook vectors are also weighted by a specific weighting filter, but this weighting was not applied for this CDALC error criterion.
Next, a second alternative of the gain manipulation is described, which second alternative is referred to as Quantization Error Minimization with Memory (memory method).
This criterion minimizes the quantization error while taking into account the history of the previous correction factors. In case of scalar quantization the error criterion is the same as in the first alternative, i.e. the error function to be minimized is the same as in Equation 3.4. For the vector quantization, however, the error function becomes a little easier to evaluate.
Vector Quantization
Starting from the error function derived for the first alternative and given in Equation 3.5, minimizing the error of the sum of two components would require decoding the y and z vectors. In practice this means that the whole signal has to be decoded. Instead of minimizing the norm of the error vector, the error can be approximated by the sum of two error components (which would be exact if the vectors y and z were parallel to each other), namely the pitch gain error and the fixed codebook gain error. Combining these components using the Euclidean norm, the new error criterion can be written as:
The sum of the previous equation (Equation 3.5) is divided into two components. However, the synthesized codebook vectors still exist in the pitch gain error scaling term
Due to the synthesis, the pitch gain error scaling term is complicated to compute. If it were computed, it would be more efficient to use the synthesizing error minimization criterion described in the first alternative. To get rid of the synthesis procedure, the term
is replaced by a constant pitch gain error weight w_gp.
This algorithm using a fixed pitch gain weight requires decoding (finding a value according to the received quantization index) of both the pitch gain and the correction factor (γ̂_gc), and also reconstruction of the fixed codebook gain prediction g_c′. To be able to construct the prediction, the fixed codebook vector has to be decoded. Furthermore, the integer pitch lag is needed for the pitch sharpening of the fixed codebook excitation. The energy of the fixed codebook excitation is required for the prediction (see Equation 3.1). If necessary, the prediction can be included in the fixed weight, i.e.
After that there is no need to decode the fixed codebook vector. Presumably, this would not affect the performance much. On the other hand, the energy of the fixed codebook excitation can be estimated, since it is fairly constant. This allows constructing the prediction without decoding the fixed codebook vector.
The range of the terms
are demonstrated in
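As a rough Python sketch of how such a fixed-weight memory-method criterion could be evaluated for the vector-quantized modes: the two error components are combined with a Euclidean norm and the pitch gain error is scaled by a constant weight w_gp. The function, the argument names and the exact form of the fixed codebook term are assumptions based on the description above, not the patent's own formulas.

```python
import math

def memory_method_index(alpha, g_c_old, g_p_old, g_c_pred, table, w_gp=1.0):
    """Joint index search combining a fixed codebook gain error and a
    weighted pitch gain error (memory method, vector-quantized modes).

    alpha    -- desired overall signal gain
    g_c_old  -- decoded fixed codebook gain of the current subframe
    g_p_old  -- decoded adaptive codebook (pitch) gain
    g_c_pred -- fixed codebook gain prediction g_c' (or an estimate of it)
    table    -- joint quantization table of (g_p, gamma_gc) pairs
    w_gp     -- constant pitch gain error weight
    """
    def err(i):
        g_p_new, gamma_new = table[i]
        g_c_new = gamma_new * g_c_pred
        # Euclidean combination of the two error components
        return math.hypot(alpha * g_c_old - g_c_new, w_gp * (g_p_old - g_p_new))
    return min(range(len(table)), key=err)
```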
According to the above-described embodiment, a new index value for α · γ̂_gc^old is searched such that the expression |α · γ̂_gc^old − γ̂_gc^new| is minimized, γ̂_gc^new being the new first parameter value corresponding to the searched new index value.
Moreover, according to the present invention, a current second parameter value may be determined from the index further corresponding to a second parameter such as the adaptive codebook gain controlling a second characteristic of speech. In this case, the new index value is determined from the table further relating the index values to second parameter values, e.g. a vector quantization table, such that a new second parameter value corresponding to the new index value substantially matches the current second parameter value.
According to the above-described embodiment, a new index value for α · γ̂_gc^old and g_p^old is searched such that the expression |α · γ̂_gc^old − γ̂_gc^new| + weight · |g_p^old − g_p^new| is minimized.
The “weight” can be ≥ 1, so that the new index value is determined from the table such that substantially matching the current second parameter value has precedence.
The parameter value determination block 11 may further determine a current second parameter value from the index further corresponding to a second parameter, and the index value determination block 13 may then determine the new index value from the table further relating the index values to second parameter values, such that a new second parameter value corresponding to the new index value substantially matches the current second parameter value. Thus, the index value is optimized simultaneously for both the first and second parameters.
The index value determination block 13 may determine the new index value from the table such that substantially matching the current second parameter value has precedence.
The apparatus 100 may further include replacing means for replacing a current value of the index corresponding to at least the first parameter by the determined new index value, and for outputting enhanced coded speech containing the new index value.
Alternatively, the second parameter value may be the background noise level parameter, the index value of which is determined in accordance with the adjusted speech level.
As discussed beforehand, the speech level manipulation also requires manipulating the background noise level parameter during speech pauses in DTX.
According to the AMR codec, the background noise level parameter, the averaged logarithmic frame energy, is quantized with 6 bits. The comfort noise level can be adjusted by changing the energy index value. The level can be adjusted in steps of 1.5 dB, so finding a suitable comfort noise level corresponding to the change of the speech level is possible.
The evaluated comfort noise parameters (the average LSF (Line Spectral Frequency) parameter vector f_mean and the averaged logarithmic frame energy) are encoded into a special frame, called a Silence Descriptor (SID) frame, for transmission to the receiver side. The parameters give information on the level (the averaged logarithmic frame energy) and the spectrum (f_mean) of the background noise. More details can be found in 3GPP TS 26.093 V4.0.0 (2001-03), “3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Mandatory Speech Codec speech processing functions; AMR speech codec; Source controlled rate operation (Release 6)”.
The frame energy is computed for each frame marked with Voice Activity Detector VAD=0 according to the equation:
where x is the HP-filtered input speech signal of the current frame i. The averaged logarithmic energy, which will be transmitted, is computed by:
The averaged logarithmic energy is quantized by means of a 6 bit algorithmic quantizer. Quantization is performed using quantization function, as defined in 3GPP TS 26.104 V4.1.0 2001-06, “AMR Floating-point Speech Codec C-source”.
where the value of the index is restricted to the range [0 . . . 63], i.e. the range representable with 6 bits.
The index can be computed using base 10 logarithm as follows:
where 10 log10(en_mean(i)) is the energy in decibels. This shows that one quantization step corresponds to approximately 1.5 dB.
In the following the gain adjustment of the comfort noise parameters is described.
Since an energy parameter is transmitted, the signal energy can be manipulated directly by modifying the energy parameter. As shown above, one quantization step equals 1.5 dB. Assuming that all eight frames of a SID update interval will be scaled by α, the new index can be found as follows:
Because the old index was
the new index can be approximated by
index_new ≈ ⌊4 log2(α)⌋ + index_old.
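In code, the comfort noise adjustment reduces to integer arithmetic on the 6-bit SID energy index. A minimal Python sketch (illustrative names, clamping the result to the 6-bit range):

```python
import math

def adjust_sid_energy_index(index_old, alpha):
    """Shift the 6-bit SID energy index so that the comfort noise level is
    scaled by alpha. One quantization step corresponds to about 1.5 dB,
    so the index changes by roughly 4 * log2(alpha)."""
    index_new = index_old + math.floor(4 * math.log2(alpha))
    return max(0, min(63, index_new))

# Example: attenuating the comfort noise level by about 6 dB (alpha = 0.5)
print(adjust_sid_energy_index(40, 0.5))  # -> 36, i.e. four steps of 1.5 dB down
```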
The level of the synthesized speech signal can be adjusted by manipulating the fixed codebook gain factor index, as shown previously. However, being a measure of prediction error, the fixed codebook gain factor index does not reveal the level of the speech signal. Therefore, to control the gain manipulation, i.e. to determine whether the level should be changed, the speech signal level must first be estimated.
In TFO, the six or seven MSBs (most significant bits) of the PCM speech samples (not compressed) are transmitted to the far end unchanged, to facilitate a seamless TFO interruption. These six or seven MSBs can be used to estimate the speech level.
If these PCM speech samples are unavailable, the coded speech signal must be at least partially decoded (post-filtering is not necessary) to estimate the speech level.
Alternatively, there is the possibility of using a fixed gain, thereby avoiding a complete decoding.
In a VED (Voice Enhancement Device) shown in
It is to be noted that the partial decoding of speech shown in
The above-described embodiments of the present invention may be utilized not only in level control itself, but also in noise suppression and echo control (nonlinear processing) in the coded domain. Noise suppression can utilize the above technique e.g. by adjusting the comfort noise level during speech pauses. Echo control may utilize the above technique e.g. by attenuating the speech signal during echo bursts.
The present invention is not intended to be limited only to TFO and TrFO voice communication and to voice communication over packet-switched networks, but rather to comprise enhancing coded audio signals in general. The invention finds application also in enhancing coded audio signals related e.g. to audio/speech/multimedia streaming applications and to MMS (Multimedia Messaging Service) applications.
It is to be understood that the above description is illustrative of the invention and is not to be construed as limiting the invention. Various modifications and applications may occur to those skilled in the art without departing from the scope of the invention as defined by the appended claims.