This application claims the priority of Korean Patent Application No. 10-2004-0075959, filed on Sep. 22, 2004, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of the Invention
The present invention relates to a voice encoding/decoding apparatus, and more specifically, to an apparatus for and method of selecting encoding/decoding appropriate to voice characteristics in a voice encoding/decoding apparatus.
2. Description of Related Art
A conventional linear prediction coding (LPC) coefficient quantizer obtains an LPC coefficient by performing linear prediction on signals input to an encoder of a voice compressor/decompressor (codec), and quantizes the LPC coefficient to transmit it to the decoder. However, there are problems in that the operating range of the LPC coefficient is too wide for the LPC coefficient to be quantized directly, and filter stability is not guaranteed even against small quantization errors. Therefore, the LPC coefficient is quantized after being converted into a line spectral frequency (LSF), which is mathematically equivalent and has good quantization characteristics.
In general, in the case of a narrowband speech codec with 8 kHz sampled input speech, 10 LSFs are generated to represent the spectral envelope. The tenth-order LSF vector has a high short-term correlation and an ordering property among its elements, so a predictive vector quantizer is used. However, in a frame in which the frequency characteristics of the voice change rapidly, the predictor produces large errors and the quantization performance is degraded. Accordingly, a quantizer having two predictors has been used to quantize LSF vectors having low inter-frame correlation.
An LSF vector input to the LSF quantizer is input to a first vector quantization unit 111 and a second vector quantization unit 121 through respective lines. First and second subtractors 100 and 105 subtract the LSF vectors predicted by first and second predictors 115 and 125 from the LSF vectors input to the first vector quantization unit 111 and the second vector quantization unit 121, respectively. The subtraction of the LSF vector is shown in the following Equation 1.
where r1i,n is the prediction error of the ith element in the nth frame of the LSF vector in the first vector quantizer 110, fi,n is the ith element in the nth frame of the LSF vector, f̃1i,n is the ith element in the nth frame of the LSF vector predicted by the first vector quantization unit 111, and β1i is the prediction coefficient between r1i,n and fi,n in the first vector quantization unit 111.
The prediction error signal output through the first subtractor 100 is vector quantized by the first vector quantizer 110. The quantized prediction error signal is input to the first predictor 115 and a first adder 130. The quantized prediction error signal input to the first predictor 115 is processed as shown in the following Equation 2 to predict the next frame, and is then stored in a memory.
where r̂1i,n is the ith element in the nth frame of the quantized prediction error signal of the first vector quantizer 110, and α1i is the prediction coefficient of the ith element in the first vector quantization unit 111.
The first adder 130 adds the predicted signal to the LSF prediction error vector quantized by the first vector quantizer 110. The LSF prediction error vector added to the predicted signal is output to the LSF vector selection unit 140 via the line. The predicted signal adding process by the first adder 130 is performed as shown in Equation 3.
where r̂1i,n is the ith element in the nth frame of the quantized prediction error signal of the first vector quantizer 110. The second subtractor 105 subtracts the LSF vector predicted by the second predictor 125 from the LSF vector input to the second vector quantization unit 121 through the line, and outputs a prediction error. The prediction error subtraction is calculated as shown in the following Equation 4.
where r2i,n is the prediction error of the ith element in the nth frame of the LSF vector in the second vector quantizer 120, fi,n is the ith element in the nth frame of the LSF vector, f̃2i,n is the ith element in the nth frame of the LSF vector predicted by the second vector quantization unit 121, and β2i is the prediction coefficient between r2i,n and fi,n in the second vector quantization unit 121.
The prediction error signal output through the second subtractor 105 is vector quantized by the second vector quantizer 120. The quantized prediction error signal is input to the second predictor 125 and a second adder 135. The quantized prediction error signal input to the second predictor 125 is processed as shown in the following Equation 5 to predict the next frame, and is then stored in a memory.
where r̂2i,n is the ith element in the nth frame of the quantized prediction error signal of the second vector quantizer 120, and α2i is the prediction coefficient of the ith element in the second vector quantization unit 121.
The second adder 135 adds the predicted signal to the prediction error vector quantized by the second vector quantizer 120, and the resulting quantized LSF vector is output to the LSF vector selection unit 140 through the lines. The predicted signal adding process by the second adder 135 is performed as shown in Equation 6.
where r̂2i,n is the ith element of the quantized vector of the nth frame of the prediction error signal in the second vector quantizer 120. The LSF vector selection unit 140 calculates the differences between the original LSF vector and the quantized LSF vectors output from the first and second vector quantization units 111 and 121, and inputs a switch selection signal, selecting the quantized LSF vector having the smaller difference, into the switch selection unit 145. The switch selection unit 145 uses the switch selection signal to select, from the LSF vectors quantized by the first and second vector quantization units 111 and 121, the quantized LSF having the smaller difference from the original LSF vector, and outputs the selected quantized LSF to the lines.
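The two-path predictive quantization loop described above can be sketched as follows. This is an illustrative sketch only: the scalar prediction coefficients beta and alpha, the toy nearest-neighbour codebook search, and the single-frame predictor memory are assumptions made for clarity, not the actual codebooks or coefficients of any particular codec.

```python
import numpy as np

def quantize_path(f, pred_state, beta, alpha, codebook):
    """One predictive VQ path: subtract the prediction, quantize the
    residual, add the prediction back, and update the predictor memory."""
    predicted = beta * pred_state            # predicted LSF vector
    r = f - predicted                        # prediction error (subtractor 100/105)
    idx = np.argmin(np.sum((codebook - r) ** 2, axis=1))  # nearest-neighbour search
    r_hat = codebook[idx]                    # quantized prediction error
    f_hat = r_hat + predicted                # reconstructed LSF (adder 130/135)
    new_state = alpha * r_hat                # predictor memory for the next frame
    return f_hat, idx, new_state

def quantize_lsf(f, states, betas, alphas, codebooks):
    """Run both paths and keep the one closer to the original vector
    (LSF vector selection unit 140 / switch selection unit 145)."""
    results = [quantize_path(f, s, b, a, c)
               for s, b, a, c in zip(states, betas, alphas, codebooks)]
    errors = [np.sum((f - f_hat) ** 2) for f_hat, _, _ in results]
    sel = int(np.argmin(errors))             # one-bit switch information for the decoder
    return results[sel][0], sel
```

Selecting the path with the smaller reconstruction error corresponds to the switch selection unit 145, and the returned sel is the one bit of switch selection information transmitted to the decoder.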
In general, the first and second vector quantization units 111 and 121 have the same configuration. However, to respond more flexibly to the inter-frame correlation of the LSF vector, different predictors 115 and 125 are used. Each of the vector quantizers 110 and 120 has its own codebook. Therefore, the calculation amount is twice that of a single quantization unit. In addition, one bit of switch selection information is transmitted to the decoder to inform the decoder of the selected quantization unit.
In the conventional quantizer arrangement described above, quantization is performed by using two quantization units in parallel. Thus, the complexity is twice that of a single quantization unit, and one bit is used to represent the selected quantization unit. In addition, when the switching bit is corrupted on the channel, the decoder may select the wrong quantization unit, and the voice decoding quality may be seriously degraded.
Thus, there is a need for a voice encoding/decoding apparatus and method capable of performing specific quantization/dequantization for a current frame based on characteristics of the voice synthesized in previous frames, thereby reducing complexity and calculation amount and efficiently performing LSF quantization in a CELP-type voice codec.
According to an aspect of the present invention, there is provided a voice encoder including: a quantization selection unit generating a quantization selection signal; and a quantization unit extracting a linear prediction coding (LPC) coefficient from an input signal, converting the extracted LPC coefficient into a line spectral frequency (LSF), quantizing the LSF with a first LSF quantization unit or a second LSF quantization unit based on the quantization selection signal, and converting the quantized LSF into a quantized LPC coefficient. The quantization selection signal selects the first LSF quantization unit or the second LSF quantization unit based on characteristics of a synthesized voice signal in previous frames of the input signal.
According to an aspect of the present invention, there is provided a method of selecting quantization in a voice encoder, including: extracting a linear prediction coding (LPC) coefficient from an input signal; converting the extracted LPC coefficient into a line spectral frequency (LSF); quantizing the LSF through a first LSF quantization process or a second LSF quantization process based on characteristics of a synthesized voice signal in previous frames of the input signal; and converting the quantized LSF into a quantized LPC coefficient.
According to an aspect of the present invention, there is provided a voice decoder including: a dequantization unit dequantizing line spectral frequency (LSF) quantization information to generate an LSF vector, and converting the LSF vector into a linear prediction coding (LPC) coefficient, the LSF quantization information being received through a specified channel and dequantized by using a first LSF dequantization unit or a second LSF dequantization unit based on a dequantization selection signal; and a dequantization selection unit generating the dequantization selection signal, the dequantization selection signal selecting the first LSF dequantization unit or the second LSF dequantization unit based on characteristics of a synthesized signal in previous frames. The synthesized signal is generated from synthesis information of a received voice signal.
According to an aspect of the present invention, there is provided a method of selecting dequantization in a voice decoder, including: receiving line spectral frequency (LSF) quantization information and voice signal synthesis information through a specified channel; dequantizing the LSF quantization information through a first LSF dequantization process or a second LSF dequantization process to generate an LSF vector, based on characteristics of a synthesized voice signal in a previous frame of a synthesized signal, wherein the synthesized signal is generated from the voice signal synthesis information by using the LSF quantization information; and converting the LSF vector into an LPC coefficient.
According to another embodiment of the present invention, there is provided a quantization selection unit of a voice encoder, including: an energy calculation unit receiving a synthesized voice signal and calculating respective energy values of its subframes; an energy buffer receiving and storing the calculated energy values so that moving averages of the calculated energy values can be obtained; a moving average calculation unit calculating two energy moving averages; an energy increase calculation unit receiving the calculated energy values and the two energy moving averages, and calculating an energy increase; an energy decrease calculation unit receiving the calculated energy values and the two energy moving averages, and calculating an energy decrease; a zero crossing calculation unit receiving the synthesized voice signal and calculating a zero crossing rate; a pitch difference calculation unit receiving a pitch delay and calculating a difference of the pitch delay; and a selection signal generation unit generating a selection signal selecting a quantization unit appropriate for the voice encoding, based on the energy increase of the energy increase calculation unit, the energy decrease of the energy decrease calculation unit, the zero crossing rate of the zero crossing calculation unit, and the pitch difference of the pitch difference calculation unit.
According to another embodiment of the present invention, there is provided a dequantization selection unit of a voice decoder, including: an energy calculation unit receiving a synthesized voice signal and calculating respective energy values of its subframes; an energy buffer receiving and storing the calculated energy values so that moving averages of the calculated energy values can be obtained; a moving average calculation unit calculating two energy moving averages; an energy increase calculation unit receiving the calculated energy values and the two energy moving averages, and calculating an energy increase; an energy decrease calculation unit receiving the calculated energy values and the two energy moving averages, and calculating an energy decrease; a zero crossing calculation unit receiving the synthesized voice signal and calculating a zero crossing rate; a pitch difference calculation unit receiving a pitch delay and calculating a difference of the pitch delay; and a selection signal generation unit generating a selection signal selecting a dequantization unit appropriate for the voice decoding, based on the energy increase of the energy increase calculation unit, the energy decrease of the energy decrease calculation unit, the zero crossing rate of the zero crossing calculation unit, and the pitch difference of the pitch difference calculation unit.
Therefore, quantization/dequantization can be selected according to voice characteristics in the encoder/decoder.
Additional and/or other aspects and advantages of the present invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
These and/or other aspects and advantages of the present invention will become apparent and more readily appreciated from the following detailed description, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to an embodiment of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiment is described below in order to explain the present invention by referring to the figures.
A voice encoding/decoding apparatus and a quantization/dequantization selection method will now be described with reference to the attached drawings.
The voice encoder includes a preprocessor 200, a quantization unit 202, a perceptual weighting filter 255, a signal synthesis unit 262 and a quantization selection unit 240. Further, the quantization unit 202 includes an LPC coefficient extraction unit 205, an LSF conversion unit 210, a first selection switch 215, a first LSF quantization unit 220, a second LSF quantization unit 225 and a second selection switch 230. The signal synthesis unit 262 includes an excited signal searching unit 265, an excited signal synthesis unit 270 and a synthesis filter 275.
The preprocessor 200 applies a window to a voice signal input through a line. The windowed signal is input to the linear prediction coding (LPC) coefficient extraction unit 205 and the perceptual weighting filter 255. The LPC coefficient extraction unit 205 extracts the LPC coefficient corresponding to the current frame of the input voice signal by using autocorrelation and the Levinson-Durbin algorithm. The LPC coefficient extracted by the LPC coefficient extraction unit 205 is input to the LSF conversion unit 210.
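The autocorrelation and Levinson-Durbin computation performed by the LPC coefficient extraction unit 205 can be sketched as below; this is a generic form of the recursion, with the frame contents and the prediction order as placeholders rather than values taken from the text.

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: autocorrelation values r[0..order]
    -> LPC coefficients a[0..order] (with a[0] = 1) and final error power."""
    a = [1.0]
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                         # reflection coefficient
        prev = a + [0.0]                       # pad previous-order coefficients
        a = [prev[j] + k * prev[i - j] for j in range(i + 1)]
        err *= (1.0 - k * k)                   # prediction error power shrinks each order
    return a, err

def lpc_from_frame(frame, order=10):
    """Autocorrelation of a windowed frame, then the Levinson-Durbin recursion."""
    n = len(frame)
    r = [sum(frame[t] * frame[t + k] for t in range(n - k))
         for k in range(order + 1)]
    return levinson_durbin(r, order)
```

For an 8 kHz narrowband codec, order 10 matches the tenth-order LSF representation mentioned in the related-art description.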
The LSF conversion unit 210 converts the input LPC coefficient into a line spectral frequency (LSF), which is more suitable for vector quantization, and then outputs the LSF to the first selection switch 215. The first selection switch 215 outputs the LSF from the LSF conversion unit 210 to the first LSF quantization unit 220 or the second LSF quantization unit 225, according to the quantization selection signal from the quantization selection unit 240.
The first LSF quantization unit 220 or the second LSF quantization unit 225 outputs the quantized LSF to the second selection switch 230. The second selection switch 230 selects the LSF quantized by the first LSF quantization unit 220 or the second LSF quantization unit 225 according to the quantization selection signal from the quantization selection unit 240, as in the first selection switch 215. The second selection switch 230 is synchronized with the first selection switch 215.
Further, the second selection switch 230 outputs the selected quantized LSF to the LPC coefficient conversion unit 235. The LPC coefficient conversion unit 235 converts the quantized LSF into a quantized LPC coefficient, and outputs the quantized LPC coefficient to the synthesis filter 275 and the perceptual weighting filter 255.
The perceptual weighting filter 255 receives the windowed voice signal from the preprocessor 200 and the quantized LPC coefficient from the LPC coefficient conversion unit 235. The perceptual weighting filter 255 perceptually weights the windowed voice signal using the quantized LPC coefficient. In other words, the perceptual weighting filter 255 shapes the quantization noise so that it is less perceptible to the human ear. The perceptually weighted voice signal is input to a subtractor 260.
The synthesis filter 275 synthesizes the excited signal received from the excited signal synthesis unit 270, using the quantized LPC coefficient received from the LPC coefficient conversion unit 235, and outputs the synthesized voice signal to the subtractor 260 and the quantization selection unit 240.
The subtractor 260 obtains a linear prediction remaining signal by subtracting the synthesized voice signal received from the synthesis filter 275 from the perceptually weighted voice signal received from the perceptual weighting filter 255, and outputs the linear prediction remaining signal to the excited signal searching unit 265. The linear prediction remaining signal is generated as shown in the following Equation 7.
where x(n) is the linear prediction remaining signal, sw(n) is the perceptually weighted voice signal, âi is the ith element of the quantized LPC coefficient vector, ŝ(n) is the synthesized voice signal, and L is the number of samples per frame.
The excited signal searching unit 265 is a block for representing the part of the voice signal that cannot be represented with the synthesis filter 275. In a typical voice codec, two searching units are used. The first searching unit represents the periodicity of the voice. The second excited signal searching unit is used to efficiently represent the voice signal that is not represented by the pitch analysis and the linear prediction analysis.
In other words, the signal input to the excited signal searching unit 265 is represented by a summation of the signal delayed by the pitch and the second excited signal, and is output to the excited signal synthesis unit 270.
The voice decoder includes a dequantization unit 302, a dequantization selection unit 325, a signal synthesis unit 332 and a postprocessor 340. Here, the dequantization unit 302 includes a third selection switch 300, a first LSF dequantization unit 305, a second LSF dequantization unit 310, a fourth selection switch 315 and an LPC coefficient conversion unit 320. The signal synthesis unit 332 includes an excited signal synthesis unit 330 and a synthesis filter 335.
The third selection switch 300 outputs the LSF quantization information, transmitted through a channel, to the first LSF dequantization unit 305 or the second LSF dequantization unit 310, according to the dequantization selection signal received from the dequantization selection unit 325. The quantized LSF restored by the first LSF dequantization unit 305 or the second LSF dequantization unit 310 is output to the fourth selection switch 315.
The fourth selection switch 315 outputs the quantized LSF restored by the first LSF dequantization unit 305 or the second LSF dequantization unit 310 to the LPC coefficient conversion unit 320 according to the dequantization selection signal received from the dequantization selection unit 325. The fourth selection switch 315 is synchronized with the third selection switch 300, and also with the first and second selection switches 215 and 230 of the voice encoder.
The LPC coefficient conversion unit 320 converts the quantized LSF into the quantized LPC coefficient, and outputs the quantized LPC coefficient to the synthesis filter 335.
The excited signal synthesis unit 330 receives the excited signal synthesis information through the channel, synthesizes the excited signal based on the received information, and outputs the excited signal to the synthesis filter 335. The synthesis filter 335 filters the excited signal by using the quantized LPC coefficient received from the LPC coefficient conversion unit 320 to synthesize the voice signal. The synthesis of the voice signal is processed as shown in the following Equation 8.
where x̂(n) is the synthesized excited signal.
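Equation 8 itself is not reproduced above, but the role of the synthesis filter, producing each output sample from the excitation minus a weighted sum of past outputs through the all-pole filter 1/A(z), can be sketched as follows. The sign convention A(z) = 1 + Σ aᵢ z⁻ⁱ is an assumption made for this sketch.

```python
def synthesis_filter(x_hat, a_hat):
    """All-pole synthesis filter 1/A(z): each output sample is the
    excitation sample minus a weighted sum of past output samples.
    a_hat = [a1, ..., ap] are the quantized LPC coefficients."""
    p = len(a_hat)
    s = []
    for n, x in enumerate(x_hat):
        acc = x
        for i in range(1, p + 1):
            if n - i >= 0:
                acc -= a_hat[i - 1] * s[n - i]  # feedback from past outputs
        s.append(acc)
    return s
```

With a single coefficient a1 = -0.5, an impulse excitation decays geometrically, which illustrates the filter's recursive memory.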
The synthesis filter 335 outputs the synthesized voice signal to the dequantization selection unit 325 and the postprocessor 340.
The dequantization selection unit 325 generates a dequantization selection signal representing the dequantization unit to be selected in the next frame, based on the synthesized voice signal, and outputs the dequantization selection signal to the third and fourth selection switches 300 and 315.
The postprocessor 340 improves the voice quality of the synthesized voice signal. In general, the postprocessor 340 improves the synthesized voice by using a long-term postprocessing filter and a short-term postprocessing filter.
The quantization selection unit 240 of the voice encoder and the dequantization selection unit 325 of the voice decoder have the same structure. More specifically, the synthesized voice signal from the synthesis filter 275 of the voice encoder (or the synthesis filter 335 of the voice decoder) is input to the energy calculation unit 400 and the zero crossing calculation unit 425.
First, the energy calculation unit 400 calculates the energy value Ei of each ith subframe. The respective energy values of the subframes are calculated as shown in the following Equation 9.
where, N is the number of subframes, and L is the number of samples per frame.
The energy calculation unit 400 outputs the respective calculated energy values of the subframes to the energy buffer 405, the energy increase calculation unit 415 and the energy decrease calculation unit 420.
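Equation 9 is not reproduced above; a sum of squares over each subframe is one plausible form of the per-subframe energy, sketched here under that assumption.

```python
def subframe_energies(s_hat, n_subframes):
    """Split the synthesized frame into N subframes and compute each
    subframe's energy as a sum of squared samples (assumed form of Eq. 9)."""
    L = len(s_hat)
    step = L // n_subframes                     # L/N samples per subframe
    return [sum(x * x for x in s_hat[i * step:(i + 1) * step])
            for i in range(n_subframes)]
```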
The energy buffer 405 stores the calculated energy values in frame units to obtain the moving average of the energy. The process in which the calculated energy values are stored into the energy buffer 405 is as shown in the following Equation 10.
where, LB is a length of an energy buffer, and EB is an energy buffer.
The energy buffer 405 outputs the stored energy values to the moving average calculation unit 410. The moving average calculation unit 410 calculates two energy moving averages EM,1 and EM,2, as shown in Equations 11a and 11b.
The moving average calculation unit 410 outputs the two calculated moving averages EM,1 and EM,2 to the energy increase calculation unit 415 and the energy decrease calculation unit 420, respectively.
The energy increase calculation unit 415 calculates an energy increase Er as shown in Equation 12, and the energy decrease calculation unit 420 calculates an energy decrease Ed as shown in Equation 13.
Er=Ei/EM,1 [Equation 12]
Ed=EM,2/Ei [Equation 13]
The energy increase calculation unit 415 and the energy decrease calculation unit 420 output the calculated energy increase Er and energy decrease Ed to the selection signal generation unit 440, respectively.
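Equations 11a and 11b are not reproduced above; assuming EM,1 and EM,2 are moving averages of the buffered frame energies over two different window lengths (the lengths w1 and w2 below are placeholders), the ratios of Equations 12 and 13 can be sketched as:

```python
def energy_ratios(e_buffer, e_current, w1=4, w2=8):
    """Two moving averages over the buffered energies (assumed windows),
    then the energy increase and decrease ratios of Equations 12 and 13."""
    em1 = sum(e_buffer[:w1]) / w1       # moving average E_M,1 (shorter window)
    em2 = sum(e_buffer[:w2]) / w2       # moving average E_M,2 (longer window)
    er = e_current / em1                # energy increase  Er = Ei / E_M,1  (Eq. 12)
    ed = em2 / e_current                # energy decrease  Ed = E_M,2 / Ei (Eq. 13)
    return er, ed
```

A ratio Er well above 1 signals a sudden energy rise (e.g. a voice onset), while a large Ed signals a sudden drop.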
The zero crossing calculation unit 425 receives the synthesized voice signal from the synthesis filter 275 of the voice encoder or the synthesis filter 335 of the voice decoder, and calculates the zero crossing rate of the last subframe as shown in the following Equation 14.
Czcr = 0
for i = (N−1)L/N to L−2
    if ŝ(i)·ŝ(i−1) < 0
        Czcr = Czcr + 1
Czcr = Czcr/(L/N)  [Equation 14]
The zero crossing calculation unit 425 outputs the calculated zero crossing rate to the selection signal generation unit 440.
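The loop of Equation 14 translates directly to code; the indexing below follows the bounds given above, counting sign changes over the last subframe of the frame.

```python
def zero_crossing_rate(s_hat, n_subframes):
    """Zero-crossing rate of the last subframe, following Equation 14:
    count sign changes, then normalize by the subframe length L/N."""
    L = len(s_hat)
    start = (n_subframes - 1) * L // n_subframes   # first index of last subframe
    c = 0
    for i in range(start, L - 1):
        if s_hat[i] * s_hat[i - 1] < 0:            # adjacent samples differ in sign
            c += 1
    return c / (L / n_subframes)
```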
The pitch delay is input to the pitch difference calculation unit 430 and the pitch delay buffer 435. The pitch delay buffer 435 stores the pitch delay of the last subframe of the previous frame.
In addition, the pitch difference calculation unit 430 calculates a difference Dp between the pitch delay P(n) of the last subframe of the current frame and the pitch delay P(n−1) of the last subframe of the previous frame, using the pitch delay of prior subframe stored in the pitch delay buffer 435, as shown in the following Equation 15.
Dp=|P(n)−P(n−1)| [Equation 15]
The pitch difference calculation unit 430 outputs the calculated difference of the pitch delay Dp to the selection signal generation unit 440.
The selection signal generation unit 440 generates a selection signal selecting the quantization unit (dequantization unit for a voice decoder) appropriate to the voice encoding, based on the energy increase of the energy increase calculation unit 415, the energy decrease of the energy decrease calculation unit 420, the zero crossing rate of the zero crossing calculation unit 425, and the pitch difference of the pitch difference calculation unit 430.
The operation of the selection signal generation unit 440 is as follows.
The voice existence searching unit 500 receives the energy increase Er and the energy decrease Ed from the energy increase calculation unit 415 and the energy decrease calculation unit 420, and determines whether voice exists in the currently synthesized voice signal, for example, according to the following conditions.
if Er>ThrE
if Ed>ThrE
where Fv is a signal representing voice existence: ‘1’ in a case where the voice exists in the currently synthesized voice signal, and ‘0’ in a case where it does not. The representation showing the voice existence can be made differently.
The voice existence searching unit 500 outputs the voice existence signal Fv to the first operation block 510 and the voice existence signal buffer 505.
The voice existence signal buffer 505 stores the previously searched voice existence signal Fv so that the operation blocks 510, 515 and 520 can perform their logic determinations, and outputs the previous voice existence signal to the respective first, second, and third operation blocks 510, 515, and 520.
The first operation block 510 outputs a signal to set a next-frame LSF quantizer mode Mq to 1 in a case where the voice exists in the synthesized signal of the current frame but does not exist in the synthesized signals of the previous frames. Otherwise, the second operation block 515 operates next.
The second operation block 515 causes the fourth operation block 525 to operate in a case where the voice does not exist in the synthesized signal of the current frame but exists in the synthesized signals of the previous frames. Otherwise, the second operation block 515 causes the third operation block 520 to operate.
The fourth operation block 525 outputs a signal to set the next-frame LSF quantizer mode Mq to 1 in a case where the zero crossing rate calculated by the zero crossing calculation unit 425 is Thrzcr or more, or the energy decrease Ed is ThrEd2 or more. Otherwise, the fourth operation block 525 outputs a signal to set the next-frame LSF quantizer mode Mq to 0.
The third operation block 520 causes the fifth operation block 530 to operate in a case where all of the signals synthesized in the previous and current frames are voice signals. Otherwise, the third operation block 520 outputs a signal to set the next-frame LSF quantizer mode Mq to 0.
The fifth operation block 530 outputs a signal to set the next-frame LSF quantizer mode Mq to 1 in a case where the energy increase Er is ThrEr2 or more, or the pitch difference Dp is ThrDp or more. Otherwise, the fifth operation block 530 outputs a signal to set the next-frame LSF quantizer mode Mq to 0.
Here, Thr refers to a specified threshold, and Mq refers to the quantizer selection signal.
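The decision tree formed by operation blocks 510 through 530 can be summarized as follows. Two simplifications are assumptions of this sketch: the "previous frames" condition is treated as a single boolean flag, and the threshold values in thr are placeholders, since the text does not give them.

```python
def select_quantizer_mode(fv_now, fv_prev, er, ed, zcr, dp, thr):
    """Decision tree of blocks 510-530: returns the next-frame LSF
    quantizer mode Mq. thr is a dict of thresholds (values assumed)."""
    if fv_now and not fv_prev:                      # voice onset (block 510)
        return 1
    if not fv_now and fv_prev:                      # voice offset (blocks 515/525)
        return 1 if (zcr >= thr['zcr'] or ed >= thr['ed2']) else 0
    if fv_now and fv_prev:                          # steady voice (blocks 520/530)
        return 1 if (er >= thr['er2'] or dp >= thr['dp']) else 0
    return 0                                        # no voice in either frame
```

Mode 1 is thus chosen at onsets, noisy offsets, and frames with sharp energy or pitch changes, i.e. exactly the frames where inter-frame prediction is least reliable.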
According to the above-described embodiment of the present invention, an LSF can be efficiently quantized in a CELP-type voice codec according to characteristics of the previously synthesized voice signal in a voice encoder/decoder. Thus, complexity can be reduced.
Although an embodiment of the present invention has been shown and described, the present invention is not limited to the described embodiment. Instead, it would be appreciated by those skilled in the art that changes may be made to the embodiment without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Publication Number: 20060074643 A1; Date: Apr. 2006; Country: US