Voice encoding method

Information

  • Patent Grant
  • Patent Number
    6,366,881
  • Date Filed
    Wednesday, August 11, 1999
  • Date Issued
    Tuesday, April 2, 2002
Abstract
In a voice coding method for adaptively quantizing a difference dn between an input signal xn and a predicted value yn to code the difference, adaptive quantization is performed such that a reversely quantized value qn of a code Ln corresponding to a section where the absolute value of the difference dn is small is approximately zero.
Description




TECHNICAL FIELD




The present invention relates generally to a voice coding method, and more particularly, to improvements of an adaptive pulse code modulation (APCM) method and an adaptive differential pulse code modulation (ADPCM) method.




BACKGROUND




As coding systems for a voice signal, the adaptive pulse code modulation (APCM) method, the adaptive differential pulse code modulation (ADPCM) method, and so on have been known.




The ADPCM is a method of predicting the current input signal from the past input signal, quantizing a difference between the predicted value and the current input signal, and then coding the quantized difference. In the ADPCM, the quantization step size is further changed depending on the variation in the level of the input signal.





FIG. 11 illustrates the schematic construction of a conventional ADPCM encoder 4 and a conventional ADPCM decoder 5. n used in the following description is an integer.

Description is now made of the ADPCM encoder 4.




A first adder 41 finds a difference (a prediction error signal d_n) between a signal x_n inputted to the ADPCM encoder 4 and a predicting signal y_n on the basis of the following equation (1):

d_n = x_n − y_n   (1)






A first adaptive quantizer 42 codes the prediction error signal d_n found by the first adder 41 on the basis of a quantization step size T_n, to find a code L_n. That is, the first adaptive quantizer 42 finds the code L_n on the basis of the following equation (2). The found code L_n is sent to a memory 6.

L_n = [d_n / T_n]   (2)

In the equation (2), [ ] is Gauss' notation, and represents the maximum integer which does not exceed the number in the square brackets. An initial value of the quantization step size T_n is a positive number.




A first quantization step size updating device 43 finds a quantization step size T_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1} on the basis of the following equation (3). The relationship between the code L_n and a function M(L_n) is as shown in Table 1. Table 1 shows an example in a case where the code L_n is composed of four bits.

T_{n+1} = T_n × M(L_n)   (3)















TABLE 1

  L_n       M(L_n)
  0, −1     0.9
  1, −2     0.9
  2, −3     0.9
  3, −4     0.9
  4, −5     1.2
  5, −6     1.6
  6, −7     2.0
  7, −8     2.4














A first adaptive reverse quantizer 44 reversely quantizes the prediction error signal d_n using the code L_n, to find a reversely quantized value q_n. That is, the first adaptive reverse quantizer 44 finds the reversely quantized value q_n on the basis of the following equation (4):

q_n = (L_n + 0.5) × T_n   (4)






A second adder 45 finds a reproducing signal w_n on the basis of the predicting signal y_n corresponding to the current voice signal sampling value x_n and the reversely quantized value q_n. That is, the second adder 45 finds the reproducing signal w_n on the basis of the following equation (5):

w_n = y_n + q_n   (5)






A first predicting device 46 delays the reproducing signal w_n by one sampling time, to find a predicting signal y_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1}.




Description is now made of the ADPCM decoder 5.




A second adaptive reverse quantizer 51 uses a code L_n′ obtained from the memory 6 and a quantization step size T_n′ obtained by a second quantization step size updating device 52, to find a reversely quantized value q_n′ on the basis of the following equation (6):

q_n′ = (L_n′ + 0.5) × T_n′   (6)






If L_n found in the ADPCM encoder 4 is correctly transmitted to the ADPCM decoder 5, that is, L_n = L_n′, the values of q_n′, y_n′, T_n′ and w_n′ used on the side of the ADPCM decoder 5 are respectively equal to the values of q_n, y_n, T_n and w_n used on the side of the ADPCM encoder 4.




The second quantization step size updating device 52 uses the code L_n′ obtained from the memory 6, to find a quantization step size T_{n+1}′ used with respect to the subsequent code L_{n+1}′ on the basis of the following equation (7). The relationship between L_n′ and a function M(L_n′) in the following equation (7) is the same as the relationship between L_n and the function M(L_n) in the foregoing Table 1.

T_{n+1}′ = T_n′ × M(L_n′)   (7)






A third adder 53 finds a reproducing signal w_n′ on the basis of a predicting signal y_n′ obtained by a second predicting device 54 and the reversely quantized value q_n′. That is, the third adder 53 finds the reproducing signal w_n′ on the basis of the following equation (8). The found reproducing signal w_n′ is outputted from the ADPCM decoder 5.

w_n′ = y_n′ + q_n′   (8)






The second predicting device 54 delays the reproducing signal w_n′ by one sampling time, to find the subsequent predicting signal y_{n+1}′, and sends the predicting signal y_{n+1}′ to the third adder 53.





FIGS. 12 and 13 illustrate the relationship between the reversely quantized value q_n and the prediction error signal d_n in a case where the code L_n is composed of three bits.




T in FIG. 12 and U in FIG. 13 respectively represent quantization step sizes determined by the first quantization step size updating device 43 at different time points, where it is assumed that T < U.




In a case where the range A to B of the prediction error signal d_n is indicated by A and B, the range is indicated by “[A” when a boundary A is included in the range, while being indicated by “(A” when it is not included therein. Similarly, the range is indicated by “B]” when a boundary B is included in the range, while being indicated by “B)” when it is not included therein.




In FIG. 12, the reversely quantized value q_n is 0.5T when the value of the prediction error signal d_n is in the range of [0, T), 1.5T when it is in the range of [T, 2T), 2.5T when it is in the range of [2T, 3T), and 3.5T when it is in the range of [3T, ∞).




The reversely quantized value q_n is −0.5T when the value of the prediction error signal d_n is in the range of [−T, 0), −1.5T when it is in the range of [−2T, −T), −2.5T when it is in the range of [−3T, −2T), and −3.5T when it is in the range of (−∞, −3T).




In the relationship between the reversely quantized value q_n and the prediction error signal d_n in FIG. 13, T in FIG. 12 is replaced with U. As shown in FIGS. 12 and 13, the relationship between the reversely quantized value q_n and the prediction error signal d_n is so determined in the prior art that the characteristics are symmetrical in a positive range and a negative range of the prediction error signal d_n. As a result, even when the prediction error signal d_n is small, the reversely quantized value q_n is not zero.




As can be seen from the equation (3) and Table 1, when the code L_n becomes large, the quantization step size T_n is made large. That is, the quantization step size is made small as shown in FIG. 12 when the prediction error signal d_n is small, while being made large as shown in FIG. 13 when the prediction error signal d_n is large.




In a voice signal, there exist a lot of silent sections where the prediction error signal d_n is zero. In the above-mentioned prior art, however, even when the prediction error signal d_n is zero, the reversely quantized value q_n is 0.5T (or 0.5U), which is not zero, so that a quantizing error is increased.




In the above-mentioned prior art, even if the absolute value of the prediction error signal d_n is rapidly changed from a large value to a small value, a large value corresponding to the previous prediction error signal d_n whose absolute value is large is maintained as the quantization step size, so that the quantizing error is increased. That is, in a case where the quantization step size is a relatively large value U as shown in FIG. 13, even if the absolute value of the prediction error signal d_n is rapidly decreased to a value close to zero, the reversely quantized value q_n is 0.5U, which is a large value, so that the quantizing error is increased.




Furthermore, even if the absolute value of the prediction error signal d_n is rapidly changed from a small value to a large value, a small value corresponding to the previous prediction error signal d_n whose absolute value is small is maintained as the quantization step size, so that the quantizing error is increased.




Such a problem similarly occurs even in APCM, which uses an input signal as it is in place of the prediction error signal d_n.




An object of the present invention is to provide a voice coding method capable of decreasing a quantizing error when a prediction error signal d_n is zero or an input signal is rapidly changed.




DISCLOSURE OF THE INVENTION




A first voice coding method according to the present invention is a voice coding method for adaptively quantizing a difference d


n


between an input signal x


n


and a predicted value y


n


to code the difference, characterized in that adaptive quantization is performed such that a reversely quantized value q


n


of a code L


n


corresponding to a section where the absolute value of the difference d


n


is small is approximately zero.




A second voice coding method according to the present invention is characterized by comprising the first step of adding, when a first prediction error signal d_n which is a difference between an input signal x_n and a predicted value y_n corresponding to the input signal x_n is not less than zero, one-half of a quantization step size T_n to the first prediction error signal d_n to produce a second prediction error signal e_n, while subtracting, when the first prediction error signal d_n is less than zero, one-half of the quantization step size T_n from the first prediction error signal d_n to produce a second prediction error signal e_n; the second step of finding a code L_n on the basis of the second prediction error signal e_n found in the first step and the quantization step size T_n; the third step of finding a reversely quantized value q_n on the basis of the code L_n found in the second step; the fourth step of finding a quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1} on the basis of the code L_n found in the second step; and the fifth step of finding a predicted value y_{n+1} corresponding to the subsequent input signal x_{n+1} on the basis of the reversely quantized value q_n found in the third step and the predicted value y_n.




In the second step, the code L_n is found on the basis of the following equation (9), for example:

L_n = [e_n / T_n]   (9)

where [ ] is Gauss' notation, and represents the maximum integer which does not exceed the number in the square brackets.




In the third step, the reversely quantized value q_n is found on the basis of the following equation (10), for example:

q_n = L_n × T_n   (10)






In the fourth step, the quantization step size T_{n+1} is found on the basis of the following equation (11), for example:

T_{n+1} = T_n × M(L_n)   (11)

where M(L_n) is a value determined depending on L_n.




In the fifth step, the predicted value y_{n+1} is found on the basis of the following equation (12), for example:

y_{n+1} = y_n + q_n   (12)






A third voice coding method according to the present invention is a voice coding method for adaptively quantizing a difference d_n between an input signal x_n and a predicted value y_n to code the difference, characterized in that adaptive quantization is performed such that a reversely quantized value q_n of a code L_n corresponding to a section where the absolute value of the difference d_n is small is approximately zero, and a quantization step size corresponding to a section where the absolute value of the difference d_n is large is larger, as compared with that corresponding to the section where the absolute value of the difference d_n is small.




A fourth voice coding method according to the present invention is characterized by comprising the first step of adding, when a first prediction error signal d_n which is a difference between an input signal x_n and a predicted value y_n corresponding to the input signal x_n is not less than zero, one-half of a quantization step size T_n to the first prediction error signal d_n to produce a second prediction error signal e_n, while subtracting, when the first prediction error signal d_n is less than zero, one-half of the quantization step size T_n from the first prediction error signal d_n to produce a second prediction error signal e_n; the second step of finding, on the basis of the second prediction error signal e_n found in the first step and a table previously storing the relationship between the second prediction error signal e_n and a code L_n, the code L_n; the third step of finding, on the basis of the code L_n found in the second step and a table previously storing the relationship between the code L_n and a reversely quantized value q_n, the reversely quantized value q_n; the fourth step of finding, on the basis of the code L_n found in the second step and a table previously storing the relationship between the code L_n and a quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1}, the quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1}; and the fifth step of finding a predicted value y_{n+1} corresponding to the subsequent input signal x_{n+1} on the basis of the reversely quantized value q_n found in the third step and the predicted value y_n, wherein each of the tables is produced so as to satisfy the following conditions (a), (b) and (c):

(a) The quantization step size T_n is so changed as to be increased when the absolute value of the difference d_n is so changed as to be increased,

(b) The reversely quantized value q_n of the code L_n corresponding to a section where the absolute value of the difference d_n is small is approximately zero, and

(c) A substantial quantization step size corresponding to a section where the absolute value of the difference d_n is large is larger, as compared with that corresponding to the section where the absolute value of the difference d_n is small.




In the fifth step, the predicted value y_{n+1} is found on the basis of the following equation (13), for example:

y_{n+1} = y_n + q_n   (13)






A fifth voice coding method according to the present invention is a voice coding method for adaptively quantizing an input signal x_n to code the input signal, characterized in that adaptive quantization is performed such that a reversely quantized value of a code L_n corresponding to a section where the absolute value of the input signal x_n is small is approximately zero.




A sixth voice coding method according to the present invention is characterized by comprising the first step of adding one-half of a quantization step size T_n to an input signal x_n to produce a corrected input signal g_n when the input signal x_n is not less than zero, while subtracting one-half of the quantization step size T_n from the input signal x_n to produce a corrected input signal g_n when the input signal x_n is less than zero; the second step of finding a code L_n on the basis of the corrected input signal g_n found in the first step and the quantization step size T_n; the third step of finding a quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1} on the basis of the code L_n found in the second step; and the fourth step of finding a reproducing signal w_n′ on the basis of the code L_n′ (= L_n) found in the second step.




In the second step, the code L_n is found on the basis of the following equation (14), for example:

L_n = [g_n / T_n]   (14)

where [ ] is Gauss' notation, and represents the maximum integer which does not exceed the number in the square brackets.




In the third step, the quantization step size T_{n+1} is found on the basis of the following equation (15), for example:

T_{n+1} = T_n × M(L_n)   (15)

where M(L_n) is a value determined depending on L_n.




In the fourth step, the reproducing signal w_n′ is found on the basis of the following equation (16), for example:

w_n′ = L_n′ (= L_n) × T_n′   (16)






A seventh voice coding method according to the present invention is a voice coding method for adaptively quantizing an input signal x_n to code the input signal, characterized in that adaptive quantization is performed such that a reversely quantized value q_n of a code L_n corresponding to a section where the absolute value of the input signal x_n is small is approximately zero, and a quantization step size corresponding to a section where the absolute value of the input signal x_n is large is larger, as compared with that corresponding to the section where the absolute value of the input signal x_n is small.




An eighth voice coding method according to the present invention is characterized by comprising the first step of adding one-half of a quantization step size T_n to an input signal x_n to produce a corrected input signal g_n when the input signal x_n is not less than zero, while subtracting one-half of the quantization step size T_n from the input signal x_n to produce a corrected input signal g_n when the input signal x_n is less than zero; the second step of finding, on the basis of the corrected input signal g_n found in the first step and a table previously storing the relationship between the signal g_n and a code L_n, the code L_n; the third step of finding, on the basis of the code L_n found in the second step and a table previously storing the relationship between the code L_n and a quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1}, the quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1}; and the fourth step of finding, on the basis of the code L_n′ (= L_n) found in the second step and a table storing the relationship between the code L_n′ (= L_n) and a reproducing signal w_n′, the reproducing signal w_n′, wherein each of the tables is produced so as to satisfy the following conditions (a), (b) and (c):

(a) The quantization step size T_n is so changed as to be increased when the absolute value of the input signal x_n is so changed as to be increased,

(b) The reversely quantized value q_n of the code L_n corresponding to a section where the absolute value of the input signal x_n is small is approximately zero, and

(c) A substantial quantization step size corresponding to a section where the absolute value of the input signal x_n is large is made larger, as compared with that corresponding to the section where the absolute value of the input signal x_n is small.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 is a block diagram showing a first embodiment of the present invention;

FIG. 2 is a flow chart showing operations performed by an ADPCM encoder shown in FIG. 1;

FIG. 3 is a flow chart showing operations performed by an ADPCM decoder shown in FIG. 1;

FIG. 4 is a graph showing the relationship between a prediction error signal d_n and a reversely quantized value q_n;

FIG. 5 is a graph showing the relationship between a prediction error signal d_n and a reversely quantized value q_n;

FIG. 6 is a block diagram showing a second embodiment of the present invention;

FIG. 7 is a flow chart showing operations performed by an ADPCM encoder shown in FIG. 6;

FIG. 8 is a flow chart showing operations performed by an ADPCM decoder shown in FIG. 6;

FIG. 9 is a graph showing the relationship between a prediction error signal d_n and a reversely quantized value q_n;

FIG. 10 is a block diagram showing a third embodiment of the present invention;

FIG. 11 is a block diagram showing a conventional example;

FIG. 12 is a graph showing the relationship between a prediction error signal d_n and a reversely quantized value q_n in the conventional example; and

FIG. 13 is a graph showing the relationship between a prediction error signal d_n and a reversely quantized value q_n in the conventional example.











BEST MODE FOR CARRYING OUT THE INVENTION




[1] Description of First Embodiment




Referring now to FIGS. 1 to 5, a first embodiment of the present invention will be described.





FIG. 1 illustrates the schematic construction of an ADPCM encoder 1 and an ADPCM decoder 2. n used in the following description is an integer.




Description is now made of the ADPCM encoder 1. A first adder 11 finds a difference (hereinafter referred to as a first prediction error signal d_n) between a signal x_n inputted to the ADPCM encoder 1 and a predicting signal y_n on the basis of the following equation (17):

d_n = x_n − y_n   (17)






A signal generator 19 generates a correcting signal a_n on the basis of the first prediction error signal d_n and a quantization step size T_n obtained by a first quantization step size updating device 18. That is, the signal generator 19 generates the correcting signal a_n on the basis of the following equation (18):

in the case of d_n ≧ 0: a_n = T_n/2
in the case of d_n < 0: a_n = −T_n/2   (18)






A second adder 12 finds a second prediction error signal e_n on the basis of the first prediction error signal d_n and the correcting signal a_n obtained by the signal generator 19. That is, the second adder 12 finds the second prediction error signal e_n on the basis of the following equation (19):

e_n = d_n + a_n   (19)






Consequently, the second prediction error signal e_n is expressed by the following equation (20):

in the case of d_n ≧ 0: e_n = d_n + T_n/2
in the case of d_n < 0: e_n = d_n − T_n/2   (20)






A first adaptive quantizer 14 codes the second prediction error signal e_n found by the second adder 12 on the basis of the quantization step size T_n obtained by the first quantization step size updating device 18, to find a code L_n. That is, the first adaptive quantizer 14 finds the code L_n on the basis of the following equation (21). The found code L_n is sent to a memory 3.

L_n = [e_n / T_n]   (21)

In the equation (21), [ ] is Gauss' notation, and represents the maximum integer which does not exceed the number in the square brackets. An initial value of the quantization step size T_n is a positive number.




The first quantization step size updating device 18 finds a quantization step size T_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1} on the basis of the following equation (22). The relationship between the code L_n and a function M(L_n) is the same as the relationship between the code L_n and the function M(L_n) in the foregoing Table 1.

T_{n+1} = T_n × M(L_n)   (22)






A first adaptive reverse quantizer 15 finds a reversely quantized value q_n on the basis of the following equation (23):

q_n = L_n × T_n   (23)






A third adder 16 finds a reproducing signal w_n on the basis of the predicting signal y_n corresponding to the current voice signal sampling value x_n and the reversely quantized value q_n. That is, the third adder 16 finds the reproducing signal w_n on the basis of the following equation (24):

w_n = y_n + q_n   (24)






A first predicting device 17 delays the reproducing signal w_n by one sampling time, to find a predicting signal y_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1}.




Description is now made of the ADPCM decoder 2.




A second adaptive reverse quantizer 22 uses a code L_n′ obtained from the memory 3 and a quantization step size T_n′ obtained by a second quantization step size updating device 23, to find a reversely quantized value q_n′ on the basis of the following equation (25):

q_n′ = L_n′ × T_n′   (25)




If L_n found in the ADPCM encoder 1 is correctly transmitted to the ADPCM decoder 2, that is, L_n = L_n′, the values of q_n′, y_n′, T_n′ and w_n′ used on the side of the ADPCM decoder 2 are respectively equal to the values of q_n, y_n, T_n and w_n used on the side of the ADPCM encoder 1.




The second quantization step size updating device 23 uses the code L_n′ obtained from the memory 3, to find a quantization step size T_{n+1}′ used with respect to the subsequent code L_{n+1}′ on the basis of the following equation (26). The relationship between the code L_n′ and a function M(L_n′) is the same as the relationship between the code L_n and the function M(L_n) in the foregoing Table 1.

T_{n+1}′ = T_n′ × M(L_n′)   (26)






A fourth adder 24 finds a reproducing signal w_n′ on the basis of a predicting signal y_n′ obtained by a second predicting device 25 and the reversely quantized value q_n′. That is, the fourth adder 24 finds the reproducing signal w_n′ on the basis of the following equation (27). The found reproducing signal w_n′ is outputted from the ADPCM decoder 2.

w_n′ = y_n′ + q_n′   (27)






The second predicting device 25 delays the reproducing signal w_n′ by one sampling time, to find the subsequent predicting signal y_{n+1}′, and sends the predicting signal y_{n+1}′ to the fourth adder 24.





FIG. 2 shows the procedure for operations performed by the ADPCM encoder 1.




The predicting signal y_n is first subtracted from the input signal x_n, to find the first prediction error signal d_n (step 1).




It is then judged whether the first prediction error signal d_n is not less than zero or less than zero (step 2). When the first prediction error signal d_n is not less than zero, one-half of the quantization step size T_n is added to the first prediction error signal d_n, to find the second prediction error signal e_n (step 3).




When the first prediction error signal d_n is less than zero, one-half of the quantization step size T_n is subtracted from the first prediction error signal d_n, to find the second prediction error signal e_n (step 4).




When the second prediction error signal e_n is found in the step 3 or the step 4, coding based on the foregoing equation (21) and reverse quantization based on the foregoing equation (23) are performed (step 5). That is, the code L_n and the reversely quantized value q_n are found.




The quantization step size T_n is then updated on the basis of the foregoing equation (22) (step 6). The predicting signal y_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1} is found on the basis of the foregoing equation (24) (step 7).





FIG. 3 shows the procedure for operations performed by the ADPCM decoder 2.




The code L_n′ is first read out from the memory 3, to find the reversely quantized value q_n′ on the basis of the foregoing equation (25) (step 11).




Thereafter, the subsequent predicting signal y_{n+1}′ is found on the basis of the foregoing equation (27) (step 12).




The quantization step size T_{n+1}′ used with respect to the subsequent code L_{n+1}′ is found on the basis of the foregoing equation (26) (step 13).





FIGS. 4 and 5 illustrate the relationship between the reversely quantized value q_n obtained by the first adaptive reverse quantizer 15 in the ADPCM encoder 1 and the first prediction error signal d_n in a case where the code L_n is composed of three bits.




T in FIG. 4 and U in FIG. 5 respectively represent quantization step sizes determined by the first quantization step size updating device 18 at different time points, where it is assumed that T < U.




In a case where the range A to B of the first prediction error signal d_n is indicated by A and B, the range is indicated by “[A” when a boundary A is included in the range, while being indicated by “(A” when it is not included therein. Similarly, the range is indicated by “B]” when a boundary B is included in the range, while being indicated by “B)” when it is not included therein.




In FIG. 4, the reversely quantized value q_n is zero when the value of the first prediction error signal d_n is in the range of (−0.5T, 0.5T), T when it is in the range of [0.5T, 1.5T), 2T when it is in the range of [1.5T, 2.5T), and 3T when it is in the range of [2.5T, ∞).




Furthermore, the reversely quantized value q_n is −T when the value of the first prediction error signal d_n is in the range of (−1.5T, −0.5T], −2T when it is in the range of (−2.5T, −1.5T], −3T when it is in the range of (−3.5T, −2.5T], and −4T when it is in the range of (−∞, −3.5T].




In the relationship between the reversely quantized value q_n and the first prediction error signal d_n in FIG. 5, T in FIG. 4 is replaced with U.




Also in the first embodiment, when the code L_n becomes large, the quantization step size T_n is made large, as can be seen from the foregoing equation (22) and Table 1. That is, the quantization step size is made small as shown in FIG. 4 when the prediction error signal d_n is small, while being made large as shown in FIG. 5 when it is large.




According to the first embodiment, when the prediction error signal d_n, which is a difference between the input signal x_n and the predicting signal y_n, is zero, the reversely quantized value q_n is zero. When the prediction error signal d_n is zero as in a silent section of a voice signal, therefore, a quantizing error is decreased.




When the absolute value of the first prediction error signal d_n is rapidly changed from a large value to a small value, a large value corresponding to the previous prediction error signal d_n whose absolute value is large is maintained as the quantization step size. However, the reversely quantized value q_n can be made zero, so that the quantizing error is decreased. That is, in a case where the quantization step size is a relatively large value U as shown in FIG. 5, when the absolute value of the prediction error signal d_n is rapidly decreased to a value close to zero, the reversely quantized value q_n is zero, so that the quantizing error is decreased.




[2] Description of Second Embodiment




Referring now to FIGS. 6 to 9, a second embodiment of the present invention will be described.





FIG. 6 illustrates the schematic construction of an ADPCM encoder 101 and an ADPCM decoder 102. n used in the following description is an integer.

Description is now made of the ADPCM encoder 101.




The ADPCM encoder 101 comprises first storage means 113. The first storage means 113 stores a translation table as shown in Table 2. Table 2 shows an example in a case where a code L_n is composed of four bits.

















TABLE 2

  Second Prediction Error Signal e_n    L_n     q_n       Quantization Step Size T_{n+1}
  11T_n ≦ e_n                           0111    12T_n     T_{n+1} = T_n × 2.5
  8T_n ≦ e_n < 11T_n                    0110    9T_n      T_{n+1} = T_n × 2.0
  6T_n ≦ e_n < 8T_n                     0101    6.5T_n    T_{n+1} = T_n × 1.25
  4T_n ≦ e_n < 6T_n                     0100    4.5T_n    T_{n+1} = T_n × 1.0
  3T_n ≦ e_n < 4T_n                     0011    3T_n      T_{n+1} = T_n × 1.0
  2T_n ≦ e_n < 3T_n                     0010    2T_n      T_{n+1} = T_n × 1.0
  T_n ≦ e_n < 2T_n                      0001    T_n       T_{n+1} = T_n × 0.75
  −T_n < e_n < T_n                      0000    0         T_{n+1} = T_n × 0.75
  −2T_n < e_n ≦ −T_n                    1111    −T_n      T_{n+1} = T_n × 0.75
  −3T_n < e_n ≦ −2T_n                   1110    −2T_n     T_{n+1} = T_n × 1.0
  −4T_n < e_n ≦ −3T_n                   1101    −3T_n     T_{n+1} = T_n × 1.0
  −5T_n < e_n ≦ −4T_n                   1100    −4T_n     T_{n+1} = T_n × 1.0
  −7T_n < e_n ≦ −5T_n                   1011    −5.5T_n   T_{n+1} = T_n × 1.25
  −9T_n < e_n ≦ −7T_n                   1010    −7.5T_n   T_{n+1} = T_n × 2.0
  −12T_n < e_n ≦ −9T_n                  1001    −10T_n    T_{n+1} = T_n × 2.5
  e_n ≦ −12T_n                          1000    −13T_n    T_{n+1} = T_n × 5.0















The translation table comprises the first column storing the range of a second prediction error signal e_n, the second column storing a code L_n corresponding to the range of the second prediction error signal e_n in the first column, the third column storing a reversely quantized value q_n corresponding to the code L_n in the second column, and the fourth column storing a calculating equation of a quantization step size T_{n+1} corresponding to the code L_n in the second column. The quantization step size is a value for determining a substantial quantization step size, and is not the substantial quantization step size itself.




In the second embodiment, conversion from the second prediction error signal e_n to the code L_n in a first adaptive quantizer 114, conversion from the code L_n to the reversely quantized value q_n in a first adaptive reverse quantizer 115, and updating of a quantization step size T_n in a first quantization step size updating device 118 are performed on the basis of the translation table stored in the first storage means 113.




A first adder 111 finds a difference (hereinafter referred to as a first prediction error signal d_n) between a signal x_n inputted to the ADPCM encoder 101 and a predicting signal y_n on the basis of the following equation (28):

d_n = x_n − y_n   (28)






A signal generator 119 generates a correcting signal a_n on the basis of the first prediction error signal d_n and the quantization step size T_n obtained by the first quantization step size updating device 118. That is, the signal generator 119 generates the correcting signal a_n on the basis of the following equation (29):

in the case of d_n ≧ 0: a_n = T_n/2
in the case of d_n < 0: a_n = −T_n/2   (29)






A second adder 112 finds a second prediction error signal e_n on the basis of the first prediction error signal d_n and the correcting signal a_n obtained by the signal generator 119. That is, the second adder 112 finds the second prediction error signal e_n on the basis of the following equation (30):

e_n = d_n + a_n   (30)






Consequently, the second prediction error signal e_n is expressed by the following equation (31):

in the case of d_n ≧ 0: e_n = d_n + T_n/2
in the case of d_n < 0: e_n = d_n − T_n/2   (31)






The first adaptive quantizer 114 finds a code L_n on the basis of the second prediction error signal e_n found by the second adder 112 and the translation table. That is, the code L_n corresponding to the second prediction error signal e_n out of the respective codes L_n in the second column of the translation table is read out from the first storage means 113 and is outputted from the first adaptive quantizer 114. The found code L_n is sent to a memory 103.




The first adaptive reverse quantizer 115 finds the reversely quantized value q_n on the basis of the code L_n found by the first adaptive quantizer 114 and the translation table. That is, the reversely quantized value q_n corresponding to the code L_n found by the first adaptive quantizer 114 is read out from the first storage means 113 and is outputted from the first adaptive reverse quantizer 115.




The first quantization step size updating device 118 finds the subsequent quantization step size T_{n+1} on the basis of the code L_n found by the first adaptive quantizer 114, the current quantization step size T_n, and the translation table. That is, the subsequent quantization step size T_{n+1} is found on the basis of the quantization step size calculating equation corresponding to the code L_n found by the first adaptive quantizer 114 out of the quantization step size calculating equations in the fourth column of the translation table.




A third adder 116 finds a reproducing signal w_n on the basis of the predicting signal y_n corresponding to the current voice signal sampling value x_n and the reversely quantized value q_n. That is, the third adder 116 finds the reproducing signal w_n on the basis of the following equation (32):

w_n = y_n + q_n   (32)






A first predicting device 117 delays the reproducing signal w_n by one sampling time, to find a predicting signal y_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1}.




Description is now made of the ADPCM decoder 102.




The ADPCM decoder 102 comprises second storage means 121. The second storage means 121 stores a translation table having the same contents as those of the translation table stored in the first storage means 113.




A second adaptive reverse quantizer 122 finds a reversely quantized value q_n′ on the basis of a code L_n′ obtained from the memory 103 and the translation table. That is, out of the reversely quantized values q_n in the third column of the translation table, the value corresponding to the code L_n in the second column which matches the code L_n′ obtained from the memory 103 is read out from the second storage means 121 and is outputted from the second adaptive reverse quantizer 122.




If L_n found in the ADPCM encoder 101 is correctly transmitted to the ADPCM decoder 102, that is, L_n = L_n′, the values of q_n′, y_n′, T_n′ and w_n′ used on the side of the ADPCM decoder 102 are respectively equal to the values of q_n, y_n, T_n and w_n used on the side of the ADPCM encoder 101.




A second quantization step size updating device 123 finds the subsequent quantization step size T_{n+1}′ on the basis of the code L_n′ obtained from the memory 103, the current quantization step size T_n′ and the translation table. That is, the subsequent quantization step size T_{n+1}′ is found on the basis of the quantization step size calculating equation corresponding to the code L_n′ obtained from the memory 103 out of the quantization step size calculating equations in the fourth column of the translation table.




A fourth adder 124 finds a reproducing signal w_n′ on the basis of a predicting signal y_n′ obtained by a second predicting device 125 and the reversely quantized value q_n′. That is, the fourth adder 124 finds the reproducing signal w_n′ on the basis of the following equation (33). The found reproducing signal w_n′ is outputted from the ADPCM decoder 102.

w_n′ = y_n′ + q_n′   (33)






The second predicting device 125 delays the reproducing signal w_n′ by one sampling time, to find the subsequent predicting signal y_{n+1}′, and sends the predicting signal y_{n+1}′ to the fourth adder 124.





FIG. 7 shows the procedure for operations performed by the ADPCM encoder 101.




The predicting signal y_n is first subtracted from the input signal x_n, to find the first prediction error signal d_n (step 21).




It is then judged whether the first prediction error signal d_n is not less than zero or less than zero (step 22). When the first prediction error signal d_n is not less than zero, one-half of the quantization step size T_n is added to the first prediction error signal d_n, to find the second prediction error signal e_n (step 23).




When the first prediction error signal d_n is less than zero, one-half of the quantization step size T_n is subtracted from the first prediction error signal d_n, to find the second prediction error signal e_n (step 24).




When the second prediction error signal e_n is found in the step 23 or the step 24, coding and reverse quantization are performed on the basis of the translation table (step 25). That is, the code L_n and the reversely quantized value q_n are found.




The quantization step size T_n is then updated on the basis of the translation table (step 26). The predicting signal y_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1} is found on the basis of the foregoing equation (32) (step 27).





FIG. 8 shows the procedure for operations performed by the ADPCM decoder 102.




The code L_n′ is first read out from the memory 103, to find the reversely quantized value q_n′ on the basis of the translation table (step 31).




Thereafter, the subsequent predicting signal y_{n+1}′ is found on the basis of the foregoing equation (33) (step 32).




The quantization step size T_{n+1}′ used with respect to the subsequent code L_{n+1}′ is found on the basis of the translation table (step 33).





FIG. 9 illustrates the relationship between the reversely quantized value q_n obtained by the first adaptive reverse quantizer 115 in the ADPCM encoder 101 and the first prediction error signal d_n in a case where the code L_n is composed of four bits. T represents a quantization step size determined by the first quantization step size updating device 118 at a certain time point.




In a case where the range A to B of the first prediction error signal d_n is indicated by A and B, the range is indicated by “[A” when a boundary A is included in the range, while being indicated by “(A” when it is not included therein. Similarly, the range is indicated by “B]” when a boundary B is included in the range, while being indicated by “B)” when it is not included therein.




The reversely quantized value q_n is zero when the value of the first prediction error signal d_n is in the range of (−0.5T, 0.5T), T when it is in the range of [0.5T, 1.5T), 2T when it is in the range of [1.5T, 2.5T), and 3T when it is in the range of [2.5T, 3.5T).




The reversely quantized value q_n is 4.5T when the value of the first prediction error signal d_n is in the range of [3.5T, 5.5T), and 6.5T when it is in the range of [5.5T, 7.5T). The reversely quantized value q_n is 9T when the value of the first prediction error signal d_n is in the range of [7.5T, 10.5T), and 12T when it is in the range of [10.5T, ∞).




Furthermore, the reversely quantized value q_n is −T when the value of the first prediction error signal d_n is in the range of (−1.5T, −0.5T], −2T when it is in the range of (−2.5T, −1.5T], −3T when it is in the range of (−3.5T, −2.5T], and −4T when it is in the range of (−4.5T, −3.5T].




The reversely quantized value q_n is −5.5T when the value of the first prediction error signal d_n is in the range of (−6.5T, −4.5T], and −7.5T when it is in the range of (−8.5T, −6.5T]. The reversely quantized value q_n is −10T when the value of the first prediction error signal d_n is in the range of (−11.5T, −8.5T], and −13T when it is in the range of (−∞, −11.5T].




Also in the second embodiment, the quantization step size T_n is made large when the code L_n becomes large, as can be seen from Table 2. That is, the quantization step size is made small when the prediction error signal d_n is small, while being made large when it is large.




Also in the second embodiment, when the prediction error signal d_n, which is a difference between the input signal x_n and the predicting signal y_n, is zero, the reversely quantized value q_n is zero, as in the first embodiment. When the prediction error signal d_n is zero as in a silent section of a voice signal, therefore, a quantizing error is decreased.




When the absolute value of the first prediction error signal d_n is rapidly changed from a large value to a small value, a large value corresponding to the previous prediction error signal d_n whose absolute value is large is maintained as the quantization step size. However, the reversely quantized value q_n can be made zero, so that the quantizing error is decreased.




In the first embodiment, the quantization step size at each time point may, in some cases, be changed. When the quantization step size is determined at a certain time point, however, the quantization step size is constant irrespective of the absolute value of the prediction error signal d_n at that time point. On the other hand, in the second embodiment, even in a case where the quantization step size T_n is determined at a certain time point, the substantial quantization step size is decreased when the absolute value of the prediction error signal d_n is relatively small, while being increased when the absolute value of the prediction error signal d_n is relatively large.




Therefore, the second embodiment has the advantage that the quantizing error in a case where the absolute value of the prediction error signal d_n is small can be made smaller, as compared with that in the first embodiment. When the absolute value of the prediction error signal d_n is small, a voice may be small in many cases, so that the quantizing error greatly affects the degradation of a reproduced voice. If the quantizing error in a case where the prediction error signal d_n is small can be decreased, therefore, this is useful.




On the other hand, when the absolute value of the prediction error signal d_n is large, a voice may be large in many cases, so that the quantizing error does not greatly affect the degradation of a reproduced voice. Even if the substantial quantization step size is increased in a case where the absolute value of the prediction error signal d_n is relatively large as in the second embodiment, therefore, there are few demerits.




Furthermore, when the absolute value of the prediction error signal d_n is rapidly changed from a small value to a large value, the quantization step size is small. In the second embodiment, however, when the absolute value of the prediction error signal d_n is large, the substantial quantization step size is made larger than the quantization step size, so that the quantizing error can be decreased.




Although in the first embodiment and the second embodiment description was made of a case where the present invention is applied to the ADPCM, the present invention is also applicable to APCM, in which the input signal x_n is used as it is in place of the first prediction error signal d_n in the ADPCM.




[3] Description of Third Embodiment




Referring now to FIG. 10, a third embodiment of the present invention will be described.





FIG. 10 illustrates the schematic construction of an APCM encoder 201 and an APCM decoder 202. n used in the following description is an integer.

Description is now made of the APCM encoder 201.




A signal generator 219 generates a correcting signal a_n on the basis of a signal x_n inputted to the APCM encoder 201 and a quantization step size T_n obtained by a first quantization step size updating device 218. That is, the signal generator 219 generates the correcting signal a_n on the basis of the following equation (34):

in the case of x_n ≧ 0: a_n = T_n/2
in the case of x_n < 0: a_n = −T_n/2   (34)






A first adder 212 finds a corrected input signal g_n on the basis of the input signal x_n and the correcting signal a_n obtained by the signal generator 219. That is, the first adder 212 finds the corrected input signal g_n on the basis of the following equation (35):

g_n = x_n + a_n   (35)






Consequently, the corrected input signal g_n is expressed by the following equation (36):

in the case of x_n ≧ 0: g_n = x_n + T_n/2
in the case of x_n < 0: g_n = x_n − T_n/2   (36)






A first adaptive quantizer 214 codes the corrected input signal g_n found by the first adder 212 on the basis of the quantization step size T_n obtained by the first quantization step size updating device 218, to find a code L_n. That is, the first adaptive quantizer 214 finds the code L_n on the basis of the following equation (37). The found code L_n is sent to a memory 203.

L_n = [g_n / T_n]   (37)

In the equation (37), [ ] is Gauss' notation, and represents the maximum integer which does not exceed the number in the square brackets. An initial value of the quantization step size T_n is a positive number.




The first quantization step size updating device 218 finds a quantization step size T_{n+1} corresponding to the subsequent voice signal sampling value x_{n+1} on the basis of the following equation (38). The relationship between the code L_n and a function M(L_n) is as shown in Table 3. Table 3 shows an example in a case where the code L_n is composed of four bits.

T_{n+1} = T_n × M(L_n)   (38)

















TABLE 3

  L_n       M(L_n)
  0, −1     0.8
  1, −2     0.8
  2, −3     0.8
  3, −4     0.8
  4, −5     1.2
  5, −6     1.6
  6, −7     2.0
  7, −8     2.4














Description is now made of the APCM decoder 202.




A second adaptive reverse quantizer 222 uses a code L_n′ obtained from the memory 203 and a quantization step size T_n′ obtained by a second quantization step size updating device 223, to find w_n′ (a reversely quantized value) on the basis of the following equation (39). The found reproducing signal w_n′ is outputted from the APCM decoder 202.

w_n′ = L_n′ × T_n′   (39)






The second quantization step size updating device 223 uses the code L_n′ obtained from the memory 203, to find a quantization step size T_{n+1}′ used with respect to the subsequent code L_{n+1}′ on the basis of the following equation (40). The relationship between the code L_n′ and a function M(L_n′) is the same as the relationship between the code L_n and the function M(L_n) in Table 3.

T_{n+1}′ = T_n′ × M(L_n′)   (40)






In the third embodiment, a reproducing signal w_n′ obtained by reversely quantizing the code L_n corresponding to a section where the absolute value of the input signal x_n is small is approximately zero.




In the above-mentioned third embodiment, the code L_n may be found on the basis of the corrected input signal g_n and a table previously storing the relationship between the signal g_n and the code L_n, and the quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1} may be found on the basis of the found code L_n and a table previously storing the relationship between the code L_n and the quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1}.




In this case, the respective tables storing the relationship between the signal g_n and the code L_n and the relationship between the code L_n and the quantization step size T_{n+1} corresponding to the subsequent input signal x_{n+1} are produced so as to satisfy the following conditions (a), (b) and (c):

(a) the quantization step size T_n is so changed as to be increased when the absolute value of the input signal x_n is so changed as to be increased,

(b) the reproducing signal w_n′ obtained by reversely quantizing the code L_n corresponding to the section where the absolute value of the input signal x_n is small is approximately zero, and

(c) the substantial quantization step size corresponding to a section where the absolute value of the input signal x_n is large is larger, as compared with that corresponding to the section where the absolute value of the input signal x_n is small.




Industrial Applicability




A voice coding method according to the present invention is suitable for use in voice coding methods such as ADPCM and APCM.



Claims
  • 1. A voice coding method comprising:the first step of adding, when a first prediction error signal dn which is a difference between an input signal xn and a predicted value yn corresponding to the input signal xn is not less than zero, one-half of a quantization step size Tn to the first prediction error signal dn to produce a second prediction error signal en, while subtracting, when the first prediction error signal dn is less than zero, one-half of the quantization step size Tn from the first prediction error signal dn to produce a second prediction error signal en; the second step of finding a code Ln on the basis of the second prediction error signal en found in the first step and the quantization step size Tn; the third step of finding a reversely quantized value qn on the basis of the code Ln found in the second step; the fourth step of finding a quantization step size Tn+1 corresponding to the subsequent input signal xn+1 on the basis of the code Ln found in the second step; and the fifth step of finding a predicted value yn+1 corresponding to the subsequent input signal xn+1 on the basis of the reversely quantized value qn found in the third step and the predicted value yn.
  • 2. The voice coding method according to claim 1, wherein in said second step, the code Ln is found on the basis of the following equation: Ln=[en/Tn], where [ ] is Gauss' notation, and represents the maximum integer which does not exceed a number in the square brackets.
  • 3. The voice coding method according to claim 1, wherein in said third step, the reversely quantized value qn is found on the basis of the following equation: qn=Ln×Tn.
  • 4. The voice coding method according to claim 1, wherein in said fourth step, the quantization step size Tn+1 is found on the basis of the following equation: Tn+1=Tn×M(Ln), where M (Ln) is a value determined depending on Ln.
  • 5. The voice coding method according to claim 1, wherein in said fifth step, the predicted value yn+1 is found on the basis of the following equation: yn+1=yn+qn.
  • 6. A voice coding method comprising:the first step of adding, when a first prediction error signal dn which is a difference between an input signal xn and a predicted value yn corresponding to the input signal xn is not less than zero, one-half of a quantization step size Tn to the first prediction error signal dn to produce a second prediction error signal en, while subtracting, when the first prediction error signal dn is less than zero, one-half of the quantization step size Tn from the first prediction error signal dn to produce a second prediction error signal en; the second step of finding, on the basis of the second prediction error signal en found in the first step and a table previously storing the relationship between the second prediction error signal en and a code Ln, the code Ln; the third step of finding, on the basis of the code Ln found in the second step and a table previously storing the relationship between the code Ln and a reversely quantized value qn, the reversely quantized value qn; the fourth step of finding, on the basis of the code Ln found in the second step and a table previously storing the relationship between the code Ln and a quantization step size Tn+1 corresponding to the subsequent input signal xn+1, the quantization step size Tn+1 corresponding to the subsequent input signal xn+1; and the fifth step of finding a predicted value yn+1 corresponding to the subsequent input signal xn+1 on the basis of the reversely quantized value qn found in the third step and the predicted value yn, wherein each of the tables being produced so as to satisfy the following conditions (a), (b) and (c): (a) The quantization step size Tn is so changed as to be increased when the absolute value of the difference dn is so changed as to be increased, (b) The reversely quantized value qn of the code Ln corresponding to a section where the absolute value of the difference dn is small is approximately zero, and (c) A substantial quantization step size corresponding to a section where the absolute value of the difference dn is large is larger, as compared with that corresponding to the section where the absolute value of the difference dn is small.
  • 7. The voice coding method according to claim 6, wherein in said fifth step, the predicted value yn+1 is found on the basis of the following equation: yn+1=yn+qn.
Priority Claims (1)
Number Date Country Kind
9-035062 Feb 1997 JP
PCT Information
Filing Document Filing Date Country Kind
PCT/JP98/00674 WO 00
Publishing Document Publishing Date Country Kind
WO98/37636 8/27/1998 WO A
US Referenced Citations (3)
Number Name Date Kind
4686512 Nakamura et al. Aug 1987 A
4754258 Nakamura et al. Jun 1988 A
5072295 Murakami et al. Dec 1991 A
Foreign Referenced Citations (2)
Number Date Country
59-178030 Oct 1984 JP
59-210723 Nov 1984 JP
Non-Patent Literature Citations (1)
Entry
International Preliminary Examination Report issued in PCT/JP98/00674, dated Apr. 5, 1999.