Receiver for receiving a linear predictive coded speech signal

Information

  • Patent Grant
  • Patent Number
    6,763,330
  • Date Filed
    Monday, February 25, 2002
  • Date Issued
    Tuesday, July 13, 2004
Abstract
A receiver is used in decoding a received encoded speech signal. The received speech signal is encoded using code excited linear prediction. The receiver receives the encoded speech signal, which comprises a code index, a pitch lag index and a line spectral pair index. An innovation sequence is produced by selecting a code from each of a plurality of codebooks based on the code index. A line spectral pair quantization of a speech signal is determined using the line spectral pair index. A pitch lag is determined using the pitch lag index. A speech signal is reconstructed using the produced innovation sequence, the determined line spectral pair quantization and the determined pitch lag.
Description




BACKGROUND




1. Field of the Invention




This invention relates to digital speech encoders using code excited linear prediction coding, or CELP. More particularly, this invention relates to a method and apparatus for efficiently selecting a desired codevector used to reproduce an encoded speech segment at the decoder.




2. Background of the Invention




Direct quantization of analog speech signals is too inefficient for effective bandwidth utilization. A technique known as linear predictive coding, or LPC, which takes advantage of speech signal redundancies, requires far fewer bits to transmit or store speech signals. Speech signals are produced as a result of acoustical excitation of the vocal tract. While the vocal cords produce the acoustical excitation, the vocal tract (e.g. mouth, tongue and lips) acts as a time varying filter of the vocal excitation. Thus, speech signals can be efficiently represented as a quasi-periodic excitation signal plus the time varying parameters of a digital filter. In addition, the periodic nature of the vocal excitation can itself be represented by a linear filter excited by a noise-like Gaussian sequence. Thus, in CELP, a first long delay predictor corresponds to the pitch periodicity of the human vocal cords, and a second short delay predictor corresponds to the filtering action of the human vocal tract.
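The excitation-plus-time-varying-filter model described above can be sketched in a few lines. This is an illustrative sketch only, not taken from the patent: the coefficient and excitation values are made up, and a real LPC synthesis filter would use ten or more coefficients updated every frame.

```python
def lpc_synthesize(excitation, lpc_coeffs):
    """Run an excitation sequence through an all-pole LPC synthesis filter:
    s[n] = e[n] + sum_k a[k] * s[n - k]."""
    out = []
    for n, e in enumerate(excitation):
        s = e
        for k, a in enumerate(lpc_coeffs, start=1):
            if n - k >= 0:
                s += a * out[n - k]
        out.append(s)
    return out

# Example: a single impulse excites a one-tap predictor (a decaying echo).
print(lpc_synthesize([1.0, 0.0, 0.0, 0.0], [0.5]))  # [1.0, 0.5, 0.25, 0.125]
```

The recursion makes the filter's role concrete: each output sample is the excitation plus a prediction from past reconstructed samples, exactly the structure the long- and short-delay predictors exploit.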




CELP reproduces the individual speaker's voice by processing the input speech to determine the desired excitation sequence and time varying digital filter parameters. At the encoder, a prediction filter forms an estimate for the current sample of the input signal based on the past reconstructed values of the signal at the receiver decoder, i.e. the transmitter encoder predicts the value that the receiver decoder will reconstruct. The difference between the current value and predicted value of the input signal is the prediction error. For each frame of speech, the prediction residual and filter parameters are communicated to the receiver. The prediction residual or prediction error is also known as the innovation sequence and is used at the receiver as the excitation input to the prediction filters to reconstruct the speech signal. Each sample of the reconstructed speech signal is produced by adding the received signal to the predicted estimate of the present sample. For each successive speech frame, the innovation sequence and updated filter parameters are communicated to the receiver decoder.




The innovation sequence is typically encoded using codebook encoding. In codebook encoding, each possible innovation sequence is stored as an entry in a codebook, and each is represented by an index. The transmitter and receiver both have the same codebook contents. To communicate a given innovation sequence, the index for that innovation sequence in the transmitter codebook is transmitted to the receiver. At the receiver, the received index is used to look up the desired innovation sequence in the receiver codebook for use as the excitation sequence to the time varying digital filters.




The task of the CELP encoder is to generate the time varying filter coefficients and the innovation sequence in real time. The difficulty of rapidly selecting the best innovation sequence from a set of possible innovation sequences for each frame of speech is an impediment to commercial achievement of real time CELP based systems, such as cellular telephone, voice mail and the like.




Both random and deterministic codebooks are known. Random codebooks are used because the probability density function of the prediction error samples has been shown to be nearly white Gaussian random noise. However, random codebooks present a heavy computational burden to select an innovation sequence from the codebook at the encoder since the codebook must be exhaustively searched.




To select an innovation sequence from the codebook of stored innovation sequences, a given fidelity criterion is used. Each innovation sequence is filtered through time varying linear recursive filters to reconstruct (predict) the speech frame as it would be reconstructed at the receiver. The predicted speech frame using the candidate innovation sequence is compared with the desired target speech frame (filtered through a perceptual weighting filter) and the fidelity criterion is calculated. The process is repeated for each stored innovation sequence. The innovation sequence that maximizes the fidelity criterion function is selected as the optimum innovation sequence, and an index representing the selected optimum sequence is sent to the receiver, along with other filter parameters.




At the receiver, the index is used to access the selected innovation sequence, and, in conjunction with the other filter parameters, to reconstruct the desired speech.




The central problem is how to select an optimum innovation sequence from the codebook at the encoder within the constraints of real time speech encoding and acceptable transmission delay. In a random codebook, the innovation sequences are independently generated random white Gaussian sequences. The computational burden of performing an exhaustive search of all the innovation sequences in the random code book is extremely high because each innovation sequence must be passed through the prediction filters.




One prior art solution to the problem of selecting an innovation sequence is found in U.S. Pat. No. 4,797,925 in which the adjacent codebook entries have a subset of elements in common. In particular, each succeeding code sequence may be generated from the previous code sequence by removing one or more elements from the beginning of the previous sequence and adding one or more elements to the end of the previous sequence. The filter response to each succeeding code sequence is then generated from the filter response to the preceding code sequence by subtracting the filter response to the first samples and appending the filter response to the added samples. Such overlapping codebook structure permits accelerated calculation of the fidelity criterion.




Another prior art solution to the problem of rapidly selecting an optimum innovation sequence is found in U.S. Pat. No. 4,817,157, in which the codebook of excitation vectors is derived from a set of M basis vectors which are used to generate a set of 2^M codebook excitation code vectors. The entire codebook of 2^M possible excitation vectors is searched using the knowledge of how the code vectors are generated from the basis vectors, without having to generate and evaluate each of the individual code vectors.




SUMMARY




A receiver is used in decoding a received encoded speech signal. The received speech signal is encoded using code excited linear prediction. The receiver receives the encoded speech signal, which comprises a code index, a pitch lag index and a line spectral pair index. An innovation sequence is produced by selecting a code from each of a plurality of codebooks based on the code index. A line spectral pair quantization of a speech signal is determined using the line spectral pair index. A pitch lag is determined using the pitch lag index. A speech signal is reconstructed using the produced innovation sequence, the determined line spectral pair quantization and the determined pitch lag.











BRIEF DESCRIPTION OF THE DRAWING(S)





FIG. 1 is a diagram of a CELP encoder utilizing a ternary codebook in accordance with the present invention.


FIG. 2 is a block diagram of a CELP decoder utilizing a ternary codebook in accordance with the present invention.


FIG. 3 is a flow diagram of an exhaustive search process for finding an optimum codevector in accordance with the present invention.


FIG. 4 is a flow diagram of a first sub-optimum search process for finding a codevector in accordance with the present invention.


FIG. 5 is a flow diagram of a second sub-optimum search process for finding a codevector in accordance with the present invention.


FIGS. 6A, 6B and 6C are graphical representations of a first binary codevector, a second binary codevector, and a ternary codevector, respectively.











DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT(S)




CELP Encoding




The CELP encoder of FIG. 1 includes an input terminal 10 for receiving input speech samples which have been converted to digital form. The CELP encoder represents the input speech samples as digital parameters comprising an LSP index, a pitch lag and gain, and a code index and gain, for digital multiplexing by transmitter 30 on communication channel 31.




LSP Index




As indicated above, speech signals are produced as a result of acoustical excitation of the vocal tract. The input speech samples received on terminal 10 are processed in accordance with known techniques of LPC analysis 26, and are then quantized by a line spectral pair (LSP) quantization circuit 28 into a conventional LSP index.




Pitch Lag and Gain




Pitch lag and gain are derived from the input speech using a weighted synthesis filter 16 and an adaptive codebook analysis 18. The parameters of pitch lag and gain are made adaptive to the voice of the speaker, as is known in the art. The prediction error between the input speech samples at the output of the perceptual weighting filter 12 and predicted reconstructed speech samples from a weighted synthesis filter 16 is available at the output of adder 14. The perceptual weighting filter 12 attenuates those frequencies where the error is perceptually more important. The role of the weighting filter is to concentrate the coding noise in the formant regions where it is effectively masked by the speech signal. By doing so, the noise at other frequencies can be lowered to reduce the overall perceived noise. Weighted synthesis filter 16 represents the combined effect of the decoder synthesis filter and the perceptual weighting filter 12. Also, in order to set the proper initial conditions at the subframe boundary, a zero input is provided to weighted synthesis filter 16. The adaptive codebook analysis 18 performs predictive analysis by selecting a pitch lag and gain which minimizes the instantaneous energy of the mean squared prediction error.




Innovation Code Index and Gain




The innovation code index and gain are also made adaptive to the voice of the speaker using a second weighted synthesis filter 22 and a ternary codebook analysis 24 containing an encoder ternary codebook of the present invention. The prediction error between the input speech samples at the output of the adder 14 and predicted reconstructed speech samples from the second weighted synthesis filter 22 is available at the output of adder 20. Weighted synthesis filter 22 represents the combined effect of the decoder synthesis filter and the perceptual weighting filter 12, and also subtracts the effect of adaptive pitch lag and gain introduced by weighted synthesis filter 16 to the output of adder 14.




The ternary codebook analysis 24 performs predictive analysis by selecting an innovation sequence which maximizes a given fidelity criterion function. The ternary codebook structure is readily understood from a discussion of CELP decoding.




CELP Decoding




A CELP system decoder is shown in FIG. 2. A digital demultiplexer 32 is coupled to a communication channel 31. The received innovation code index (index i and index j) and associated gain are input to ternary decoder codebook 34. The ternary decoder codebook 34 is comprised of a first binary codebook 36 and a second binary codebook 38. The outputs of the first and second binary codebooks are added together in adder 40 to form a ternary codebook output, which is scaled by the received signed gain in multiplier 42. In general, any two digital codebooks may be added to form a third digital codebook by combining respective codevectors, such as by a summation operation.




To illustrate how a ternary codevector is formed from two binary codevectors, reference is made to FIGS. 6A, 6B and 6C. A first binary codevector, consisting of values {0, 1}, is shown in FIG. 6A. A second binary codevector, consisting of values {−1, 0}, is shown in FIG. 6B. By signed addition in adder 40 of FIG. 2, the two binary codevectors form a ternary codevector, as illustrated in FIG. 6C.
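The signed addition performed in adder 40 can be sketched directly. The codevector values below are illustrative stand-ins, not the entries shown in the patent figures.

```python
def ternary_codevector(cv_a, cv_b):
    """Add a {0, 1} codevector and a {-1, 0} codevector element-wise,
    yielding a ternary codevector over {-1, 0, 1}."""
    assert len(cv_a) == len(cv_b)
    return [a + b for a, b in zip(cv_a, cv_b)]

cv1 = [0, 1, 1, 0, 1]    # first binary codebook entry, values {0, 1}
cv2 = [-1, 0, -1, 0, 0]  # second binary codebook entry, values {-1, 0}
print(ternary_codevector(cv1, cv2))  # [-1, 1, 0, 0, 1]
```

Because each of the 16 × 16 index pairs selects a distinct sum, two small binary codebooks implicitly define a 256-entry ternary codebook without storing it.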




The output of the ternary decoder codebook 34 in FIG. 2 is the desired innovation sequence, or the excitation input to a CELP system. In particular, the innovation sequence from ternary decoder codebook 34 is combined in adder 44 with the output of the adaptive codebook 48 and applied to LPC synthesis filter 46. The result at the output of LPC synthesis filter 46 is the reconstructed speech. As a specific example, if each speech frame is 4 milliseconds and the sampling rate is 8 kHz, then each innovation sequence, or codevector, is 32 samples long.




Optimum Innovation Sequence Selection




The ternary codebook analysis 24 of FIG. 1 is illustrated in further detail by the process flow diagram of FIG. 3. In code excited linear prediction coding, the optimum codevector is found by maximizing the fidelity criterion function,

    MAX_k (x^t F c_k)^2 / ||F c_k||^2        (Equation 1)













where x is the target vector representing the input speech sample (x^t denoting its transpose), F is an N×N matrix with the term in the n-th row and i-th column given by f_{n−i}, and c_k is the k-th codevector in the innovation codebook. Also, ||·||^2 indicates the sum of the squares of the vector components, and is essentially a measure of signal energy content. The truncated impulse response f_n, n=1, 2, . . . N, represents the combined effects of the decoder synthesis filter and the perceptual weighting filter. The computational burden of the CELP encoder comes from the evaluation of the filtered term F c_k and the cross-correlation and auto-correlation terms in the fidelity criterion function.
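Equation 1 can be exercised with a short sketch. Everything here is an illustrative assumption (the target vector, impulse response, and tiny codebook are made up); F is taken to be the lower-triangular convolution matrix built from the truncated impulse response, as the definitions above imply.

```python
def filter_codevector(f, c):
    """Fc: convolve codevector c with truncated impulse response f,
    keeping the first len(c) samples (F is N x N lower triangular)."""
    n = len(c)
    return [sum(f[k] * c[i - k] for k in range(min(i + 1, len(f))))
            for i in range(n)]

def fidelity(x, f, c):
    """Equation 1 score: (x . Fc)^2 / ||Fc||^2."""
    fc = filter_codevector(f, c)
    num = sum(xi * yi for xi, yi in zip(x, fc)) ** 2
    den = sum(yi * yi for yi in fc)
    return num / den

def best_codevector(x, f, codebook):
    """Index k maximizing the fidelity criterion over the codebook."""
    return max(range(len(codebook)), key=lambda k: fidelity(x, f, codebook[k]))

x = [1.0, -1.0, 1.0, 0.0]   # weighted target vector (illustrative)
f = [1.0, 0.5]              # truncated impulse response f_n (illustrative)
codebook = [[1, 0, 0, 0], [1, -1, 1, -1], [0, 0, 1, 1]]
print(best_codevector(x, f, codebook))  # 1
```

The alternating codevector wins because its filtered version correlates best with the alternating target relative to its energy, which is exactly the trade-off the ratio in Equation 1 encodes.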




Let c_k = θ_i + η_j,

    k = 0, 1, . . . K−1
    i = 0, 1, . . . I−1
    j = 0, 1, . . . J−1

and Log_2 K = Log_2 I + Log_2 J, where θ_i and η_j are codevectors from the two binary codebooks. The fidelity criterion function for the codebook search then becomes,










    ψ(i,j) = (x^t F θ_i + x^t F η_j)^2 / (θ_i^t F^t F θ_i + 2 θ_i^t F^t F η_j + η_j^t F^t F η_j)        (Equation 2)
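As a numerical sanity check, with made-up vectors and a made-up lower-triangular filtering matrix, the decomposition in Equation 2 is simply Equation 1 applied to c_k = θ_i + η_j and expanded by the linearity of F:

```python
def mat_vec(F, v):
    """Matrix-vector product Fv."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in F]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

F = [[1.0, 0.0, 0.0],
     [0.5, 1.0, 0.0],
     [0.2, 0.5, 1.0]]      # illustrative lower-triangular filtering matrix
x = [1.0, -2.0, 0.5]       # illustrative target vector
theta = [1.0, 0.0, 1.0]    # codevector from binary codebook 1 (illustrative)
eta = [0.0, -1.0, 0.0]     # codevector from binary codebook 2 (illustrative)

Ft, Fe = mat_vec(F, theta), mat_vec(F, eta)

# Equation 1 applied directly to the combined codevector theta + eta:
Fc = [a + b for a, b in zip(Ft, Fe)]
psi_direct = dot(x, Fc) ** 2 / dot(Fc, Fc)

# Equation 2: the same value from per-codebook terms only.
num = (dot(x, Ft) + dot(x, Fe)) ** 2
den = dot(Ft, Ft) + 2 * dot(Ft, Fe) + dot(Fe, Fe)
psi_decomposed = num / den

assert abs(psi_direct - psi_decomposed) < 1e-12
print(round(psi_decomposed, 6))
```

This identity is what lets the encoder cache the per-codebook filterings and correlations instead of filtering all 256 combined codevectors.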













Search Procedures




There are several ways in which the fidelity criterion function ψ(i,j) may be evaluated.




1. Exhaustive Search.




Finding the maximum ψ(i,j) involves the calculation of F θ_i, F η_j and θ_i^t F^t F η_j, which requires I and J filtering operations and IJ cross-correlations, and of x^t F θ_i, x^t F η_j and ||F θ_i||^2, ||F η_j||^2, which requires I+J cross-correlation and I+J auto-correlation terms.





FIG. 3 illustrates an exhaustive search process for the optimum innovation sequence. All combinations of binary codevectors in binary codebooks 1 and 2 are computed for the fidelity criterion function ψ(i,j). The peak fidelity criterion function ψ(i,j) is selected at step 62, thereby identifying the desired codebook index i and codebook index j.




Binary codebook 1 is selectively coupled to linear filter 50. The output of linear filter 50 is coupled to correlation step 52, which provides a correlation calculation with the target speech vector X, the input speech samples filtered in a perceptual weighting filter. Binary codebook 2 is selectively coupled to linear filter 68. The output of linear filter 68 is coupled to correlation step 72, which provides a correlation calculation with the target speech vector X. The output of correlation step 52 is coupled to one input of adder 66. The output of correlation step 72 is coupled to the other input of adder 66. The output of adder 66 is coupled to a square function 64, which squares the output of the adder 66 to form a value equal to the numerator of the fidelity criterion ψ(i,j) of Equation 2. The linear filters 50 and 68 are each equivalent to the weighted synthesis filter 22 of FIG. 1, and are used only in the process of selecting optimum synthesis parameters. The decoder (FIG. 2) will use the normal synthesis filter.




The output of linear filter 50 is also coupled to a sum of the squares calculation step 54. The output of linear filter 68 is further coupled to a sum of the squares calculation step 70. The sum of the squares is a measure of signal energy content. The outputs of linear filter 50 and linear filter 68 are also input to correlation step 56 to form a cross-correlation term between codebook 1 and codebook 2. The cross-correlation term output of correlation step 56 is multiplied by 2 in multiplier 58. Adder 60 combines the output of multiplier 58, the output of sum of the squares calculation step 54 and the output of sum of the squares calculation step 70 to form a value equal to the denominator of the fidelity criterion ψ(i,j) of Equation 2.




In operation, one of 16 codevectors of binary codebook 1, corresponding to a 4 bit codebook index i, and one of 16 codevectors of binary codebook 2, corresponding to a 4 bit codebook index j, are selected for evaluation in the fidelity criterion. The total number of searches is 16×16, or 256. However, the linear filtering steps 50, 68, the correlation calculations 52, 72 and the sum of the squares calculations 54, 70 need only be performed 32 times (not 256 times), or once for each of the 16 binary codevectors in each of the two codebooks. The results of prior calculations are saved and reused, thereby reducing the time required to perform an exhaustive search. The number of cross-correlation calculations in correlation step 56 is equal to 256, the number of binary vector combinations searched.




The peak selection step 62 receives the numerator of Equation 2 on one input and the denominator of Equation 2 on the other input for each of the 256 searched combinations. Accordingly, the codebook index i and codebook index j corresponding to a peak of the fidelity criterion function ψ(i,j) are identified. The ability to search the ternary codebook 34, which stores 256 ternary codevectors, by searching among only 32 binary codevectors is based on the superposition property of linear filters.




2. Sub-Optimum Search I





FIG. 4 illustrates an alternative search process for the codebook index i and codebook index j corresponding to a desired codebook innovation sequence. This search involves the calculation of Equation 1 for codebook 1 and codebook 2 individually as follows:












    (x^t F θ_i)^2 / ||F θ_i||^2   and   (x^t F η_j)^2 / ||F η_j||^2        (Equation 3)













To search all the codevectors in both codebooks individually, only 16 searches per codebook are needed, and no cross-correlation terms exist. A subset of codevectors (say 5) in each of the two binary codebooks is selected as the most likely candidates. The two subsets that maximize the fidelity criterion functions above are then jointly searched to determine the optimum, as in the exhaustive search of FIG. 3. Thus, for a subset of 5 codevectors in each codebook, only 25 joint searches are needed to exhaustively search all subset combinations.




In FIG. 4, binary codebook 1 is selectively coupled to linear filter 74. The output of linear filter 74 is coupled to a squared correlation step 76, which provides a squared correlation calculation with the target speech vector X. The output of linear filter 74 is also coupled to a sum of the squares calculation step 78. The outputs of the squared correlation step 76 and the sum of the squares calculation step 78 are input to peak selection step 80 to select a candidate subset of codebook 1 vectors.




Binary codebook 2 is selectively coupled to linear filter 84. The output of linear filter 84 is coupled to a squared correlation step 86, which provides a squared correlation calculation with the target speech vector X. The output of linear filter 84 is also coupled to a sum of the squares calculation step 88. The outputs of the squared correlation step 86 and the sum of the squares calculation step 88 are input to peak selection step 90 to select a candidate subset of codebook 2 vectors. In such manner a fidelity criterion function expressed by Equation 3 is carried out in the process of FIG. 4.




After the candidate subsets are determined, an exhaustive search as illustrated in FIG. 3 is performed using the candidate subsets as the input codevectors. In the present example, 25 searches are needed for an exhaustive search of the candidate subsets, as compared to 256 searches for the full binary codebooks. In addition, filtering and auto-correlation terms from the first calculation of the optimum binary codevector subsets are available for reuse in the subsequent exhaustive search of the candidate subsets.
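A sketch of this two-stage procedure, with illustrative random codebooks and a subset size of 5 as in the example above (all data are made-up stand-ins):

```python
import random

random.seed(1)
N, SUBSET = 8, 5
f = [1.0, 0.5]                      # truncated impulse response (illustrative)
book1 = [[random.choice([0, 1]) for _ in range(N)] for _ in range(16)]
book2 = [[random.choice([-1, 0]) for _ in range(N)] for _ in range(16)]
x = [random.uniform(-1.0, 1.0) for _ in range(N)]

def filt(c):
    return [sum(f[k] * c[i - k] for k in range(min(i + 1, len(f))))
            for i in range(N)]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def eq3(v):
    """Equation 3 score of a filtered codevector against the target."""
    e = dot(v, v)
    return dot(x, v) ** 2 / e if e > 1e-12 else 0.0

F1, F2 = [filt(c) for c in book1], [filt(c) for c in book2]

# Stage 1: rank each codebook individually, keep the best 5 from each.
top1 = sorted(range(16), key=lambda i: eq3(F1[i]), reverse=True)[:SUBSET]
top2 = sorted(range(16), key=lambda j: eq3(F2[j]), reverse=True)[:SUBSET]

# Stage 2: joint Equation 2 search over the 5 x 5 = 25 candidate pairs.
best, best_ij = -1.0, None
for i in top1:
    for j in top2:
        num = (dot(x, F1[i]) + dot(x, F2[j])) ** 2
        den = (dot(F1[i], F1[i]) + 2 * dot(F1[i], F2[j])
               + dot(F2[j], F2[j]))
        if den > 1e-12 and num / den > best:
            best, best_ij = num / den, (i, j)
print(best_ij)
```

The result is sub-optimum only because the jointly best pair may fall outside the two individually ranked subsets; the filtered vectors cached in stage 1 are reused verbatim in stage 2.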




3. Sub-Optimum Search II





FIG. 5 illustrates yet another alternative search process for the codebook index i and codebook index j corresponding to a desired codebook innovation sequence. This search evaluates each of the binary codevectors individually in both codebooks, using the same fidelity criterion function as given in Equation 3, to find the one binary codevector having the maximum value of the fidelity criterion function. The maximum binary codevector, which may be found in either codebook (binary codebook 1 or binary codebook 2), is then exhaustively searched in combination with each binary codevector in the other binary codebook (binary codebook 2 or binary codebook 1) to maximize the fidelity criterion function ψ(i,j).




In FIG. 5, binary codebooks 1 and 2 are treated as a single set of binary codevectors, as schematically represented by a data bus 93 and selection switches 94 and 104.




That is, each binary codevector of binary codebook 1 and binary codebook 2 is selectively coupled to linear filter 96. The output of linear filter 96 is coupled to a squared correlation step 98, which provides a squared correlation calculation with the target speech vector X. The output of linear filter 96 is also coupled to a sum of the squares calculation step 100. The outputs of the squared correlation step 98 and the sum of the squares calculation step 100 are input to peak selection step 102 to select a single optimum codevector from codebook 1 and codebook 2. A total of 32 searches is required, and no cross-correlation terms are needed.




Having found the optimum binary codevector from codebook 1 and codebook 2, an exhaustive search for the optimum combination of binary codevectors 106 (as illustrated in FIG. 3) is performed using the single optimum codevector found as one set of the input codevectors. In addition, instead of exhaustively searching both codebooks, switch 104, under the control of the peak selection step 102, selects the codevectors from the binary codebook which does not contain the single optimum codevector found by peak selection step 102. In other words, if binary codebook 2 contains the optimum binary codevector, then switch 104 selects the set of binary codevectors from binary codebook 1 for the exhaustive search 106, and vice versa. In such manner, only 16 exhaustive searches need be performed. As before, filtering and auto-correlation terms from the first calculation of the single optimum codevector from codebook 1 and codebook 2 are available for reuse in the subsequent exhaustive search step 106. The output of the search step is the codebook index i and codebook index j representing the ternary innovation sequence for the current frame of speech.
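A sketch of this second sub-optimum procedure under the same kind of illustrative setup (random stand-in codebooks and a short made-up impulse response):

```python
import random

random.seed(2)
N = 8
f = [1.0, 0.4]                      # truncated impulse response (illustrative)
book1 = [[random.choice([0, 1]) for _ in range(N)] for _ in range(16)]
book2 = [[random.choice([-1, 0]) for _ in range(N)] for _ in range(16)]
x = [random.uniform(-1.0, 1.0) for _ in range(N)]

def filt(c):
    return [sum(f[k] * c[i - k] for k in range(min(i + 1, len(f))))
            for i in range(N)]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def eq3(v):
    e = dot(v, v)
    return dot(x, v) ** 2 / e if e > 1e-12 else 0.0

F1, F2 = [filt(c) for c in book1], [filt(c) for c in book2]

# Stage 1: 32 individual Equation 3 searches across both codebooks.
pool = [(eq3(v), 0, i) for i, v in enumerate(F1)] + \
       [(eq3(v), 1, j) for j, v in enumerate(F2)]
_, which_book, fixed = max(pool)

# Stage 2: hold the winner fixed and search the 16 codevectors of the
# other codebook for the best joint Equation 2 score -- 16 searches.
fixed_v = (F1 if which_book == 0 else F2)[fixed]
others = F2 if which_book == 0 else F1
best, best_k = -1.0, None
for k, v in enumerate(others):
    num = (dot(x, fixed_v) + dot(x, v)) ** 2
    den = dot(fixed_v, fixed_v) + 2 * dot(fixed_v, v) + dot(v, v)
    if den > 1e-12 and num / den > best:
        best, best_k = num / den, k
print(which_book, fixed, best_k)
```

Total work is 32 + 16 = 48 evaluations instead of 256, at the cost of committing early to one half of the pair.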




Overlapping Codebook Structures




For any of the foregoing search strategies, the calculation of F θ_i and F η_j can be further accelerated by using an overlapping codebook structure as indicated in cited U.S. Pat. No. 4,797,925 to the present inventor. That is, the codebook structure has adjacent codevectors which have a subset of elements in common. An example of such a structure is the following two codevectors:






    θ_L^t = (g_L, g_{L+1}, . . . g_{L+N−1})

    θ_{L+1}^t = (g_{L+1}, g_{L+2}, . . . , g_{L+N})






Other overlapping structures, in which the starting positions of the codevectors are shifted by more than one sample, are also possible. With the overlapping structure, the filtering operations F θ_i and F η_j can be accomplished by a procedure using recursive endpoint correction, in which the filter response to each succeeding code sequence is generated from the filter response to the preceding code sequence by subtracting the filter response to the first sample g_L and appending the filter response to the added sample g_{L+N}. In such manner, except for the first codevector, the filter response to each successive codevector can be calculated using only one additional sample.
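The recursive endpoint correction can be sketched as follows. The overlapping sequence g, the impulse response, and the codevector length are illustrative; the sketch derives each successive filter response from its predecessor by removing the contribution of the dropped sample g_L and computing only the one newly exposed output sample, then checks the result against direct filtering.

```python
N = 6                                            # codevector length (illustrative)
g = [0.3, -0.5, 1.0, 0.2, -0.8, 0.4, 0.9, -0.1]  # overlapping sequence (illustrative)
f = [1.0, 0.5, 0.25]                             # truncated impulse response

def f_at(n):
    """f_n with the truncation made explicit (zero beyond len(f))."""
    return f[n] if n < len(f) else 0.0

def filt(c):
    """Direct computation of y = Fc, for reference."""
    return [sum(f_at(k) * c[i - k] for k in range(i + 1)) for i in range(N)]

def next_response(y_prev, L):
    """Response to theta_{L+1} from the response to theta_L:
    shift, subtract the ringing of the dropped sample g_L, and
    append the one output sample introduced by g_{L+N}."""
    y = [y_prev[n + 1] - f_at(n + 1) * g[L] for n in range(N - 1)]
    y.append(sum(f_at(k) * g[L + N - k] for k in range(N)))
    return y

y = filt(g[0:N])            # response to theta_0, filtered directly once
for L in range(2):          # responses to theta_1 and theta_2, recursively
    y = next_response(y, L)
    assert all(abs(a - b) < 1e-12
               for a, b in zip(y, filt(g[L + 1:L + 1 + N])))
print("recursive update matches direct filtering")
```

Each update costs O(N) instead of the O(N·len(f)) of filtering from scratch, which is the acceleration the overlapping structure buys during the codebook search.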



Claims
  • 1. A receiver for use in decoding a received encoded speech signal, the received encoded speech signal encoded using coded excitation linear prediction, the receiver comprising:means for receiving the encoded speech signals, the encoded speech signal comprising a code index, a pitch lag index and a line spectral pair index; means for producing an innovation sequence by selecting a code from each of a plurality of codebooks based on the code index and combining the selected codes as the innovation sequence; means for determining a line spectral pair quantization of a speech signal using the line spectral pair index; means for determining a pitch lag of the speech signal using the pitch lag index; means for reconstructing a speech signal using the produced innovation sequence, the determined line spectral pair quantization and pitch lag.
  • 2. The receiver of claim 1 wherein the code index includes a gain index and the innovation sequence is adjusted by a gain identified by the gain index.
  • 3. The receiver of claim 1 wherein the pitch lag index includes an associated gain index.
  • 4. The receiver of claim 1 wherein the plurality of codebooks is two codebooks.
  • 5. The receiver of claim 4 wherein the code index comprises a first index representing a first code from one of the two codebooks and a second index representing a second code of another of the two codebooks.
  • 6. The receiver of claim 5 wherein the first and second codes are added as the produced innovation sequence.
  • 7. The receiver of claim 5 wherein the first and second codes are binary sequences and the produced innovation sequence is a ternary sequence.
  • 8. The receiver of claim 5 wherein a possible number of produced innovation sequences is 2^M and a number of codes in each of the two codebooks is 2^(M/2), when M is an even integer.
  • 9. The receiver of claim 8 wherein the possible number of produced innovation sequences is 256 and the number of codes in each of the two codebooks is 16.
  • 10. A receiver for use in decoding a received encoded speech signal, the received encoded speech signal encoded using code excitation linear prediction, the receiver comprising:an input configured to receive the encoded speech signal, the encoded speech signal comprising a code index, a pitch lag index and a line spectral pair index; a plurality of codebooks for use in producing an innovation sequence, the code index of the encoded speech signal is used to select a code from each of the plurality of codebooks; an adder for combining the selected codes as the innovation sequence; an adaptive codebook for determining a pitch lag of the speech signal using the pitch lag index; and a linear predictive coding synthesis filter using the line spectral pair index, the determined pitch lag and innovation sequence to reconstruct a speech signal.
  • 11. The receiver of claim 10 wherein the code index includes a gain index and the innovation sequence is adjusted by a gain identified by the gain index.
  • 12. The receiver of claim 10 wherein the pitch lag index includes an associated gain index.
  • 13. The receiver of claim 10 wherein the plurality of codebooks is two codebooks.
  • 14. The receiver of claim 13 wherein the code index comprises a first index representing a first code from one of the two codebooks and a second index representing a second code of another of the two codebooks.
  • 15. The receiver of claim 14 wherein the first and second codes are added as the produced innovation sequence.
  • 16. The receiver of claim 14 wherein the first and second codes are binary sequences and the produced innovation sequence is a ternary sequence.
  • 17. The receiver of claim 14 wherein a possible number of produced innovation sequences is 2^M and a number of codes in each of the two codebooks is 2^(M/2), when M is an even integer.
  • 18. The receiver of claim 17 wherein the possible number of produced innovation sequences is 256 and the number of codes in each of the two codebooks is 16.
Parent Case Info

This application is a continuation of U.S. patent application Ser. No. 09/711,252, filed Nov. 13, 2000, now U.S. Pat. No. 6,389,388, which is a continuation of U.S. patent application Ser. No. 08/734,356, filed Oct. 21, 1996, issued on May 29, 2001 as U.S. Pat. No. 6,240,382, which is a continuation of U.S. patent application Ser. No. 08/166,223, filed Dec. 14, 1993, issued on Apr. 15, 1997 as U.S. Pat. No. 5,621,852.

US Referenced Citations (16)
Number Name Date Kind
4220819 Atal Sep 1980 A
4797925 Lin Jan 1989 A
4817157 Gerson Mar 1989 A
5271089 Ozawa Dec 1993 A
5274741 Taniguchi et al. Dec 1993 A
5353373 Drogo de Iacovo et al. Oct 1994 A
5371853 Kao et al. Dec 1994 A
5451951 Elliot et al. Sep 1995 A
5621852 Lin Apr 1997 A
5657418 Gerson et al. Aug 1997 A
5787390 Quinquis et al. Jul 1998 A
5845244 Proust Dec 1998 A
6148282 Paksoy et al. Nov 2000 A
6161086 Mukherjee et al. Dec 2000 A
6240382 Lin May 2001 B1
6389388 Lin May 2002 B1
Non-Patent Literature Citations (9)
Entry
Moncet and Kabal, "Codeword Selection for CELP Coders", INRS-Telecommunications Technical Report No. 87-35 (Jul. 1987), pp. 1-22.
Davidson and Gersho, "Complexity Reduction Methods for Vector Excitation Coding", IEEE-IECEJ-ASJ International Conference on Acoustics, Speech and Signal Processing, vol. 4, Apr. 7, 1986, p. 3055.
Atal, "Predictive Coding of Speech at Low Bit Rates", IEEE Transactions on Communications, vol. COM-30, No. 4 (Apr. 1982), p. 600.
Trancoso and Atal, "Efficient Procedures for Finding the Optimum Innovation Sequence in Stochastic Coders", IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 4, Apr. 7, 1986, p. 2375.
Schroeder et al., "Stochastic Coding at Very Low Bit Rates: The Importance of Speech Perception", Speech Communication 4 (1985), North Holland, p. 155.
Schroeder et al., "Code Excited Linear Prediction (CELP): High Quality Speech at Very Low Bit Rates", IEEE 1985, p. 937.
Schroeder, "Linear Predictive Coding of Speech: Review and Current Directions", IEEE Communications Magazine, vol. 23, No. 8, Aug. 1985, p. 54.
Miyano et al., "Improved 4.8 kb/s CELP Coding Using Two-Stage Vector Quantization with Multiple Candidates (LCELP)", ICASSP 1992: Acoustics, Speech and Signal Processing Conf., Sep. 1992, pp. 321-324.
Casajús Quirós et al., "Analysis and Quantization Procedures for a Real-Time Implementation of a 4.8 kb/s CELP Coder", ICASSP 1990: Acoustics, Speech and Signal Processing Conf., Feb. 1990, pp. 609-612.
Continuations (3)
Number Date Country
Parent 09/711252 Nov 2000 US
Child 10/082412 US
Parent 08/734356 Oct 1996 US
Child 09/711252 US
Parent 08/166223 Dec 1993 US
Child 08/734356 US