Vector quantization method and speech encoding method and apparatus

Information

  • Patent Grant
  • 6611800
  • Patent Number
    6,611,800
  • Date Filed
    Thursday, September 11, 1997
  • Date Issued
    Tuesday, August 26, 2003
Abstract
The processing volume for codebook search in vector quantization is reduced by sending data representing an envelope of the spectral components of the harmonics from a spectrum evaluation unit 148 of a sinusoidal analytic encoder 114 to a vector quantizer 116 for vector quantization, where the degree of similarity between an input vector and all code vectors stored in the codebook is found by approximation in order to pre-select a smaller number of code vectors. From these pre-selected code vectors, the code vector minimizing the error with respect to the input vector is ultimately selected. In this manner, a small number of candidate code vectors are pre-selected by simplified processing and then subjected to high-precision ultimate selection.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




This invention relates to a vector quantization method in which an input vector is compared to code vectors stored in a codebook for outputting an index of an optimum one of the code vectors. The present invention also relates to a speech encoding method and apparatus in which an input speech signal is divided in terms of a pre-set encoding unit, such as a block or a frame, and encoding processing including vector quantization is carried out on the encoding unit basis.




2. Description of the Related Art




There has hitherto been known vector quantization in which, for digitizing and compression-encoding audio or video signals, a plurality of input data are grouped together into a vector for representation as a sole code (index).




In such vector quantization, representative patterns of a variety of input vectors are previously determined by, for example, learning, and given codes or indices, which are then stored in a codebook. The input vector is then compared to the respective patterns (code vectors) by way of pattern matching for outputting the code of the pattern bearing the strongest similarity or correlation. This similarity or correlation is found by calculating the distortion measure or an error energy between the input vector and the respective code vectors and becomes higher as the distortion or error becomes smaller.
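By way of a non-limiting illustration outside the original disclosure, such pattern matching over a codebook may be sketched in Python/NumPy as follows; the array names, shapes and the squared-error distortion are illustrative assumptions only.

```python
import numpy as np

def full_search(x, codebook):
    """Return the index of the code vector closest to input vector x.

    x        : (M,) input vector
    codebook : (N, M) array, one code vector per row
    Distortion is the squared error ||x - c_n||^2; the most similar
    code vector is the one with the smallest distortion.
    """
    errors = np.sum((codebook - x) ** 2, axis=1)  # distortion per code vector
    return int(np.argmin(errors))

# Example: 256 code vectors of dimension 44
rng = np.random.default_rng(0)
cb = rng.standard_normal((256, 44))
x = rng.standard_normal(44)
print(full_search(x, cb))
```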




There have hitherto been known a variety of encoding methods that exploit statistical properties of the signals in the time domain or frequency domain and psychoacoustic properties of human hearing for signal compression. These encoding methods are roughly classified into encoding in the time domain, encoding in the frequency domain and analysis-by-synthesis encoding.




Examples of high-efficiency encoding of speech signals include sinusoidal analytic encoding, such as harmonic encoding, as well as sub-band coding (SBC), linear predictive coding (LPC), and transform-based coding using the discrete cosine transform (DCT), modified DCT (MDCT) or fast Fourier transform (FFT).




In high-efficiency encoding of the speech signals, the above-mentioned vector quantization is used for parameters such as spectral components of the harmonics.




Meanwhile, if the number of patterns stored in the codebook, that is, the number of code vectors, is large, or if the vector quantizer is of a multi-stage configuration made up of plural codebooks combined together, the number of code vector search operations for pattern matching increases, and with it the processing volume. In particular, if plural codebooks are combined together, the similarity must be computed for a number of combinations equal to the product of the numbers of code vectors in the respective codebooks, thereby increasing the codebook search processing volume significantly.




SUMMARY OF THE INVENTION




It is therefore an object of the present invention to provide a vector quantization method, a speech encoding method and a speech encoding apparatus capable of suppressing the codebook search processing volume.




For accomplishing the above object, the present invention provides a vector quantization method including a step of finding the degree of similarity between an input vector to be vector quantized and all code vectors stored in a codebook by approximation for pre-selecting plural code vectors bearing a high degree of similarity and a step of ultimately selecting one of the plural pre-selected code vectors that minimizes an error with respect to the input vector.




By executing ultimate selection after the pre-selection, a smaller number of candidate code vectors are selected by pre-selection involving simplified processing and subjected to ultimate selection of high precision to reduce the processing volume for codebook searching.
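A hedged sketch of this two-step search follows: a cheap approximate similarity pre-selects a few candidates, and an exact error comparison decides among them. The candidate count n_pre and the particular similarity (inner product divided by the code-vector norm, as suggested below) are assumptions for illustration, not the patented procedure itself.

```python
import numpy as np

def preselect_then_search(x, codebook, n_pre=8):
    """Two-stage codebook search: cheap pre-selection, exact final pass.

    Pre-selection ranks all code vectors by an approximate similarity
    (inner product divided by code-vector norm); only the n_pre best
    candidates are then compared by exact squared error.
    """
    norms = np.linalg.norm(codebook, axis=1) + 1e-12
    similarity = codebook @ x / norms             # approximate similarity
    candidates = np.argsort(similarity)[-n_pre:]  # keep the n_pre best
    errors = np.sum((codebook[candidates] - x) ** 2, axis=1)
    return int(candidates[np.argmin(errors)])

rng = np.random.default_rng(1)
cb = rng.standard_normal((1024, 44))
x = rng.standard_normal(44)
print(preselect_then_search(x, cb))
```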




The codebook may be constituted by plural codebooks, from each of which plural code vectors representing an optimum combination can be selected. The degree of similarity may be an inner product of the input vector and the code vector, optionally divided by a norm or a weighted norm of each code vector.




The present invention also provides a speech encoding method in which an input speech signal or short-term prediction residuals thereof are analyzed by sinusoidal analysis to find spectral components of the harmonics and in which parameters derived from the encoding-unit-based spectral components of the harmonics, as the input vector, are vector quantized for encoding. In the vector quantization, the degree of similarity between the input vector and all code vectors stored in a codebook is found by approximation for pre-selecting a smaller plural number of the code vectors having a high degree of similarity, and one of these pre-selected code vectors which minimizes an error with respect to the input vector is selected ultimately.




The degree of similarity may be an optionally weighted inner product between the input vector and the code vector optionally divided by a norm or a weighted norm of each code vector. For weighting the norm, a weight having a concentrated energy towards the low frequency range and a decreasing energy towards the high frequency range may be used. Thus, the degree of similarity can be found by dividing the weighted inner product of the code vector by the weighted code vector norm.
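A minimal sketch of such a similarity measure, assuming a weight vector w with energy concentrated towards the low frequency range as described above (the weight values and shapes are illustrative assumptions):

```python
import numpy as np

def weighted_similarity(x, codebook, w):
    """Similarity = (weighted inner product) / (weighted norm of code vector).

    w emphasizes the low frequency range, e.g. monotonically
    decreasing towards high frequencies, as suggested in the text.
    """
    wx = w * x
    wcb = codebook * w                        # weight each code vector
    num = wcb @ wx                            # weighted inner products
    den = np.linalg.norm(wcb, axis=1) + 1e-12 # weighted code-vector norms
    return num / den

rng = np.random.default_rng(2)
cb = rng.standard_normal((64, 44))
x = rng.standard_normal(44)
w = np.linspace(1.0, 0.2, 44)                 # energy concentrated at low end
print(int(np.argmax(weighted_similarity(x, cb, w))))
```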




The present invention is also directed to a speech encoding device for carrying out the speech encoding method.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1

is a block diagram showing a basic structure of a speech signal encoding apparatus (encoder) for carrying out the encoding method according to the present invention.





FIG. 2

is a block diagram showing a basic structure of a speech signal decoding apparatus (decoder) for carrying out the decoding method according to the present invention.





FIG. 3

is a block diagram showing a more detailed structure of the speech signal encoder shown in FIG. 1.





FIG. 4

is a block diagram showing a more detailed structure of the speech signal decoder shown in FIG. 2.





FIG. 5

is a table showing bit rates of the speech signal encoding device.





FIG. 6

is a block diagram showing a more detailed structure of the LSP quantizer.





FIG. 7

is a block diagram showing a basic structure of the LSP quantizer.





FIG. 8

is a block diagram showing a basic structure of the vector quantizer.





FIG. 9

is a block diagram showing a more detailed structure of the vector quantizer.





FIG. 10

is a graph illustrating a specified example of the weight value of W[i] for weighting.





FIG. 11

is a table showing the relation between the quantization values, number of dimensions and the numbers of bits.





FIG. 12

is a block circuit diagram showing an illustrative structure of a vector quantizer for variable-dimension codebook retrieval.





FIG. 13

is a block circuit diagram showing another illustrative structure of a vector quantizer for variable-dimension codebook retrieval.





FIG. 14

is a block circuit diagram showing a first illustrative structure of a vector quantizer employing a codebook for variable dimension and a codebook for fixed dimension.





FIG. 15

is a block circuit diagram showing a second illustrative structure of a vector quantizer employing a codebook for variable dimension and a codebook for fixed dimension.





FIG. 16

is a block circuit diagram showing a third illustrative structure of a vector quantizer employing a codebook for variable dimension and a codebook for fixed dimension.





FIG. 17

is a block circuit diagram showing a fourth illustrative structure of a vector quantizer employing a codebook for variable dimension and a codebook for fixed dimension.





FIG. 18

is a block circuit diagram showing a specified structure of a CELP encoding portion (second encoder) of the speech encoding device according to the present invention.





FIG. 19

is a flowchart showing processing flow in the arrangement shown in FIG. 16.





FIGS. 20A and 20B

show the state of the Gaussian noise and the noise after clipping at different threshold values.





FIG. 21

is a flowchart showing processing flow at the time of generating a shape codebook by learning.





FIG. 22

is a table showing the state of LSP switching depending on the U/UV transitions.





FIG. 23

shows 10-order linear spectral pairs (LSPs) based on the α-parameters obtained by the 10-order LPC analysis.





FIG. 24

illustrates the state of gain change from an unvoiced (UV) frame to a voiced (V) frame.





FIG. 25

illustrates the interpolating operation for the waveform or spectral components synthesized from frame to frame.





FIG. 26

illustrates an overlapping at a junction portion between the voiced (V) frame and the unvoiced (UV) frame.





FIG. 27

illustrates noise addition processing at the time of synthesis of voiced speech.





FIG. 28

illustrates an example of amplitude calculation of the noise added at the time of synthesis of voiced speech.





FIG. 29

illustrates an illustrative structure of a post filter.





FIG. 30

illustrates the period of updating of the filter coefficients and the gain updating period of a post filter.





FIG. 31

illustrates the processing for merging at a frame boundary portion of the gain and filter coefficients of the post filter.





FIG. 32

is a block diagram showing a structure of a transmitting side of a portable terminal employing a speech signal encoding device embodying the present invention.





FIG. 33

is a block diagram showing a structure of a receiving side of a portable terminal employing a speech signal decoding device embodying the present invention.











DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS




Referring to the drawings, preferred embodiments of the present invention will be explained in detail.





FIG. 1

shows the basic structure of an encoding apparatus (encoder) for carrying out a speech encoding method according to the present invention.




The basic concept underlying the speech signal encoder of FIG. 1 is that the encoder has a first encoding unit 110 for finding short-term prediction residuals, such as linear prediction encoding (LPC) residuals, of the input speech signal, in order to effect sinusoidal analysis, such as harmonic coding, and a second encoding unit 120 for encoding the input speech signal by waveform encoding having phase reproducibility, and that the first encoding unit 110 and the second encoding unit 120 are used for encoding the voiced (V) speech of the input signal and for encoding the unvoiced (UV) portion of the input signal, respectively.




The first encoding unit 110 employs a constitution of encoding, for example, the LPC residuals with sinusoidal analytic encoding, such as harmonic encoding or multi-band excitation (MBE) encoding. The second encoding unit 120 employs a constitution of carrying out code excited linear prediction (CELP) using vector quantization by closed-loop search of an optimum vector, also using, for example, an analysis-by-synthesis method.




In the embodiment shown in FIG. 1, the speech signal supplied to an input terminal 101 is sent to an LPC inverted filter 111 and an LPC analysis and quantization unit 113 of the first encoding unit 110. The LPC coefficients, or so-called α-parameters, obtained by the LPC analysis and quantization unit 113, are sent to the LPC inverted filter 111 of the first encoding unit 110. From the LPC inverted filter 111 are taken out linear prediction residuals (LPC residuals) of the input speech signal. From the LPC analysis and quantization unit 113, a quantized output of linear spectrum pairs (LSPs) is taken out and sent to an output terminal 102, as later explained. The LPC residuals from the LPC inverted filter 111 are sent to a sinusoidal analytic encoding unit 114. The sinusoidal analytic encoding unit 114 performs pitch detection and calculation of the amplitudes of the spectral envelope, as well as V/UV discrimination by a V/UV discrimination unit 115. The spectral envelope amplitude data from the sinusoidal analytic encoding unit 114 are sent to a vector quantization unit 116. The codebook index from the vector quantization unit 116, as a vector-quantized output of the spectral envelope, is sent via a switch 117 to an output terminal 103, while an output of the sinusoidal analytic encoding unit 114 is sent via a switch 118 to an output terminal 104. A V/UV discrimination output of the V/UV discrimination unit 115 is sent to an output terminal 105 and, as a control signal, to the switches 117, 118. If the input speech signal is a voiced (V) sound, the index and the pitch are selected and taken out at the output terminals 103, 104, respectively.




The second encoding unit 120 of FIG. 1 has, in the present embodiment, a code excited linear prediction (CELP) coding configuration, and vector-quantizes the time-domain waveform using a closed-loop search employing an analysis-by-synthesis method, in which an output of a noise codebook 121 is synthesized by a weighted synthesis filter, the resulting weighted speech is sent to a subtractor 123, an error between the weighted speech and the speech signal supplied to the input terminal 101 and thence passed through a perceptually weighting filter 125 is taken out, the error thus found is sent to a distance calculation circuit 124 to effect distance calculations, and a vector minimizing the error is searched in the noise codebook 121. This CELP encoding is used for encoding the unvoiced speech portion, as explained previously. The codebook index, as the UV data from the noise codebook 121, is taken out at an output terminal 107 via a switch 127 which is turned on when the result of the V/UV discrimination is unvoiced (UV).





FIG. 2 is a block diagram showing the basic structure of a speech signal decoder, as a counterpart device of the speech signal encoder of FIG. 1, for carrying out the speech decoding method according to the present invention.




Referring to FIG. 2, a codebook index as a quantization output of the linear spectral pairs (LSPs) from the output terminal 102 of FIG. 1 is supplied to an input terminal 202. Outputs of the output terminals 103, 104 and 105 of FIG. 1, that is, the index data as envelope quantization output, the pitch and the V/UV discrimination output, are supplied to input terminals 203 to 205, respectively. The index data for the unvoiced data are supplied from the output terminal 107 of FIG. 1 to an input terminal 207.




The index as the envelope quantization output of the input terminal 203 is sent to an inverse vector quantization unit 212 for inverse vector quantization to find a spectral envelope of the LPC residuals, which is sent to a voiced speech synthesizer 211. The voiced speech synthesizer 211 synthesizes the linear prediction encoding (LPC) residuals of the voiced speech portion by sinusoidal synthesis. The synthesizer 211 is also fed with the pitch and the V/UV discrimination output from the input terminals 204, 205. The LPC residuals of the voiced speech from the voiced speech synthesis unit 211 are sent to an LPC synthesis filter 214. The index data of the UV data from the input terminal 207 are sent to an unvoiced sound synthesis unit 220, where reference is had to the noise codebook for taking out the LPC residuals of the unvoiced portion. These LPC residuals are also sent to the LPC synthesis filter 214. In the LPC synthesis filter 214, the LPC residuals of the voiced portion and the LPC residuals of the unvoiced portion are processed by LPC synthesis. Alternatively, the LPC residuals of the voiced portion and the LPC residuals of the unvoiced portion summed together may be processed with LPC synthesis. The LSP index data from the input terminal 202 are sent to an LPC parameter reproducing unit 213, where α-parameters of the LPC are taken out and sent to the LPC synthesis filter 214. The speech signals synthesized by the LPC synthesis filter 214 are taken out at an output terminal 201.




Referring to FIG. 3, a more detailed structure of the speech signal encoder shown in FIG. 1 is now explained. In FIG. 3, the parts or components similar to those shown in FIG. 1 are denoted by the same reference numerals.




In the speech signal encoder shown in FIG. 3, the speech signals supplied to the input terminal 101 are filtered by a high-pass filter (HPF) 109 for removing signals of an unneeded range and thence supplied to an LPC (linear prediction encoding) analysis circuit 132 of the LPC analysis/quantization unit 113 and to the inverted LPC filter 111.




The LPC analysis circuit 132 of the LPC analysis/quantization unit 113 applies a Hamming window, with a length of the input signal waveform on the order of 256 samples as a block, and finds linear prediction coefficients, that is, so-called α-parameters, by the autocorrelation method. The framing interval as a data outputting unit is set to approximately 160 samples. If the sampling frequency fs is 8 kHz, for example, a one-frame interval is 20 msec, or 160 samples.
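As a hedged illustration of this analysis step, the following sketch applies a Hamming window and derives the α-parameters by the autocorrelation (Levinson-Durbin) method; the routine is a standard stand-in, not taken from the patent itself.

```python
import numpy as np

def lpc_alpha(frame, order=10):
    """alpha-parameters by the autocorrelation (Levinson-Durbin) method."""
    w = frame * np.hamming(len(frame))
    r = np.array([np.dot(w[:len(w) - k], w[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                     # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)               # residual prediction error
    return a[1:]                           # alpha_1 .. alpha_P

# 256-sample block, as in the text (e.g. fs = 8 kHz)
rng = np.random.default_rng(3)
print(lpc_alpha(rng.standard_normal(256)).shape)
```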




The α-parameters from the LPC analysis circuit 132 are sent to an α-LSP conversion circuit 133 for conversion into line spectrum pair (LSP) parameters. This converts the α-parameters, found as direct-type filter coefficients, into, for example, ten, that is five pairs of, LSP parameters. This conversion is carried out by, for example, the Newton-Raphson method. The reason the α-parameters are converted into the LSP parameters is that the LSP parameters are superior in interpolation characteristics to the α-parameters.
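The α-to-LSP conversion may be sketched as follows, using polynomial root finding in place of the Newton-Raphson search named in the text (an illustrative stand-in, not the disclosed procedure; it assumes a minimum-phase A(z), as produced by stable LPC analysis):

```python
import numpy as np

def alpha_to_lsp(alpha):
    """Convert alpha-parameters to LSP frequencies (radians, ascending).

    Forms the symmetric/antisymmetric polynomials P(z), Q(z) whose roots
    lie on the unit circle; their angles interleave and are the LSPs.
    """
    a = np.concatenate(([1.0], alpha, [0.0]))   # A(z) padded by one zero
    p_poly = a + a[::-1]                        # symmetric polynomial
    q_poly = a - a[::-1]                        # antisymmetric polynomial
    angles = []
    for poly in (p_poly, q_poly):
        for z in np.roots(poly):
            th = np.angle(z)
            if 1e-6 < th < np.pi - 1e-6:        # drop the roots at z = +/-1
                angles.append(th)
    return np.sort(angles)                      # 10 LSPs for 10-order LPC

# Demo with a guaranteed minimum-phase A(z)
r = 0.9 * np.exp(1j * np.linspace(0.3, 2.8, 5))
a_full = np.poly(np.concatenate([r, r.conj()])).real
print(alpha_to_lsp(a_full[1:]))
```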




The LSP parameters from the α-LSP conversion circuit 133 are matrix- or vector-quantized by the LSP quantizer 134. It is possible to take a frame-to-frame difference prior to vector quantization, or to collect plural frames in order to perform matrix quantization. In the present case, two frames, each 20 msec long, of the LSP parameters, calculated every 20 msec, are handled together and processed with matrix quantization and vector quantization.




The quantized output of the quantizer 134, that is, the index data of the LSP quantization, is taken out at a terminal 102, while the quantized LSP vector is sent to an LSP interpolation circuit 136.




The LSP interpolation circuit 136 interpolates the LSP vectors, quantized every 20 msec or 40 msec, in order to provide an octatuple rate. That is, the LSP vector is updated every 2.5 msec. The reason is that, if the residual waveform is processed with analysis/synthesis by the harmonic encoding/decoding method, the envelope of the synthesized waveform is extremely smooth, so that, if the LPC coefficients are changed abruptly every 20 msec, a foreign noise is likely to be produced. Such foreign noise can be prevented if the LPC coefficients are changed gradually every 2.5 msec.
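A minimal sketch of this interpolation, assuming 20-msec frames interpolated to an octatuple (2.5-msec) rate; linear interpolation between consecutive quantized LSP vectors is an assumption for illustration:

```python
import numpy as np

def interpolate_lsp(lsp_prev, lsp_cur, steps=8):
    """Linearly interpolate LSP vectors to an octatuple rate.

    With 20-msec frames, steps=8 yields one LSP set every 2.5 msec,
    so the synthesis filter coefficients change gradually.
    """
    t = np.arange(1, steps + 1) / steps           # 1/8, 2/8, ..., 1
    return (1 - t)[:, None] * lsp_prev + t[:, None] * lsp_cur

lsp0 = np.sort(np.random.default_rng(4).uniform(0, np.pi, 10))
lsp1 = np.sort(np.random.default_rng(5).uniform(0, np.pi, 10))
print(interpolate_lsp(lsp0, lsp1).shape)          # (8, 10)
```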




For inverted filtering of the input speech using the interpolated LSP vectors produced every 2.5 msec, the LSP parameters are converted by an LSP-to-α conversion circuit 137 into α-parameters, which are filter coefficients of, for example, a ten-order direct-type filter. An output of the LSP-to-α conversion circuit 137 is sent to the LPC inverted filter circuit 111, which then performs inverse filtering for producing a smooth output using α-parameters updated every 2.5 msec. An output of the inverse LPC filter 111 is sent to an orthogonal transform circuit 145, such as a DCT circuit, of the sinusoidal analysis encoding unit 114, such as a harmonic encoding circuit.




The α-parameters from the LPC analysis circuit 132 of the LPC analysis/quantization unit 113 are sent to a perceptual weighting filter calculating circuit 139, where data for perceptual weighting are found. These weighting data are sent to the perceptual weighting vector quantizer 116, the perceptual weighting filter 125 and the perceptually weighted synthesis filter 122 of the second encoding unit 120.




The sinusoidal analysis encoding unit 114, such as a harmonic encoding circuit, analyzes the output of the inverted LPC filter 111 by a method of harmonic encoding. That is, pitch detection, calculation of the amplitudes Am of the respective harmonics and voiced (V)/unvoiced (UV) discrimination are carried out, and the numbers of the amplitudes Am or of the envelopes of the respective harmonics, which vary with the pitch, are made constant by dimensional conversion.




In the illustrative example of the sinusoidal analysis encoding unit 114 shown in FIG. 3, commonplace harmonic encoding is used. In particular, in multi-band excitation (MBE) encoding, it is assumed in modeling that voiced portions and unvoiced portions are present in each frequency area or band at the same time point (in the same block or frame). In other harmonic encoding techniques, it is uniquely judged whether the speech in one block or in one frame is voiced or unvoiced. In the following description, a given frame is judged to be UV if the totality of the bands is UV, insofar as the MBE encoding is concerned. Specified examples of the technique of the analysis-synthesis method for MBE as described above may be found in JP Patent Application No. 4-91442, filed in the name of the Assignee of the present Application.




The open-loop pitch search unit 141 and the zero-crossing counter 142 of the sinusoidal analysis encoding unit 114 of FIG. 3 are fed with the input speech signal from the input terminal 101 and with the signal from the high-pass filter (HPF) 109, respectively. The orthogonal transform circuit 145 of the sinusoidal analysis encoding unit 114 is supplied with the LPC residuals, or linear prediction residuals, from the inverted LPC filter 111. The open-loop pitch search unit 141 takes the LPC residuals of the input signals to perform a relatively rough pitch search by open-loop search. The extracted rough pitch data are sent to a fine pitch search unit 146, which performs a closed-loop search as later explained. From the open-loop pitch search unit 141, the maximum value of the normalized autocorrelation r(p), obtained by normalizing the maximum value of the autocorrelation of the LPC residuals, is taken out along with the rough pitch data so as to be sent to the V/UV discrimination unit 115.
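The rough open-loop search may be illustrated as below; the lag range is an assumption (roughly 54 Hz to 400 Hz at fs = 8 kHz), not a value from the patent.

```python
import numpy as np

def open_loop_pitch(res, lag_min=20, lag_max=147):
    """Rough pitch by maximizing normalized autocorrelation of LPC residuals.

    Returns (best_lag, r_max); r_max is the peak of r(p), the
    autocorrelation normalized by the signal energies at that lag.
    """
    best_lag, r_max = lag_min, -1.0
    for p in range(lag_min, lag_max + 1):
        a, b = res[p:], res[:-p]
        denom = np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12
        r = np.dot(a, b) / denom
        if r > r_max:
            best_lag, r_max = p, r
    return best_lag, r_max

rng = np.random.default_rng(12)
n = np.arange(400)
res = np.sin(2 * np.pi * n / 57) + 0.1 * rng.standard_normal(400)
print(open_loop_pitch(res))   # lag near 57
```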




The orthogonal transform circuit 145 performs an orthogonal transform, such as the discrete Fourier transform (DFT), for converting the LPC residuals on the time axis into spectral amplitude data on the frequency axis. An output of the orthogonal transform circuit 145 is sent to the fine pitch search unit 146 and to a spectral evaluation unit 148 configured for evaluating the spectral amplitude or envelope.




The fine pitch search unit 146 is fed with the relatively rough pitch data extracted by the open-loop pitch search unit 141 and with the frequency-domain data obtained by DFT in the orthogonal transform circuit 145. The fine pitch search unit 146 swings the pitch data by ± several samples, in steps of 0.2 to 0.5, centered about the rough pitch value data, in order to arrive ultimately at fine pitch data having an optimum fractional (floating-point) value. The analysis-by-synthesis method is used as the fine search technique for selecting a pitch so that the power spectrum will be closest to the power spectrum of the original sound. Pitch data from the closed-loop fine pitch search unit 146 are sent to the output terminal 104 via the switch 118.




In the spectral evaluation unit 148, the amplitude of each harmonic and the spectral envelope as the sum of the harmonics are evaluated based on the spectral amplitude and the pitch as the orthogonal transform output of the LPC residuals, and sent to the fine pitch search unit 146, the V/UV discrimination unit 115 and the perceptually weighted vector quantization unit 116.




The V/UV discrimination unit 115 discriminates V/UV of a frame based on an output of the orthogonal transform circuit 145, an optimum pitch from the fine pitch search unit 146, spectral amplitude data from the spectral evaluation unit 148, the maximum value of the normalized autocorrelation r(p) from the open-loop pitch search unit 141 and the zero-crossing count value from the zero-crossing counter 142. In addition, the boundary position of the band-based V/UV discrimination for MBE may also be used as a condition for V/UV discrimination. A discrimination output of the V/UV discrimination unit 115 is taken out at the output terminal 105.




An output unit of the spectrum evaluation unit 148 or an input unit of the vector quantization unit 116 is provided with a data number conversion unit (a unit performing a sort of sampling rate conversion), not shown. The data number conversion unit is used for setting the amplitude data |Am| of an envelope to a constant number, in consideration of the fact that the number of bands split on the frequency axis, and hence the number of data, differ with the pitch. That is, if the effective band is up to 3,400 Hz, the effective band can be split into 8 to 63 bands depending on the pitch, so that the number mMx+1 of the amplitude data |Am|, obtained from band to band, is changed in a range from 8 to 63. Thus the data number conversion unit converts the amplitude data of the variable number mMx+1 to a pre-set number M of data, such as 44.
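A hedged sketch of such a data number conversion follows; simple linear resampling stands in for the oversampling-based unit described here, and all names are illustrative assumptions:

```python
import numpy as np

def convert_data_number(am, M=44):
    """Map a variable number of harmonic amplitudes |Am| to a fixed M.

    The mMx+1 band amplitudes (8 to 63 values, depending on pitch) are
    resampled onto M uniformly spaced points.
    """
    n = len(am)                                  # mMx + 1, between 8 and 63
    src = np.linspace(0.0, 1.0, n)
    dst = np.linspace(0.0, 1.0, M)
    return np.interp(dst, src, am)

am = np.abs(np.random.default_rng(6).standard_normal(19))
print(convert_data_number(am).shape)             # (44,)
```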




The amplitude data, or envelope data, of the pre-set number M, such as 44, from the data number conversion unit provided at the output unit of the spectral evaluation unit 148 or at the input unit of the vector quantization unit 116, are handled together in units of the pre-set number of data, such as 44, by the vector quantization unit 116, by way of performing weighted vector quantization. This weight is supplied by the output of the perceptual weighting filter calculation circuit 139. The index of the envelope from the vector quantizer 116 is taken out via the switch 117 at the output terminal 103. Prior to the weighted vector quantization, it is advisable to take an inter-frame difference, using a suitable leakage coefficient, for the vector made up of the pre-set number of data.




The second encoding unit 120 is now explained. The second encoding unit 120 has a so-called CELP encoding structure and is used in particular for encoding the unvoiced portion of the input speech signal. In this CELP encoding structure for the unvoiced portion of the input speech signal, a noise output corresponding to the LPC residuals of the unvoiced sound, as a representative output value of the noise codebook, or so-called stochastic codebook, 121, is sent via a gain control circuit 126 to a perceptually weighted synthesis filter 122. The weighted synthesis filter 122 synthesizes the input noise by LPC synthesis and sends the produced weighted unvoiced signal to the subtractor 123. The subtractor 123 is fed with the signal supplied from the input terminal 101 via the high-pass filter (HPF) 109 and perceptually weighted by the perceptual weighting filter 125. The subtractor finds the difference, or error, between this signal and the signal from the synthesis filter 122. Meanwhile, a zero input response of the perceptually weighted synthesis filter is previously subtracted from the output of the perceptual weighting filter 125. The error is fed to the distance calculation circuit 124 for calculating the distance, and a representative vector value which will minimize the error is searched in the noise codebook 121. The above is a summary of the vector quantization of the time-domain waveform employing closed-loop search by the analysis-by-synthesis method.




As data for the unvoiced (UV) portion from the second encoder 120 employing the CELP coding structure, the shape index of the codebook from the noise codebook 121 and the gain index of the codebook from the gain circuit 126 are taken out. The shape index, which is the UV data from the noise codebook 121, is sent to an output terminal 107s via a switch 127s, while the gain index, which is the UV data of the gain circuit 126, is sent to an output terminal 107g via a switch 127g.

These switches 127s, 127g and the switches 117, 118 are turned on and off depending on the results of V/UV decision from the V/UV discrimination unit 115. Specifically, the switches 117, 118 are turned on if the results of V/UV discrimination of the speech signal of the frame currently transmitted indicate voiced (V), while the switches 127s, 127g are turned on if the speech signal of the frame currently transmitted is unvoiced (UV).





FIG. 4 shows a more detailed structure of the speech signal decoder shown in FIG. 2. In FIG. 4, the same numerals are used to denote the components shown in FIG. 2.




In FIG. 4, a vector quantization output of the LSPs corresponding to the output terminal 102 of FIGS. 1 and 3, that is, the codebook index, is supplied to an input terminal 202.




The LSP index is sent to the inverted vector quantizer 231 of the LPC parameter reproducing unit 213 so as to be inverse vector quantized to line spectral pair (LSP) data, which are then supplied to LSP interpolation circuits 232, 233 for interpolation. The resulting interpolated data are converted by LSP-to-α conversion circuits 234, 235 to α-parameters, which are sent to the LPC synthesis filter 214. The LSP interpolation circuit 232 and the LSP-to-α conversion circuit 234 are designed for voiced (V) sound, while the LSP interpolation circuit 233 and the LSP-to-α conversion circuit 235 are designed for unvoiced (UV) sound. The LPC synthesis filter 214 is made up of an LPC synthesis filter 236 for the voiced speech portion and an LPC synthesis filter 237 for the unvoiced speech portion. That is, LPC coefficient interpolation is carried out independently for the voiced speech portion and the unvoiced speech portion, for prohibiting ill effects which might otherwise be produced in the transient portion from the voiced speech portion to the unvoiced speech portion, or vice versa, by interpolation of LSPs of totally different properties.




To an input terminal 203 of FIG. 4 is supplied code index data corresponding to the weighted vector quantized spectral envelope Am, corresponding to the output of the terminal 103 of the encoder of FIGS. 1 and 3. To an input terminal 204 is supplied pitch data from the terminal 104 of FIGS. 1 and 3, and to an input terminal 205 is supplied V/UV discrimination data from the terminal 105 of FIGS. 1 and 3.




The vector-quantized index data of the spectral envelope Am from the input terminal 203 are sent to an inverted vector quantizer 212 for inverse vector quantization, where a conversion inverse to the data number conversion is carried out. The resulting spectral envelope data are sent to a sinusoidal synthesis circuit 215.




If the inter-frame difference is found prior to vector quantization of the spectrum during encoding, inter-frame difference is decoded after inverse vector quantization for producing the spectral envelope data.




The sinusoidal synthesis circuit 215 is fed with the pitch from the input terminal 204 and the V/UV discrimination data from the input terminal 205. From the sinusoidal synthesis circuit 215, LPC residual data corresponding to the output of the LPC inverse filter 111 shown in FIGS. 1 and 3 are taken out and sent to an adder 218. The specified technique of the sinusoidal synthesis is disclosed in, for example, JP Patent Application Nos. 4-91442 and 6-198451 proposed by the present Assignee.




The envelope data of the inverse vector quantizer 212 and the pitch and V/UV discrimination data from the input terminals 204, 205 are sent to a noise synthesis circuit 216 configured for noise addition for the voiced portion (V). An output of the noise synthesis circuit 216 is sent to the adder 218 via a weighted overlap-and-add circuit 217. Specifically, the noise is added to the voiced portion of the LPC residual signals in consideration of the fact that, if the excitation as an input to the LPC synthesis filter of the voiced sound is produced by sine wave synthesis, a stuffed feeling is produced in low-pitch sounds, such as male speech, and the sound quality changes abruptly between the voiced sound and the unvoiced sound, producing an unnatural hearing feeling. The noise takes into account parameters concerned with the speech encoding data, such as the pitch, the amplitudes of the spectral envelope, the maximum amplitude in a frame or the residual signal level, in connection with the LPC synthesis filter input of the voiced speech portion, that is, the excitation.




A sum output of the adder 218 is sent to the synthesis filter 236 for the voiced sound of the LPC synthesis filter 214, where LPC synthesis is carried out to form time waveform data, which are then filtered by a post-filter 238v for the voiced speech and sent to an adder 239.




The shape index and the gain index, as UV data from the output terminals 107s and 107g of FIG. 3, are supplied to the input terminals 207s and 207g of FIG. 4, respectively, and thence supplied to the unvoiced speech synthesis unit 220. The shape index from the terminal 207s is sent to the noise codebook 221 of the unvoiced speech synthesis unit 220, while the gain index from the terminal 207g is sent to the gain circuit 222. The representative value output read out from the noise codebook 221 is a noise signal component corresponding to the LPC residuals of the unvoiced speech. This is given a pre-set gain amplitude in the gain circuit 222 and is sent to a windowing circuit 223 so as to be windowed for smoothing the junction to the voiced speech portion.




An output of the windowing circuit 223 is sent to the synthesis filter 237 for the unvoiced (UV) speech of the LPC synthesis filter 214. The data sent to the synthesis filter 237 are processed with LPC synthesis to become time waveform data for the unvoiced portion. The time waveform data of the unvoiced portion are filtered by a post-filter 238u for the unvoiced portion before being sent to the adder 239.




In the adder 239, the time waveform signal from the post-filter 238v for the voiced speech and the time waveform data from the post-filter 238u for the unvoiced speech are added to each other, and the resulting sum data are taken out at the output terminal 201.




The above-described speech signal encoder can output data of different bit rates depending on the demanded sound quality. That is, the output data can be outputted with variable bit rates.




Specifically, the bit rate of the output data can be switched between a low bit rate and a high bit rate. For example, if the low bit rate is 2 kbps and the high bit rate is 6 kbps, the output data has the bit rates shown in Table 1.




The pitch data from the output terminal 104 are outputted at all times at a bit rate of 8 bits/20 msec for the voiced speech, with the V/UV discrimination output from the output terminal 105 being at all times 1 bit/20 msec. The index for LSP quantization, outputted from the output terminal 102, is switched between 32 bits/40 msec and 48 bits/40 msec. On the other hand, the index during the voiced speech (V), outputted by the output terminal 103, is switched between 15 bits/20 msec and 87 bits/20 msec. The index for the unvoiced speech (UV), outputted from the output terminals 107s and 107g, is switched between 11 bits/10 msec and 23 bits/5 msec. The output data for the voiced sound (V) is thus 40 bits/20 msec for 2 kbps and 120 bits/20 msec for 6 kbps, while the output data for the unvoiced sound (UV) is 39 bits/20 msec for 2 kbps and 117 bits/20 msec for 6 kbps.




The index for LSP quantization, the index for voiced speech (V) and the index for the unvoiced speech (UV) are explained later on in connection with the arrangement of pertinent portions.




Referring to FIGS. 6 and 7, the matrix quantization and vector quantization in the LSP quantizer 134 are explained in detail.




The α-parameters from the LPC analysis circuit 132 are sent to the α-LSP circuit 133 for conversion to LSP parameters. If P-order LPC analysis is performed in the LPC analysis circuit 132, P α-parameters are calculated. These P α-parameters are converted into LSP parameters, which are held in a buffer 610.




The buffer 610 outputs two frames of LSP parameters. The two frames of LSP parameters are matrix-quantized by a matrix quantizer 620 made up of a first matrix quantizer 620-1 and a second matrix quantizer 620-2. The two frames of LSP parameters are matrix-quantized in the first matrix quantizer 620-1, and the resulting quantization error is further matrix-quantized in the second matrix quantizer 620-2. The matrix quantization exploits correlation in both the time axis and the frequency axis.




The quantization error for two frames from the matrix quantizer 620-2 enters a vector quantization unit 640 made up of a first vector quantizer 640-1 and a second vector quantizer 640-2. The first vector quantizer 640-1 is made up of two vector quantization portions 650, 660, while the second vector quantizer 640-2 is made up of two vector quantization portions 670, 680. The quantization error from the matrix quantization unit 620 is quantized on the frame basis by the vector quantization portions 650, 660 of the first vector quantizer 640-1. The resulting quantization error vector is further vector-quantized by the vector quantization portions 670, 680 of the second vector quantizer 640-2. The above-described vector quantization exploits correlation along the frequency axis.




The matrix quantization unit 620, executing the matrix quantization as described above, includes at least a first matrix quantizer 620-1 for performing a first matrix quantization step and a second matrix quantizer 620-2 for performing a second matrix quantization step of matrix quantizing the quantization error produced by the first matrix quantization. The vector quantization unit 640, executing the vector quantization as described above, includes at least a first vector quantizer 640-1 for performing a first vector quantization step and a second vector quantizer 640-2 for performing a second vector quantization step of vector quantizing the quantization error produced by the first vector quantization.




The matrix quantization and the vector quantization will now be explained in detail.




The LSP parameters for two frames, stored in the buffer 610, that is, a 10×2 matrix, are sent to the first matrix quantizer 620-1. The first matrix quantizer 620-1 sends the LSP parameters for two frames via an LSP parameter adder 621 to a weighted distance calculating unit 623 for finding the weighted distance of the minimum value.




The distortion measure d_MQ1 during codebook search by the first matrix quantizer 620-1 is given by the equation (1):

$$d_{MQ1}(X_1, X_1') = \sum_{t=0}^{1}\sum_{i=1}^{P} w(t,i)\,\bigl(x_1(t,i) - x_1'(t,i)\bigr)^2 \tag{1}$$

where X_1 is the LSP parameter and X_1′ is the quantization value, with t and i being the frame number and the dimension index within the P dimensions, respectively.




The weight w, in which weight limitation in the frequency axis and in the time axis is not taken into account, is given by the equation (2):

$$w(t,i) = \frac{1}{x(t,i+1) - x(t,i)} + \frac{1}{x(t,i) - x(t,i-1)} \tag{2}$$

where x(t, 0) = 0 and x(t, P+1) = π regardless of t.
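Equations (1) and (2) may be sketched as follows, assuming the two frames of P LSP parameters are held as a (2, P) NumPy array; the array layout is an assumption for illustration:

```python
import numpy as np

def lsp_weight(x):
    """Weight w(t, i) of equation (2) for a (2, P) block of LSP parameters."""
    T, P = x.shape
    xe = np.concatenate([np.zeros((T, 1)), x, np.full((T, 1), np.pi)], axis=1)
    return 1.0 / (xe[:, 2:] - xe[:, 1:-1]) + 1.0 / (xe[:, 1:-1] - xe[:, :-2])

def d_mq(x, xq):
    """Distortion measure of equation (1): weighted squared error
    summed over two frames (t = 0, 1) and P dimensions."""
    return float(np.sum(lsp_weight(x) * (x - xq) ** 2))

x = np.sort(np.random.default_rng(11).uniform(0.1, np.pi - 0.1, (2, 10)), axis=1)
print(d_mq(x, x + 0.01))
```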




The weight w of the equation (2) is also used for downstream side matrix quantization and vector quantization.




The calculated weighted distance is sent to a matrix quantization unit (MQ1) 622 for matrix quantization. An 8-bit index outputted by this matrix quantization is sent to a signal switcher 690. The quantized value from the matrix quantization is subtracted in the adder 621 from the LSP parameters for two frames from the buffer 610. The weighted distance calculating unit 623 calculates the weighted distance every two frames, so that matrix quantization is carried out in the matrix quantization unit 622, and the quantization value minimizing the weighted distance is selected. An output of the adder 621 is sent to an adder 631 of the second matrix quantizer 620-2.




Similarly to the first matrix quantizer 620-1, the second matrix quantizer 620-2 performs matrix quantization. The output of the adder 621 is sent via the adder 631 to a weighted distance calculation unit 633, where the minimum weighted distance is calculated.




The distortion measure d_MQ2 during the codebook search by the second matrix quantizer 620-2 is given by the equation (3):

$$d_{MQ2}(X_2, X_2') = \sum_{t=0}^{1}\sum_{i=1}^{P} w(t,i)\,\bigl(x_2(t,i) - x_2'(t,i)\bigr)^2 \tag{3}$$













The weighted distance is sent to a matrix quantization unit (MQ2) 632 for matrix quantization. An 8-bit index outputted by the matrix quantization is sent to the signal switcher 690. The weighted distance calculation unit 633 sequentially calculates the weighted distance using the output of the adder 631, and the quantization value minimizing the weighted distance is selected. An output of the adder 631 is sent to adders 651, 661 of the first vector quantizer 640-1 frame by frame.




The first vector quantizer 640-1 performs vector quantization frame by frame. The output of the adder 631 is sent frame by frame to each of weighted distance calculating units 653, 663 via the adders 651, 661 for calculating the minimum weighted distance.




The difference between the quantization error X_2 and its quantized value X_2′ is a (10×2) matrix. If the difference is represented as X_2 − X_2′ = [x_3-1, x_3-2], the distortion measures d_VQ1, d_VQ2 during codebook search by the vector quantization units 652, 662 of the first vector quantizer 640-1 are given by the equations (4) and (5):

$$d_{VQ1}(\underline{x}_{3\text{-}1}, \underline{x}'_{3\text{-}1}) = \sum_{i=1}^{P} w(0,i)\,\bigl(x_{3\text{-}1}(0,i) - x'_{3\text{-}1}(0,i)\bigr)^2 \tag{4}$$

$$d_{VQ2}(\underline{x}_{3\text{-}2}, \underline{x}'_{3\text{-}2}) = \sum_{i=1}^{P} w(1,i)\,\bigl(x_{3\text{-}2}(1,i) - x'_{3\text{-}2}(1,i)\bigr)^2 \tag{5}$$













The weighted distance is sent to a vector quantization unit (VQ1) 652 and a vector quantization unit (VQ2) 662 for vector quantization. Each 8-bit index outputted by this vector quantization is sent to the signal switcher 690. The quantized values are subtracted by the adders 651, 661 from the input two-frame quantization error vector. The weighted distance calculating units 653, 663 sequentially calculate the weighted distance, using the outputs of the adders 651, 661, for selecting the quantization value minimizing the weighted distance. The outputs of the adders 651, 661 are sent to adders 671, 681 of the second vector quantizer 640-2.




The distortion measures d_VQ3, d_VQ4 during codebook searching by the vector quantizers 672, 682 of the second vector quantizer 640-2, for

x_4-1 = x_3-1 − x_3-1′

x_4-2 = x_3-2 − x_3-2′

are given by the equations (6) and (7):

$$d_{VQ3}(\underline{x}_{4\text{-}1}, \underline{x}'_{4\text{-}1}) = \sum_{i=1}^{P} w(0,i)\,\bigl(x_{4\text{-}1}(0,i) - x'_{4\text{-}1}(0,i)\bigr)^2 \tag{6}$$

$$d_{VQ4}(\underline{x}_{4\text{-}2}, \underline{x}'_{4\text{-}2}) = \sum_{i=1}^{P} w(1,i)\,\bigl(x_{4\text{-}2}(1,i) - x'_{4\text{-}2}(1,i)\bigr)^2 \tag{7}$$













These weighted distances are sent to the vector quantizer (VQ3) 672 and to the vector quantizer (VQ4) 682 for vector quantization. The 8-bit index data outputted by the vector quantization are sent to the signal switcher 690, and the quantized values are subtracted by the adders 671, 681 from the input quantization error vector for two frames. The weighted distance calculating units 673, 683 sequentially calculate the weighted distances using the outputs of the adders 671, 681, for selecting the quantized value minimizing the weighted distances.




During codebook learning, learning is performed by the generalized Lloyd algorithm based on the respective distortion measures.




The distortion measures during codebook searching and during learning may be of different values.




The 8-bit index data from the matrix quantization units 622, 632 and the vector quantization units 652, 662, 672 and 682 are switched by the signal switcher 690 and outputted at an output terminal 691.




Specifically, for a low bit rate, outputs of the first matrix quantizer 620-1 carrying out the first matrix quantization step, the second matrix quantizer 620-2 carrying out the second matrix quantization step, and the first vector quantizer 640-1 carrying out the first vector quantization step are taken out, whereas, for a high bit rate, the output for the low bit rate is summed with an output of the second vector quantizer 640-2 carrying out the second vector quantization step, and the resulting sum is taken out.




This outputs an index of 32 bits/40 msec and an index of 48 bits/40 msec for 2 kbps and 6 kbps, respectively.




The matrix quantization unit 620 and the vector quantization unit 640 perform weighting limited in the frequency axis and/or the time axis in conformity to characteristics of the parameters representing the LPC coefficients.




The weighting limited in the frequency axis in conformity to characteristics of the LSP parameters is first explained. If the number of orders P = 10, the LSP parameters X(i) are grouped into

L_1 = {X(i) | 1 ≦ i ≦ 2}

L_2 = {X(i) | 3 ≦ i ≦ 6}

L_3 = {X(i) | 7 ≦ i ≦ 10}

for the three ranges of low, mid and high frequencies. If the weighting of the groups L_1, L_2 and L_3 is 1/4, 1/2 and 1/4, respectively, the weighting limited only in the frequency axis is given by the equations (8), (9) and (10):

$$w'(i) = \frac{w(i)}{\sum_{j=1}^{2} w(j)} \times \frac{1}{4}, \quad X(i) \in L_1 \tag{8}$$

$$w'(i) = \frac{w(i)}{\sum_{j=3}^{6} w(j)} \times \frac{1}{2}, \quad X(i) \in L_2 \tag{9}$$

$$w'(i) = \frac{w(i)}{\sum_{j=7}^{10} w(j)} \times \frac{1}{4}, \quad X(i) \in L_3 \tag{10}$$
The weighting of the respective LSP parameters is performed in each group only and such weight is limited by the weighting for each group.
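A minimal sketch of the group-limited normalization of equations (8) to (10), with the group boundaries and the 1/4, 1/2, 1/4 totals taken from the text:

```python
import numpy as np

def group_limited_weight(w):
    """Normalize a 10-dimensional weight per equations (8)-(10).

    Groups L1 = {1,2}, L2 = {3..6}, L3 = {7..10} (1-based indices)
    receive total weight 1/4, 1/2 and 1/4 respectively; inside each
    group the original weights w(i) keep their relative proportions.
    """
    groups = [(slice(0, 2), 0.25), (slice(2, 6), 0.5), (slice(6, 10), 0.25)]
    out = np.empty_like(w)
    for sl, total in groups:
        out[sl] = w[sl] / np.sum(w[sl]) * total
    return out

w = np.random.default_rng(7).uniform(0.5, 2.0, 10)
print(group_limited_weight(w).sum())   # 1.0
```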




Looking in the time axis direction, the sum total over the respective frames is necessarily 1, so that limitation in the time axis direction is frame-based. The weight limited only in the time axis direction is given by the equation (11):

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=1}^{10}\sum_{s=0}^{1} w(j,s)} \tag{11}$$

where 1 ≦ i ≦ 10 and 0 ≦ t ≦ 1.




By this equation (11), weighting not limited in the frequency axis direction is carried out between two frames having the frame numbers of t=0 and t=1. This weighting limited only in the time axis direction is carried out between two frames processed with matrix quantization.




During learning, the totality of frames used as learning data, having the total number T, is weighted in accordance with the equation (12):

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=1}^{10}\sum_{s=0}^{T} w(j,s)} \tag{12}$$

where 1 ≦ i ≦ 10 and 0 ≦ t ≦ T.




The weighting limited in the frequency axis direction and in the time axis direction is explained. If the number of orders P = 10, the LSP parameters x(i, t) are grouped into

L_1 = {x(i, t) | 1 ≦ i ≦ 2, 0 ≦ t ≦ 1}

L_2 = {x(i, t) | 3 ≦ i ≦ 6, 0 ≦ t ≦ 1}

L_3 = {x(i, t) | 7 ≦ i ≦ 10, 0 ≦ t ≦ 1}

for the three ranges of low, mid and high frequencies. If the weights for the groups L_1, L_2 and L_3 are 1/4, 1/2 and 1/4, the weighting limited in the frequency axis and in the time axis is given by the equations (13), (14) and (15):

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=1}^{2}\sum_{s=0}^{1} w(j,s)} \times \frac{1}{4}, \quad x(i,t) \in L_1 \tag{13}$$

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=3}^{6}\sum_{s=0}^{1} w(j,s)} \times \frac{1}{2}, \quad x(i,t) \in L_2 \tag{14}$$

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=7}^{10}\sum_{s=0}^{1} w(j,s)} \times \frac{1}{4}, \quad x(i,t) \in L_3 \tag{15}$$
By these equations (13) to (15), weighting limitation is carried out for three ranges in the frequency axis direction and across the two frames processed with matrix quantization in the time axis direction. This is effective both during codebook search and during learning.




During learning, weighting is applied to the totality of frames of the entire data. The LSP parameters x(i, t) are grouped into

L_1 = {x(i, t) | 1 ≦ i ≦ 2, 0 ≦ t ≦ T}

L_2 = {x(i, t) | 3 ≦ i ≦ 6, 0 ≦ t ≦ T}

L_3 = {x(i, t) | 7 ≦ i ≦ 10, 0 ≦ t ≦ T}

for the low, mid and high ranges. If the weighting of the groups L_1, L_2 and L_3 is 1/4, 1/2 and 1/4, respectively, the weighting for the groups L_1, L_2 and L_3, limited in the frequency axis and in the time axis direction, is given by the equations (16), (17) and (18):

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=1}^{2}\sum_{s=0}^{T} w(j,s)} \times \frac{1}{4}, \quad x(i,t) \in L_1 \tag{16}$$

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=3}^{6}\sum_{s=0}^{T} w(j,s)} \times \frac{1}{2}, \quad x(i,t) \in L_2 \tag{17}$$

$$w'(i,t) = \frac{w(i,t)}{\sum_{j=7}^{10}\sum_{s=0}^{T} w(j,s)} \times \frac{1}{4}, \quad x(i,t) \in L_3 \tag{18}$$
By these equations (16) to (18), weighting can be performed for three ranges in the frequency axis direction and across the totality of frames in the time axis direction.




In addition, the matrix quantization unit 620 and the vector quantization unit 640 perform weighting depending on the magnitude of changes in the LSP parameters. In V-to-UV or UV-to-V transient regions, which represent minority frames among the totality of speech frames, the LSP parameters change significantly owing to the difference in frequency response between consonants and vowels. Therefore, the weighting shown by the equation (19) may be multiplied by the weighting w′(i, t) for carrying out weighting placing emphasis on the transition regions:

$$wd(t) = \sum_{i=1}^{10} \bigl| x_1(i,t) - x_1(i,t-1) \bigr|^2 \tag{19}$$

The following equation (20):

$$wd(t) = \sum_{i=1}^{10} \bigl| x_1(i,t) - x_1(i,t-1) \bigr| \tag{20}$$

may be used in place of the equation (19).




Thus the LSP quantization unit 134 executes two-stage matrix quantization and two-stage vector quantization to render the number of bits of the output index variable.




The basic structure of the vector quantization unit 116 is shown in FIG. 8, while a more detailed structure of the vector quantization unit 116 shown in FIG. 8 is shown in FIG. 9. An illustrative structure of weighted vector quantization for the spectral envelope Am in the vector quantization unit 116 is now explained.




First, in the speech signal encoding device shown in FIG. 3, an illustrative arrangement for the data number conversion for providing a constant number of data of the amplitude of the spectral envelope on the output side of the spectral evaluation unit 148 or on the input side of the vector quantization unit 116 is explained.




A variety of methods may be conceived for such data number conversion. In the present embodiment, dummy data interpolating the values from the last data in a block to the first data in the block, or pre-set data such as data repeating the last data or the first data in the block, are appended to the amplitude data of one block of an effective band on the frequency axis for enhancing the number of data to N_F; then amplitude data equal in number to Os times, such as eight times, the original number are found by Os-tuple, such as octatuple, oversampling of the limited-bandwidth type. The ((mMx+1)×Os) amplitude data are linearly interpolated for expansion to a larger number N_M, such as 2048. These N_M data are sub-sampled for conversion to the above-mentioned pre-set number M of data, such as 44. In effect, only the data necessary for formulating the M data ultimately required are calculated by oversampling and linear interpolation, without finding all of the above-mentioned N_M data.
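A hedged sketch of this conversion chain (append dummy data, band-limited oversampling, linear interpolation, sub-sampling); scipy's polyphase resampler stands in for the limited-bandwidth oversampling, and the tail-extension length is an assumption:

```python
import numpy as np
from scipy.signal import resample_poly  # band-limited oversampling

def to_fixed_dimension(am, Os=8, NM=2048, M=44):
    """Extend the block, oversample by Os, linearly interpolate to NM
    points, then sub-sample to the pre-set M (= 44) data."""
    ext = np.concatenate([am, np.full(4, am[-1])])   # dummy tail data
    over = resample_poly(ext, Os, 1)                 # Os-tuple oversampling
    grid = np.linspace(0, len(over) - 1, NM)
    dense = np.interp(grid, np.arange(len(over)), over)
    idx = np.linspace(0, NM - 1, M).astype(int)      # sub-sample to M
    return dense[idx]

am = np.random.default_rng(8).uniform(0.1, 1.0, 30)
print(to_fixed_dimension(am).shape)                  # (44,)
```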




The vector quantization unit 116 for carrying out the weighted vector quantization of FIG. 8 at least includes a first vector quantization unit 500 for performing a first vector quantization step and a second vector quantization unit 510 for carrying out a second vector quantization step of quantizing the quantization error vector produced during the first vector quantization by the first vector quantization unit 500. The first vector quantization unit 500 is a so-called first-stage vector quantization unit, while the second vector quantization unit 510 is a so-called second-stage vector quantization unit.




An output vector x of the spectral evaluation unit 148, that is, envelope data having the pre-set number M, enters an input terminal 501 of the first vector quantization unit 500. This output vector x is quantized with weighted vector quantization by the vector quantization unit 502. Thus a shape index outputted by the vector quantization unit 502 is outputted at an output terminal 503, while a quantized value x_0′ is outputted at an output terminal 504 and sent to adders 505, 513. The adder 505 subtracts the quantized value x_0′ from the source vector x to give a multi-order quantization error vector y.




The quantization error vector y is sent to a vector quantization unit 511 in the second vector quantization unit 510. This vector quantization unit 511 is made up of plural vector quantizers, here the two vector quantizers 511_1, 511_2 of FIG. 8. The quantization error vector y is dimensionally split so as to be quantized by weighted vector quantization in the two vector quantizers 511_1, 511_2. The shape indices outputted by these vector quantizers 511_1, 511_2 are outputted at output terminals 512_1, 512_2, while the quantized values y_1′, y_2′ are connected in the dimensional direction and sent to an adder 513. The adder 513 adds the quantized values y_1′, y_2′ to the quantized value x_0′ to generate a quantized value x_1′, which is outputted at an output terminal 514.
Thus, for the low bit rate, an output of the first vector quantization step by the first vector quantization unit 500 is taken out, whereas, for the high bit rate, an output of the first vector quantization step and an output of the second vector quantization step by the second vector quantization unit 510 are outputted.
Specifically, the vector quantizer 502 in the first vector quantization unit 500 of the vector quantization section 116 has an L-order, such as 44-dimensional, two-stage structure, as shown in FIG. 9.
That is, the sum of the output vectors of the 44-dimensional vector quantization codebooks with the codebook size of 32, multiplied by a gain g_l, is used as the quantized value x_0′ of the 44-dimensional spectral envelope vector x. Thus, as shown in FIG. 8, the two codebooks are CB0 and CB1, while their output vectors are s_0i, s_1j, where 0≦i, j≦31. An output of the gain codebook CBg is g_l, where 0≦l≦31, g_l being a scalar. The ultimate output x_0′ is g_l(s_0i + s_1j).
The spectral envelope Am, obtained by the above MBE analysis of the LPC residuals and converted into the pre-set dimension, is x. It is crucial how efficiently x is to be quantized.
The quantization error energy E is defined by

$$E = \left\| W \left\{ H\mathbf{x} - H g_l (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \right\} \right\|^2 = \left\| WH \left\{ \mathbf{x} - g_l (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \right\} \right\|^2 \qquad (21)$$
where H denotes the characteristics of the LPC synthesis filter on the frequency axis and W denotes a weighting matrix representing the characteristics of perceptual weighting on the frequency axis.




If the α-parameters obtained as the results of LPC analysis of the current frame are denoted as α_i (1≦i≦P), the values at L, for example 44, corresponding points of the L-dimension are sampled from the frequency response of the equation (22):

$$H(z) = \frac{1}{1 + \displaystyle\sum_{i=1}^{P} \alpha_i z^{-i}} \qquad (22)$$
For the calculations, 0s are stuffed next to a string of 1, α_1, α_2, ..., α_P to give a string of 1, α_1, α_2, ..., α_P, 0, 0, ..., 0 amounting to, for example, 256-point data. Then, by 256-point FFT, (re² + im²)^{1/2} is calculated for the points associated with the range from 0 to π, and the reciprocals of the results are found. These reciprocals are sub-sampled to L points, such as 44 points, and a matrix having these L points as diagonal elements is formed:

$$H = \begin{bmatrix} h(1) & & & 0 \\ & h(2) & & \\ & & \ddots & \\ 0 & & & h(L) \end{bmatrix}$$
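A short sketch of this construction of H follows, with the diagonal stored as a vector; the nearest-point sub-sampling mirrors the nint(128·i/L) rule given later for W, and the function and argument names are illustrative assumptions.

```python
import numpy as np

def synthesis_diag(alpha, L=44, n_fft=256):
    """Diagonal elements h(1)..h(L) of H from the alpha-parameters.

    Zero-stuff [1, a_1, ..., a_P] to n_fft points, take the FFT magnitude
    (re^2 + im^2)^(1/2) over 0..pi, invert it per equation (22), then
    sub-sample the reciprocals at L nearest points.
    """
    a = np.zeros(n_fft)
    a[0] = 1.0
    a[1:1 + len(alpha)] = alpha
    mag = np.abs(np.fft.rfft(a))        # |A(e^jw)| at n_fft/2 + 1 points
    h = 1.0 / mag                        # |H| = 1 / |A|
    half = n_fft // 2
    idx = np.rint(half * np.arange(1, L + 1) / L).astype(int)
    return h[idx]
```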
A perceptually weighted matrix W is given by the equation (23):

$$W(z) = \frac{1 + \displaystyle\sum_{i=1}^{P} \alpha_i \lambda_b^{\,i} z^{-i}}{1 + \displaystyle\sum_{i=1}^{P} \alpha_i \lambda_a^{\,i} z^{-i}} \qquad (23)$$

where α_i is the result of the LPC analysis, and λa, λb are constants, such that, for example, λa = 0.4 and λb = 0.9.
The matrix W may be calculated from the frequency response of the above equation (23). For example, a 256-point FFT is executed on the data 1, α_1λb, α_2λb², ..., α_Pλb^P, 0, 0, ..., 0 to find (re²[i] + im²[i])^{1/2} for the domain from 0 to π, where 0≦i≦128. The frequency response of the denominator is found similarly by 256-point FFT on 1, α_1λa, α_2λa², ..., α_Pλa^P, 0, 0, ..., 0 at 128 points to give (re′²[i] + im′²[i])^{1/2}, where 0≦i≦128. The frequency response of the equation (23) may then be found by

$$w_0[i] = \frac{\sqrt{\,\mathrm{re}^2[i] + \mathrm{im}^2[i]\,}}{\sqrt{\,\mathrm{re}'^2[i] + \mathrm{im}'^2[i]\,}}$$
where 0≦i≦128.




This is found for each associated point of, for example, the 44-dimensional vector, by the following method. More precisely, linear interpolation should be used. However, in the following example, the closest point is used instead.




That is,

$$w[i] = w_0\!\left[\mathrm{nint}\!\left(\frac{128\,i}{L}\right)\right], \quad 1 \le i \le L$$

In the equation, nint(X) is a function which returns the integer closest to X.
As for H, h(1), h(2), ..., h(L) are found by a similar method. That is,

$$H = \begin{bmatrix} h(1) & & & 0 \\ & h(2) & & \\ & & \ddots & \\ 0 & & & h(L) \end{bmatrix}, \quad
W = \begin{bmatrix} w(1) & & & 0 \\ & w(2) & & \\ & & \ddots & \\ 0 & & & w(L) \end{bmatrix}$$

$$WH = \begin{bmatrix} h(1)w(1) & & & 0 \\ & h(2)w(2) & & \\ & & \ddots & \\ 0 & & & h(L)w(L) \end{bmatrix} \qquad (24)$$
As another example, H(z)W(z) may first be found and the frequency response then found, in order to decrease the number of times of FFT. That is, for the equation (25):

$$H(z)W(z) = \frac{1}{1 + \displaystyle\sum_{i=1}^{P} \alpha_i z^{-i}} \cdot \frac{1 + \displaystyle\sum_{i=1}^{P} \alpha_i \lambda_b^{\,i} z^{-i}}{1 + \displaystyle\sum_{i=1}^{P} \alpha_i \lambda_a^{\,i} z^{-i}} \qquad (25)$$

the denominator
is expanded to

$$\left(1 + \sum_{i=1}^{P} \alpha_i z^{-i}\right)\left(1 + \sum_{i=1}^{P} \alpha_i \lambda_a^{\,i} z^{-i}\right) = 1 + \sum_{i=1}^{2P} \beta_i z^{-i}$$
256-point data, for example, are produced by using the string 1, β_1, β_2, ..., β_2P, 0, 0, ..., 0. Then, a 256-point FFT is executed, the frequency response of the amplitude being

$$\mathrm{rms}[i] = \sqrt{\,\mathrm{re}''^2[i] + \mathrm{im}''^2[i]\,}$$
where 0≦i≦128. From this,

$$wh_0[i] = \frac{\sqrt{\,\mathrm{re}^2[i] + \mathrm{im}^2[i]\,}}{\sqrt{\,\mathrm{re}''^2[i] + \mathrm{im}''^2[i]\,}}$$
where 0≦i≦128. This is found for each of the corresponding points of the L-dimensional vector. If the number of points of the FFT is small, linear interpolation should be used; herein, however, the closest value is found by:

$$wh[i] = wh_0\!\left[\mathrm{nint}\!\left(\frac{128}{L} \cdot i\right)\right]$$
where 1≦i≦L. If a matrix having these as diagonal elements is W′,

$$W' = \begin{bmatrix} wh(1) & & & 0 \\ & wh(2) & & \\ & & \ddots & \\ 0 & & & wh(L) \end{bmatrix} \qquad (26)$$

The equation (26) gives the same matrix as the above equation (24). Alternatively, |H(exp(jω))W(exp(jω))| may be calculated directly from the equation (25) with respect to ω = iπ/L, where 1≦i≦L, so as to be used for wh[i].
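The combined-FFT variant above can be sketched as follows; the beta coefficients come from the polynomial product shown after the equation (25), so only two FFTs are needed instead of three. Function and argument names are assumptions for illustration.

```python
import numpy as np

def weighted_synthesis_diag(alpha, lam_a=0.4, lam_b=0.9, L=44, n_fft=256):
    """Diagonal wh(1)..wh(L) of W' = WH, per equations (25) and (26)."""
    P = len(alpha)
    powers = np.arange(1, P + 1)
    num = np.zeros(n_fft)
    num[0] = 1.0
    num[1:P + 1] = alpha * lam_b ** powers          # numerator of (25)
    beta = np.convolve(np.concatenate(([1.0], alpha)),
                       np.concatenate(([1.0], alpha * lam_a ** powers)))
    den = np.zeros(n_fft)
    den[:len(beta)] = beta                          # 1, b_1, ..., b_2P
    wh0 = np.abs(np.fft.rfft(num)) / np.abs(np.fft.rfft(den))
    half = n_fft // 2
    i = np.arange(1, L + 1)
    return wh0[np.rint(half * i / L).astype(int)]   # wh[i] = wh0[nint(128 i/L)]
```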
Alternatively, a suitable length, such as 40 points, of an impulse response of the equation (25) may be found and FFTed to find the amplitude frequency response which can be used for matrix W′.




Rewriting the equation (21) using this matrix, that is, the frequency characteristics of the weighted synthesis filter, we obtain

$$E = \left\| W' \left( \mathbf{x} - g_l (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \right) \right\|^2 \qquad (27)$$
The method for learning the shape codebook and the gain codebook is explained.




The expected value of the distortion is minimized for all frames k for which the code vector s_0c is selected from CB0. If there are M such frames, it suffices if

$$J = \frac{1}{M} \sum_{k=1}^{M} \left\| W_k' \left( \mathbf{x}_k - g_k (\mathbf{s}_{0c} + \mathbf{s}_{1k}) \right) \right\|^2 \qquad (28)$$
is minimized. In the equation (28), W_k′, x_k, g_k and s_1k denote the weighting for the k'th frame, the input to the k'th frame, the gain of the k'th frame and the output of the codebook CB1 for the k'th frame, respectively.
For minimizing the equation (28),

$$\begin{aligned}
J &= \frac{1}{M} \sum_{k=1}^{M} \left\{ \left( \mathbf{x}_k^T - g_k(\mathbf{s}_{0c}^T + \mathbf{s}_{1k}^T) \right) W_k'^T W_k' \left( \mathbf{x}_k - g_k(\mathbf{s}_{0c} + \mathbf{s}_{1k}) \right) \right\} \\
&= \frac{1}{M} \sum_{k=1}^{M} \left\{ \mathbf{x}_k^T W_k'^T W_k' \mathbf{x}_k - 2 g_k (\mathbf{s}_{0c}^T + \mathbf{s}_{1k}^T) W_k'^T W_k' \mathbf{x}_k + g_k^2 (\mathbf{s}_{0c}^T + \mathbf{s}_{1k}^T) W_k'^T W_k' (\mathbf{s}_{0c} + \mathbf{s}_{1k}) \right\} \\
&= \frac{1}{M} \sum_{k=1}^{M} \left\{ \mathbf{x}_k^T W_k'^T W_k' \mathbf{x}_k - 2 g_k (\mathbf{s}_{0c}^T + \mathbf{s}_{1k}^T) W_k'^T W_k' \mathbf{x}_k + g_k^2 \mathbf{s}_{0c}^T W_k'^T W_k' \mathbf{s}_{0c} + 2 g_k^2 \mathbf{s}_{0c}^T W_k'^T W_k' \mathbf{s}_{1k} + g_k^2 \mathbf{s}_{1k}^T W_k'^T W_k' \mathbf{s}_{1k} \right\}
\end{aligned} \qquad (29)$$

$$\frac{\partial J}{\partial \mathbf{s}_{0c}} = \frac{1}{M} \sum_{k=1}^{M} \left\{ -2 g_k W_k'^T W_k' \mathbf{x}_k + 2 g_k^2 W_k'^T W_k' \mathbf{s}_{0c} + 2 g_k^2 W_k'^T W_k' \mathbf{s}_{1k} \right\} = 0 \qquad (30)$$
Hence,

$$\sum_{k=1}^{M} \left( g_k W_k'^T W_k' \mathbf{x}_k - g_k^2 W_k'^T W_k' \mathbf{s}_{1k} \right) = \sum_{k=1}^{M} g_k^2 W_k'^T W_k' \mathbf{s}_{0c}$$
so that

$$\mathbf{s}_{0c} = \left\{ \sum_{k=1}^{M} g_k^2 W_k'^T W_k' \right\}^{-1} \cdot \left\{ \sum_{k=1}^{M} g_k W_k'^T W_k' \left( \mathbf{x}_k - g_k \mathbf{s}_{1k} \right) \right\} \qquad (31)$$
where ( )^{-1} denotes an inverse matrix and W_k′^T denotes a transposed matrix of W_k′.
Next, gain optimization is considered.




The expected value of the distortion concerning the k'th frame selecting the code word g_c of the gain is given by:

$$\begin{aligned}
J_g &= \frac{1}{M} \sum_{k=1}^{M} \left\| W_k' \left( \mathbf{x}_k - g_c (\mathbf{s}_{0k} + \mathbf{s}_{1k}) \right) \right\|^2 \\
&= \frac{1}{M} \sum_{k=1}^{M} \left\{ \mathbf{x}_k^T W_k'^T W_k' \mathbf{x}_k - 2 g_c \mathbf{x}_k^T W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k}) + g_c^2 (\mathbf{s}_{0k}^T + \mathbf{s}_{1k}^T) W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k}) \right\}
\end{aligned}$$
Solving

$$\frac{\partial J_g}{\partial g_c} = \frac{1}{M} \sum_{k=1}^{M} \left\{ -2 \mathbf{x}_k^T W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k}) + 2 g_c (\mathbf{s}_{0k}^T + \mathbf{s}_{1k}^T) W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k}) \right\} = 0$$
we obtain

$$\sum_{k=1}^{M} \mathbf{x}_k^T W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k}) = \sum_{k=1}^{M} g_c (\mathbf{s}_{0k}^T + \mathbf{s}_{1k}^T) W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k})$$
and

$$g_c = \frac{\displaystyle\sum_{k=1}^{M} \mathbf{x}_k^T W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k})}{\displaystyle\sum_{k=1}^{M} (\mathbf{s}_{0k}^T + \mathbf{s}_{1k}^T) W_k'^T W_k' (\mathbf{s}_{0k} + \mathbf{s}_{1k})} \qquad (32)$$
The above equations (31) and (32) give optimum centroid conditions for the shapes s_0i, s_1j and the gain g_l, for 0≦i≦31, 0≦j≦31 and 0≦l≦31, that is, an optimum decoder output. Meanwhile, s_1j may be found in the same way as for s_0i.
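For diagonal W_k′, the matrix inverse in the equation (31) reduces to a per-component division, so both centroid updates admit a compact sketch; the array shapes and names below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def shape_centroid(X, S1, G, Wd):
    """Shape centroid of equation (31) for diagonal weights.

    X: (M, K) inputs x_k; S1: (M, K) shapes s_1k; G: (M,) gains g_k;
    Wd: (M, K) diagonal elements of W'_k.
    """
    w2 = Wd ** 2
    num = (G[:, None] * w2 * (X - G[:, None] * S1)).sum(axis=0)
    den = ((G ** 2)[:, None] * w2).sum(axis=0)
    return num / den

def gain_centroid(X, S, Wd):
    """Gain centroid of equation (32); S holds the sums s_0k + s_1k."""
    w2 = Wd ** 2
    return np.sum(w2 * X * S) / np.sum(w2 * S * S)
```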
Next, the optimum encoding condition, that is, the nearest neighbor condition, is considered.




According to the above equation (27) for the distortion measure, the s_0i and s_1j minimizing E = ∥W′(x − g_l(s_0i + s_1j))∥² are found each time the input x and the weight matrix W′ are given, that is, on the frame-by-frame basis.
Intrinsically, E should be found in round robin fashion for all combinations of g_l (0≦l≦31), s_0i (0≦i≦31) and s_1j (0≦j≦31), that is, for 32×32×32 = 32768 combinations, in order to find the set of s_0i, s_1j, g_l which gives the minimum value of E. However, since this requires voluminous calculations, the shape and the gain are searched sequentially in the present embodiment, while round robin search is used for the combination of s_0i and s_1j, of which there are 32×32 = 1024 combinations. In the following description, s_0i + s_1j is indicated as s_m for simplicity.
The above equation (27) then becomes E = ∥W′(x − g_l s_m)∥². If, for further simplicity, we set x_w = W′x and s_w = W′s_m, we obtain

$$E = \| \mathbf{x}_w - g_l \mathbf{s}_w \|^2 \qquad (33)$$

$$E = \| \mathbf{x}_w \|^2 + \| \mathbf{s}_w \|^2 \left( g_l - \frac{\mathbf{x}_w^T \mathbf{s}_w}{\| \mathbf{s}_w \|^2} \right)^2 - \frac{(\mathbf{x}_w^T \mathbf{s}_w)^2}{\| \mathbf{s}_w \|^2} \qquad (34)$$
Therefore, if g_l can be made sufficiently accurate, the search can be performed in the two steps of

(1) searching for the s_w which will maximize

$$\frac{(\mathbf{x}_w^T \mathbf{s}_w)^2}{\| \mathbf{s}_w \|^2}$$
 and




(2) searching for the g_l which is closest to

$$\frac{\mathbf{x}_w^T \mathbf{s}_w}{\| \mathbf{s}_w \|^2}$$
If the above is rewritten using the original notation,




(1)′ searching is made for a set of s_0i and s_1j which will maximize

$$\frac{\left( \mathbf{x}^T W'^T W' (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \right)^2}{\| W' (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \|^2}$$
 and




(2)′ searching is made for the g_l which is closest to

$$\frac{\mathbf{x}^T W'^T W' (\mathbf{s}_{0i} + \mathbf{s}_{1j})}{\| W' (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \|^2} \qquad (35)$$
The above equation (35) represents an optimum encoding condition (nearest neighbor condition).




The processing volume in case of executing codebook search for vector quantization is now considered.




With the dimension of s_0i and s_1j being K, and with the sizes of the codebooks CB0, CB1 being L0 and L1, that is, with

0≦i<L0, 0≦j<L1,

and with the processing volume for an addition, a sum-of-products and a squaring in the numerator each counted as 1, and the processing volume of a product and a sum-of-products in the denominator each counted as 1, the processing volume of (1)′ of the equation (35) is approximately

numerator: L0·L1·(K·(1+1)+1)

denominator: L0·L1·(K·(1+1))

magnitude comparison: L0·L1

to give a sum of L0·L1·(4K+2). If L0 = L1 = 32 and K = 44, the processing volume is on the order of 182272.
Thus, not all of the processing of (1)′ of the equation (35) is executed; instead, P code vectors each of s_0i and s_1j are pre-selected. Since a negative gain entry is not supposed (or allowed), (1)′ of the equation (35) is searched so that the value of the numerator of (2)′ of the equation (35) will always be positive. That is, (1)′ of the equation (35) is maximized inclusive of the polarity of x^T W′^T W′(s_0i + s_1j).
As an illustrative example of the pre-selection method, there may be stated a method of

(sequence 1) selecting the P0 vectors s_0i, counting from the upper order side, which maximize x^T W′^T W′ s_0i;

(sequence 2) selecting the P1 vectors s_1j, counting from the upper order side, which maximize x^T W′^T W′ s_1j; and

(sequence 3) evaluating (1)′ of the equation (35) for all combinations of the P0 vectors s_0i and the P1 vectors s_1j, as sketched in the code below.
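A minimal sketch of these three sequences, assuming a dense weight matrix W′ and shape codebooks stored as row arrays; every name here (preselect_and_search, cb0, cb1, P0, P1) is illustrative rather than from the patent.

```python
import numpy as np

def preselect_and_search(x, W, cb0, cb1, P0=6, P1=6):
    """Pre-select P0 and P1 shape vectors, then evaluate (1)' exactly.

    x: (K,) input; W: (K, K) weight W'; cb0, cb1: (L0, K), (L1, K).
    """
    WtW = W.T @ W                               # W'^T W', computed once
    Wx = WtW @ x
    i_top = np.argsort(cb0 @ Wx)[::-1][:P0]     # sequence 1
    j_top = np.argsort(cb1 @ Wx)[::-1][:P1]     # sequence 2
    best, best_ij = -np.inf, None
    for i in i_top:                             # sequence 3: criterion (1)'
        for j in j_top:
            s = cb0[i] + cb1[j]
            crit = (Wx @ s) ** 2 / (s @ (WtW @ s))
            if crit > best:
                best, best_ij = crit, (i, j)
    return best_ij
```

Sorting by the signed inner product in sequences 1 and 2 keeps the numerator of (2)′ positive, as required above.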
This is effective if, in evaluating the equation (a1):

$$\frac{\mathbf{x}^T W'^T W' (\mathbf{s}_{0i} + \mathbf{s}_{1j})}{\| W' (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \|} \qquad (a1)$$

which is the square root of (1)′ of the equation (35), it may be supposed that the denominator, that is, the weighted norm of s_0i + s_1j, is substantially constant without regard to i or j. In actuality, the magnitude of the denominator of the equation (a1) is not constant. The pre-selection method which takes this into account will be explained subsequently.
Here, the effect of diminishing the processing volume in case the denominator of the equation (a1) is supposed to be constant is explained. The processing volume of L0·K is required for the search of the (sequence 1), while the processing volume of

(L0−1) + (L0−2) + ... + (L0−P0) = P0·L0 − P0(1+P0)/2

is required for the magnitude comparison, so that the sum of the processing volumes is L0(K+P0) − P0(1+P0)/2. The (sequence 2) is in need of a similar processing volume. Summing these together, the processing volume for the pre-selection is

L0(K+P0) + L1(K+P1) − P0(1+P0)/2 − P1(1+P1)/2
Turning to the processing of the ultimate selection of the (sequence 3), the processing of (1)′ of the equation (35) amounts to

numerator: P0·P1·(1+K+1)

denominator: P0·P1·K·(1+1)

magnitude comparison: P0·P1

to give a total of P0·P1·(3K+3).
For example, if P0 = P1 = 6, L0 = L1 = 32 and K = 44, the processing volume for the ultimate selection and that for the pre-selection are 4860 and 3158, respectively, to give a total on the order of 8018. If the numbers for pre-selection are increased to 10, such that P0 = P1 = 10, the processing volume for the ultimate selection is 13500, while that for the pre-selection is 3346, to give a total on the order of 16846.
If the numbers of the pre-selected vectors are set to 10 for the respective codebooks, the processing volume, as compared to that of 182272 for the non-omitted computation, is 16846/182272, which is about one tenth of the former volume.
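These operation counts are easy to re-derive; the throwaway snippet below simply reproduces the figures quoted above.

```python
K, L0, L1 = 44, 32, 32
full = L0 * L1 * (4 * K + 2)                  # round robin: 182272
for P in (6, 10):
    pre = L0 * (K + P) + L1 * (K + P) - P * (1 + P)
    final = P * P * (3 * K + 3)
    print(P, pre, final, pre + final)         # 6  -> 3158 + 4860  = 8018
                                              # 10 -> 3346 + 13500 = 16846
```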
Meanwhile, the magnitude of the denominator of the equation (1)′ of the equation (35) is not constant but is changed in dependence upon the selected code vector. The pre-selection method which takes into account the approximate magnitude of this norm to some extent is now explained.




For finding the maximum value of the equation (a1), which is the square root of (1)′ of the equation (35), since

$$\frac{\mathbf{x}^T W'^T W' (\mathbf{s}_{0i} + \mathbf{s}_{1j})}{\| W' \mathbf{s}_{0i} \| + \| W' \mathbf{s}_{1j} \|} \le \frac{\mathbf{x}^T W'^T W' (\mathbf{s}_{0i} + \mathbf{s}_{1j})}{\| W' (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \|} \qquad (a2)$$

it suffices to maximize the left side of the equation (a2). Thus, this left side is expanded to

$$\frac{\mathbf{x}^T W'^T W' (\mathbf{s}_{0i} + \mathbf{s}_{1j})}{\| W' \mathbf{s}_{0i} \| + \| W' \mathbf{s}_{1j} \|} = \frac{\mathbf{x}^T W'^T W' \mathbf{s}_{0i}}{\| W' \mathbf{s}_{0i} \| + \| W' \mathbf{s}_{1j} \|} + \frac{\mathbf{x}^T W'^T W' \mathbf{s}_{1j}}{\| W' \mathbf{s}_{0i} \| + \| W' \mathbf{s}_{1j} \|} \qquad (a3)$$

the first and second terms of which are then maximized.
Since the numerator of the first term of the equation (a3) is a function only of s_0i, the first term is maximized with respect to s_0i. On the other hand, since the numerator of the second term of the equation (a3) is a function only of s_1j, the second term is maximized with respect to s_1j. That is, there is specified a method using

$$\frac{\mathbf{x}^T W'^T W' \mathbf{s}_{0i}}{\| W' \mathbf{s}_{0i} \|} \qquad (a4)$$

$$\frac{\mathbf{x}^T W'^T W' \mathbf{s}_{1j}}{\| W' \mathbf{s}_{1j} \|} \qquad (a5)$$

and including

(sequence 1): selecting the Q0 vectors s_0i from the upper order ones of the vectors which maximize the equation (a4);

(sequence 2): selecting the Q1 vectors s_1j from the upper order ones of the vectors which maximize the equation (a5); and

(sequence 3): evaluating (1)′ of the equation (35) for all combinations of the selected Q0 vectors s_0i and the selected Q1 vectors s_1j.
Meanwhile, W′ = WH/∥x∥, with both W and H being functions of the input vector x, so that W′ naturally is also a function of the input vector x.
Therefore, W′ should inherently be computed anew from one input vector x to another in order to compute the denominators of the equations (a4) and (a5). However, it is not desirable to consume the processing volume excessively for pre-selection. Therefore, these denominators are previously calculated for each of s_0i and s_1j, using a typical or representative value of W′, and stored in a table along with the values of s_0i and s_1j. Meanwhile, since division in the actual search processing represents a processing load, the values of the equations (a6) and (a7):

$$\frac{1}{\| W^{*} \mathbf{s}_{0i} \|} \quad (0 \le i < L_0) \qquad (a6)$$

$$\frac{1}{\| W^{*} \mathbf{s}_{1j} \|} \quad (0 \le j < L_1) \qquad (a7)$$
are stored. In the above equations, W* is given by the following equation (a8):

$$W^{*} = \frac{1}{N} \sum_{k=1}^{N} W_k' \qquad (a8)$$
where W_k′ is the W′ of a frame for which the V/UV decision has been found to be voiced, such that

$$W' = \frac{WH}{\| \mathbf{x} \|} \qquad (a9)$$

FIG. 10 shows a specified example of each of W[0] to W[43] in case W* is described by the following equation (a10):

$$W^{*} = \begin{bmatrix} W[0] & & & 0 \\ & W[1] & & \\ & & \ddots & \\ 0 & & & W[43] \end{bmatrix} \qquad (a10)$$
As for the numerators of the equations (a4) and (a5), W′ is found and used anew from one input vector x to another. The reason is that, since an inner product of s_0i and s_1j with x needs to be calculated at any rate, the processing volume is increased only slightly if x^T W′^T W′ is calculated once.
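The table-based normalization can be sketched as follows, with the weights kept as diagonal vectors; build_norm_tables corresponds to the equations (a6) and (a7) with the representative W* of (a8), and all names and shapes are illustrative assumptions.

```python
import numpy as np

def build_norm_tables(cb0, cb1, w_star):
    """Reciprocal weighted norms 1/||W* s||, stored once per code vector."""
    inv0 = 1.0 / np.linalg.norm(cb0 * w_star, axis=1)   # (a6)
    inv1 = 1.0 / np.linalg.norm(cb1 * w_star, axis=1)   # (a7)
    return inv0, inv1

def normalized_preselect(x, w, cb0, cb1, inv0, inv1, Q0=6, Q1=6):
    """Sequences 1 and 2: rank by (a4), (a5) using the tabulated norms.

    w is the per-frame diagonal of W', so W'^T W' x is just w * w * x,
    and the stored reciprocals avoid divisions during the search.
    """
    Wx = (w * w) * x
    i_top = np.argsort((cb0 @ Wx) * inv0)[::-1][:Q0]
    j_top = np.argsort((cb1 @ Wx) * inv1)[::-1][:Q1]
    return i_top, j_top
```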
On an approximate estimation of the processing volume required in this pre-selection method, the processing volume of L0(K+1) is required for the search of the (sequence 1), while the processing volume of

Q0·L0 − Q0(1+Q0)/2

is required for the magnitude comparison. The above (sequence 2) is in need of similar processing. Summing these processing volumes together, the processing volume for the pre-selection is

L0(K+Q0+1) + L1(K+Q1+1) − Q0(1+Q0)/2 − Q1(1+Q1)/2
As for the processing of the ultimate selection of the (sequence 3),

numerator: Q0·Q1·(1+K+1)

denominator: Q0·Q1·K·(1+1)

magnitude comparison: Q0·Q1

totaling at Q0·Q1·(3K+3).
For example, if Q0 = Q1 = 6, L0 = L1 = 32 and K = 44, the processing volume of the ultimate selection and that of the pre-selection are 4860 and 3222, respectively, totaling on the order of 8082. If the number of vectors for pre-selection is increased to 10, such that Q0 = Q1 = 10, the processing volume of the ultimate selection and that of the pre-selection are 13500 and 3410, respectively, totaling on the order of 16910.
These computed results are of the same order of magnitude as the processing volume of approximately 8018 for P0 = P1 = 6, or approximately 16846 for P0 = P1 = 10, in the absence of normalization (that is, in the absence of division by the weighted norm). For example, if the numbers of vectors for the respective codebooks are set to 10, the processing volume is decreased to 16910/182272, where 182272 is the processing volume without omission. Thus the processing volume is decreased to not more than one tenth of the original processing volume.
By way of a specified example of the SNR (S/N ratio) in case pre-selection is made, with the speech analyzed and synthesized in the absence of the above-described pre-selection used as the reference and the segmental SNR taken over 20 msec segments, the SNR is 16.8 dB and the segmental SNR is 18.7 dB in the presence of normalization and in the absence of weighting, while the SNR is 17.8 dB and the segmental SNR is 19.6 dB in the presence of both weighting and normalization, with the same number of vectors for pre-selection, as compared to the SNR of 14.8 dB and the segmental SNR of 17.5 dB in the absence of normalization and with P0 = P1 = 6. That is, the SNR and the segmental SNR are improved by 2 to 3 dB by using the operation in the presence of weighting and normalization instead of the operation in the absence of normalization.
Using the conditions (centroid conditions) of the equations (31) and (32) and the condition of the equation (35), the codebooks (CB0, CB1 and CBg) can be trained simultaneously with the use of the so-called generalized Lloyd algorithm (GLA).
In the present embodiment, W′ divided by a norm of the input x is used as W′. That is, W′/∥x∥ is substituted for W′ in the equations (31), (32) and (35).
Alternatively, the weighting W′, used for perceptual weighting at the time of vector quantization by the vector quantizer 116, is defined by the above equation (26). However, a weighting W′ taking temporal masking into account can also be found by finding the current weighting W′ in which the past W′ has been taken into account.
The values of wh(1), wh(2), ..., wh(L) in the above equation (26), as found at the time n, that is, at the n'th frame, are indicated as whn(1), whn(2), ..., whn(L), respectively.
If the weights at time n, taking past values into account, are defined as An(i), where 1≦i≦L,

$$A_n(i) = \begin{cases} \lambda A_{n-1}(i) + (1-\lambda)\,whn(i) & \big(whn(i) \le A_{n-1}(i)\big) \\ whn(i) & \big(whn(i) > A_{n-1}(i)\big) \end{cases}$$
where λ may be set to, for example, λ = 0.2. A matrix having the An(i), with 1≦i≦L, thus found as diagonal elements may be used as the above weighting.
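The recursion admits a one-line vectorized sketch, assuming the weights are held as numpy arrays; the names are illustrative.

```python
import numpy as np

def update_weights(A_prev, wh_n, lam=0.2):
    """Temporal-masking update: decay toward wh_n when it falls,
    jump to wh_n immediately when it rises (the case equation above)."""
    decayed = lam * A_prev + (1.0 - lam) * wh_n
    return np.where(wh_n > A_prev, wh_n, decayed)
```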
The shape index values s_0i, s_1j, obtained by the weighted vector quantization in this manner, are outputted at output terminals 520, 522, respectively, while the gain index g_l is outputted at an output terminal 521. Also, the quantized value x_0′ is outputted at the output terminal 504, while being sent to the adder 505.
The adder 505 subtracts the quantized value x_0′ from the spectral envelope vector x to generate the quantization error vector y. This quantization error vector y is sent to the vector quantization unit 511 so as to be dimensionally split and quantized, with weighted vector quantization, by the vector quantizers 511_1 to 511_8. The second vector quantization unit 510 uses a larger number of bits than the first vector quantization unit 500. Consequently, the memory capacity of the codebook and the processing volume (complexity) for codebook searching would be increased significantly, making it impossible to carry out vector quantization in the 44 dimensions, the same as in the first vector quantization unit 500. Therefore, the vector quantization unit 511 in the second vector quantization unit 510 is made up of plural vector quantizers, and the input quantized values are dimensionally split into plural low-dimensional vectors for performing the weighted vector quantization.
The relation between the quantized values y_0 to y_7 used in the vector quantizers 511_1 to 511_8, the number of dimensions and the number of bits is shown in FIG. 11.
The index values Id_vq0 to Id_vq7 outputted from the vector quantizers 511_1 to 511_8 are outputted at output terminals 523_1 to 523_8. The sum of the bits of these index data is 72.
If the value obtained by connecting the output quantized values y_0′ to y_7′ of the vector quantizers 511_1 to 511_8 in the dimensional direction is y′, the quantized values y′ and x_0′ are summed by the adder 513 to give the quantized value x_1′. Therefore, the quantized value x_1′ is represented by

$$\mathbf{x}_1' = \mathbf{x}_0' + \mathbf{y}' = \mathbf{x} - \mathbf{y} + \mathbf{y}'$$
That is, the ultimate quantization error vector is y′−y.




If the quantized value x_1′ from the second vector quantization unit 510 is to be decoded, the speech signal decoding apparatus is not in need of the quantized value x_0′ from the first quantization unit 500. However, it is in need of the index data from both the first quantization unit 500 and the second quantization unit 510.
The learning method and the codebook search in the vector quantization section 511 will now be explained.
As for the learning method, the quantization error vector y is divided into eight low-dimension vectors y_0 to y_7, using the weight W′, as shown in FIG. 11. If the weight W′ is a matrix having the 44-point sub-sampled values as diagonal elements:

$$W' = \begin{bmatrix} wh(1) & & & 0 \\ & wh(2) & & \\ & & \ddots & \\ 0 & & & wh(44) \end{bmatrix} \qquad (36)$$
the weight W′ is split into the following eight matrices:

$$W_1' = \mathrm{diag}\big(wh(1), \ldots, wh(4)\big), \quad
W_2' = \mathrm{diag}\big(wh(5), \ldots, wh(8)\big), \quad
W_3' = \mathrm{diag}\big(wh(9), \ldots, wh(12)\big), \quad
W_4' = \mathrm{diag}\big(wh(13), \ldots, wh(16)\big)$$

$$W_5' = \mathrm{diag}\big(wh(17), \ldots, wh(20)\big), \quad
W_6' = \mathrm{diag}\big(wh(21), \ldots, wh(28)\big), \quad
W_7' = \mathrm{diag}\big(wh(29), \ldots, wh(36)\big), \quad
W_8' = \mathrm{diag}\big(wh(37), \ldots, wh(44)\big)$$
y and W′, thus split into low dimensions, are termed y_i and W_i′, where 1≦i≦8, respectively.
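The dimensional split can be sketched directly from the boundaries of the eight matrices above (five 4-dimensional pieces and three 8-dimensional pieces); the helper below is an illustrative assumption, not the patent's code.

```python
import numpy as np

def split_for_second_stage(y, wh, bounds=(0, 4, 8, 12, 16, 20, 28, 36, 44)):
    """Split the 44-dim error vector y and diagonal weight wh into the
    eight low-dimension pieces y_i, W_i' (dims 4,4,4,4,4,8,8,8)."""
    return [(y[a:b], wh[a:b]) for a, b in zip(bounds[:-1], bounds[1:])]
```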
The distortion measure E is defined as

$$E = \| W_i' (\mathbf{y}_i - \mathbf{s}) \|^2 \qquad (37)$$
The codebook vector s is the result of the quantization of y_i. Such a code vector of the codebook as minimizes the distortion measure E is searched.
In the codebook learning, further weighting is performed using the generalized Lloyd algorithm (GLA). The optimum centroid condition for learning is first explained. If there are M input vectors y which have selected the code vector s as the optimum quantization result, and the training data is y_k, the expected value of the distortion J is given by the equation (38), minimizing the center of distortion on weighting with respect to all frames k:

$$\begin{aligned}
J &= \frac{1}{M} \sum_{k=1}^{M} \left\| W_k' (\mathbf{y}_k - \mathbf{s}) \right\|^2
= \frac{1}{M} \sum_{k=1}^{M} (\mathbf{y}_k - \mathbf{s})^T W_k'^T W_k' (\mathbf{y}_k - \mathbf{s}) \\
&= \frac{1}{M} \sum_{k=1}^{M} \left\{ \mathbf{y}_k^T W_k'^T W_k' \mathbf{y}_k - 2 \mathbf{y}_k^T W_k'^T W_k' \mathbf{s} + \mathbf{s}^T W_k'^T W_k' \mathbf{s} \right\}
\end{aligned} \qquad (38)$$
Solving

$$\frac{\partial J}{\partial \mathbf{s}} = \frac{1}{M} \sum_{k=1}^{M} \left( -2 \mathbf{y}_k^T W_k'^T W_k' + 2 \mathbf{s}^T W_k'^T W_k' \right) = 0$$
we obtain

$$\sum_{k=1}^{M} \mathbf{y}_k^T W_k'^T W_k' = \sum_{k=1}^{M} \mathbf{s}^T W_k'^T W_k'$$
Taking transposed values of both sides, we obtain

$$\sum_{k=1}^{M} W_k'^T W_k' \mathbf{y}_k = \sum_{k=1}^{M} W_k'^T W_k' \mathbf{s}$$
Therefore,

$$\mathbf{s} = \left( \sum_{k=1}^{M} W_k'^T W_k' \right)^{-1} \sum_{k=1}^{M} W_k'^T W_k' \mathbf{y}_k \qquad (39)$$
In the above equation (39), s is an optimum representative vector and represents an optimum centroid condition.




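For diagonal W_k′ the centroid of the equation (39) reduces to a per-component weighted mean, which the following sketch computes; the shapes and names are illustrative assumptions.

```python
import numpy as np

def weighted_centroid(Y, Wd):
    """Optimum centroid s of equation (39) for diagonal weights.

    Y: (M, K) training vectors assigned to this code vector;
    Wd: (M, K) per-frame diagonal elements of W'_k. With diagonal W'_k,
    the inverse in (39) becomes an element-wise division by sum(wh^2).
    """
    w2 = Wd ** 2
    return (w2 * Y).sum(axis=0) / w2.sum(axis=0)
```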
As for the optimum encoding condition, it suffices to search for the s minimizing the value of ∥W_i′(y_i − s)∥². The W_i′ used during searching need not be the same as the W_i′ used during learning, and may be the non-weighted matrix:

$$\begin{bmatrix} 1 & & & 0 \\ & 1 & & \\ & & \ddots & \\ 0 & & & 1 \end{bmatrix}$$
By constituting the vector quantization unit 116 in the speech signal encoder of two-stage vector quantization units, it becomes possible to render the number of output index bits variable.
Meanwhile, the number of data of the spectral components of the harmonics, obtained at the spectral envelope evaluation unit 148, changes with the pitch, such that, if, for example, the effective frequency band is 3,400 Hz, the number of data ranges from 8 to 63. The vector v, comprised of these data blocked together, is a variable-dimension vector. In the above specified example, vector quantization is preceded by dimensional conversion into a pre-set number of data, such as the 44-dimensional input vector x. This variable/fixed dimensional conversion is the above-mentioned data number conversion, and may be implemented specifically using the above-mentioned oversampling and linear interpolation.
If error processing is performed on the vector x, thus converted into the fixed dimension, with codebook searching for minimizing the error, the code vector selected is not necessarily the one which minimizes the error with respect to the original variable-dimension vector v.
Thus, with the present embodiment, plural code vectors are selected temporarily in selecting the code vectors of the fixed dimension, and the ultimate optimum variable-dimension code vector is then selected from these temporarily selected plural code vectors. Meanwhile, only the variable-dimension selective processing may be executed, without executing the fixed-dimension transient selection.

FIG. 12 shows an illustrative structure for such optimum vector selection in the original variable dimension. To an input terminal 541 are entered the variable number of data of the spectral envelope obtained by the spectral envelope evaluation unit 148, that is, the variable-dimension vector v. This variable-dimension input vector v is converted by a variable/fixed dimension conversion circuit 542, serving as the above-mentioned data number conversion circuit, into a fixed-dimension vector x (such as a 44-dimensional vector made up of 44 data), which is sent to a terminal 501. The fixed-dimension input vector x and the fixed-dimension code vectors read out from a fixed-dimension codebook 530 are sent to a fixed-dimension selection circuit 535, where a selective operation, or codebook searching, is carried out which selects from the codebook 530 such a code vector as will reduce the weighted error or distortion therebetween to a minimum.
In the embodiment of FIG. 12, the fixed-dimension code vector obtained from the fixed-dimension codebook 530 is converted by a fixed/variable dimension conversion circuit 544 into the same variable dimension as the original dimension. The converted code vectors are sent to a variable-dimension selection circuit 545, which calculates the weighted distortion between each code vector and the input vector v; selective processing, or codebook searching, is then carried out for selecting from the codebook 530 the code vector which will reduce the distortion to a minimum.
That is, the fixed-dimension selection circuit 535 selects, by way of transient selection, several code vectors as candidate code vectors which will minimize the weighted distortion, and weighted distortion calculations are executed in the variable-dimension selection circuit 545 on these candidate code vectors for ultimately selecting the code vector which will reduce the distortion to a minimum.
The range of application of the vector quantization employing the transient selection and the ultimate selection is now briefly explained. This vector quantization can be applied not only to the weighted vector quantization of the variable-dimension harmonics using dimensional conversion of the spectral components of the harmonics in harmonic coding, harmonic coding of LPC residuals, multi-band excitation (MBE) encoding as disclosed by the present Assignee in the Japanese Laid-Open Patent 4-91422, or MBE encoding of LPC residuals, but also to vector quantization of a variable-dimension input vector using a fixed-dimension codebook in general.
For the transient selection, it is possible to select part of the multi-stage quantizer configuration or, if a codebook is comprised of a shape codebook and a gain codebook, to search only the shape codebook for the transient selection and to determine the gain by variable-dimension distortion calculations. Alternatively, the above-mentioned pre-selection may be used for the transient selection. Specifically, the similarity between the vector x of the fixed dimension and all code vectors stored in the codebook may be found by approximations (approximations of the weighted distortion) for selecting plural code vectors bearing a high degree of similarity. In this case, it is possible to execute the transient fixed-dimension selection by the above-mentioned pre-selection and to execute the ultimate selection, on the pre-selected candidate code vectors, of the vector which will minimize the weighted distortion in the variable dimension. It is alternatively possible to execute not only the pre-selection but also high-precision distortion calculations for precise selection prior to performing the ultimate selection.
Referring to the drawings, specified examples of vector quantization employing the transient selection and ultimate selection will be explained in detail.




In FIG. 12, the codebook 530 is made up of a shape codebook 531 and a gain codebook 532. The shape codebook 531 is made up of two codebooks CB0, CB1, whose output code vectors are denoted as s_0, s_1, while the gain of a gain circuit 533, as determined by the gain codebook 532, is denoted as g. The variable-dimension input vector v from an input terminal 541 is processed with dimensional conversion (referred to herein as D_1) by the variable/fixed dimension conversion circuit 542 and thence supplied via the terminal 501, as a fixed-dimension vector x, to a subtractor 536 of the selection circuit 535, where the difference of the vector x from the fixed-dimension code vector read out from the codebook 530 is found and weighted by a weighting circuit 537 so as to be supplied to an error minimizing circuit 538. The weighting circuit 537 applies a weight W′. The fixed-dimension code vector read out from the codebook 530 is also processed with dimensional conversion (referred to herein as D_2) by the fixed/variable dimension conversion circuit 544 and thence supplied to a subtractor 546 of the variable-dimension selection circuit 545, where the difference of the code vector from the variable-dimension input vector v is taken and weighted by a weighting circuit 547 so as to be supplied to an error minimizing circuit 548. The weighting circuit 547 applies a weight W_v.
The error in the error minimizing circuits 538, 548 means the above-mentioned distortion or distortion measure; the error or distortion becoming small is equivalent to the similarity or correlation becoming high.
The selection circuit 535, executing the fixed-dimension transient selection, searches for the s_0, s_1, g which will minimize the distortion measure E_1 represented by the equation (b1):

$$E_1 = \| W' (\mathbf{x} - g (\mathbf{s}_0 + \mathbf{s}_1)) \|^2 \qquad (b1)$$

substantially as explained with reference to the equation (27).
It is noted that the weight W′ in the weighting circuit 537 is given by

$$W' = \frac{WH}{\| \mathbf{x} \|} \qquad (b2)$$

where H denotes a matrix having the frequency response characteristics of the LPC synthesis filter as diagonal elements and W denotes a matrix having the frequency response characteristics of the perceptual weighting filter as diagonal elements.
First, the s_0, s_1, g which will minimize the distortion measure E_1 of the equation (b1) are searched. By way of the transient selection in the fixed dimension, L sets of s_0, s_1, g are taken, beginning from the upper order side, in the order of increasing distortion measure E_1. The ultimate selection is then carried out, from among these L sets of s_0, s_1, g, of the set which minimizes

$$E_2 = \| W_v (\mathbf{v} - D_2\, g (\mathbf{s}_0 + \mathbf{s}_1)) \|^2 \qquad (b3)$$

as the optimum code vector.
The searching and learning for the equation (b1) is as explained with reference to the equation (27) and the following equations.




The centroid condition for codebook learning based on the equation (b3) is now explained.




For the codebook CB0, as one of the shape codebooks 531 in the codebook 530, the expected value of the distortion concerning all frames k from which the code vector s_0c is selected is minimized. If there are M such frames, it suffices to minimize

$$J = \frac{1}{M} \sum_{k=1}^{M} \left\| W_{vk} \left( \mathbf{v}_k - g_k D_{2k} (\mathbf{s}_{0c} + \mathbf{s}_{1k}) \right) \right\|^2 \qquad (b4)$$
For minimizing the equation (b4), the equation (b5):

$$\frac{\partial J}{\partial \mathbf{s}_{0c}} = \frac{1}{M} \sum_{k=1}^{M} \left\{ -2 g_k D_{2k}^T W_{vk}^T W_{vk} \mathbf{v}_k + 2 g_k^2 D_{2k}^T W_{vk}^T W_{vk} D_{2k} \mathbf{s}_{0c} + 2 g_k^2 D_{2k}^T W_{vk}^T W_{vk} D_{2k} \mathbf{s}_{1k} \right\} = 0 \qquad (b5)$$
is solved to give

$$\mathbf{s}_{0c} = \left\{ \sum_{k=1}^{M} g_k^2 D_{2k}^T W_{vk}^T W_{vk} D_{2k} \right\}^{-1} \times \sum_{k=1}^{M} \left\{ g_k D_{2k}^T W_{vk}^T W_{vk} \left( \mathbf{v}_k - g_k D_{2k} \mathbf{s}_{1k} \right) \right\} \qquad (b6)$$
In this equation (b6), ( )^{-1} denotes an inverse matrix and W_vk^T denotes a transposed matrix of W_vk. This equation (b6) represents an optimum centroid condition for the shape vector s_0.
The selection of the code vector s_1 for the codebook CB1, the other shape codebook 531 in the codebook 530, is carried out in the same manner as described above, and hence the description is omitted for simplicity.
The centroid condition for the gain g from the gain codebook 532 in the codebook 530 is now considered.
The expected value of the distortion for the k'th frame from which the code word g_c is selected is given by the equation (b7):

$$J_g = \frac{1}{M} \sum_{k=1}^{M} \left\| W_{vk} \left( \mathbf{v}_k - g_c D_{2k} (\mathbf{s}_{0k} + \mathbf{s}_{1k}) \right) \right\|^2 \qquad (b7)$$
For minimizing the equation (b7), the following equation (b8):

$$\frac{\partial J_g}{\partial g_c} = \frac{1}{M} \sum_{k=1}^{M} \left\{ -2 \mathbf{v}_k^T W_{vk}^T W_{vk} D_{2k} (\mathbf{s}_{0k} + \mathbf{s}_{1k}) + 2 g_c \left( D_{2k} (\mathbf{s}_{0k} + \mathbf{s}_{1k}) \right)^T W_{vk}^T W_{vk} D_{2k} (\mathbf{s}_{0k} + \mathbf{s}_{1k}) \right\} = 0 \qquad (b8)$$
is solved to give

$$g_c = \frac{\displaystyle\sum_{k=1}^{M} \mathbf{v}_k^T W_{vk}^T W_{vk} D_{2k} (\mathbf{s}_{0k} + \mathbf{s}_{1k})}{\displaystyle\sum_{k=1}^{M} \left( D_{2k} (\mathbf{s}_{0k} + \mathbf{s}_{1k}) \right)^T W_{vk}^T W_{vk} D_{2k} (\mathbf{s}_{0k} + \mathbf{s}_{1k})} \qquad (b9)$$
This equation (b9) represents the centroid condition for the gain.




Next, the nearest neighbor condition based on the equation (b3) is considered.




Since the number of the sets of s_0, s_1, g to be searched by the equation (b3) is limited to L by the transient selection in the fixed dimension, the equation (b3) is calculated directly for the L sets of s_0, s_1, g, in order to select, as the optimum code vector, the set of s_0, s_1, g which minimizes the distortion E_2.
The method of sequentially searching for the shape and the gain, which is effective when L for the transient selection is very large, or when s_0, s_1, g are selected directly in the variable dimension without executing the transient selection, is now explained.
If indices i, j and l are attached to s_0, s_1, g of the equation (b3) and the equation (b3) is rewritten in this form, we obtain:

$$E_2 = \| W_v (\mathbf{v} - D_2\, g_l (\mathbf{s}_{0i} + \mathbf{s}_{1j})) \|^2 \qquad (b10)$$
Although the g_l, s_0i, s_1j which minimize the equation (b10) can be searched in round robin fashion, with 0≦l<32, 0≦i<32 and 0≦j<32 the above equation (b10) would need to be calculated for 32³ = 32768 patterns, thus leading to voluminous processing. The method of sequentially searching the shape and the gain is therefore now explained.
The gain g_l is determined after deciding the shape code vectors s_0i, s_1j. Setting s_0i + s_1j = s_m, the equation (b10) can be represented by

$$E_2 = \| W_v (\mathbf{v} - D_2\, g_l \mathbf{s}_m) \|^2 \qquad (b11)$$

If we set v_w = W_v v and s_w = W_v D_2 s_m, the equation (b11) becomes

$$E_2 = \| \mathbf{v}_w - g_l \mathbf{s}_w \|^2 = \| \mathbf{v}_w \|^2 + \| \mathbf{s}_w \|^2 \left( g_l - \frac{\mathbf{v}_w^T \mathbf{s}_w}{\| \mathbf{s}_w \|^2} \right)^2 - \frac{(\mathbf{v}_w^T \mathbf{s}_w)^2}{\| \mathbf{s}_w \|^2} \qquad (b12)$$
Therefore, if g_l can be of sufficient precision, the s_w which maximizes

$$\frac{(\mathbf{v}_w^T \mathbf{s}_w)^2}{\| \mathbf{s}_w \|^2} \qquad (b13)$$

and the g_l closest to

$$\frac{\mathbf{v}_w^T \mathbf{s}_w}{\| \mathbf{s}_w \|^2} \qquad (b14)$$

are searched.
Rewriting the equations (b13) and (b14) by substituting the original variables, we obtain the following equations (b15) and (b16).




The set of s_0i, s_1j which maximizes

$$\frac{\left( \mathbf{v}^T W_v^T W_v D_2 (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \right)^2}{\| W_v D_2 (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \|^2} \qquad (b15)$$

and the g_l closest to

$$\frac{\mathbf{v}^T W_v^T W_v D_2 (\mathbf{s}_{0i} + \mathbf{s}_{1j})}{\| W_v D_2 (\mathbf{s}_{0i} + \mathbf{s}_{1j}) \|^2} \qquad (b16)$$

are searched.
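A sketch of this sequential shape-then-gain search per the equations (b15) and (b16) follows; W_v and D_2 are dense matrices here, and all names are illustrative assumptions.

```python
import numpy as np

def sequential_search(v, Wv, D2, cb0, cb1, gains):
    """Pick (i, j) maximizing (b15), then the gain code nearest (b16)."""
    vw = Wv @ v
    best, best_ij, g_opt = -np.inf, None, 0.0
    for i in range(len(cb0)):
        for j in range(len(cb1)):
            sw = Wv @ (D2 @ (cb0[i] + cb1[j]))   # W_v D_2 (s_0i + s_1j)
            corr = vw @ sw
            crit = corr * corr / (sw @ sw)       # criterion (b15)
            if crit > best:
                best = crit
                best_ij = (i, j)
                g_opt = corr / (sw @ sw)         # optimum scalar gain (b16)
    l = int(np.argmin(np.abs(np.asarray(gains) - g_opt)))
    return best_ij, l
```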
Using the centroid conditions for the shape and the gain of the equations (b6) and (b9) and the optimum encoding conditions (nearest neighbor conditions) of the equations (b15) and (b16), the codebooks (CB0, CB1, CBg) can be learned simultaneously by the generalized Lloyd algorithm (GLA).
As compared to the method employing the equation (27) and so forth, in particular the equations (31), (32) and (35), described previously, the learning method employing the above equations (b6), (b9), (b15) and (b16) is superior in minimizing the distortion with respect to the original variable-dimension input vector v.
However, since the processing by the equations (b6) and (b9), in particular the equation (b6), is complex, the centroid condition derived from optimizing the equation (b1), that is, the equation (27), may be used, with only the nearest neighbor conditions of the equations (b15) and (b16) employed.
It is also advisable to use the method as explained with reference to the Equation (27) and so forth during codebook learning and to use the method employing the equations (b15), (b16) only during searching. It is also possible to execute transient selection in the fixed dimension by the method explained with reference to the equation (27) and so forth and to directly evaluate the equation (b3) only for the set of selected plural (L) vectors for searching.




In any case, by using the search by distortion evaluation by the equation (b3) after the transient selection or in the round robin fashion, it becomes ultimately possible to carry out learning or code vector search with less distortion.




The reason why it is desirable to carry out distortion calculations in the same variable dimension as that of the original input vector v is briefly explained.




If the minimization of the distortion in the fixed dimension were coincident with that in the variable dimension, distortion minimization in the variable dimension would be unnecessary. However, since the dimensional conversion D_2 by the fixed/variable dimension conversion circuit 544 is not an orthogonal matrix, the two minimizations are not coincident with each other. Thus, minimizing the distortion in the fixed dimension is not necessarily minimizing the distortion in the variable dimension, so that, if the resulting variable-dimension vector is to be optimized, it becomes necessary to minimize the distortion in the variable dimension.

FIG. 13 shows an instance in which, with the codebook divided into a shape codebook and a gain codebook, the gain is applied in the variable dimension and the distortion is optimized in the variable dimension.
Specifically, the code vector of the fixed dimension read out from the shape codebook 531 is sent to the fixed/variable dimension conversion circuit 544 for conversion into a vector of the variable dimension, which is then sent to the gain circuit 533. It is sufficient if the selection circuit 545 selects the optimum gain in the gain circuit 533 for the code vector processed with the fixed/variable dimension conversion, based on the variable-dimension code vector from the gain circuit 533 and on the input vector v. Alternatively, the optimum gain may be selected based on the inner product of the input vector to the gain circuit 533 and the input vector v. The structure and the operation are otherwise the same as those of the embodiment shown in FIG. 12.
Turning to the shape codebook 531, the sole code vector may be selected during the selection in the fixed dimension in the selection circuit 535, while the selection in the variable dimension may be made only of the gain.
By multiplying the code vector converted by the fixed/variable dimension conversion circuit 544 with the gain, an optimum gain can be selected with the effect of the fixed/variable dimension conversion taken into account, in contrast to the method of FIG. 12, in which the code vector already multiplied by the gain is processed with the fixed/variable dimension conversion.
A further example of vector quantization combining transient selection in the fixed dimension and ultimate selection in the variable dimension is now explained.




In the following example, the first code vector of the fixed dimension, read out from the first codebook, is converted into the variable dimension of the input vector, and the second code vector of the fixed dimension, read out from the second codebook, is summed with the first code vector of the variable dimension processed by the fixed/variable dimension conversion as described above. From the sum code vectors resulting from the addition, an optimum code vector minimizing the error with respect to the input vector is selected from at least the second codebook.
In the example of FIG. 14, the first code vector s_0 of the fixed dimension, read out from the first codebook CB0, is sent to the fixed/variable dimension conversion circuit 544 so as to be converted into the variable dimension equal to that of the input vector v at the terminal 541. The second code vector s_1 of the fixed dimension, read out from the second codebook CB1, is sent to an adder 549 so as to be summed with the code vector of the variable dimension from the fixed/variable dimension conversion circuit 544. The resulting sum vector of the adder 549 is sent to the selection circuit 545, where the optimum code vector minimizing the error with respect to the input vector v is selected. The code vector of the second codebook CB1 is applied to a range from the low side of the harmonics of the input vector up to the dimension of the codebook CB1. The gain circuit 533 of the gain g is provided only between the first codebook CB0 and the fixed/variable dimension conversion circuit 544. Since the structure is otherwise the same as that of FIG. 12, similar portions are depicted by the same reference numerals and the corresponding description is omitted for simplicity.
Thus, the code vector remaining in the fixed dimension from the codebook CB1 and the code vector read out from the codebook CB0 and converted into the variable dimension are summed together, so that the distortion produced by the fixed/variable dimension conversion is corrected by the fixed-dimension code vector from the codebook CB1.
A distortion E3 calculated by the selection circuit 545 of FIG. 14 is given by:

E_3 = \| W_v (v - (D_2 g s_0 + s_1)) \|^2   (b17)
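By way of illustration, the following Python sketch shows how the distortion of the equation (b17) might be evaluated; this is not part of the patent text, all names are hypothetical, and the dimension conversion D2 is modeled simply as a matrix mapping the fixed dimension to the variable dimension:

```python
import numpy as np

def distortion_b17(Wv, v, D2, g, s0, s1):
    """Distortion E3 = ||Wv (v - (D2 (g s0) + s1))||^2, per equation (b17).

    Wv : (n, n) weighting matrix in the variable dimension n
    v  : (n,) input vector (harmonics spectral envelope)
    D2 : (n, m) fixed-to-variable dimension conversion matrix
    g  : scalar gain applied to the first code vector
    s0 : (m,) first code vector, in the fixed dimension m
    s1 : (n,) second code vector, zero-padded beyond its own dimension
    """
    e = v - (D2 @ (g * s0) + s1)
    return float(np.sum((Wv @ e) ** 2))
```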






In the example of FIG. 15, the gain circuit 533 is arranged on the output side of the adder 549. Thus, the result of addition of the code vector read out from the first codebook CB0 and converted by the fixed/variable dimension conversion circuit 544 and the code vector read out from the second codebook CB1 is multiplied with the gain g. A common gain is used because the gain to be multiplied with the code vector from the codebook CB0 exhibits strong similarity to the gain to be multiplied with the code vector from the codebook CB1 for the correcting portion (quantization of the quantization error). The distortion E4 calculated by the selection circuit 545 of FIG. 15 is given by:

E_4 = \| W_v (v - g (D_2 s_0 + s_1)) \|^2   (b18)

This example is otherwise the same as the example of FIG. 14 and hence the explanation is omitted for simplicity.




In the example of FIG. 16, not only is a gain circuit 533A having a gain g provided on the output side of the first codebook CB0 of the example of FIG. 14, but a gain circuit 533B having the gain g is also provided on the output side of the second codebook CB1. The distortion calculated by the selection circuit 545 of FIG. 16 is equal to the distortion E4 shown in the equation (b18). The configuration of the example of FIG. 16 is otherwise the same as that of the example of FIG. 14, so that the corresponding description is omitted for simplicity.





FIG. 17 shows an example in which the first codebook of FIG. 14 is constructed by two shape codebooks CB0, CB1. The code vectors s0, s1 from these shape codebooks are summed together and the resulting sum is multiplied by the gain g by the gain circuit 533 before being sent to the fixed/variable dimension conversion circuit 544. The variable dimension code vector from the fixed/variable dimension conversion circuit 544 and the code vector s2 from the second codebook CB2 are summed together by the adder 549 before being sent to the selection circuit 545. The distortion E5 as found by the selection circuit 545 of FIG. 17 is given by:








E_5 = \| W_v (v - (g D_2 (s_0 + s_1) + s_2)) \|^2   (b19)






The configuration of the example of FIG. 17 is otherwise the same as that of the example of FIG. 14, so that the corresponding description is omitted for simplicity.




The searching method for the equation (b18) is now explained.




As an example, the first searching method includes searching for s_{0i} and g_1 which minimize

E_4' = \| W' (x - g_1 s_{0i}) \|^2   (b20)




and then searching for s_{1j} which minimizes

E_4 = \| W_v (v - g_1 (D_2 s_{0i} + s_{1j})) \|^2   (b21)






As another example, such s_{0i} that maximizes

\frac{(s_{0i}^T W'^T W' x)^2}{\| W' s_{0i} \|^2}   (b22)













is searched, such s_{1j} that maximizes

\frac{(v^T W_v^T W_v (D_2 s_{0i} + s_{1j}))^2}{\| W_v (D_2 s_{0i} + s_{1j}) \|^2}   (b23)













is searched, and such gain g_1 that is closest to

\frac{v^T W_v^T W_v (D_2 s_{0i} + s_{1j})}{\| W_v (D_2 s_{0i} + s_{1j}) \|^2}   (b24)

is searched.




As a third searching method, such s_{0i} and g_1 as minimize

E_4' = \| W' (x - g_1 s_{0i}) \|^2   (b25)




are searched, then such s_{1j} as maximizes

\frac{(v^T W_v^T W_v (D_2 s_{0i} + s_{1j}))^2}{\| W_v (D_2 s_{0i} + s_{1j}) \|^2}   (b26)













is searched, and the gain g_1 closest to

\frac{v^T W_v^T W_v (D_2 s_{0i} + s_{1j})}{\| W_v (D_2 s_{0i} + s_{1j}) \|^2}   (b27)

is ultimately selected.
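The following is a minimal Python sketch of this third searching method, under the stated equations (b25) to (b27); the codebook layouts and all names are assumptions made here for illustration (cb1 vectors are taken as already zero-padded to the variable dimension):

```python
import numpy as np

def search_third_method(x, v, W1, Wv, D2, cb0, cb1, gains):
    """Third searching method: (b25) for s0i and g1, (b26) for s1j, (b27) for g1.

    cb0   : (N0, m) first shape codebook, fixed dimension m
    cb1   : (N1, n) second shape codebook, variable dimension n
    gains : (Ng,) gain codebook
    """
    # Step 1: jointly pick s0i and g1 minimizing E4' = ||W'(x - g1 s0i)||^2  (b25)
    best = (0, 0, np.inf)
    for i, s0 in enumerate(cb0):
        for k, g1 in enumerate(gains):
            err = float(np.sum((W1 @ (x - g1 * s0)) ** 2))
            if err < best[2]:
                best = (i, k, err)
    i, k, _ = best
    s0 = cb0[i]

    # Step 2: pick s1j maximizing the normalized correlation of (b26)
    def ratio(s1):
        y = Wv @ (D2 @ s0 + s1)
        return float(v @ (Wv.T @ y)) ** 2 / float(y @ y)
    j = max(range(len(cb1)), key=lambda idx: ratio(cb1[idx]))

    # Step 3: re-quantize the gain to the codebook entry closest to (b27)
    y = Wv @ (D2 @ s0 + cb1[j])
    g_opt = float(v @ (Wv.T @ y)) / float(y @ y)
    k = int(np.argmin(np.abs(np.asarray(gains) - g_opt)))
    return i, j, k
```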




Next, the centroid condition of the equation (b20) of the first searching method is explained. With the centroid s_{0c} of the code vector s_{0i},

J = \frac{1}{M} \sum_{k=1}^{M} \| W_k' (x_k - g_k s_{0c}) \|^2   (b28)













is minimized. For this minimization,

\frac{\partial J}{\partial s_{0c}} = \frac{1}{M} \sum_{k=1}^{M} ( -2 g_k W_k'^T W_k' x_k + 2 g_k^2 W_k'^T W_k' s_{0c} ) = 0   (b29)

is solved to give

s_{0c} = \left\{ \sum_{k=1}^{M} g_k^2 W_k'^T W_k' \right\}^{-1} \sum_{k=1}^{M} g_k W_k'^T W_k' x_k   (b30)
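As a minimal sketch, the centroid of the equation (b30) can be computed as the solution of a small linear system; the list-of-matrices interface below is an assumption made here for illustration:

```python
import numpy as np

def centroid_s0(Ws, gs, xs):
    """Centroid s0c per equation (b30).

    Ws : list of (m, m) weighting matrices W'_k
    gs : list of scalar gains g_k
    xs : list of (m,) training vectors x_k assigned to this cluster
    """
    m = xs[0].shape[0]
    A = np.zeros((m, m))
    b = np.zeros(m)
    for Wk, gk, xk in zip(Ws, gs, xs):
        WtW = Wk.T @ Wk
        A += gk * gk * WtW        # sum of g_k^2 W'_k^T W'_k
        b += gk * (WtW @ xk)      # sum of g_k W'_k^T W'_k x_k
    return np.linalg.solve(A, b)  # A^{-1} b, avoiding an explicit inverse
```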













Similarly, for the centroid g_c of the gain g,

J = \frac{1}{M} \sum_{k=1}^{M} \| W_k' (x_k - g_c s_{0k}) \|^2   (b31)













and

\frac{\partial J}{\partial g_c} = \frac{1}{M} \sum_{k=1}^{M} ( -2 s_{0k}^T W_k'^T W_k' x_k + 2 g_c s_{0k}^T W_k'^T W_k' s_{0k} ) = 0   (b32)

from the above equation (b20) are solved to give

g_c = \frac{ \sum_{k=1}^{M} s_{0k}^T W_k'^T W_k' x_k }{ \sum_{k=1}^{M} s_{0k}^T W_k'^T W_k' s_{0k} }   (b33)













On the other hand, as the centroid condition of the equation (b21) of the first searching method,

J = \frac{1}{M} \sum_{k=1}^{M} \| W_{vk} (v_k - g_k (D_{2k} s_{0k} + s_{1c})) \|^2   (b34)













and

\frac{\partial J}{\partial s_{1c}} = \frac{1}{M} \sum_{k=1}^{M} \{ -2 g_k W_{vk}^T W_{vk} v_k + 2 g_k^2 W_{vk}^T W_{vk} D_{2k} s_{0k} + 2 g_k^2 W_{vk}^T W_{vk} s_{1c} \} = 0   (b35)













are solved for the centroid s_{1c} of the vector s_{1j} to give

s_{1c} = \left\{ \sum_{k=1}^{M} g_k^2 W_{vk}^T W_{vk} \right\}^{-1} \sum_{k=1}^{M} g_k W_{vk}^T W_{vk} (v_k - g_k D_{2k} s_{0k})   (b36)













From the equation (b21), the centroid s_{0c} of the vector s_{0i} is found from

J = \frac{1}{M} \sum_{k=1}^{M} \| W_{vk} (v_k - g_k (D_{2k} s_{0c} + s_{1k})) \|^2   (b37)






















\frac{\partial J}{\partial s_{0c}} = \frac{1}{M} \sum_{k=1}^{M} \{ -2 g_k D_{2k}^T W_{vk}^T W_{vk} v_k + 2 g_k^2 D_{2k}^T W_{vk}^T W_{vk} D_{2k} s_{0c} + 2 g_k^2 D_{2k}^T W_{vk}^T W_{vk} s_{1k} \} = 0   (b38)













and

s_{0c} = \left\{ \sum_{k=1}^{M} g_k^2 D_{2k}^T W_{vk}^T W_{vk} D_{2k} \right\}^{-1} \sum_{k=1}^{M} g_k D_{2k}^T W_{vk}^T W_{vk} (v_k - g_k s_{1k})   (b39)













Similarly, the centroid g_c of the gain g can be found by

g_c = \frac{ \sum_{k=1}^{M} v_k^T W_{vk}^T W_{vk} (D_{2k} s_{0k} + s_{1k}) }{ \sum_{k=1}^{M} (D_{2k} s_{0k} + s_{1k})^T W_{vk}^T W_{vk} (D_{2k} s_{0k} + s_{1k}) }   (b40)













The centroid s_{0c} of the code vector s_{0i} calculated by the above equation (b20) is shown by the equation (b30), while the centroid g_c of the gain g is shown by the equation (b33). As for the centroids calculated by the equation (b21), the centroid s_{1c} of the vector s_{1j}, the centroid s_{0c} of the vector s_{0i} and the centroid g_c of the gain g are shown by the equations (b36), (b39) and (b40), respectively.




In the learning of the codebook by the actual generalized Lloyd algorithm (GLA), s_0, s_1 and g may be learned simultaneously using the equations (b30), (b36) and (b40). The above equations (b22), (b23) and (b24) may be used for the searching (nearest neighbor condition). In addition, various combinations of the centroid conditions shown by the equations (b30), (b33), (b36), (b39) and (b40) may optionally be employed.
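A heavily simplified sketch of one such GLA iteration follows; the encode and centroid helpers are assumed to implement the equations (b22)-(b24) and (b30)/(b36)/(b40), and bookkeeping such as the per-vector gains and weighting matrices needed by the centroid formulas is omitted:

```python
def gla_iteration(training_set, cb0, cb1, gains,
                  encode, centroid_s0, centroid_s1, centroid_g):
    """One generalized Lloyd iteration over s0, s1 and g (schematic).

    encode     : nearest-neighbor search per (b22)-(b24); returns (i, j, k)
    centroid_* : centroid updates per (b30), (b36) and (b40)
    """
    # Nearest neighbor condition: classify every training vector
    assignments = [encode(v, cb0, cb1, gains) for v in training_set]

    # Centroid condition: update each codebook entry from its cluster
    for i in range(len(cb0)):
        cluster = [v for v, (a, _, _) in zip(training_set, assignments) if a == i]
        if cluster:
            cb0[i] = centroid_s0(cluster)
    for j in range(len(cb1)):
        cluster = [v for v, (_, a, _) in zip(training_set, assignments) if a == j]
        if cluster:
            cb1[j] = centroid_s1(cluster)
    for k in range(len(gains)):
        cluster = [v for v, (_, _, a) in zip(training_set, assignments) if a == k]
        if cluster:
            gains[k] = centroid_g(cluster)
    return cb0, cb1, gains
```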




The search method for the distortion measure of the equation (b17), corresponding to FIG. 14, is explained. In this case, it suffices to search for s_{0i} and g_1 which minimize

E_3' = \| W' (x - g_1 s_{0i}) \|^2   (b41)






and subsequently to search for s_{1j} which minimizes

E_3 = \| W_v (v - (g_1 D_2 s_{0i} + s_{1j})) \|^2   (b42)






In the above equation (b41), it is not practical to search all combinations of g_1 and s_{0i} exhaustively, so an upper L number of the vectors s_{0i} which maximize

\frac{(x^T W'^T W' s_{0i})^2}{\| W' s_{0i} \|^2}   (b43)













an L number of the gains closest to

\frac{x^T W'^T W' s_{0i}}{\| W' s_{0i} \|^2}   (b44)













in association with the above equation (b43), and s_{1j} which minimizes

E_3 = \| W_v (v - (g_1 D_2 s_{0i} + s_{1j})) \|^2   (b45)

are searched.
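This pre-selection followed by final selection can be sketched as follows; the top-L candidates are scored by the approximate measure of (b43) and the survivors are judged by the exact distortion of (b45). The interfaces and names are assumptions made here for illustration:

```python
import numpy as np

def preselect_then_search(x, v, W1, Wv, D2, cb0, cb1, gains, L=8):
    """Pre-select L shape candidates by (b43), finish by minimizing (b45)."""
    Wx = W1 @ x
    # (b43): score each s0i by (x^T W'^T W' s0i)^2 / ||W' s0i||^2
    scores = []
    for s0 in cb0:
        Ws = W1 @ s0
        scores.append(float(Wx @ Ws) ** 2 / float(Ws @ Ws))
    top = np.argsort(scores)[-L:]

    # Final selection: minimize E3 = ||Wv (v - (g D2 s0i + s1j))||^2  (b45)
    best, best_err = None, np.inf
    for i in top:
        s0 = cb0[i]
        # gain candidate closest to the open-loop value of (b44)
        Ws = W1 @ s0
        g_ol = float(Wx @ Ws) / float(Ws @ Ws)
        k = int(np.argmin(np.abs(np.asarray(gains) - g_ol)))
        for j, s1 in enumerate(cb1):
            e = Wv @ (v - (gains[k] * (D2 @ s0) + s1))
            err = float(e @ e)
            if err < best_err:
                best, best_err = (i, j, k), err
    return best
```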




Next, the centroid conditions are derived from the equations (b41) and (b42). In this case, the procedure varies depending on which equation is used.




First, if the equation (b41) is used, with the centroid of the code vectors s_{0i} denoted s_{0c}, then









J = \frac{1}{M} \sum_{k=1}^{M} \| W_k' (x_k - g_k s_{0c}) \|^2   (b46)













is minimized to obtain

s_{0c} = \left\{ \sum_{k=1}^{M} g_k^2 W_k'^T W_k' \right\}^{-1} \sum_{k=1}^{M} g_k W_k'^T W_k' x_k   (b47)













Similarly, as for the centroid g_c, the following equation:

g_c = \frac{ \sum_{k=1}^{M} s_{0k}^T W_k'^T W_k' x_k }{ \sum_{k=1}^{M} s_{0k}^T W_k'^T W_k' s_{0k} }   (b48)

is obtained from the above equation (b41), as in the case of the equation (b33).




If the centroid s_{1c} of the vector s_{1j} is to be found using the equation (b42),

J = \frac{1}{M} \sum_{k=1}^{M} \| W_{vk} (v_k - g_k D_{2k} s_{0k} - s_{1c}) \|^2   (b49)













and

\frac{\partial J}{\partial s_{1c}} = \frac{1}{M} \sum_{k=1}^{M} \{ 2 W_{vk}^T W_{vk} s_{1c} - 2 W_{vk}^T W_{vk} v_k + 2 g_k W_{vk}^T W_{vk} D_{2k} s_{0k} \} = 0   (b50)













are solved to give

s_{1c} = \left\{ \sum_{k=1}^{M} W_{vk}^T W_{vk} \right\}^{-1} \sum_{k=1}^{M} W_{vk}^T W_{vk} (v_k - g_k D_{2k} s_{0k})   (b51)













Similarly, the centroid s_{0c} of the code vector s_{0i} and the centroid g_c of the gain g can be found from the equation (b42):

s_{0c} = \left\{ \sum_{k=1}^{M} g_k^2 D_{2k}^T W_{vk}^T W_{vk} D_{2k} \right\}^{-1} \sum_{k=1}^{M} g_k D_{2k}^T W_{vk}^T W_{vk} (v_k - s_{1k})   (b52)






















J = \frac{1}{M} \sum_{k=1}^{M} \{ v_k^T W_{vk}^T W_{vk} v_k + g_c^2 (D_{2k} s_{0k})^T W_{vk}^T W_{vk} D_{2k} s_{0k} + s_{1k}^T W_{vk}^T W_{vk} s_{1k} - 2 g_c (D_{2k} s_{0k})^T W_{vk}^T W_{vk} v_k - 2 s_{1k}^T W_{vk}^T W_{vk} v_k + 2 g_c s_{1k}^T W_{vk}^T W_{vk} D_{2k} s_{0k} \}   (b53)












\frac{\partial J}{\partial g_c} = \frac{1}{M} \sum_{k=1}^{M} \{ 2 g_c (D_{2k} s_{0k})^T W_{vk}^T W_{vk} D_{2k} s_{0k} - 2 (D_{2k} s_{0k})^T W_{vk}^T W_{vk} v_k + 2 s_{1k}^T W_{vk}^T W_{vk} D_{2k} s_{0k} \} = 0   (b54)







g_c = \frac{ \sum_{k=1}^{M} (D_{2k} s_{0k})^T W_{vk}^T W_{vk} (v_k - s_{1k}) }{ \sum_{k=1}^{M} (D_{2k} s_{0k})^T W_{vk}^T W_{vk} D_{2k} s_{0k} }   (b55)













Meanwhile, the codebook learning by the GLA may be carried out using the above equations (b47), (b48) and (b51), or using the above equations (b51), (b52) and (b55).




The second encoding unit 120 employing the CELP encoding configuration of the present invention has multi-stage vector quantization processing portions (two-stage encoding portions 120_1 and 120_2 in the embodiment of FIG. 18). The configuration of FIG. 18 is designed to cope with a transmission bit rate of 6 kbps in case the transmission bit rate can be switched between e.g., 2 kbps and 6 kbps, and to switch the shape and gain index output between 23 bits/5 msec and 15 bits/5 msec. The processing flow in the configuration of FIG. 18 is as shown in FIG. 19.




Referring to FIG. 18, a first encoding unit 300 of FIG. 18 is equivalent to the first encoding unit 113 of FIG. 3, an LPC analysis circuit 302 of FIG. 18 corresponds to the LPC analysis circuit 132 shown in FIG. 3, while an LSP parameter quantization circuit 303 corresponds to the constitution from the α to LSP conversion circuit 133 to the LSP to α conversion circuit 137 of FIG. 3, and a perceptually weighted filter 304 of FIG. 18 corresponds to the perceptual weighting filter calculation circuit 139 and the perceptually weighted filter 125 of FIG. 3. Therefore, in FIG. 18, an output which is the same as that of the LSP to α conversion circuit 137 of the first encoding unit 113 of FIG. 3 is supplied to a terminal 305, while an output which is the same as the output of the perceptually weighted filter calculation circuit 139 of FIG. 3 is supplied to a terminal 307, and an output which is the same as the output of the perceptually weighted filter 125 of FIG. 3 is supplied to a terminal 306. However, in distinction from the perceptually weighted filter 125, the perceptually weighted filter 304 of FIG. 18 generates the perceptually weighted signal, that is, the same signal as the output of the perceptually weighted filter 125 of FIG. 3, using the input speech data and the pre-quantization α-parameter, instead of using an output of the LSP to α conversion circuit 137.




In the two-stage second encoding units 120_1 and 120_2 shown in FIG. 18, subtractors 313 and 323 correspond to the subtractor 123 of FIG. 3, while the distance calculation circuits 314, 324 correspond to the distance calculation circuit 124 of FIG. 3. In addition, the gain circuits 311, 321 correspond to the gain circuit 126 of FIG. 3, while the stochastic codebooks 310, 320 and the gain codebooks 315, 325 correspond to the noise codebook 121 of FIG. 3.




In the constitution of FIG. 18, the LPC analysis circuit 302 at step S1 of FIG. 19 splits input speech data x supplied from a terminal 301 into frames as described above to perform LPC analysis in order to find an α-parameter. The LSP parameter quantization circuit 303 converts the α-parameter from the LPC analysis circuit 302 into LSP parameters to quantize the LSP parameters. The quantized LSP parameters are interpolated and converted into α-parameters. The LSP parameter quantization circuit 303 generates an LPC synthesis filter function 1/H(z) from the α-parameters converted from the quantized LSP parameters, and sends the generated LPC synthesis filter function 1/H(z) to a perceptually weighted synthesis filter 312 of the first-stage second encoding unit 120_1 via terminal 305.




The perceptual weighting filter 304 finds data for perceptual weighting, which is the same as that produced by the perceptual weighting filter calculation circuit 139 of FIG. 3, from the α-parameter from the LPC analysis circuit 302, that is, the pre-quantization α-parameter. These weighting data are supplied via terminal 307 to the perceptually weighted synthesis filter 312 of the first-stage second encoding unit 120_1. The perceptual weighting filter 304 generates the perceptually weighted signal, which is the same signal as that outputted by the perceptually weighted filter 125 of FIG. 3, from the input speech data and the pre-quantization α-parameter, as shown at step S2 in FIG. 19. That is, the LPC synthesis filter function W(z) is first generated from the pre-quantization α-parameter. The filter function W(z) thus generated is applied to the input speech data x to generate x_w, which is supplied as the perceptually weighted signal via terminal 306 to the subtractor 313 of the first-stage second encoding unit 120_1.




In the first-stage second encoding unit 120_1, a representative value output of the stochastic codebook 310 of the 9-bit shape index output is sent to the gain circuit 311, which then multiplies the representative output from the stochastic codebook 310 with the gain (scalar) from the gain codebook 315 of the 6-bit gain index output. The representative value output, multiplied with the gain by the gain circuit 311, is sent to the perceptually weighted synthesis filter 312 with 1/A(z) = (1/H(z))·W(z). The weighting synthesis filter 312 sends the 1/A(z) zero-input response output to the subtractor 313, as indicated at step S3 of FIG. 19. The subtractor 313 performs subtraction on the zero-input response output of the perceptually weighted synthesis filter 312 and the perceptually weighted signal x_w from the perceptual weighting filter 304, and the resulting difference or error is taken out as a reference vector r. During searching at the first-stage second encoding unit 120_1, this reference vector r is sent to the distance calculating circuit 314, where the distance is calculated and the shape vector s and the gain g minimizing the quantization error energy E are searched, as shown at step S4 in FIG. 19. Here, 1/A(z) is in the zero state. That is, if the shape vector s in the codebook synthesized with 1/A(z) in the zero state is s_syn, the shape vector s and the gain g minimizing the equation (40):









E = \sum_{n=0}^{N-1} ( r(n) - g \, s_{syn}(n) )^2   (40)

are searched.




Although s and g minimizing the quantization error energy E may be found by a full search, the following method may be used for reducing the amount of calculations.




The first method is to search the shape vector s maximizing E_s defined by the following equation (41):

E_s = \frac{ \left( \sum_{n=0}^{N-1} r(n) \, s_{syn}(n) \right)^2 }{ \sum_{n=0}^{N-1} s_{syn}(n)^2 }   (41)













From s obtained by the first method, the ideal gain is as shown by the equation (42):

g_{ref} = \frac{ \sum_{n=0}^{N-1} r(n) \, s_{syn}(n) }{ \sum_{n=0}^{N-1} s_{syn}(n)^2 }   (42)













Therefore, as the second method, such g minimizing the equation (43):

E_g = ( g_{ref} - g )^2   (43)

is searched.




Since E is a quadratic function of g, such g minimizing E_g also minimizes E.




From s and g obtained by the first and second methods, the quantization error vector e can be calculated by the following equation (44):

e = r - g \, s_{syn}   (44)

This is quantized as a reference of the second-stage second encoding unit 120_2, as in the first stage.
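A minimal Python sketch of this shape-then-gain search, following the equations (41) to (44) as reconstructed above, could look as follows; the pre-synthesized codebook matrix and all names are assumptions (the 1/A(z) filtering of each shape vector is assumed to have been done beforehand):

```python
import numpy as np

def shape_gain_search(r, S_syn, gain_cb):
    """Reduced-complexity shape/gain search per equations (41)-(44).

    r       : (N,) reference vector from the subtractor
    S_syn   : (num_shapes, N) shape vectors already synthesized with 1/A(z)
    gain_cb : (num_gains,) scalar gain codebook
    """
    # Equation (41): pick the shape maximizing the normalized correlation
    corr = S_syn @ r                     # sum of r(n) s_syn(n) per shape
    energy = np.sum(S_syn ** 2, axis=1)  # sum of s_syn(n)^2 per shape
    shape_idx = int(np.argmax(corr ** 2 / energy))

    # Equation (42): ideal gain for the chosen shape
    g_ref = corr[shape_idx] / energy[shape_idx]

    # Equation (43): quantize the gain by minimizing (g_ref - g)^2
    gain_idx = int(np.argmin((gain_cb - g_ref) ** 2))

    # Equation (44): quantization error vector, the next stage's reference
    e = r - gain_cb[gain_idx] * S_syn[shape_idx]
    return shape_idx, gain_idx, e
```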




That is, the signals supplied to the terminals 305 and 307 are directly supplied from the perceptually weighted synthesis filter 312 of the first-stage second encoding unit 120_1 to a perceptually weighted synthesis filter 322 of the second-stage second encoding unit 120_2. The quantization error vector e found by the first-stage second encoding unit 120_1 is supplied to a subtractor 323 of the second-stage second encoding unit 120_2.




At step S5 of FIG. 19, processing similar to that performed in the first stage is carried out in the second-stage second encoding unit 120_2. That is, a representative value output from the stochastic codebook 320 of the 5-bit shape index output is sent to the gain circuit 321, where the representative value output of the codebook 320 is multiplied with the gain from the gain codebook 325 of the 3-bit gain index output. An output of the weighted synthesis filter 322 is sent to the subtractor 323, where a difference between the output of the perceptually weighted synthesis filter 322 and the first-stage quantization error vector e is found. This difference is sent to a distance calculation circuit 324 for distance calculation in order to search the shape vector s and the gain g minimizing the quantization error energy E.




The shape index output of the stochastic codebook 310 and the gain index output of the gain codebook 315 of the first-stage second encoding unit 120_1, as well as the index output of the stochastic codebook 320 and the index output of the gain codebook 325 of the second-stage second encoding unit 120_2, are sent to an index output switching circuit 330. If 23 bits are outputted from the second encoding unit 120, the index data of the stochastic codebooks 310, 320 and the gain codebooks 315, 325 of the first-stage and second-stage second encoding units 120_1, 120_2 are combined and outputted. If 15 bits are outputted, the index data of the stochastic codebook 310 and the gain codebook 315 of the first-stage second encoding unit 120_1 are outputted.




The filter state is then updated for calculating the zero-input response output, as shown at step S6.




In the present embodiment, the number of index bits of the second-stage second encoding unit 120_2 is as small as 5 for the shape vector, while that for the gain is as small as 3. If suitable shape and gain are not present in the codebook in this case, the quantization error is likely to be increased, instead of being decreased.




Although a gain of 0 may be provided for preventing this problem from occurring, there are only three bits for the gain. If one of these is set to 0, the quantizer performance is significantly deteriorated. In view of this, an all-zero vector is provided for the shape vector, to which a larger number of bits have been allocated. The above-mentioned search is performed with the exclusion of the all-zero vector, and the all-zero vector is selected if the quantization error has ultimately been increased. The gain in this case is arbitrary. This makes it possible to prevent the quantization error from being increased in the second-stage second encoding unit 120_2.




Although the two-stage arrangement has been described above with reference to FIG. 18, the number of stages may be larger than 2. In such case, once the vector quantization by the first-stage closed-loop search has come to a close, quantization of the N'th stage, where 2 ≦ N, is carried out with the quantization error of the (N−1)st stage as a reference input, and the quantization error of the N'th stage is used as a reference input to the (N+1)st stage.




It is seen from FIGS. 18 and 19 that, by employing multi-stage vector quantizers for the second encoding unit, the amount of calculations is decreased as compared to that with the use of straight vector quantization with the same number of bits or with the use of a conjugate codebook. In particular, in CELP encoding, in which vector quantization of the time-axis waveform employing the closed-loop search by the analysis-by-synthesis method is performed, a smaller number of search operations is crucial. In addition, the number of bits can be easily switched between employing both index outputs of the two-stage second encoding units 120_1, 120_2 and employing only the output of the first-stage second encoding unit 120_1, without employing the output of the second-stage second encoding unit 120_2. If the index outputs of the first-stage and second-stage second encoding units 120_1, 120_2 are combined and outputted, the decoder can easily cope with the configuration by selecting one of the index outputs. That is, the decoder can easily cope with the configuration by decoding the parameter encoded with e.g., 6 kbps using a decoder operating at 2 kbps. In addition, if a zero-vector is contained in the shape codebook of the second-stage second encoding unit 120_2, it becomes possible to prevent the quantization error from being increased with less deterioration in performance than if 0 is added to the gain.




The code vector of the stochastic codebook (shape vector) can be generated by, for example, the following method.




The code vector of the stochastic codebook can, for example, be generated by clipping so-called Gaussian noise. Specifically, the codebook may be generated by generating Gaussian noise, clipping it with a suitable threshold value and normalizing the clipped noise.
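As a minimal sketch of this procedure, the following hypothetical Python function center-clips Gaussian noise at a threshold and normalizes each vector; the threshold and sizes are illustrative values only:

```python
import numpy as np

def clipped_gaussian_codebook(num_vectors, dim, threshold=0.4, seed=0):
    """Initial stochastic codebook from clipped, normalized Gaussian noise.

    A larger threshold leaves only a few large peaks per vector; a smaller
    one keeps the vector close to the Gaussian noise itself.
    """
    rng = np.random.default_rng(seed)
    cb = rng.standard_normal((num_vectors, dim))
    cb[np.abs(cb) < threshold] = 0.0   # center-clip: keep only large peaks
    norms = np.linalg.norm(cb, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # guard against all-zero vectors
    return cb / norms                  # normalize each code vector
```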




However, there are a variety of types of sounds in speech. For example, the Gaussian noise can cope with speech of consonant sounds close to noise, such as “sa, shi, su, se and so”, while the Gaussian noise cannot cope with the speech of acutely rising consonants, such as “pa, pi, pu, pe and po”.




According to the present invention, the Gaussian noise is applied to some of the code vectors, while the remaining code vectors are obtained by learning, so that both consonants with sharply rising sounds and consonant sounds close to noise can be coped with. If, for example, the threshold value is increased, a vector is obtained which has several large peaks, whereas, if the threshold value is decreased, the code vector approaches the Gaussian noise itself. Thus, by increasing the variation in the clipping threshold value, it becomes possible to cope with consonants having sharp rising portions, such as “pa, pi, pu, pe and po”, or consonants close to noise, such as “sa, shi, su, se and so”, thereby increasing clarity. FIG. 20 shows the appearance of the Gaussian noise and the clipped noise by a solid line and a broken line, respectively. FIGS. 20A and 20B show the noise with the clipping threshold value equal to 1.0, that is, with a larger threshold value, and the noise with the clipping threshold value equal to 0.4, that is, with a smaller threshold value. It is seen from FIGS. 20A and 20B that, if the threshold value is selected to be larger, a vector having several large peaks is obtained, whereas, if the threshold value is selected to be smaller, the noise approaches the Gaussian noise itself.




For realizing this, an initial codebook is prepared by clipping the Gaussian noise, and a suitable number of non-learning code vectors are set. The non-learning code vectors are selected in the order of increasing variance value for coping with consonants close to noise, such as “sa, shi, su, se and so”. The vectors found by learning are trained with the LBG algorithm. The encoding under the nearest neighbor condition uses both the fixed code vectors and the code vectors obtained by learning. In the centroid condition, only the code vectors to be learned are updated. Thus the code vectors to be learned can cope with sharply rising consonants, such as “pa, pi, pu, pe and po”.




An optimum gain may be learned for these code vectors by usual learning.





FIG. 21 shows the processing flow for the constitution of the codebook by clipping the Gaussian noise.




In FIG. 21, the number of times of learning n is set to n = 0 at step S10 for initialization. With an error D_0 = ∞, the maximum number of times of learning n_max is set, and a threshold value ε setting the learning end condition is set.




At the next step S11, the initial codebook is generated by clipping the Gaussian noise. At step S12, part of the code vectors is fixed as non-learning code vectors.




At the next step S13, encoding is done using the above codebook. At step S14, the error is calculated. At step S15, it is judged whether (D_{n−1} − D_n)/D_n < ε or n = n_max. If the result is YES, processing is terminated. If the result is NO, processing transfers to step S16.




At step S16, the code vectors not used for encoding are processed. At the next step S17, the codebooks are updated. At step S18, the number of times of learning n is incremented before returning to step S13.




In the speech encoder of FIG. 3, a specified example of the voiced/unvoiced (V/UV) discrimination unit 115 is now explained.




The V/UV discrimination unit 115 performs V/UV discrimination of a frame in subject based on an output of the orthogonal transform circuit 145, an optimum pitch from the high precision pitch search unit 146, spectral amplitude data from the spectral evaluation unit 148, a maximum normalized autocorrelation value r(p) from the open-loop pitch search unit 141 and a zero-crossing count value from the zero-crossing counter 412. The boundary position of the band-based results of V/UV decision, similar to that used for MBE, is also used as one of the conditions for the frame in subject.




The condition for V/UV discrimination for the MBE, employing the results of band-based V/UV discrimination, is now explained.




The parameter or amplitude |A_m| representing the magnitude of the m'th harmonics in the case of MBE may be represented by

|A_m| = \frac{ \sum_{j=a_m}^{b_m} |S(j)| \, |E(j)| }{ \sum_{j=a_m}^{b_m} |E(j)|^2 }

In this equation, |S(j)| is a spectrum obtained on DFTing the LPC residuals, and |E(j)| is the spectrum of the basic signal, specifically, a 256-point Hamming window, while a_m, b_m are lower and upper limit values, represented by an index j, of the frequency corresponding to the m'th band corresponding in turn to the m'th harmonics. For band-based V/UV discrimination, a noise-to-signal ratio (NSR) is used. The NSR of the m'th band is represented by

NSR = \frac{ \sum_{j=a_m}^{b_m} ( |S(j)| - |A_m| \, |E(j)| )^2 }{ \sum_{j=a_m}^{b_m} |S(j)|^2 }

If the NSR value is larger than a pre-set threshold, such as 0.3, that is, if the error is larger, it may be judged that the approximation of |S(j)| by |A_m||E(j)| in the band in subject is not good, that is, that the excitation signal |E(j)| is not appropriate as the base. In that case, the band in subject is determined to be unvoiced (UV). If otherwise, it may be judged that the approximation has been done fairly well, and the band is determined to be voiced (V).




It is noted that the NSR of the respective bands (harmonics) represents the similarity of the harmonics from one harmonic to another. The sum of the gain-weighted NSRs of the harmonics is defined as NSR_all by:

NSR_{all} = \frac{ \sum_m |A_m| \, NSR_m }{ \sum_m |A_m| }


The rule base used for V/UV discrimination is determined depending on whether this spectral similarity NSR_all is larger or smaller than a certain threshold value. This threshold is herein set to Th_NSR = 0.3. This rule base is concerned with the maximum value of the autocorrelation of the LPC residuals, the frame power and the zero-crossings. In the case of the rule base used for NSR_all < Th_NSR, the frame in subject becomes V if the rule is applied and UV if there is no applicable rule.




A specified rule is as follows:

For NSR_all < Th_NSR, if numZeroXP < 24, frmPow > 340 and r0 > 0.32, then the frame in subject is V;

For NSR_all ≧ Th_NSR, if numZeroXP > 30, frmPow < 900 and r0 > 0.23, then the frame in subject is UV;

wherein the respective variables are defined as follows:

numZeroXP: number of zero-crossings per frame

frmPow: frame power

r0: maximum value of the autocorrelation

The rules, representing a set of specified rules such as those given above, are consulted for doing the V/UV discrimination.
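A sketch of such a rule base in Python follows, using the thresholds quoted above; the default decisions taken when no rule applies on the NSR_all ≧ Th_NSR side are an assumption made here, not stated in the text:

```python
def vuv_decision(nsr_all, num_zero_xp, frm_pow, r0, th_nsr=0.3):
    """Rule-based V/UV decision for a frame, per the thresholds quoted above."""
    if nsr_all < th_nsr:
        # Spectrally similar frame: voiced if the V-rule applies
        if num_zero_xp < 24 and frm_pow > 340 and r0 > 0.32:
            return "V"
        return "UV"  # no applicable rule
    else:
        # Dissimilar frame: unvoiced if the UV-rule applies
        if num_zero_xp > 30 and frm_pow < 900 and r0 > 0.23:
            return "UV"
        return "V"   # assumed default when no rule applies
```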




The constitution of essential portions and the operation of the speech signal decoder of FIG. 4 will now be explained in more detail. In the inverse vector quantizer 212 of the spectral envelope, an inverse vector quantizer configuration corresponding to the vector quantizer of the speech encoder is used.




For example, if the vector quantization is applied by the configuration shown in FIG. 12, the decoder side reads out the code vectors s0, s1 and the gain g from the shape codebooks CB0 and CB1 and the gain codebook CBg, and takes them out as a vector g(s0 + s1) of a fixed dimension, such as 44 dimensions, which is then converted to a variable-dimension vector corresponding to the number of dimensions of the vector of the original harmonics spectrum (fixed/variable dimension conversion).




If the encoder has the configuration of a vector quantizer summing the fixed-dimension code vector to the variable-dimension code vector, as shown in FIGS. 14 to 17, the code vector read out from the codebook for the variable dimension (codebook CB0 of FIG. 14) is fixed/variable dimension converted and summed to the code vector for the fixed dimension read out from the codebook for the fixed dimension (codebook CB1 in FIG. 14), applied to a number of dimensions from the low range of the harmonics. The resulting sum is taken out.




The LPC synthesis filter 214 of FIG. 4 is separated into the synthesis filter 236 for the voiced speech (V) and the synthesis filter 237 for the unvoiced speech (UV), as previously explained. If LSPs were continuously interpolated every 20 samples, that is, every 2.5 msec, without separating the synthesis filter and without making the V/UV distinction, LSPs of totally different properties would be interpolated at V-to-UV or UV-to-V transient portions. The result is that LPCs of UV and V would be used as residuals of V and UV, respectively, such that a strange sound tends to be produced. For preventing such ill effects from occurring, the LPC synthesis filter is separated into V and UV, and LPC coefficient interpolation is performed independently for V and UV.




The method for coefficient interpolation of the LPC filters 236, 237 in this case is now explained. Specifically, LSP interpolation is switched depending on the V/UV state, as shown in FIG. 22.




Taking the 10-order LPC analysis as an example, the equal interval LSP in FIG. 22 is the LSP corresponding to α-parameters for flat filter characteristics and a gain equal to unity, that is, α_0 = 1 and α_i = 0 for 1 ≦ i ≦ 10.




Such 10-order LPC analysis, that is, 10-order LSP, is the LSP corresponding to a completely flat spectrum, with the LSPs arrayed at equal intervals between 0 and π, dividing this range into 11 equal parts, as shown in FIG. 23. In such case, the entire band gain of the synthesis filter has minimum through-characteristics.





FIG. 24 schematically shows the manner of gain change. Specifically, it shows how the gain of 1/H_uv(z) and the gain of 1/H_v(z) change during transition from the unvoiced (UV) portion to the voiced (V) portion.




As for the unit of interpolation, it is 2.5 msec (20 samples) for the coefficient of 1/H_v(z), while it is 10 msec (80 samples) for the bit rate of 2 kbps and 5 msec (40 samples) for the bit rate of 6 kbps for the coefficient of 1/H_uv(z). For UV, since the second encoding unit 120 performs waveform matching employing the analysis-by-synthesis method, interpolation with the LSPs of the neighboring V portions may be performed without performing interpolation with the equal interval LSPs. It is noted that, in the encoding of the UV portion in the second encoding unit 120, the zero-input response is set to zero by clearing the inner state of the 1/A(z) weighted synthesis filter 122 at the transient portion from V to UV.




Outputs of these LPC synthesis filters 236, 237 are sent to the respective independently provided post-filters 238u, 238v. The intensity and the frequency response of the post-filters are set to different values for V and UV.




The windowing of junction portions between the V and the UV portions of the LPC residual signals, that is, the excitation as the LPC synthesis filter input, is now explained. This windowing is carried out by the sinusoidal synthesis circuit 215 of the voiced speech synthesis unit 211 and by the windowing circuit 223 of the unvoiced speech synthesis unit 220 shown in FIG. 4. The method for synthesis of the V-portion of the excitation is explained in detail in JP Patent Application No. 4-91422, proposed by the present Assignee, while the method for fast synthesis of the V-portion of the excitation is explained in detail in JP Patent Application No. 6-198451, similarly proposed by the present Assignee. In the present illustrative embodiment, this fast synthesis method is used for generating the excitation of the V-portion.




In the voiced (V) portion, in which sinusoidal synthesis is performed by interpolation using the spectra of the neighboring frames, all waveforms between the n'th and (n+1)st frames can be produced, as shown in FIG. 25. However, for the signal portion astride the V and UV portions, such as the (n+1)st frame and the (n+2)nd frame in FIG. 25, or for the portion astride the UV portion and the V portion, the UV portion encodes and decodes only data of ±80 samples (a sum total of 160 samples equal to one frame interval). The result is that windowing is carried out beyond a center point CN between neighboring frames on the V-side, while it is carried out as far as the center point CN on the UV side, for overlapping the junction portions, as shown in FIG. 26. The reverse procedure is used for the UV-to-V transient portion. The windowing on the V-side may also be as shown by a broken line in FIG. 26.




The noise synthesis and the noise addition at the voiced (V) portion are now explained. These operations are performed by the noise synthesis circuit 216, the weighted overlap-and-add circuit 217 and the adder 218 of FIG. 4, by adding to the voiced portion of the LPC residual signal the noise which takes into account the following parameters in connection with the excitation of the voiced portion as the LPC synthesis filter input.




That is, the above parameters may be enumerated by the pitch lag Pch, spectral amplitude Am[i] of the voiced sound, maximum spectral amplitude in a frame Amax and the residual signal level Lev. The pitch lag Pch is the number of samples in a pitch period for a pre-set sampling frequency fs, such as fs=8 kHz, while i in the spectral amplitude Am[i] is an integer such that 0<i<I for the number of harmonics in the band of fs/2 equal to I=Pch/2.




The processing by this noise synthesis circuit 216 is carried out in much the same way as the synthesis of the unvoiced sound by, for example, multi-band encoding (MBE). FIG. 27 illustrates a specified embodiment of the noise synthesis circuit 216.




That is, referring to FIG. 27, a white noise generator 401 outputs the Gaussian noise, which is then processed with the short-term Fourier transform (STFT) by an STFT processor 402 to produce a power spectrum of the noise on the frequency axis. The Gaussian noise is the time-domain white noise signal waveform windowed by an appropriate windowing function, such as the Hanning window, having a pre-set length, such as 256 samples. The power spectrum from the STFT processor 402 is sent for amplitude processing to a multiplier 403 so as to be multiplied with an output of the noise amplitude control circuit 410. An output of the multiplier 403 is sent to an inverse STFT (ISTFT) processor 404, where it is ISTFTed using the phase of the original white noise as the phase, for conversion into a time-domain signal. An output of the ISTFT processor 404 is sent to the weighted overlap-add circuit 217.




In the embodiment of FIG. 27, the time-domain noise is generated by the white noise generator 401 and processed with an orthogonal transform, such as STFT, for producing the frequency-domain noise. Alternatively, the frequency-domain noise may also be generated directly by the noise generator. By directly generating the frequency-domain noise, orthogonal transform processing operations, such as STFT or ISTFT, may be eliminated.




Specifically, a method of generating random numbers in a range of ±x and handling the generated random numbers as the real and imaginary parts of the FFT spectrum, or a method of generating positive random numbers ranging from 0 to a maximum number (max) and handling them as the amplitude of the FFT spectrum while generating random numbers ranging from −π to +π and handling these as the phase of the FFT spectrum, may be employed.




This renders it possible to eliminate the STFT processor 402 of FIG. 27, simplifying the structure and reducing the processing volume.




The noise amplitude control circuit 410 has a basic structure shown, for example, in FIG. 28, and finds the synthesized noise amplitude Am_noise[i] by controlling the multiplication coefficient at the multiplier 403 based on the spectral amplitude Am[i] of the voiced (V) sound supplied via a terminal 411 from the inverse vector quantizer 212 of the spectral envelope of FIG. 4. That is, in FIG. 28, an output of an optimum noise_mix value calculation circuit 416, to which are entered the spectral amplitude Am[i] and the pitch lag Pch, is weighted by a noise weighting circuit 417, and the resulting output is sent to a multiplier 418 so as to be multiplied with the spectral amplitude Am[i] to produce the noise amplitude Am_noise[i].




As a first specified embodiment for noise synthesis and addition, a case in which the noise amplitude Am_noise[i] becomes a function f1 of two of the above four parameters, namely the pitch lag Pch and the spectral amplitude Am[i], is now explained. Among such functions f1(Pch, Am[i]) are:








f1(Pch, Am[i]) = 0, where 0 < i < Noise_b × I,

f1(Pch, Am[i]) = Am[i] × noise_mix, where Noise_b × I ≦ i ≦ I, and

noise_mix = K × Pch / 2.0.

It is noted that the maximum value of noise_mix is noise_mix_max, at which it is clipped. As an example, K = 0.02, noise_mix_max = 0.3 and Noise_b = 0.7, where Noise_b is a constant which determines from which portion of the entire band the noise is to be added. In the present embodiment, the noise is added in a frequency range higher than the 70% position, that is, if fs = 8 kHz, the noise is added in the range from 4000 × 0.7 = 2800 Hz up to 4000 Hz.
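A minimal sketch of this first embodiment in Python, with the constants quoted above, might read as follows (the function and argument names are hypothetical):

```python
import numpy as np

def noise_amplitude_f1(pch, am, K=0.02, noise_mix_max=0.3, noise_b=0.7):
    """Synthesized noise amplitude per the first embodiment, f1(Pch, Am[i]).

    pch : pitch lag Pch (samples per pitch period)
    am  : (I,) spectral amplitudes Am[i] of the voiced sound
    """
    I = len(am)
    noise_mix = min(K * pch / 2.0, noise_mix_max)  # clipped at noise_mix_max
    am_noise = np.zeros(I)
    lo = int(noise_b * I)            # noise only above the 70% position
    am_noise[lo:] = am[lo:] * noise_mix
    return am_noise
```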




As a second specified embodiment for noise synthesis and addition, a case in which the noise amplitude Am_noise[i] is a function f2(Pch, Am[i], Amax) of three of the four parameters, namely the pitch lag Pch, the spectral amplitude Am[i] and the maximum spectral amplitude Amax, is explained. Among such functions f2(Pch, Am[i], Amax) are:








f2(Pch, Am[i], Amax) = 0, where 0 < i ≦ Noise_b × I,

f2(Pch, Am[i], Amax) = Am[i] × noise_mix, where Noise_b × I ≦ i ≦ I, and

noise_mix = K × Pch / 2.0.

It is noted that the maximum value of noise_mix is noise_mix_max and, as an example, K = 0.02, noise_mix_max = 0.3 and Noise_b = 0.7.




If Am[i] × noise_mix > Amax × C × noise_mix, then f2(Pch, Am[i], Amax) = Amax × C × noise_mix, where the constant C is set to 0.3 (C = 0.3). Since this conditional equation prevents the level from being excessively large, the above values of K and noise_mix_max can be increased further, and the noise level can be raised further if the high-range level is higher.




As a third specified embodiment of the noise synthesis and addition, the above noise amplitude Am_noise[i] may be a function of all of the above four parameters, that is, f3(Pch, Am[i], Amax, Lev).




Specific examples of the function f3(Pch, Am[i], Amax, Lev) are basically similar to those of the above function f2(Pch, Am[i], Amax). The residual signal level Lev is the root mean square (RMS) of the spectral amplitudes Am[i], or the signal level as measured on the time axis. The difference from the second embodiment is that the values of K and noise_mix_max are set so as to be functions of Lev. That is, if Lev is smaller, the values of K and noise_mix_max are set to larger values, while if Lev is larger, they are set to smaller values. Alternatively, the value of Lev may be set so as to be inversely proportional to the values of K and noise_mix_max.




The post-filters 238v, 238u will now be explained.





FIG. 29 shows a post-filter that may be used as the post-filters 238u, 238v in the embodiment of FIG. 4. A spectrum shaping filter 440, as an essential portion of the post-filter, is made up of a formant emphasizing filter 441 and a high-range emphasizing filter 442. An output of the spectrum shaping filter 440 is sent to a gain adjustment circuit 443 adapted for correcting gain changes caused by the spectrum shaping. The gain adjustment circuit 443 has its gain G determined by a gain control circuit 445, which compares an input x with an output y of the spectrum shaping filter 440 to calculate the gain change and hence the correction value.




If the coefficients of the denominators Hv(z) and Huv(z) of the LPC synthesis filters, that is, the α-parameters, are expressed as α_i, the characteristics PF(z) of the spectrum shaping filter 440 may be expressed by:

PF(z) = \frac{ \sum_{i=0}^{P} \alpha_i \beta^i z^{-i} }{ \sum_{i=0}^{P} \alpha_i \gamma^i z^{-i} } (1 - k z^{-1})

The fractional portion of this equation represents the characteristics of the formant emphasizing filter, while the portion (1 − k z^{-1}) represents the characteristics of the high-range emphasizing filter. β, γ and k are constants, such that, for example, β = 0.6, γ = 0.8 and k = 0.3.




The gain of the gain adjustment circuit 443 is given by:

G = \sqrt{ \frac{ \sum_{i=0}^{159} x^2(i) }{ \sum_{i=0}^{159} y^2(i) } }

In the above equation, x(i) and y(i) represent an input and an output of the spectrum shaping filter 440, respectively.




It is noted that, as shown in FIG. 30, while the coefficient updating period of the spectrum shaping filter 440 is 20 samples, or 2.5 msec, as is the updating period for the α-parameter which is the coefficient of the LPC synthesis filter, the updating period of the gain G of the gain adjustment circuit 443 is 160 samples, or 20 msec.




By setting the gain updating period of the gain adjustment circuit 443 so as to be longer than the coefficient updating period of the spectrum shaping filter 440 as the post-filter, it becomes possible to prevent ill effects otherwise caused by gain adjustment fluctuations.




That is, in a generic post-filter, the coefficient updating period of the spectrum shaping filter is set so as to be equal to the gain updating period, and if the gain updating period is selected to be 20 samples, or 2.5 msec, variations in the gain values are caused even within one pitch period, thus producing click noise, as shown in FIG. 30. In the present embodiment, by setting the gain switching period so as to be longer, for example, equal to one frame, or 160 samples or 20 msec, abrupt gain value changes may be prevented from occurring. Conversely, if the updating period of the spectrum shaping filter coefficients is 160 samples, or 20 msec, no smooth changes in filter characteristics can be produced, thus producing ill effects in the synthesized waveform. However, by setting the filter coefficient updating period to the shorter value of 20 samples, or 2.5 msec, it becomes possible to realize more effective post-filtering.




By way of gain junction processing between neighboring frames, the filter coefficients and the gain of the previous frame and those of the current frame are multiplied by triangular windows of W(i) = i/20 (0 ≦ i ≦ 20) and 1 − W(i) (0 ≦ i ≦ 20) for fade-in and fade-out, and the resulting products are summed together. FIG. 31 shows how the gain G_1 of the previous frame merges into the gain G_1 of the current frame. Specifically, the proportion of using the gain and the filter coefficients of the previous frame is decreased gradually, while that of using the gain and the filter coefficients of the current frame is increased gradually. The inner states of the filter for the current frame and that for the previous frame at a time point T of FIG. 31 are started from the same states, that is, from the final states of the previous frame.
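A minimal sketch of this triangular-window cross-fade between the previous and current frame gains follows; the function name is hypothetical, and the same weighting can be applied to the filter coefficients:

```python
import numpy as np

def crossfade_gain(prev_gain, cur_gain, n=20):
    """Triangular fade: previous gain weighted by 1-W(i), current by W(i)=i/20."""
    w = np.arange(n + 1) / n
    return (1.0 - w) * prev_gain + w * cur_gain
```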




The above-described signal encoding and signal decoding apparatus may be used as a speech codec employed in, for example, a portable communication terminal or a portable telephone set shown in FIGS. 32 and 33.





FIG. 32 shows the transmitting side of a portable terminal employing a speech encoding unit 160 configured as shown in FIGS. 1 and 3. The speech signals collected by a microphone 161 are amplified by an amplifier 162 and converted by an analog/digital (A/D) converter 163 into digital signals, which are sent to the speech encoding unit 160 configured as shown in FIGS. 1 and 3. The digital signals from the A/D converter 163 are supplied to the input terminal 101. The speech encoding unit 160 performs encoding as explained in connection with FIGS. 1 and 3. Output signals of the output terminals of FIGS. 1 and 3 are sent as output signals of the speech encoding unit 160 to a transmission channel encoding unit 164, which then performs channel coding on the supplied signals. Output signals of the transmission channel encoding unit 164 are sent to a modulation circuit 165 for modulation and thence supplied to an antenna 168 via a digital/analog (D/A) converter 166 and an RF amplifier 167.





FIG. 33 shows the reception side of a portable terminal employing a speech decoding unit 260 configured as shown in FIGS. 2 and 4. The speech signals received by the antenna 261 of FIG. 33 are amplified by an RF amplifier 262 and sent via an analog/digital (A/D) converter 263 to a demodulation circuit 264, from which the demodulated signals are sent to a transmission channel decoding unit 265. An output signal of the decoding unit 265 is supplied to the speech decoding unit 260 configured as shown in FIGS. 2 and 4. The speech decoding unit 260 decodes the signals in a manner as explained in connection with FIGS. 2 and 4. An output signal at an output terminal 201 of FIGS. 2 and 4 is sent as a signal of the speech decoding unit 260 to a digital/analog (D/A) converter 266. An analog speech signal from the D/A converter 266 is sent to a speaker 268.




The present invention is not limited to the above-described embodiments. For example, the construction of the speech analysis side (encoder) of FIGS. 1 and 3 or the speech synthesis side (decoder) of FIGS. 2 and 4, described above as hardware, may be realized by a software program using, for example, a digital signal processor (DSP). The synthesis filters 236, 237 or the post-filters 238v, 238u on the decoding side may be designed as a sole LPC synthesis filter or a sole post-filter without separation into those for the voiced speech and the unvoiced speech. The present invention is also not limited to transmission or recording/reproduction, and may be applied to a variety of usages such as pitch conversion, speed conversion, synthesis of computerized speech or noise suppression.



Claims
  • 1. A vector quantization method in which an input vector is compared to code vectors stored in a plurality of codebooks for outputting an index of one of the code vectors in each of the codebooks, comprising:a pre-selecting step for finding a degree of similarity between the input vector and all the code vectors stored in each of the plurality of codebooks and for pre-selecting a plurality of code vectors exhibiting a high degree of similarity from each of the plurality of codebooks in a fixed dimension, wherein the input vector is formed of a parameter on a frequency axis derived from a speech signal; and an ultimate selecting step of further selecting from the pre-selected plurality of code vectors a code vector for each codebook that has a minimum error from the input vector in a variable dimension.
  • 2. The vector quantization method as claimed in claim 1, wherein said degree of similarity is one of:an inner product of the input vector and said code vector, a weighted inner product of the input vector and said code vector, a value of said inner product divided by a norm of each code vector, a value of said inner product divided by a weighted norm of each code vector, a value of said weighted inner product divided by the norm of each code vector, and a value of said weighted inner product divided by the weighted norm of each code vector.
  • 3. A speech encoding method in which an input speech signal is divided on a time axis in terms of pre-set encoding units and encoded in terms of the pre-set encoding units, comprising the steps of:finding spectral components of harmonics by sinusoidal analysis of a signal derived from the input speech signal; and vector quantizing parameters derived from encoding unit-based spectral components of the harmonics as an input vector for encoding, wherein said step of vector quantizing includes: a pre-selecting step for finding a degree of similarity between the input vector and all code vectors stored in each of a plurality of codebooks and for pre-selecting a plurality of code vectors exhibiting a high degree of similarity from each of the plurality of codebooks in a fixed dimension, wherein the input vector is formed of a parameter on a frequency axis derived from the input speech signal; and an ultimate selection step of further selecting from the pre-selected plurality of code vectors selected by said pre-selecting step, a code vector that has a minimum error from the input vector in a variable dimension.
  • 4. The speech encoding method as claimed in claim 3, wherein said degree of similarity is one of: an inner product of the input vector and said code vector, a weighted inner product of the input vector and said code vector, a value of said inner product divided by a norm of each code vector, a value of said inner product divided by a weighted norm of each code vector, a value of said weighted inner product divided by the norm of each code vector, and a value of said weighted inner product divided by the weighted norm of each code vector.
  • 5. A speech encoding apparatus in which an input speech signal is divided on a time axis in terms of pre-set encoding units and encoded in terms of the pre-set encoding units, comprising: prediction encoding means for finding short-term prediction residuals of the input speech signal; and sinusoidal analytic encoding means including: vector quantization means for quantizing a parameter derived from spectral components of harmonics obtained by sinusoidal analysis as an input vector, wherein said vector quantization means includes: pre-selecting means for finding a degree of similarity between the input vector and all code vectors stored in each of a plurality of codebooks and for pre-selecting a plurality of code vectors exhibiting a high degree of similarity from each of the plurality of codebooks in a fixed dimension, and an ultimate selection means for further selecting from the pre-selected plurality of code vectors from each of the plurality of codebooks selected by said pre-selecting means, a code vector that has a minimum error from the input vector in a variable dimension.
  • 6. The speech encoding apparatus as claimed in claim 5, wherein said degree of similarity is one of: an inner product of the input vector and said code vector, a weighted inner product of the input vector and said code vector, a value of said inner product divided by a norm of each code vector, a value of said inner product divided by a weighted norm of each code vector, a value of said weighted inner product divided by the norm of each code vector, and a value of said weighted inner product divided by the weighted norm of each code vector.
  • 7. The speech encoding apparatus as claimed in claim 6, wherein as a weight of the norm, one of an inner product of the input vector and said code vector, and a weighted inner product of the input vector and said code vector, divided by one of the norm and the weighted norm of each code vector, is used.
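For ease of reference, the six degrees of similarity enumerated in claims 2, 4 and 6 above may be summarized compactly; the notation here (x for the input vector, c for a code vector, W for a weighting matrix, and the Euclidean norm) is supplied only for illustration and does not appear in the claims themselves:

$$
s_1 = \mathbf{x}^{T}\mathbf{c},\quad
s_2 = \mathbf{x}^{T}W\mathbf{c},\quad
s_3 = \frac{\mathbf{x}^{T}\mathbf{c}}{\|\mathbf{c}\|},\quad
s_4 = \frac{\mathbf{x}^{T}\mathbf{c}}{\|W\mathbf{c}\|},\quad
s_5 = \frac{\mathbf{x}^{T}W\mathbf{c}}{\|\mathbf{c}\|},\quad
s_6 = \frac{\mathbf{x}^{T}W\mathbf{c}}{\|W\mathbf{c}\|}.
$$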
Priority Claims (1)
Number Date Country Kind
8-251614 Sep 1996 JP
US Referenced Citations (12)
Number Name Date Kind
5307441 Tzeng Apr 1994 A
5451951 Elliott et al. Sep 1995 A
5677986 Amada et al. Oct 1997 A
5774838 Miseki et al. Jun 1998 A
5778335 Ubale et al. Jul 1998 A
5819213 Oshikiri et al. Oct 1998 A
5890110 Gersho et al. Mar 1999 A
5926788 Nishiguchi Jul 1999 A
5950155 Nishiguchi Sep 1999 A
5960386 Janiszewski et al. Sep 1999 A
6003001 Maeda Dec 1999 A
6018707 Nishiguchi et al. Jan 2000 A
Foreign Referenced Citations (1)
Number Date Country
0770989 Oct 1996 EP
Non-Patent Literature Citations (6)
Entry
Trancoso et al., “High Quality Mid-Rate Speech Coding,” Electrotechnical Conference, 1989. Proceedings. ‘Integrated Research, Industry and Education in Energy and Communication Engineering,’ MELECON '89., Mediterranean, pp. 217-220, Apr. 1989.*
Nagaratnam et al., “Spectral Magnitude Modelling for Sinusoidal Coding,” 1995 IEEE Workshop on Speech Coding for Telecommunications, pp. 81-82, Sep. 1995.*
Das et al., “Variable-dimension vector quantization of speech spectra for low-rate vocoders,” DCC '94 Proceedings, Data Compression Conference, Mar. 1994, pp. 420 to 429.*
Akitoshi Kataoka, et al., “An 8-kbit/s Speech Coder Based On Conjugate Structure CELP,” IEEE, Apr. 27, 1993.
Masayuki Nishiguchi, et al., “Harmonic and Noise Coding of LPC Residuals With Classified Vector Quantization,” IEEE, May 9, 1995.
M. Elshafei, et al., “Fast Methods for Code Search in CELP,” IEEE, Jul. 1993.