Enhanced waveform interpolative coder

Information

  • Patent Grant
  • 7643996
  • Patent Number
    7,643,996
  • Date Filed
    Wednesday, December 1, 1999
  • Date Issued
    Tuesday, January 5, 2010
Abstract
An Enhanced analysis-by-synthesis Waveform Interpolative speech coder able to operate at 4 kbps. Novel features include analysis-by-synthesis quantization of the slowly evolving waveform, analysis-by-synthesis vector quantization of the dispersion phase, a special pitch search for transitions, and switched-predictive analysis-by-synthesis gain vector quantization. Subjective quality tests indicate that its quality exceeds that of MPEG-4 at 4 kbps and that of G.723.1 at 6.3 kbps.
Description
BACKGROUND OF THE INVENTION

Recently, there has been growing interest in developing toll-quality speech coders at rates of 4 kbps and below. The speech quality produced by waveform coders such as code-excited linear prediction (CELP) coders degrades rapidly at rates below 5 kbps [B. S. Atal and M. R. Schroeder, “Stochastic Coding of Speech at Very Low Bit Rate”, Proc. Int. Conf. Comm., Amsterdam, pp. 1610-1613, 1984]. On the other hand, parametric coders such as the waveform-interpolative (WI) coder, the sinusoidal-transform coder (STC), and the multiband-excitation (MBE) coder produce good quality at low rates, but they do not achieve toll quality [Y. Shoham, “High Quality Speech Coding at 2.4 and 4.0 kbps Based on Time-Frequency Interpolation”, IEEE ICASSP '93, Vol. II, pp. 167-170, 1993; W. B. Kleijn and J. Haagen, “Waveform Interpolation for Coding and Synthesis”, in Speech Coding and Synthesis, W. B. Kleijn and K. K. Paliwal, Eds., Elsevier Science B.V., Chapter 5, pp. 175-207, 1995; I. S. Burnett and D. H. Pham, “Multi-Prototype Waveform Coding using Frame-by-Frame Analysis-by-Synthesis”, IEEE ICASSP '97, pp. 1567-1570, 1997; R. J. McAulay and T. F. Quatieri, “Sinusoidal Coding”, in Speech Coding and Synthesis, W. B. Kleijn and K. K. Paliwal, Eds., Elsevier Science B.V., Chapter 4, pp. 121-173, 1995; and D. Griffin and J. S. Lim, “Multiband Excitation Vocoder”, IEEE Trans. ASSP, Vol. 36, No. 8, pp. 1223-1235, August 1988]. This is mainly due to the lack of robustness of the parameter estimation, which is commonly done in open loop, and to inadequate modeling of non-stationary speech segments. Also, in parametric coders the phase information is commonly not transmitted, for two reasons: first, the phase is of secondary perceptual significance; and second, no efficient phase quantization scheme is known. WI coders typically use a fixed phase vector for the slowly evolving waveform [Shoham, supra; Kleijn et al., supra; and Burnett et al., supra]. For example, in Kleijn et al., a fixed phase extracted from a male speaker was used. On the other hand, waveform coders such as CELP, by directly quantizing the waveform, implicitly allocate an excessive number of bits to the phase information, more than is perceptually required.


SUMMARY OF THE INVENTION

The present invention overcomes the foregoing drawbacks by implementing a paradigm that incorporates analysis-by-synthesis (AbS) for parameter estimation, and a novel pitch search technique that is well suited for non-stationary segments. In one embodiment, the invention provides a novel, efficient AbS vector quantization (VQ) encoding of the dispersion phase of the excitation signal to enhance the performance of the waveform interpolative (WI) coder at a very low bit rate; the scheme can be used for parametric coders as well as for waveform coders. The enhanced analysis-by-synthesis waveform interpolative (EWI) coder of this invention employs this scheme, which incorporates perceptual weighting and does not require any phase unwrapping.


WI coders use non-ideal low-pass filters for downsampling and upsampling of the slowly evolving waveform (SEW). In another embodiment of the invention, a novel AbS SEW quantization scheme is provided which takes the non-ideal filters into consideration. An improved match between the reconstructed and the original SEW is obtained, most notably in transitions.


Pitch accuracy is crucial for high quality reproduced speech in WI coders. Still another embodiment of the invention provides a novel pitch search technique based on varying segment boundaries; it allows for locking onto the most probable pitch period during transitions or other segments with rapidly varying pitch.


Commonly in speech coding, the gain sequence is downsampled and interpolated. As a result it is often smeared during plosives and onsets. To alleviate this problem, a further embodiment of the invention provides a novel switched-predictive AbS gain VQ scheme based on temporal weighting.


More particularly, the invention provides a method for interpolative coding of input signals at low data rates in which there may be significant pitch transitions, the signals having an evolving waveform, the method incorporating at least one, and preferably all, of the following steps:


(a) AbS VQ of the SEW whereby to reduce distortion in the signal by obtaining the accumulated weighted distortion between an original sequence of waveforms and a sequence of quantized and interpolated waveforms;


(b) AbS quantization of the dispersion phase;


(c) locking onto the most probable pitch period of the signal using both a spectral domain pitch search and a temporal domain pitch search;


(d) incorporating temporal weighting in the AbS VQ of the signal gain, whereby to emphasize local high energy events in the input signal;


(e) applying both high correlation and low correlation synthesis filters to a vector quantizer codebook in the AbS VQ of the signal gain whereby to add self correlation to the codebook vectors and maximize similarity between the signal waveform and a codebook waveform;


(f) using each value of gain in the AbS VQ of the signal gain to obtain a plurality of shapes, each composed of a predetermined number of values, and comparing said shapes to a vector quantized codebook of shapes, each having said predetermined number of values, e.g., in the range of 2-50, preferably 5-20; and


(g) using a coder in which a plurality of bits, e.g. 4 bits, are allocated to the SEW dispersion phase.


The method of the invention can be used in general with any waveform signal, and is particularly useful with speech signals. In the step of AbS VQ of the SEW, distortion is reduced in the signal by obtaining the accumulated weighted distortion between an original sequence of waveforms and a sequence of quantized and interpolated waveforms. In the step of AbS quantization of the dispersion phase, at least one codebook is provided that contains magnitude and phase information for predetermined waveforms. The linear phase of the input is crudely aligned, then iteratively shifted and compared to a plurality of waveforms reconstructed from the magnitude and phase information contained in one or more codebooks. The reconstructed waveform that best matches one of the iteratively shifted inputs is selected.


In the step of locking onto the most probable pitch period of the signal, the invention includes searching the temporal domain pitch, defining a boundary for a segment of said temporal domain pitch, maximizing the length of the boundary by iteratively shrinking and expanding the segment, and maximizing the similarity by shifting the segment. The searches are preferably conducted respectively at 100 Hz and 500 Hz.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of the AbS SEW vector quantization;



FIG. 2 shows amplitude-time plots illustrating the improved waveform matching obtained for a non-stationary speech segment by interpolating the optimized SEW;



FIG. 3 is a block diagram of the AbS dispersion phase vector quantization;



FIG. 4 is a plot of the segmentally weighted signal-to-noise ratio of the phase vector quantization versus the number of bits, for modified intermediate reference system (MIRS) and for non-MIRS (flat) speech;



FIG. 5 shows the results of subjective A/B tests comparing a 4-bit phase vector quantization and a male extracted fixed phase;



FIG. 6 is a block diagram of the pitch search of the EWI coder; and



FIG. 7 is a block diagram of the switch-predictive AbS gain VQ using temporal weighting.





DETAILED DESCRIPTION OF THE INVENTION

The invention has a number of embodiments, some of which can be used independently of the others to enhance speech and other signal coding systems. The embodiments cooperate to produce a superior coding system, involving AbS SEW optimization, a novel dispersion phase quantizer, a novel pitch search scheme, switched-predictive AbS gain VQ, and a suitable bit allocation.


AbS SEW Quantization


Commonly in WI coders the SEW is distorted by downsampling and upsampling with non-ideal low-pass filters. In order to reduce such distortion, an AbS SEW quantization scheme, illustrated in FIG. 1, was used. Consider the accumulated weighted distortion, $D_{wI}$, between the input SEW vectors, $r_m$, and the interpolated vectors, $\hat{r}_m$, given by:











$$D_{wI}\Big(\hat{r}_M,\{r_m\}_{m=1}^{M+L-1}\Big)=\sum_{m=1}^{M}\big[r_m-\hat{r}_m\big]^{H}W_m\big[r_m-\hat{r}_m\big]+\sum_{m=M+1}^{M+L-1}\big[1-\alpha(t_m)\big]^{2}\big[r_m-\hat{r}_M\big]^{H}W_m\big[r_m-\hat{r}_M\big]\tag{1}$$








where the first sum is that of the current distortions and the second sum is that of the lookahead distortions. H denotes the Hermitian transpose (transposed and complex conjugated), M is the number of waveforms per frame, L is the number of lookahead waveforms, α(t) is some increasing interpolation function in the range 0 ≤ α(t) ≤ 1, and $W_m$ is a diagonal matrix whose elements, $w_{kk}$, are the combined spectral-weighting and synthesis response for the k-th harmonic, given by:












$$w_{kk}=\frac{1}{K}\left|\frac{g\,A(z/\gamma_1)}{\hat{A}(z)\,A(z/\gamma_2)}\right|^{2}_{\;z=e^{\,j(2\pi/P)k}};\qquad k=1,\dots,K\tag{2}$$








where P is the pitch period, K is the number of harmonics, g is the gain, A(z) and Â(z) are the input and the quantized LPC polynomials respectively, and the spectral weighting parameters satisfy 0 ≤ γ2 ≤ γ1 ≤ 1. It is also possible to leave out the inverse of the number of harmonics, i.e., the 1/K factor, or the gain, i.e., the g factor, or to use another combination of the input and quantized LPC polynomials A(z) and Â(z).
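
As a sketch of how the weighting of equation (2) might be computed, the following Python fragment evaluates the bandwidth-expanded LPC polynomials at the harmonic frequencies. It is a minimal illustration, assuming numpy; the function names, the coefficient convention (a leading 1 followed by the predictor taps), and the default γ values are illustrative rather than taken from the patent.

```python
import numpy as np

def harmonic_weights(a, a_hat, g, pitch_period, num_harmonics,
                     gamma1=0.9, gamma2=0.6):
    """w_kk = (1/K) |g A(z/g1) / (A_hat(z) A(z/g2))|^2 at z = exp(j(2pi/P)k)."""
    def eval_lpc(coeffs, z, gamma=1.0):
        # A(z/gamma) = sum_p coeffs[p] * gamma**p * z**(-p)
        p = np.arange(len(coeffs))
        return np.sum(coeffs[None, :] * gamma ** p * z[:, None] ** (-p), axis=1)

    k = np.arange(1, num_harmonics + 1)
    z = np.exp(1j * (2.0 * np.pi / pitch_period) * k)  # k-th harmonic of 2pi/P
    num = g * eval_lpc(np.asarray(a), z, gamma1)
    den = eval_lpc(np.asarray(a_hat), z) * eval_lpc(np.asarray(a), z, gamma2)
    return np.abs(num / den) ** 2 / num_harmonics
```

As noted above, the 1/K factor and the gain g may be dropped without changing the relative weighting of the harmonics within one waveform.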


The interpolated SEW vectors are given by:

$$\hat{r}_m=[1-\alpha(t_m)]\,\hat{r}_0+\alpha(t_m)\,\hat{r}_M;\qquad m=1,\dots,M\tag{3}$$

where $t_m$ is the time of the m-th waveform in the frame, and $\hat{r}_0$ and $\hat{r}_M$ are the quantized SEW at the previous and at the current frame respectively. The parameter α is an increasing linear function from 0 to 1. It can be shown that the accumulated distortion in equation (1) is equal to the sum of a modeling distortion and a quantization distortion:











$$D_{wI}\Big(\hat{r}_M,\{r_m\}_{m=1}^{M+L-1}\Big)=D_{wI}\Big(r_{M,\mathrm{opt}},\{r_m\}_{m=1}^{M+L-1}\Big)+D_{w}\big(\hat{r}_M,\,r_{M,\mathrm{opt}}\big)\tag{4}$$








where the quantization distortion is given by:

$$D_{w}\big(\hat{r}_M,r_{M,\mathrm{opt}}\big)=\big(\hat{r}_M-r_{M,\mathrm{opt}}\big)^{H}\,W_{M,\mathrm{opt}}\,\big(\hat{r}_M-r_{M,\mathrm{opt}}\big)\tag{5}$$

The optimal vector, rM,opt, which minimizes the modeling distortion, is given by:











$$r_{M,\mathrm{opt}}=W_{M,\mathrm{opt}}^{-1}\left[\sum_{m=1}^{M}\alpha(t_m)\,W_m\Big[r_m-[1-\alpha(t_m)]\,\hat{r}_0\Big]+\sum_{m=M+1}^{M+L-1}[1-\alpha(t_m)]^{2}\,W_m\,r_m\right]\tag{6}$$

where

$$W_{M,\mathrm{opt}}=\sum_{m=1}^{M}\alpha(t_m)^{2}\,W_m+\sum_{m=M+1}^{M+L-1}[1-\alpha(t_m)]^{2}\,W_m\tag{7}$$







Therefore, VQ with the accumulated distortion of equation (1) can be simplified by using the distortion of equation (5), and:











$$\hat{r}_M=\arg\min_{r_i}\Big\{\big(r_i-r_{M,\mathrm{opt}}\big)^{H}\,W_{M,\mathrm{opt}}\,\big(r_i-r_{M,\mathrm{opt}}\big)\Big\}\tag{8}$$







An improved match between the reconstructed and the original SEW is obtained, most notably in transitions. FIG. 2 illustrates the improved waveform matching obtained for a non-stationary speech segment by interpolating the optimized SEW.
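
The following minimal numpy sketch illustrates equations (3) and (5)-(8): it forms the optimal target $r_{M,\mathrm{opt}}$ and its weighting $W_{M,\mathrm{opt}}$, then performs the weighted codebook search. All names are illustrative assumptions, the weight matrices are taken to be diagonal and stored as vectors, and the codebook structure is simplified.

```python
import numpy as np

def abs_sew_vq(r, W, alpha, M, r0_hat, codebook):
    """r: (M+L-1, K) complex input SEW vectors; W: (M+L-1, K) diagonal
    weights stored as vectors; alpha: (M+L-1,) interpolation function;
    r0_hat: (K,) quantized SEW of the previous frame; codebook: (N, K)
    candidate SEW vectors. Returns the quantized SEW of the current frame."""
    a = alpha[:, None]
    # Equation (7): optimal diagonal weighting.
    W_opt = (a[:M] ** 2 * W[:M]).sum(0) + ((1 - a[M:]) ** 2 * W[M:]).sum(0)
    # Equation (6): target vector minimizing the modeling distortion.
    num = (a[:M] * W[:M] * (r[:M] - (1 - a[:M]) * r0_hat)).sum(0) \
        + ((1 - a[M:]) ** 2 * W[M:] * r[M:]).sum(0)
    r_opt = num / W_opt
    # Equations (5) and (8): weighted nearest-neighbour codebook search.
    d = (W_opt * np.abs(codebook - r_opt) ** 2).sum(1)
    return codebook[np.argmin(d)]
```

The interpolation of equation (3) is implicit in the α terms; the decoder reconstructs the intermediate waveforms from $\hat{r}_0$ and the selected $\hat{r}_M$.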


AbS Phase Quantization


The dispersion-phase vector quantization scheme is illustrated in FIG. 3. Consider a pitch cycle which is extracted from the residual signal and is cyclically shifted such that its pulse is located at position zero. Let its discrete Fourier transform (DFT) be denoted by r; the resulting DFT phase is the dispersion phase, φ, which determines, along with the magnitude |r|, the waveform's pulse shape. The SEW waveform r is the vector of complex DFT coefficients, each of which represents a magnitude and a phase. After quantization, the components of the quantized magnitude vector, $|\hat{r}|$, are multiplied by the exponentials of the quantized phases, $\hat{\varphi}(k)$, to yield the quantized waveform DFT, $\hat{r}$, which is subtracted from the input DFT to produce the error DFT. The error DFT is then transformed to the perceptual domain by weighting it with the combined synthesis and weighting filter W(z)/A(z). In a crude linear phase alignment, the encoder shifts the signal such that its peak is located at time zero; it then searches for the phase that minimizes the energy of the perceptual-domain error, allowing a refining cyclic shift of the input waveform during the search, incrementally increasing or decreasing the linear phase, to eliminate any residual linear phase shift between the input waveform and the quantized waveform. Although shown in FIG. 3 as occurring immediately after the crude linear phase alignment, the refined linear phase alignment step can occur elsewhere in the cycle, e.g., between the × and + steps. Phase dispersion quantization aims to improve waveform matching. Efficient quantization can be obtained by using the perceptually weighted distortion:

$$D_w(r,\hat{r})=(r-\hat{r})^{H}\,W\,(r-\hat{r})\tag{9}$$


The magnitude is perceptually more significant than the phase and should therefore be quantized first. Furthermore, if the phase were quantized first, the very limited bit allocation available for the phase would lead to an excessively degraded spectral matching of the magnitude in favor of a somewhat improved, but less important, matching of the waveform. For the above distortion, the quantized phase vector is given by:










$$\hat{\varphi}=\arg\min_{\hat{\varphi}_i}\Big\{\big(r-e^{\,j\hat{\varphi}_i}\hat{r}\big)^{H}\,W\,\big(r-e^{\,j\hat{\varphi}_i}\hat{r}\big)\Big\}\tag{10}$$








where i is the running phase codebook index, and $e^{j\hat{\varphi}_i}$ is the respective diagonal phase exponent matrix, given by













$$e^{\,j\hat{\varphi}_i}=\mathrm{diag}\Big\{e^{\,j\hat{\varphi}_i(k)}\Big\}.\tag{11}$$








The AbS search for phase quantization is based on evaluating (10) for each candidate phase codevector. Since only trigonometric functions of the phase candidates are used, phase unwrapping is avoided. The EWI coder uses the optimized SEW, $r_{M,\mathrm{opt}}$, and the optimized weighting, $W_{M,\mathrm{opt}}$, for the AbS phase quantization.
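
A minimal sketch of evaluating equation (10) directly for each candidate codevector follows. It assumes the magnitude has already been quantized and that the diagonal perceptual weights are given as a vector; all names are illustrative, not from the patent. Only exponentials of the candidate phases are evaluated, so no phase unwrapping is involved.

```python
import numpy as np

def search_phase(r, r_hat_mag, w, phase_codebook):
    """r: (K,) complex input DFT; r_hat_mag: (K,) quantized magnitudes;
    w: (K,) diagonal perceptual weights; phase_codebook: (N, K) candidate
    dispersion-phase vectors in radians. Returns the best codevector index."""
    # Equation (10): weighted error between the input and the phase-rotated
    # quantized waveform, evaluated for every candidate codevector.
    r_hat = r_hat_mag[None, :] * np.exp(1j * phase_codebook)  # (N, K)
    err = (w[None, :] * np.abs(r[None, :] - r_hat) ** 2).sum(axis=1)
    return int(np.argmin(err))
```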







Equation (10) is equivalent to

$$\hat{\varphi}=\arg\max_{\hat{\varphi}_i}\left\{\int_{0}^{2\pi} r_w(\phi)\,\hat{r}_w(\hat{\varphi}_i,\phi)\,d\phi\right\}$$







Equivalently, the quantized phase vector can be simplified to:










$$\hat{\varphi}=\arg\max_{\hat{\varphi}_i}\left\{\sum_{k=1}^{K} w_{kk}\,\big|r(k)\big|\,\big|\hat{r}(k)\big|\,\cos\!\big(\varphi(k)-\hat{\varphi}_i(k)\big)\right\}\tag{12}$$








where φ(k) is the phase of r(k), the k-th input DFT coefficient. The average global distortion measure for a set of M vectors is:










$$D_{w,\mathrm{Global}}=\frac{1}{M}\sum_{m\in\{\text{Data Vectors}\}} D_w\big(r_m,\,e^{\,j\hat{\varphi}_m}\hat{r}_m\big)=\frac{1}{M}\sum_{m\in\{\text{Data Vectors}\}}\frac{1}{K_m}\sum_{k=1}^{K_m} w_{kk,m}\,\Big|r(k)_m-e^{\,j\hat{\varphi}(k)_m}\hat{r}(k)_m\Big|^{2}\tag{13}$$







The centroid equation [A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, 1992] of the k-th harmonic's phase for the j-th cluster, which minimizes the global distortion in equation (13), is given by:









$$\hat{\varphi}(k)\Big|_{j\text{-th cluster}}=\operatorname{atan}\left[\frac{\displaystyle\sum_{m\in\{j\text{-th cluster}\}}\frac{1}{K_m}\,w_{kk,m}\,\big|\hat{r}(k)_m\big|\,\big|r(k)_m\big|\,\sin\big(\varphi(k)_m\big)}{\displaystyle\sum_{m\in\{j\text{-th cluster}\}}\frac{1}{K_m}\,w_{kk,m}\,\big|\hat{r}(k)_m\big|\,\big|r(k)_m\big|\,\cos\big(\varphi(k)_m\big)}\right]$$






These centroid equations use trigonometric functions of the phase, and therefore do not require any phase unwrapping. It is possible to use $|r(k)_m|^{2}$ instead of $|\hat{r}(k)_m|\,|r(k)_m|$.
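
As an illustration, the centroid update for one harmonic of one training cluster might be computed as below; using atan2 of the weighted sine and cosine sums avoids phase unwrapping. This is a sketch with illustrative names, not the patent's implementation.

```python
import numpy as np

def phase_centroid(w_kk, r_hat_mag, r_mag, phi, K):
    """All arguments are arrays over the cluster members m, for one harmonic k:
    w_kk: weights; r_hat_mag, r_mag: quantized and input magnitudes;
    phi: input phases; K: number of harmonics per member."""
    c = (w_kk / K) * r_hat_mag * r_mag  # per-member weight of the centroid sums
    return np.arctan2((c * np.sin(phi)).sum(), (c * np.cos(phi)).sum())
```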


The phase vector's dimension depends on the pitch period; therefore, a variable-dimension VQ has been implemented. In the WI system the possible pitch period values were divided into eight ranges, and for each range of pitch periods an optimal codebook was designed such that vectors of dimension smaller than the largest pitch period in the range are zero padded.


Pitch changes over time cause the quantizer to switch among the pitch-range codebooks. In order to achieve smooth phase variations whenever such a switch occurs, overlapped training clusters were used.
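
A minimal sketch of the pitch-range bookkeeping follows; the eight range boundaries shown are hypothetical placeholders, since the patent does not list them, and the function names are illustrative.

```python
import numpy as np

PITCH_RANGE_EDGES = [20, 34, 48, 62, 76, 90, 104, 118, 148]  # hypothetical

def range_index(pitch_period):
    """Map a pitch period to one of the eight pitch-range codebooks."""
    for i in range(8):
        if pitch_period <= PITCH_RANGE_EDGES[i + 1]:
            return i
    return 7

def pad_phase_vector(phi, range_dim):
    """Zero pad a K-dimensional phase vector to its range's codebook dimension."""
    return np.pad(phi, (0, range_dim - len(phi)))
```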


The phase-quantization scheme has been implemented as part of a WI coder, and used to quantize the SEW phase. The objective performance of the suggested phase VQ has been tested under the following conditions:

    • Phase Bits: 0-6 every 20 ms, a bitrate of 0-300 bits/second.
    • 8 pitch ranges were selected, and training has been performed for each range.
    • Modified IRS (MIRS) filtered speech (Female+Male)
      • Training Set: 99,323 vectors.
      • Test Set: 83,099 vectors.
    • Non-MIRS filtered speech (Female+Male)
      • Training Set: 101,359 vectors.
      • Test Set: 95,446 vectors.
    • The magnitude was not quantized.


The segmental weighted signal-to-noise ratio (SNR) of the quantizer is illustrated in FIG. 4. The proposed system achieves approximately 14 dB SNR with as few as 6 bits for non-MIRS filtered speech, and nearly 10 dB for MIRS filtered speech.


Recent WI coders have used a male-speaker extracted dispersion phase [Kleijn et al., supra; Y. Shoham, “Very Low Complexity Interpolative Speech Coding at 1.2 to 2.4 KBPS”, IEEE ICASSP '97, pp. 1599-1602, 1997]. A subjective A/B test was conducted to compare the dispersion phase of this invention, using only 4 bits, to a male extracted dispersion phase. The test data included 16 MIRS speech sentences, 8 of which are of female speakers and 8 of male speakers. During the test, all pairs of files were played twice in alternating order, and the listeners could vote for either of the systems, or for no preference. The speech material was synthesized using a WI system in which only the dispersion phase was quantized every 20 ms. Twenty-one listeners participated in the test. The test results, illustrated in FIG. 5, show an improvement in speech quality from using the 4-bit phase VQ. The improvement is larger for female speakers than for male speakers. This may be explained by a higher number of bits per vector sample for female speakers, by less spectral masking in female speech, and by a larger amount of phase-dispersion variation for female speakers. The codebook design for the dispersion-phase quantization involves a tradeoff between robustness, in terms of smooth phase variations, and waveform matching. A locally optimized codebook for each pitch value may improve the waveform matching on average, but may occasionally yield abrupt and excessive changes which may cause temporal artifacts.


Pitch Search


The pitch search of the EWI coder consists of a spectral domain search employed at 100 Hz and a temporal domain search employed at 500 Hz, as illustrated in FIG. 6. The spectral domain pitch search is based on harmonic matching [McAulay et al., supra; Griffin et al., supra; and E. Shlomot, V. Cuperman, and A. Gersho, “Hybrid Coding of Speech at 4 kbps”, IEEE Speech Coding Workshop, pp. 37-38, 1997]. The temporal domain pitch search is based on varying segment boundaries. It allows for locking onto the most probable pitch period even during transitions or other segments with rapidly varying pitch (e.g., speech onset or offset, or fast changing periodicity). Initially, pitch periods, P(n_i), are searched every 2 ms at instances n_i by maximizing the normalized correlation of the weighted speech s_w(n), that is:










$$P(n_i)=\arg\max_{\tau,N_1,N_2}\big\{\rho(n_i,\tau,N_1,N_2)\big\}=\arg\max_{\tau,N_1,N_2}\left\{\frac{\displaystyle\sum_{n=n_i-N_1\Delta}^{n_i+\tau+N_2\Delta} s_w(n)\,s_w(n-\tau)}{\sqrt{\displaystyle\sum_{n=n_i-N_1\Delta}^{n_i+\tau+N_2\Delta} s_w(n)\,s_w(n)\;\displaystyle\sum_{n=n_i-N_1\Delta}^{n_i+\tau+N_2\Delta} s_w(n-\tau)\,s_w(n-\tau)}}\right\}\tag{14}$$








where τ is the shift of the segment, Δ is some incremental segment length used in the summations for computational simplicity, and 0 ≤ N_j ≤ ⌊160/Δ⌋. Then, every 10 ms a weighted-mean pitch value is calculated by:










$$P_{\mathrm{mean}}=\sum_{i=1}^{5}\rho(n_i)\,P(n_i)\Big/\sum_{i=1}^{5}\rho(n_i)\tag{15}$$








where ρ(n_i) is the normalized correlation for P(n_i). The above values (160, 10, 5) are for a particular coder and are used for illustration. Equation (14) describes the temporal domain pitch search and the temporal domain pitch refinement blocks of FIG. 6. Equation (15) describes the weighted average pitch block of FIG. 6.
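
The following Python sketch illustrates equations (14) and (15): the normalized correlation is maximized over the lag τ and over the segment boundaries N1 and N2, and adjacent pitch estimates are then combined by a correlation-weighted mean. The names, candidate-lag set, and boundary ranges are illustrative assumptions, and index bounds are assumed valid.

```python
import numpy as np

def rho(sw, n, tau, N1, N2, delta):
    """Normalized correlation of equation (14) over a varying segment."""
    seg = slice(n - N1 * delta, n + tau + N2 * delta + 1)
    x = sw[seg]
    y = sw[np.arange(seg.start, seg.stop) - tau]
    return (x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum() + 1e-12)

def pitch_at(sw, n, taus, max_N, delta):
    """Equation (14): maximize rho over the lag and the segment boundaries.
    Returns (best normalized correlation, best pitch period)."""
    return max(((rho(sw, n, t, N1, N2, delta), t)
                for t in taus
                for N1 in range(max_N + 1)
                for N2 in range(max_N + 1)), key=lambda p: p[0])

def mean_pitch(corrs, periods):
    """Equation (15): correlation-weighted mean of adjacent pitch values."""
    corrs, periods = np.asarray(corrs), np.asarray(periods)
    return (corrs * periods).sum() / corrs.sum()
```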


Gain Quantization


The gain trajectory is commonly smeared during plosives and onsets by downsampling and interpolation. This problem is addressed, and speech crispness is improved, in accordance with an embodiment of the invention that provides a novel switched-predictive AbS gain VQ technique, illustrated in FIG. 7. Switched prediction is introduced to allow for different levels of gain correlation, and to reduce the occurrence of gain outliers. In order to improve speech crispness, especially for plosives and onsets, temporal weighting is incorporated in the AbS gain VQ. The weighting is a monotonic function of the temporal gain. Two codebooks of 32 vectors each are used. Each codebook has an associated predictor coefficient, Pi, and a DC offset, Di. The quantization target vector is the DC-removed log-gain vector, denoted by t(m). The search for the minimal weighted mean squared error (WMSE) is performed over all the vectors, cij(m), of the codebooks. The quantized target, t̂(m), is obtained by passing the quantized vector, cij(m), through the synthesis filter. Since each quantized target vector may have a different value of the removed DC, the quantized DC is temporarily added to the filter memory after the state update, and the next quantized vector's DC is subtracted from it before filtering is performed. Since the predictor coefficients are known, direct VQ can be used to simplify the computations. The synthesis filter adds self correlation to the codebook vectors. All combinations are tried, and whether high or low self correlation is used depends on which yields the best result.
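
A minimal sketch of this switched-predictive AbS gain VQ might look as follows, with two codebooks, per-codebook predictor coefficients and DC offsets, a first-order synthesis filter, and a temporal WMSE criterion. All names, and the first-order form of the filter, are illustrative assumptions rather than the patent's exact implementation.

```python
import numpy as np

def gain_vq(log_gain, state, codebooks, predictors, dc_offsets, weights):
    """log_gain: (m,) target log-gain vector; state: previous filter memory;
    codebooks[i]: (32, m) shape codebooks; weights: temporal weights that
    emphasize local high-energy events. Returns (codebook, vector, state)."""
    best = (np.inf, None)
    for i, cb in enumerate(codebooks):
        t = log_gain - dc_offsets[i]  # DC-removed target for this codebook
        for j, c in enumerate(cb):
            # Synthesis filter: t_hat(m) = c(m) + p_i * t_hat(m - 1),
            # with this codebook's DC subtracted from the inherited memory.
            t_hat = np.empty_like(t)
            mem = state - dc_offsets[i]
            for m in range(len(t)):
                mem = c[m] + predictors[i] * mem
                t_hat[m] = mem
            err = (weights * (t - t_hat) ** 2).sum()  # temporal WMSE
            if err < best[0]:
                # New state: last quantized value with its DC restored.
                best = (err, (i, j, t_hat[-1] + dc_offsets[i]))
    return best[1]
```

Because both codebooks and all 32 vectors are searched exhaustively, the selection between high and low correlation falls out of the WMSE comparison, as described above.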


Bit Allocation


The bit allocation of the coder is given in Table 1. The frame length is 20 ms, and ten waveforms are extracted per frame. The pitch and the gain are coded twice per frame.









TABLE 1
Bit allocation for EWI coder

Parameter    Bits/Frame    Bits/second
LPC          18            900
Pitch        2 × 6 = 12    600
Gain         2 × 6 = 12    600
REW          20            1000
SEW magn.    14            700
SEW phase    4             200
Total        80            4000











Subjective Results


A subjective A/B test was conducted to compare the 4 kbps EWI coder of this invention to MPEG-4 at 4 kbps, and to G.723.1. The test data included 24 MIRS speech sentences, 12 of which are of female speakers and 12 of male speakers. Fourteen listeners participated in the test. The test results, listed in Tables 2 to 4, indicate that the subjective quality of EWI exceeds that of MPEG-4 at 4 kbps and of G.723.1 at 5.3 kbps, and is slightly better than that of G.723.1 at 6.3 kbps.













TABLE 2
Test      4 kbps WI    4 kbps MPEG-4
Female    65.48%       34.52%
Male      61.90%       38.10%
Total     63.69%       36.31%











Table 2 shows the results of subjective A/B tests comparing the 4 kbps WI coder with the 4 kbps MPEG-4. With 95% certainty the WI preference lies in [58.63%, 68.75%].













TABLE 3
Test      4 kbps WI    5.3 kbps G.723.1
Female    57.74%       42.26%
Male      61.31%       38.69%
Total     59.52%       40.48%











Table 3 shows the results of subjective A/B tests comparing the 4 kbps WI coder with 5.3 kbps G.723.1. With 95% certainty the WI preference lies in [54.17%, 64.88%].













TABLE 4
Test      4 kbps WI    6.3 kbps G.723.1
Female    54.76%       45.24%
Male      52.98%       47.02%
Total     53.87%       46.13%











Table 4 shows the results of subjective A/B tests comparing the 4 kbps WI coder with 6.3 kbps G.723.1. With 95% certainty the WI preference lies in [48.51%, 59.23%].


The present invention incorporates several new techniques that enhance the performance of the WI coder: analysis-by-synthesis vector quantization of the dispersion phase, AbS optimization of the SEW, a special pitch search for transitions, and switched-predictive analysis-by-synthesis gain VQ. These features improve the algorithm and its robustness. The test results indicate that the performance of the EWI coder slightly exceeds that of G.723.1 at 6.3 kbps, and therefore EWI achieves very nearly toll quality, at least under clean speech conditions.

Claims
  • 1. A method for using a computer processor to interpolatively code a digitized audio waveform input signal having a first bitrate into a coded audio waveform output signal having a second bitrate lower than said first bitrate, said method comprising the steps of: extracting a slowly evolving waveform from the digitized audio waveform input signal; estimating a dispersion phase of an excitation signal; locking onto a most probable pitch period; quantizing a sequence of gain trajectory correlation values; using the computer processor to transform the extracted slowly evolving waveform, the estimated dispersion phase, the most probable pitch period and the quantized sequence of gain trajectory values into an interpolatively coded audio waveform output signal with said lower bitrate; and outputting said coded audio waveform output signal, wherein said method comprises using the computer processor to execute at least one step selected from the group consisting of: (a) performing an analysis-by-synthesis vector quantization of the dispersion phase such that a linear shift phase residual is minimized; (b) computing a weighted average of a group of adjacent pitch values in order to compute the most probable pitch period; (c) performing spectral and temporal pitch searching in order to compute the most probable pitch period, such that the temporal pitch searching is performed at a different rate than the spectral pitch searching; (d) incorporating temporal weighting in an analysis-by-synthesis vector-quantization of the gain trajectory correlation values; (e) quantizing adjacent gain trajectory correlation values by analysis-by-synthesis vector-quantization without downsampling or interpolation; (f) incorporating switched prediction filtering in an analysis-by-synthesis vector-quantization of the sequence of gain trajectory correlation values; and (g) temporal pitch searching with varying segment boundaries.
  • 2. The method of claim 1 in which said method incorporates all of steps (a) through (g).
  • 3. The method of claim 2 in which said digitized audio waveform input signal is representative of speech and said coded output signal has a subjective speech quality at 4 kbps better than that of G.723 coding at 6.3 kbps.
  • 4. The method of claim 1, wherein distortion is reduced by obtaining an accumulated weighted distortion between a sequence of input waveforms and a sequence of quantized and interpolated waveforms.
  • 5. The method of claim 1 wherein said at least one step is step (a), further comprising providing at least one codebook comprising magnitude and dispersion phase information for predetermined waveforms, approximately aligning a linear phase of an input or output, then iteratively shifting the approximately aligned linear phase input or output, comparing the shifted input or output to a plurality of waveforms reconstructed from the magnitude and dispersion phase information contained in said at least one codebook, and selecting the reconstructed waveform that best matches one of the iteratively shifted inputs or outputs.
  • 6. The method of claim 1 wherein said at least one step includes step (g) and said varying segment boundaries are used to compute a best boundary by iteratively shifting and changing the length of the segments.
  • 7. The method of claim 1 wherein said at least one step is step (c), the spectral pitch search is conducted at a first rate and the temporal pitch searching is conducted at a second rate different from said first rate.
  • 8. The method of claim 1 wherein said at least one step is step (d) and said temporal weighting emphasizes local high energy events in the input signal.
  • 9. The method of claim 1, wherein said at least one step is step (e) or step (f) and both high correlation and low correlation synthesis filters are applied to a vector quantizer codebook and a selected one of the high and low correlation synthesis filters maximizes similarity between an input target gain vector and a reconstructed vector.
  • 10. A method for using a computer to quantize audio waveforms comprising: inputting digitized audio waveform signals to the computer,using the computer to generate a plurality of adjacent quantized and interpolated output waveforms having a lower bitrate than the input waveform signals;using the computer to determine an accumulated distortion between the input waveform signals and each of said adjacent quantized and interpolated output waveforms; andgenerating a reconstructed waveform using said accumulated distortion.
  • 11. The method of claim 10 including using accumulated spectrally weighted distortion.
  • 12. A method for using a computer to interpolatively code digitized audio waveform signals comprising: inputting the digitized audio waveform signals to the computer,extracting a slowly evolving waveform from said signals;extracting a dispersion phase from said slowly evolving waveform;performing an analysis-by-synthesis quantization of said dispersion phase; andusing the quantized dispersion phase to transform the input waveform signals into an interpolatively coded output waveform signals having a lower bitrate than said input waveform signals.
  • 13. The method of claim 12 further comprising: providing at least one codebook containing magnitude and dispersion phase information for predetermined waveforms,approximately aligning a linear phase of the digitized audio waveform signals,then iteratively shifting the approximately aligned linear phase relative to a plurality of vectors reconstructed from the magnitude and dispersion phase information contained in said at least one codebook, andselecting one of the thus reconstructed vectors that best matches one of the iteratively shifted input vectors.
  • 14. A method for using a computer processor to interpolatively code an audio waveform having certain attributes and components including a slowly evolving waveform and an associated dispersion phase, comprising: inputting digitized audio waveform signals to the computer processor and using the computer to perform analysis-by-synthesis quantization of the associated dispersion phase, including providing at least one codebook containing magnitude and dispersion phase information for predetermined waveforms, crudely aligning a linear phase of the input vector, then iteratively shifting said crudely aligned linear phase input vector relative to a plurality of vectors reconstructed from the magnitude and dispersion phase information contained in said at least one codebook, and selecting the reconstructed vector that best matches the input vector, in which a distortion measure for a given data vector is determined by a perceptually weighted average of distortion measures for harmonics of the given data vector, wherein the perceptually weighted average combines a spectral weighting and synthesis, in which an average global distortion measure for a particular vector set M is an average of distortion measures for the vectors in M and global distortion is minimized by using a centroid formula to determine phases of harmonics; and using the thus selected best matching reconstructed vector to transform the input waveform signals into interpolatively coded output waveform signals having a lower bitrate than said input waveform signals.
  • 15. The method of claim 14, wherein the centroid formula uses both input waveform coefficients and quantized slowly evolving waveform coefficients.
  • 16. A method for using a computer to interpolatively code digitized audio waveform signals, comprising: inputting the digitized audio waveform signals to the computer performing spectral pitch searching on said signals,performing temporal pitch searching on said signals;determining a number of adjacent pitch values;computing a most probable pitch value by computing a weighted average pitch value from the adjacent pitch values; andusing the thus computed most probable pitch value to transform the input waveform signals into interpolatively coded output waveform signals having a lower bitrate than said input waveform signals.
  • 17. The method of claim 16 in which the step of performing temporal domain pitch searching comprises defining a boundary for a segment used for summations in a computed measure used for the pitch searching, and selecting the boundaries of the segment that optimize the computed measure by iteratively shifting and expanding the segment.
  • 18. The method of claim 16 in which the step of computing a number of adjacent pitch values includes using a respective function of normalized autocorrelations obtained for each pitch value as an associated probability weight to compute the weighted average pitch value.
  • 19. A method for using a computer to interpolatively code digitized audio waveform signals comprising: inputting the digitized audio waveform signals to the computer,performing spectral domain and temporal domain pitch searches to lock onto a most probable pitch period of each of the signals,determining a number of adjacent pitch values,then computing the most probable pitch value by computing a weighted average pitch value, andusing the thus computed most probable pitch value to transform the digitized audio waveform signals into interpolatively coded output waveform signals having a lower bitrate than said digitized audio waveform signals,wherein the temporal domain pitch searching is based on harmonic matching using varying segment boundaries.
  • 20. The method of claim 19 in which the spectral domain and temporal domain pitch searches are conducted respectively at 100 Hz and 500 Hz.
  • 21. A method of using a computer to interpolatively code digitized audio waveform input signals comprising inputting the digitized audio waveform signals to a computer;using a weighted average using normalized correlations for weights to compute a weighted average pitch value out of a set of pitch values of the waveform signals, wherein each of the pitch values is used to regenerate a respective reconstructed waveform; andusing the thus computed weighted average pitch value to transform a digitized audio waveform signal into an interpolatively coded output waveform signal having a lower bitrate than said digitized audio waveform signals.
  • 22. A method for using a computer to interpolatively code digitized audio waveform signals, comprising: inputting the digitized audio waveform signals to the computer;performing analysis-by-synthesis vector quantization of a gain sequence of each of the waveform input signals, and regenerating an output signal using said gain sequence; andusing the resultant vector quantized gain sequence value to transform a digitized audio waveform signal into an interpolatively coded output waveform signal having lower bitrate than said digitized audio waveform signals.
  • 23. The method of claim 22 including using temporal weighting which is changed as a function of time whereby to emphasize local high energy events in the input signals.
  • 24. The method of claim 23, further comprising applying a synthesis filter or predictor, which introduces selected correlation to a vector quantizer codebook in the analysis-by-synthesis vector-quantization of the signal gain sequence to add selected self correlation to the codebook vectors.
  • 25. The method of claim 24 in which selection between the high and low correlation synthesis filters or predictor is made to maximize similarity between signal and reconstructed vectors.
  • 26. The method of claim 22, comprising using each value of gain index in the analysis-by-synthesis vector-quantization of the signal gain.
  • 27. The method of claim 22 wherein each value of gain index is used to select from a plurality of shapes and associated predictors or filters, each of which is used to generate an output shape vector, and comparing the output shape vector to an input shape vector.
  • 28. The method of claim 27 in which said plurality of shapes has a predetermined number of values in the range of 2 to 50.
  • 29. The method of claim 27 in which said plurality of shapes has a predetermined number of values in the range of 5 to 20.
  • 30. The method of claim 22 including using a switch predictive synthesis filter or predictor.
  • 31. A method for using a computer to interpolatively code audio waveform signals, comprising: inputting a digitized waveform signal to the computer; decomposing said signal into a slowly evolving waveform, performing a vector-quantization of a dispersion phase of the slowly evolving waveform from which a linear shift attribute was reduced or removed, and transforming the digitized audio waveform signals into interpolatively coded output waveform signals having a lower bitrate than said digitized audio waveform signals, wherein a plurality of bits of the coded output waveform signals are allocated to the vector-quantized dispersion phase with the reduced linear shift attribute.
  • 32. The method of claim 31 in which at least one bit is allocated to the dispersion phase.
  • 33. A method for using a computer to interpolatively code audio waveform signals comprising: inputting digitized audio waveform signals to a computer;using at least one processor of the computer to: determine input vectors representing the waveform signals;determine interpolated vectors for modeling the input vectors;compute an accumulated weighted distortion between the input vectors and the interpolated vectors as a sum of a modeling distortion and a quantization distortion; anddetermine an optimal vector which minimizes the modeling distortion; andusing the thus computed accumulated weighted distortion to transform the digitized audio waveform signals into interpolatively coded output signals having a lower bitrate than said digitized audio waveform signals.
  • 34. The method of claim 33 further comprising: using at least one processor of the computer to determine a respective quantized vector from the optimal vector.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Provisional Patent Application Nos. 60/110,522, filed Dec. 1, 1998, and 60/110,641, filed Dec. 1, 1998.

PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/US99/28449 12/1/1999 WO 00 8/13/2001
Publishing Document Publishing Date Country Kind
WO00/33297 6/8/2000 WO A
US Referenced Citations (5)
Number Name Date Kind
4653098 Nakata et al. Mar 1987 A
5086471 Tanaka et al. Feb 1992 A
5517595 Kleijn May 1996 A
6418408 Udaya Bhaskar et al. Jul 2002 B1
6493664 Udaya Bhaskar et al. Dec 2002 B1
Provisional Applications (2)
Number Date Country
60110522 Dec 1998 US
60110641 Dec 1998 US