The present invention generally relates to speech coding, in particular to pitch-synchronous speech coding using timbre vectors.
Speech coding is an important field of speech technology. The original speech signal is analog. Transmitting the original speech signal takes a huge bandwidth and is error prone. For several decades, coding methods and systems have been developed to compress the speech signal into a low-bit-rate digital signal for transmission. The current status of the technology is summarized in a number of monographs, for example, Part C of “Springer Handbook of Speech Processing”, Springer Verlag, 2007; and “Digital Speech”, Second Edition, by A. M. Kondoz, Wiley, 2004. There are several hundred patents and patent applications with “speech coding” in the title. A speech coding system has two components: the encoder converts the speech signal into a compressed digital signal, and the decoder converts the compressed digital signal back into an analog speech signal. The current technology for low-bit-rate speech coding is based on the following principles:
For encoding, first, the speech signal is segmented into frames of a fixed duration. Second, a program determines whether a frame is voiced or unvoiced. Third, for voiced frames, the pitch period in the frame is found. Fourth, the linear predictive coding (LPC) coefficients of each frame are extracted. The voicedness index (voiced or unvoiced), the pitch period, and the LPC coefficients are then quantized to a limited number of bits to become the encoded speech signal for transmission. In the decoding process, the voiced segments and the unvoiced segments are treated differently. For voiced segments, a string of pulses is generated according to the pitch period and then filtered by the LPC-based spectrum to generate the voiced sound. For unvoiced segments, a noise signal is generated and then filtered by the LPC-based spectrum to generate an unvoiced consonant. Because the pitch period is a property of the frame, each frame must be longer than the maximum pitch period of the human voice, which is typically 25 msec. The frame must be multiplied by a window function, typically a Hamming window function, to make the ends approximately match. To ensure that no information is neglected, each frame must overlap with the previous frame and the following frame, with a typical frame shift of 10 msec.
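As a concrete illustration of this prior-art analysis step, the following Python sketch computes the LPC coefficients of one Hamming-windowed 25 msec frame using the autocorrelation method and the Levinson-Durbin recursion. The function name, the 8 kHz sample rate, and the order of 12 are illustrative assumptions, not details taken from the cited references.

```python
import numpy as np

def lpc_coefficients(frame, order=12):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns the predictor polynomial a (with a[0] = 1) and the residual energy."""
    n = len(frame)
    # Autocorrelation at lags 0..order
    r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient of order i
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

# Usage: one 25 msec frame at an assumed 8 kHz sample rate
fs = 8000
t = np.arange(int(0.025 * fs))
frame = np.sin(2 * np.pi * 150 * t / fs) * np.hamming(len(t))
a, err = lpc_coefficients(frame)
```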
The quality of LPC-based speech coding is limited by the intrinsic properties of the LPC coefficients, which are pitch-asynchronous and restricted to a rather small number of parameters because of non-converging behavior when the number of coefficients is increased; the usual limit is 10 to 16 coefficients. The quality of LPC-based speech coding is always compared with the 8-kHz sample rate, 8-bit voice signal, the so-called legacy telephone standard, toll-quality, or narrow-band speech signal. In the 21st century, voice recording and voice reproduction devices can provide CD-quality speech signals, with at least a 32 kHz sample rate and 16-bit resolution. Toll-quality speech is considered poor. Speech coding should be able to generate quality comparable to the CD-quality speech signal.
It is well known that the voiced speech signal is pseudo-periodic, and the LPC coefficients become inaccurate at the onset time of a pitch period. To improve the quality of speech coding, pitch-synchronous speech coding has been proposed, researched, and patented. See, for example, R. Taori et al., “Speech Compression Using Pitch Synchronous Interpolation”, Proceedings of ICASSP-1995, vol. 1, pages 512-515; H. Yang et al., “Pitch Synchronous Multi-Band (PSMB) Speech Coding”, Proceedings of ICASSP-1995, vol. 1, pages 516-519; C. Sturt et al., “LSF Quantization for Pitch Synchronous Speech Coders”, Proceedings of ICASSP-2003, vol. 2, pages 165-168; and U.S. Pat. No. 5,864,797 by M. Fujimoto, “Pitch-synchronous Speech Coding by Applying Multiple Analysis to Select and Align a Plurality of Types of Code Vectors”, Jan. 26, 1999. They showed that by using pitch-synchronous LPC coefficients or pitch-synchronous multi-band coding, the quality can be improved.
In two previous patents by the current applicant (U.S. Pat. No. 8,719,030, entitled “System and Method for Speech Synthesis”, and U.S. Pat. No. 8,942,977, entitled “System and Method for Speech Recognition Using Pitch-Synchronous Spectral Parameters”), a pitch-synchronous segmentation scheme and a new mathematical representation, timbre vectors, were proposed as an alternative to fixed-window-size segmentation and LPC coefficients. The new methods enable the parameterization and reproduction of wide-band speech signals with high fidelity, thus providing a new method of speech coding, especially for CD-quality speech signals. The current patent application discloses systems and methods of speech coding using timbre vectors.
The present invention discloses a pitch-synchronous method and system for speech coding using timbre vectors, following U.S. Pat. No. 8,719,030 and U.S. Pat. No. 8,942,977.
According to an exemplary embodiment of the invention, see
On the decoding side, as shown in
Because the period-by-period process duplicates the natural process of speech production, and the timbre vectors capture detailed information about the spectrum of each speech segment, the decoded voice can have a much higher quality than that of speech coding algorithms based on fixed-duration frames and linear predictive coding (LPC) parameterization, while still being transmitted with very low bandwidth.
Various exemplary embodiments of the present invention are implemented on a computer system including one or more processors and one or more memory units. In this regard, according to exemplary embodiments, steps of the various methods described herein are performed on one or more computer processors according to instructions encoded on a computer-readable medium.
During the above process, the type of the said frame (pitch period) is determined, see 118. If the amplitude is smaller than a silence threshold, the frame is silence, type 0. If the intensity is higher than the silence threshold but there are no pitch marks, the frame is unvoiced, type 1. For frames bounded by pitch marks, if the amplitude spectrum is concentrated in the low-frequency range (0 to 5 kHz), then the frame is voiced, type 3. If the amplitude spectrum in the higher-frequency range (5 to 16 kHz) is substantial, for example carrying 30% or more of the power, then the frame is transitional, which is a voiced fricative or a transition frame between voiced and unvoiced, type 2. The type information is encoded in a 2-bit type index, 119. For voiced frames, the pitch value, 120, is conveniently expressed in MIDI units. Using a pitch codebook 121, the said pitch is scalar-quantized by unit 122. The said intensity 124 is conveniently expressed in decibels (dB). Using an intensity codebook 125, through scalar quantization 126, the intensity index 127 of the frame is generated. Furthermore, using a timbre codebook 128 and vector quantization 129, the timbre index 130 of the frame is generated. Notice that for each type of frame there is a different codebook. Details will be disclosed later with respect to
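A minimal Python sketch of this type decision follows. The silence threshold, the RMS intensity measure, and the function names are assumptions for illustration; the 5 kHz boundary and the 30% power criterion follow the description above, and the MIDI conversion is the standard formula.

```python
import numpy as np

def classify_frame(frame, fs, has_pitch_marks, silence_threshold=1e-4):
    """Return the 2-bit frame type: 0 silence, 1 unvoiced,
    2 transitional, 3 voiced."""
    if np.sqrt(np.mean(frame ** 2)) < silence_threshold:
        return 0                              # silence
    if not has_pitch_marks:
        return 1                              # unvoiced
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    if power[freqs >= 5000.0].sum() >= 0.30 * power.sum():
        return 2                              # transitional (voiced fricative etc.)
    return 3                                  # voiced

def pitch_to_midi(f0):
    """Express a pitch frequency in MIDI units (A4 = 440 Hz = 69)."""
    return 69.0 + 12.0 * np.log2(f0 / 440.0)
```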
The ± sign is used to accommodate the polarity of the PCM signals. If the positive sign is taken, the value is positive for 0<n<N but becomes zero at n=N; and it is negative for −N<n<0, again becoming zero at n=−N. Denoting the PCM signal as p(n), a profile function is generated
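The explicit formula is not reproduced here. As a minimal sketch, assume the weighting function is w(n) = ±sin(πn/N), which matches the sign behavior just described, and correlate it with the PCM signal around a candidate point:

```python
import numpy as np

def profile(p, m, N, sign=1.0):
    """Correlate the PCM signal p around sample m with an assumed
    weighting w(n) = sign * sin(pi * n / N), which is positive on
    (0, N), negative on (-N, 0), and zero at n = +/-N."""
    n = np.arange(-N, N + 1)
    w = sign * np.sin(np.pi * n / N)
    return float(np.dot(p[m - N:m + N + 1], w))
```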
A typical result is shown in
As shown in
If the first byte of a group of bytes has its highest two bits equal to 01, see 504 and 505, the frame is unvoiced. The frame duration is also 8 msec. A pitch index is not required. The remaining 6 bits are the intensity index, 506. By looking up an unvoiced intensity codebook 507, the intensity of the said unvoiced frame is determined. Each unvoiced frame is represented by two bytes. The first two bits of the second byte represent the number of repetitions. If two consecutive frames have the identical timbre vector, the repetition index is 1. If three consecutive frames have the identical timbre vector, the repetition index is 2. The maximum repetition index is set to 3. This upper bound is designed for two purposes. First, the intensity of the repeated frames has to be interpolated from the end-point frames; to ensure quality, a limit of four frames is needed. Second, the encoding of four repeated unvoiced frames takes 32 msec. Because the tolerable encoding delay is 70 to 80 msec, 32 msec is acceptable, but too many repeated frames would cause too much encoding delay.
If the first two bits of the leading byte, 512 or 522, are 10 or 11, see 513 and 523, the frame is voiced or transitional, and two following bytes should be fetched from the transmission stream, ch1 and ch2. Similar to the case of unvoiced frames, the remaining 6 bits of the leading byte represent the intensity index, 514 or 524. By looking up an intensity codebook, 515 or 525, the intensity is determined. The second byte, 516 or 526, carries a repetition index, 517 or 527, and a pitch index, 518 or 528. The repetition is likewise limited to four consecutive frames, and both intensity and pitch have to be linearly interpolated from the two end-point frames. By looking up a pitch codebook, 519 or 529, the pitch value is determined. The third byte, 520 or 530, is the timbre index. By looking up a timbre codebook, 521 or 531, the timbre vector is determined. Because each type of frame has its own codebooks, a codebook size of 256 for each type is adequate.
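A minimal Python sketch of fetching and unpacking one frame from the byte stream, assuming the layout described in the last two paragraphs; the one-byte silence case and all field names are assumptions, since the silence layout is not reproduced here.

```python
def decode_frame(stream, pos):
    """Unpack one encoded frame starting at stream[pos].
    Returns a dict of raw indices and the next read position."""
    b0 = stream[pos]
    ftype = b0 >> 6                 # 2-bit type field
    intensity_index = b0 & 0x3F     # remaining 6 bits
    if ftype == 0b00:               # silence: assumed one-byte frame
        return {"type": "silence"}, pos + 1
    if ftype == 0b01:               # unvoiced: two bytes
        repetition = stream[pos + 1] >> 6
        return {"type": "unvoiced", "intensity_index": intensity_index,
                "repetition": repetition}, pos + 2
    b1, b2 = stream[pos + 1], stream[pos + 2]
    return {"type": "voiced" if ftype == 0b10 else "transitional",
            "intensity_index": intensity_index,
            "repetition": b1 >> 6,          # 2-bit repetition index
            "pitch_index": b1 & 0x3F,       # 6-bit pitch index
            "timbre_index": b2}, pos + 3    # full-byte timbre index
```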
During encoding, the determination of type 2 (transitional) and type 3 (voiced) is based on the spectral distribution, as presented above: if the speech power in a frame with a well-defined pitch period is concentrated in the low-frequency range (0 to 5 kHz), the frame is voiced; if the power in the high-frequency range (5 kHz and up) is substantial, it is a transitional frame. During decoding, different types of frames are treated differently. For voiced frames, below 5 kHz the phase is generated by the Kramers-Kronig relations, and above 5 kHz the phase is random. For transitional frames, below 2.5 kHz the phase is generated by the Kramers-Kronig relations, and above 2.5 kHz the phase is random. For unvoiced frames, the phase is random over the entire frequency scale. For details, see U.S. Pat. No. 8,719,030.
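The Kramers-Kronig relation between log-amplitude and phase can be evaluated with the standard cepstral (Hilbert-transform) method; the sketch below is one plausible realization, not necessarily the procedure of U.S. Pat. No. 8,719,030. Bins above the 5 kHz (voiced) or 2.5 kHz (transitional) boundary would then have their phase replaced by random values, as described above.

```python
import numpy as np

def kramers_kronig_phase(amplitude):
    """Minimum phase from an amplitude spectrum (rfft bins) via the
    folded real cepstrum; this evaluates the Kramers-Kronig relation
    between log-amplitude and phase."""
    n_fft = 2 * (len(amplitude) - 1)
    # Symmetric full-length log-magnitude spectrum
    log_mag = np.log(np.maximum(amplitude, 1e-12))
    full = np.concatenate([log_mag, log_mag[-2:0:-1]])
    cep = np.fft.ifft(full).real
    # Fold the cepstrum onto the causal side (minimum-phase condition)
    fold = np.zeros(n_fft)
    fold[0] = 1.0
    fold[1:n_fft // 2] = 2.0
    fold[n_fft // 2] = 1.0
    log_min_phase = np.fft.fft(cep * fold)
    return np.imag(log_min_phase)[:len(amplitude)]
```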
To improve naturalness, jitter may be added to the pitch values: a small random perturbation, usually 1% to 3%, is added to each pitch value. Similarly, shimmer may be added to the intensity values: a small random perturbation, usually 1% to 3%, is added to each intensity value.
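A minimal sketch, assuming the perturbation is uniform and relative to the current value (the distribution is not specified above):

```python
import numpy as np

rng = np.random.default_rng()

def add_jitter_shimmer(pitch, intensity, amount=0.02):
    """Perturb pitch and intensity by up to +/- `amount` (about 2%)."""
    jitter = 1.0 + amount * rng.uniform(-1.0, 1.0)
    shimmer = 1.0 + amount * rng.uniform(-1.0, 1.0)
    return pitch * jitter, intensity * shimmer
```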
Fast Fourier transform (FFT) is an efficient method of Fourier analysis. However, FFT is much more efficient when the number of points is an integer power of 2, such as 64, 128, or 256. For voiced frames, the pitch period is variable. In order to utilize FFT, the PCM values in each pitch period are first linearly interpolated onto 2^n points; in the exemplary embodiment presented here, 8×32=256 points. After FFT, the amplitude spectrum is interpolated back to the true length of the pitch period.
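A minimal sketch of this resampling step in Python; using np.interp in both directions is an assumption consistent with the linear interpolation described above.

```python
import numpy as np

def pitch_period_spectrum(period, n_fft=256):
    """Resample one pitch period onto n_fft points, apply the FFT, and
    map the amplitude spectrum back to the true period length."""
    n = len(period)
    # Linear interpolation onto a power-of-2 grid
    grid = np.linspace(0.0, n - 1, n_fft)
    resampled = np.interp(grid, np.arange(n), period)
    amplitude = np.abs(np.fft.rfft(resampled))
    # Reverse interpolation to the spectral resolution of the true period
    true_bins = np.linspace(0.0, len(amplitude) - 1, n // 2 + 1)
    return np.interp(true_bins, np.arange(len(amplitude)), amplitude)
```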
The art of building codebooks is well known in the literature; see, for example, A. Gersho and R. M. Gray, “Vector Quantization and Signal Compression”, Kluwer Academic Publishers, Boston, 1991. The basic method of building codebooks is the K-means clustering algorithm. A brief summary of the said algorithm can be found in F. Jelinek, “Statistical Methods for Speech Recognition”, The MIT Press, Cambridge, Mass., 1997, pages 10-11. Briefly, the K-means clustering process for timbre vectors is as follows: collect a large database of timbre vectors of a category (voiced, unvoiced, or transitional); choose a fixed number of timbre vectors at random as seeds; divide the entire vector space to find the cluster of timbre vectors closest to each seed; find the center of each cluster; use the cluster centers as the new seeds; and repeat the said process until the centers of the clusters converge. The number of seeds, and consequently the number of cluster centers, is called the size of the codebook.
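A minimal NumPy sketch of this K-means procedure; the Euclidean distance measure and the convergence test are assumptions, since they are not specified above.

```python
import numpy as np

def build_codebook(vectors, size, iterations=50, seed=0):
    """K-means clustering of timbre vectors (shape: count x dimension);
    the converged cluster centers become the codebook entries."""
    rng = np.random.default_rng(seed)
    # Choose `size` timbre vectors at random as initial seeds
    codebook = vectors[rng.choice(len(vectors), size, replace=False)]
    for _ in range(iterations):
        # Assign every vector to its nearest seed
        dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each seed to the center of its cluster
        new_codebook = np.array([
            vectors[labels == k].mean(axis=0) if np.any(labels == k)
            else codebook[k]
            for k in range(size)])
        if np.allclose(new_codebook, codebook):
            break
        codebook = new_codebook
    return codebook
```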
An example of the encoded speech is shown in
While this invention has been described in conjunction with the exemplary embodiments outlined above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the exemplary embodiments of the invention, as set forth above, are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention.
The present application is a continuation in part of U.S. Pat. No. 8,942,977, entitled “System and Method for Speech Recognition Using Pitch-Synchronous Spectral Parameters”, issued Jan. 27, 2015, to inventor Chengjun Julian Chen.