The present invention relates generally to coding of speech and audio signals and, more specifically, to an improved excitation modeling procedure in analysis-by-synthesis coders.
Speech and audio coding algorithms have a wide variety of applications in wireless communication, multimedia and voice storage systems. The development of the coding algorithms is driven by the need to save transmission and storage capacity while maintaining the quality of the synthesized signal at a high level. These requirements are often quite contradictory, and thus a compromise between capacity and quality must typically be made. The use of speech coding is particularly important in mobile telecommunication systems since the transmission of the full speech spectrum would require significant bandwidth in an environment where spectral resources are relatively limited. Signal compression, implemented through speech encoding and decoding, is therefore essential for efficient speech transmission at low bit rates.
Speech coding algorithms and systems can be categorized in different ways depending on the criterion used. One way of classifying them consists of waveform coders, parametric coders, and hybrid coders. Waveform coders, as the name implies, try to preserve the waveform being coded as closely as possible without paying much attention to the characteristics of the speech signal. Waveform coders also have the advantage of relatively low complexity and typically perform well in noisy environments. However, they generally require relatively high bit rates to produce high quality speech. Hybrid coders use a combination of waveform and parametric techniques in that they typically use parametric approaches to model, e.g., the vocal tract by an LPC filter. The input signal for the filter is then coded using what could be classified as a waveform coding method. Currently, hybrid speech coders are widely used to produce near wireline speech quality at bit rates in the range of 8–12 kbps.
In many current hybrid coders, the transmitted parameters are determined in an Analysis-by-Synthesis (AbS) fashion where the selected distortion criterion is minimized between the original speech signal and the reconstructed speech corresponding to each possible parameter value. These coders are thus often called AbS speech coders. By way of example, in a typical AbS coder, each excitation candidate is taken from a codebook and filtered through the LPC filter; the error between the filtered signal and the input signal is calculated, and the candidate providing the smallest error is chosen.
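The closed-loop selection described above can be illustrated with a minimal sketch. The codebook, filter coefficient, and gain below are illustrative values only, not taken from any particular coder:

```python
import math

def synthesize(excitation, lpc, gain):
    """Pass a gain-scaled excitation through an all-pole LPC filter 1/A(q)."""
    out = []
    for n, e in enumerate(excitation):
        y = gain * e
        for i, a in enumerate(lpc, start=1):
            if n - i >= 0:
                y -= a * out[n - i]
        out.append(y)
    return out

def abs_search(target, codebook, lpc, gain):
    """Pick the codebook entry whose synthesized output is closest to the target."""
    best_idx, best_err = None, math.inf
    for idx, cand in enumerate(codebook):
        synth = synthesize(cand, lpc, gain)
        err = sum((t - s) ** 2 for t, s in zip(target, synth))
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx, best_err

# Toy example: two candidate excitations and a first-order LPC filter.
codebook = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
lpc = [-0.5]                          # A(q) = 1 - 0.5 q^-1
target = synthesize(codebook[1], lpc, 1.0)
idx, err = abs_search(target, codebook, lpc, 1.0)
print(idx, round(err, 6))             # candidate 1 reproduces the target exactly
```

Since the target here was synthesized from candidate 1, the search recovers that index with zero error; with a real speech target the minimum error would be nonzero.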
In a typical AbS speech coder, the input speech signal is processed in frames. Usually the frame length is 10–30 ms, and a look-ahead segment of 5–15 ms of the subsequent frame is also available. In every frame, a parametric representation of the speech signal is determined by an encoder. The parameters are quantized, and transmitted through a communication channel or stored in a storage medium in digital form. At the receiving end, a decoder constructs a synthesized speech signal representative of the original signal based on the received parameters.
One important class of analysis-by-synthesis speech coders is the Code Excited Linear Predictive (CELP) speech coder, which is widely used in many wireless digital communication systems. CELP is an efficient closed loop analysis-by-synthesis coding method that has proven to work well for low bit rate systems in the range of 4–16 kbps. In CELP coders, speech is segmented into frames (e.g. 10–30 ms), and an optimum set of linear prediction and pitch filter parameters is determined and quantized for each frame. Each speech frame is further divided into a number of subframes (e.g. 5 ms) where, for each subframe, an excitation codebook is searched to find the input vector to the quantized predictor system that gives the best reproduction of the original speech signal.
The basic underlying structure of most AbS coders is quite similar. Typically they employ a linear predictive coding (LPC) technique, for example, a cascade of a time-variant pitch predictor and an LPC filter. An all-pole LPC filter:

1/A(q, s) = 1/(1 + a_1(s)q^(-1) + … + a_na(s)q^(-na)),  (1)

where q^(-1) is the unit delay operator and s is the subframe index, is used to model the short-time spectral envelope of the speech signal. The order na of the LPC filter is typically 8–12.
A pitch predictor of the form:

1/B(q, s) = 1/(1 − b(s)q^(−τ(s))),  (2)

utilizes the pitch periodicity of speech to model the fine structure of the spectrum. Typically, the gain b(s) is bounded to the interval [0, 1.2], and the pitch lag τ(s) to the interval [20, 140] samples (assuming a sampling frequency of 8000 Hz). The pitch predictor is also referred to as the long-term prediction (LTP) filter.
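The one-tap long-term predictor above can be sketched as follows; the gain and lag values are illustrative and lie within the bounds stated in the text:

```python
def ltp_synthesis(excitation, b, lag):
    """One-tap long-term predictor: y(n) = e(n) + b * y(n - lag)."""
    out = []
    for n, e in enumerate(excitation):
        y = e
        if n - lag >= 0:
            y += b * out[n - lag]
        out.append(y)
    return out

# A single input pulse is repeated at the pitch period with decaying amplitude.
y = ltp_synthesis([1.0] + [0.0] * 99, b=0.8, lag=40)
print(y[0], y[40], round(y[80], 2))  # → 1.0 0.8 0.64
```

Each repetition is scaled by the gain b, which is how the filter models the quasi-periodic fine structure of voiced speech.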
Although AbS speech coders generally provide good performance at low bit rates, they are relatively computationally demanding. Another characteristic is that at low bit rates, e.g. below 4 kbps, matching the original speech waveform becomes a severe constraint on improving the coding efficiency further. This applies to the coding of speech in general, which includes voiced, unvoiced, and plosive speech. Although solutions have been put forth for improvements in modeling voiced speech, substantial improvements in modeling nonstationary speech such as plosives have so far not been presented. As known by those skilled in the art, plosives and unvoiced speech tend to be abrupt, as in the stop consonants /p/, /k/, and /t/, for example. These speech waveforms are particularly difficult to model accurately in prior-art low bit rate AbS coders since there is often a clear mismatch between the original and coded excitation signals due to the lack of bits to accurately model the original excitation. The differences in the overall waveform shape cause the energy of the coded excitation to be much smaller than that of the ideal excitation due to the parameter estimation method. This often results in synthesized speech that sounds unnatural at a very low energy level.
The resulting energy disparity between the synthesized excitations is clearly evident when using a codebook having fewer pulse positions, whereby the lower-energy excitation results in a sound that is unsatisfactory and barely audible. In view of the foregoing, an improved method is needed which enables AbS speech coders to more accurately produce high quality speech from speech signals containing nonstationary speech.
Briefly described and in accordance with an embodiment and related features of the invention, in a method aspect of the invention there is provided a method of encoding a speech signal wherein the speech signal is encoded in an encoder using a first excitation codebook having a first position grid and a second excitation codebook having a second position grid to produce a coded excitation signal, wherein the first position grid contains a higher population density of pulse positions than the second position grid.
In a further method aspect, there is provided a method of transmitting a speech signal from a sender to a receiver comprising the steps of:
In a device aspect, there is provided an encoder for encoding speech signals wherein the encoder comprises a first excitation codebook and a second excitation codebook for use in encoding said speech signals, wherein the first excitation codebook contains a higher population density of pulse positions than the second excitation codebook.
In a further device aspect, there is provided a device comprising a speech coder for encoding and decoding speech signals, wherein the device further comprises a first pulse codebook for use with the encoder and a second pulse codebook for use with the decoder, wherein the first codebook contains a higher population density of pulse positions than the second codebook.
The invention, together with further objectives and advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:
As mentioned in the preceding sections, it has generally been difficult for prior art AbS speech coders to accurately model speech segments containing plosives or unvoiced speech. High quality speech can be attained by having a good understanding of the speech signal and a good knowledge of the properties of human perception. By way of example, it is known that certain types of coding distortion are imperceptible since they are masked by the signal; this knowledge, taken together with the exploitation of signal redundancy, enables improved speech quality to be attained at low bit rates.
In block 410, the coefficients of the LPC filter are determined based on the input speech signal. Typically, the speech signal is windowed into segments and the LPC filter coefficients are determined using e.g. the Levinson-Durbin algorithm. It should be noted that the term speech signal can refer to any type of signal derived from a sound signal (e.g. speech or music), whether the speech signal itself, a digitized signal, a residual signal, etc. In many coders, the LPC coefficients are not determined for every subframe. In such cases the coefficients can be interpolated for the intermediate subframes. In block 420, the input speech is filtered with A(q, s) to produce an LPC residual signal. The LPC residual reproduces the original speech signal when fed through the LPC filter 1/A(q, s). Therefore it is sometimes referred to as the ideal excitation.
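Blocks 410 and 420 can be sketched as follows, using the Levinson-Durbin recursion over frame autocorrelations; the first-order test signal and filter order are illustrative only:

```python
def autocorr(x, order):
    """Autocorrelation lags r(0)..r(order) of a windowed frame."""
    return [sum(x[n] * x[n - k] for n in range(k, len(x))) for k in range(order + 1)]

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: LPC coefficients a_1..a_order
    such that A(q) = 1 + a_1 q^-1 + ... + a_order q^-order."""
    a = [0.0] * (order + 1)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a = new_a
        err *= 1.0 - k * k
    return a[1:]

def lpc_residual(x, a):
    """Block 420: filter the frame with A(q) to obtain the LPC (ideal) residual."""
    return [x[n] + sum(a[i - 1] * x[n - i] for i in range(1, len(a) + 1) if n - i >= 0)
            for n in range(len(x))]

# First-order test signal x(n) = 0.9 x(n-1), excited by a unit pulse at n = 0.
x = [1.0]
for n in range(1, 200):
    x.append(0.9 * x[-1])

a = levinson_durbin(autocorr(x, 1), 1)
res = lpc_residual(x, a)
print(round(a[0], 3))  # → -0.9, recovering the generating coefficient
```

For this synthetic signal the residual is essentially zero after the initial pulse, which is exactly the "ideal excitation" property described above.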
In block 430, an open loop lag is determined by finding the delay value that gives the highest autocorrelation value for the speech or the LPC residual signal. In block 440, a target signal x(k) for the closed loop lag search is computed by subtracting the zero input response of the LPC filter from the speech signal. This is done to take into account the effect of the initial states of the LPC filter for a smoothly evolving signal. In block 450, a closed loop lag and gain are searched by minimizing the mean sum-squared error between the target signal and the synthesized speech signal. The closed loop lag is searched around the open loop lag value, which is an estimate obtained without AbS. Typically, integer precision is used for the open-loop lag, while fractional resolution can be used for the closed-loop lag search. A detailed explanation can be found in the IS-641 specification mentioned previously, for example.
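The open-loop lag search of block 430 can be sketched as a normalized-autocorrelation maximization over the lag interval [20, 140] mentioned earlier; the residual below is synthetic:

```python
import math

def open_loop_lag(residual, lag_min=20, lag_max=140):
    """Pick the integer delay with the highest normalized autocorrelation."""
    best_lag, best_score = lag_min, -math.inf
    for lag in range(lag_min, lag_max + 1):
        num = sum(residual[n] * residual[n - lag] for n in range(lag, len(residual)))
        den = sum(residual[n - lag] ** 2 for n in range(lag, len(residual)))
        score = num / den if den > 0 else 0.0
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic residual with a pitch period of 57 samples.
res = [1.0 if n % 57 == 0 else 0.0 for n in range(320)]
print(open_loop_lag(res))  # → 57
```

The closed-loop search of block 450 would then refine this estimate around 57, possibly with fractional resolution.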
In block 460, the target signal x2(k) for the excitation search is computed by subtracting the contribution of the LTP filter from the target signal of the closed loop lag search.
The excitation signal and its gain are then searched by minimizing the sum-squared error between the target signal and the synthesized speech signal in block 470. Typically, some heuristic rules are employed at this stage to avoid an exhaustive search of the codebook over all possible excitation signal candidates and thereby reduce the search time. In block 480, the filter states in the encoder are updated to keep them consistent with the filter states in the decoder. It should be noted that the encoding procedure also includes quantization of the parameters to be transmitted, a discussion of which has been omitted for simplicity.
In the prior art, the optimal excitation sequence and its gain are searched by minimizing the sum-squared error between the target signal and the synthesized signal,
J(g(s), uc(s)) = ∥x2(s) − x̂2(s)∥² = ∥x2(s) − g(s)H(s)uc(s)∥²,  (3)
where x2(s) is a target vector consisting of the x2(k) samples over the search horizon, x̂2(s) the corresponding synthesized signal, and uc(s) the excitation vector as represented in
By substituting the optimal gain given by (4) into (3), it is found that,

J = ∥x2(s)∥² − (x2(s)ᵀH(s)uc(s))² / (uc(s)ᵀH(s)ᵀH(s)uc(s)).  (5)

The optimal excitation is usually searched by maximizing the latter term of equation (5); the quantities x2(s)ᵀH(s) and H(s)ᵀH(s) can be computed prior to the excitation search.
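The search strategy can be sketched as follows: d = H(s)ᵀx2(s) and Φ = H(s)ᵀH(s) are precomputed once, and the correlation-squared-over-energy ratio is evaluated per candidate. The impulse response, codebook, and target below are illustrative toy values:

```python
def build_h_matrix(h, n):
    """Lower-triangular convolution matrix H built from an impulse response h."""
    return [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(n)]
            for i in range(n)]

def codebook_search(x2, H, codebook):
    """Maximize (x2^T H u)^2 / (u^T H^T H u) over candidate excitations u."""
    n = len(x2)
    # Precompute d = H^T x2 and Phi = H^T H once, outside the candidate loop.
    d = [sum(H[i][j] * x2[i] for i in range(n)) for j in range(n)]
    Phi = [[sum(H[k][i] * H[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
    best, best_metric = None, -1.0
    for idx, u in enumerate(codebook):
        num = sum(d[j] * u[j] for j in range(n)) ** 2
        den = sum(u[i] * Phi[i][j] * u[j] for i in range(n) for j in range(n))
        if den > 0 and num / den > best_metric:
            best, best_metric = idx, num / den
    return best

h = [1.0, 0.5, 0.25]           # toy synthesis-filter impulse response
H = build_h_matrix(h, 4)
codebook = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, -1]]
x2 = [0.0, 0.0, 1.0, 0.5]      # target resembling candidate 1 after filtering
print(codebook_search(x2, H, codebook))  # → 1
```

Precomputing d and Φ moves the matrix work out of the per-candidate loop, which is why the text notes these terms can be computed prior to the excitation search.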
In the present invention, a method for excitation modeling during nonstationary speech segments in analysis-by-synthesis speech coders is described. The method takes advantage of aural perception: the insensitivity of the human ear to accurate phase information in speech signals is exploited by relaxing the waveform matching constraints on the coded excitation signal. Preferably, this is applied to nonstationary or unvoiced speech. Furthermore, adaptive phase dispersion is introduced to the coded excitation to efficiently preserve the important signal characteristics.
In an embodiment of the invention, the waveform matching constraint is relaxed in the fixed codebook excitation generation. In the embodiment, two pulse position codebooks, codebook 1 and codebook 2, are used to derive the transmitted excitation together with its gain. The first pulse position codebook is used in the encoder only and contains a dense position grid. The second codebook is sparser and includes the transmitted pulse positions; it is thus used in both the encoder and the decoder. The transmitted excitation signal with the corresponding gain value may be derived in the following way. Firstly, an optimal excitation signal with its gain is searched using codebook 1. Due to the relatively dense grid of codebook 1, the shape and energy of the ideal excitation signal are efficiently preserved. Secondly, the found pulse locations are quantized to the possible pulse locations of codebook 2, e.g. by finding the pulse position in codebook 2 closest to the position found for the same pulse using codebook 1. Thus, the quantized pulse location Q(xi,1) of the ith pulse is derived e.g. by minimizing,

Q(xi,1) = argmin over x ∈ Ci,2 of |xi,1 − x|,
where xi,1 is the position of the ith pulse from codebook 1 and Ci,2 contains the possible pulse positions for the ith pulse in codebook 2. The gain value obtained by using codebook 1 is transmitted to the decoder. It should be noted that pulses and pulse locations are referred to herein, but other types of representations (e.g. samples, waveforms, wavelets) may be used to mark the locations in the codebooks or represent the pulses in the encoded signal, for example.
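The dense-to-sparse quantization can be sketched as a nearest-position mapping per pulse. The interleaved-track grids below are hypothetical and only illustrate the idea of codebook 2 being sparser than codebook 1:

```python
def quantize_positions(dense_positions, sparse_grids):
    """Map each pulse position found on the dense grid (codebook 1)
    to the nearest allowed position in codebook 2's sparse grid."""
    return [min(grid, key=lambda p: abs(p - x))
            for x, grid in zip(dense_positions, sparse_grids)]

# Hypothetical layout: codebook 1 allows every position 0..39, while
# codebook 2 restricts pulse i to an interleaved track of every 5th position.
sparse_grids = [list(range(i, 40, 5)) for i in range(2)]
found = [13, 27]                   # positions found with codebook 1
print(quantize_positions(found, sparse_grids))  # → [15, 26]
```

Only the quantized (sparse-grid) positions would be transmitted, while the gain estimated on the dense grid is kept, preserving the energy of the found excitation.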
Another significant aspect is the energy dispersion of the coded excitation signal. To mimic the energy dispersion of the ideal excitation, an adaptive filtering mechanism is applied to the coded excitation signal. There are a number of filtering methods that can be used with the invention. In the embodiment, a filtering method is used where the desired dispersion is achieved by randomizing the appropriate phase components of the coded excitation signal. For a more detailed discussion of the filtering mechanism, the interested reader may refer to "Removal of sparse-excitation artifacts in CELP," by R. Hagen, E. Ekudden, B. Johansson and W. B. Kleijn, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Seattle, May 1998.
In the filtering method, a threshold frequency is defined above which the phase components are randomized and below which they remain unchanged. Phase dispersion applied to the coded signal only in the decoder has been observed to produce high quality. In the embodiment, an adaptation method for the threshold frequency is introduced to control the amount of dispersion. The threshold frequency is derived from the "peakiness" value of the ideal excitation signal, where the "peakiness" value describes the energy spread within the frame. The "peakiness" value P is generally defined for the ideal excitation r(n) by,

P = sqrt((1/N) Σ r(n)²) / ((1/N) Σ |r(n)|),
where N is the length of the frame from which the “peakiness” value is calculated, and r(n) is the ideal excitation signal.
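A sketch of the "peakiness" computation, assuming the usual definition from the speech coding literature (the ratio of the frame's RMS value to its mean absolute value; the patent's own formula image is not reproduced in the text, so this form is an assumption):

```python
import math

def peakiness(r):
    """Ratio of RMS to mean absolute value over the frame: close to 1 for
    evenly spread energy, large when energy is concentrated in a few samples."""
    n = len(r)
    rms = math.sqrt(sum(v * v for v in r) / n)
    mav = sum(abs(v) for v in r) / n
    return rms / mav if mav > 0 else 0.0

flat = [1.0] * 64               # energy spread evenly across the frame
pulse = [0.0] * 63 + [8.0]      # energy concentrated in one sample
print(peakiness(flat), peakiness(pulse))  # → 1.0 8.0
```

A frame containing a plosive thus yields a high peakiness value, which the adaptation method can map to the amount of phase dispersion applied.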
In the embodiment, adaptive phase dispersion is introduced to the coded excitation to better preserve the energy dispersion of the ideal excitation. The overall shape of the energy envelope of the decoded speech signal is important for natural sounding synthesized speech. Due to human perception characteristics, it is known that during plosives, for example, the accurate location of the signal peak positions or the accurate representation of the spectral envelope is not crucial for high quality speech coding.
The adaptive threshold frequency, above which the phase information is randomized, is defined as a function of the "peakiness" value in the invention. It should be noted that there are several ways that could be used to define this relationship. One example, but by no means the only example, is a piecewise linear function that can be defined as follows,
where α∈[0,1] defines the lower bound to the threshold frequency below which the dispersion is kept constant, and Plow and Phigh define the range for the “peakiness” value beyond which the threshold frequency is kept constant.
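The adaptation and dispersion steps can be sketched as below. The mapping direction, the constants p_low, p_high, and alpha, and the DFT-based dispersion are assumptions for illustration only; the patent does not give these values:

```python
import cmath
import random

def adaptive_threshold(P, p_low=1.3, p_high=2.0, alpha=0.4):
    """Piecewise-linear map from peakiness P to a normalized threshold
    frequency in [alpha, 1]. Assumed direction: a more peaky (plosive-like)
    frame gets a lower threshold, i.e. more phase dispersion."""
    if P <= p_low:
        return 1.0                          # no randomization
    if P >= p_high:
        return alpha                        # maximum dispersion
    return 1.0 - (1.0 - alpha) * (P - p_low) / (p_high - p_low)

def disperse(x, threshold, rng):
    """Randomize DFT phases above threshold * Nyquist; keep magnitudes."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
         for k in range(n)]
    cutoff = int(threshold * n / 2)
    for k in range(cutoff, n // 2):         # leave DC and Nyquist untouched
        phi = rng.uniform(0.0, 2.0 * cmath.pi)
        X[k] = abs(X[k]) * cmath.exp(1j * phi)
        X[n - k] = X[k].conjugate()         # keep the output real-valued
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

rng = random.Random(0)
x = [0.0] * 31 + [1.0] + [0.0] * 32        # pulse-like coded excitation, n = 64
y = disperse(x, adaptive_threshold(2.5), rng)
energy_in = sum(v * v for v in x)
energy_out = sum(v * v for v in y)
print(round(energy_in, 3), round(energy_out, 3))  # magnitudes kept -> equal energy
```

Because only phases are altered, the spectral magnitude and total energy of the coded excitation are preserved while its waveform shape is spread out in time.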
The digitized speech signal is then encoded in speech encoder 910 in accordance with the embodiment of the invention. Baseband signal processing is performed on the encoded signal to provide the appropriate channel coding in block 915. The channel coded signal is then converted to a radio frequency signal and transmitted from transmitter 920 through a duplex filter 925. The duplex filter 925 permits the use of antenna 930 for both the transmission and reception of radio signals. The received radio signals are processed by the receiving branch 935 where they are decoded by speech decoder 940 in accordance with the embodiment of the invention. The decoded speech signal is sent through a D/A-converter 945 for conversion to an analog signal prior to being sent to loudspeaker 950 for reproduction of the synthesized speech.
The present invention contemplates a technique to improve the coded speech quality in AbS coders without increasing the bit rate. This is accomplished by relaxing the waveform matching constraints for nonstationary (plosive) or unvoiced speech signals in locations where accurate phase information is typically perceptually insignificant to the listener. It should be noted that the invention is not limited to the "peakiness" method described for detecting plosive speech and that any other suitable method can be used successfully. By way of example, techniques that measure local signal qualities such as rate of change or energy can be used. Furthermore, techniques that use the standard deviation or correlation may also be employed to detect plosives.
Although the invention has been described in some respects with reference to a specified embodiment thereof, variations and modifications will become apparent to those skilled in the art. In particular, the inventive concept is not limited to speech signals but may be applied to music and other types of audible sounds, for example. It is therefore the intention that the following claims not be given a restrictive interpretation but should be viewed to encompass variations and modifications that are derived from the inventive subject matter disclosed.
Number | Date | Country | Kind
---|---|---|---
20011329 | Jun 2001 | FI | national
Number | Name | Date | Kind
---|---|---|---
4868867 | Davidson et al. | Sep 1989 | A
5187745 | Yip et al. | Feb 1993 | A
5778334 | Ozawa et al. | Jul 1998 | A
5809459 | Bergstrom et al. | Sep 1998 | A
5890108 | Yeldener | Mar 1999 | A
5970444 | Hayashi et al. | Oct 1999 | A
6233550 | Gersho et al. | May 2001 | B1
6408268 | Tasaki | Jun 2002 | B1
6493664 | Udaya Bhaskar et al. | Dec 2002 | B1
6526376 | Villette et al. | Feb 2003 | B1
6556966 | Gao | Apr 2003 | B1
Number | Date | Country
---|---|---
20030055633 A1 | Mar 2003 | US