1. Field of the Invention
The present invention relates to speech processing, and more particularly to voicing determination of a speech signal, with particular, but not exclusive, application to the field of mobile telephones.
2. Description of the Prior Art
In known speech codecs the most common phonetic classification is a voicing decision, which classifies a speech frame as voiced or unvoiced. Voiced segments are typically associated with high local energy and exhibit a distinct periodicity corresponding to the fundamental frequency, or equivalently the pitch, of the speech signal, whereas unvoiced segments resemble noise. However, a speech signal also contains segments which can be classified as a mixture of voiced and unvoiced speech, where both components are present simultaneously. This category includes voiced fricatives and breathy and creaky voices. The appropriate classification of mixed segments as either voiced or unvoiced depends on the properties of the speech codec.
In a typical known analysis-by-synthesis (A-b-S) based speech codec, the periodicity of speech is modelled with a pitch predictor filter, also referred to as a long-term prediction (LTP) filter. It characterizes the harmonic structure of the spectrum based on the similarity of adjacent pitch periods in a speech signal. The most common method used for pitch extraction is autocorrelation analysis, which indicates the similarity between the present and delayed speech segments. In this approach the lag value corresponding to the major peak of the autocorrelation function is interpreted as the pitch period. For voiced speech segments with a clear pitch period, voicing determination is typically closely related to pitch extraction.
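By way of illustration only, a minimal sketch of this conventional autocorrelation-based pitch extraction might look as follows; the function name and the 20-147 sample search range are assumptions chosen for 8 kHz narrow-band speech, not values taken from the text:

```python
import numpy as np

def open_loop_pitch(y, lag_min=20, lag_max=147):
    """Return the lag of the major autocorrelation peak, read as the pitch period.

    y: 1-D array of speech samples. The lag bounds are assumed values
    suitable for 8 kHz narrow-band speech.
    """
    corr = [np.dot(y[lag:], y[:-lag]) for lag in range(lag_min, lag_max + 1)]
    return lag_min + int(np.argmax(corr))
```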
According to a first aspect of the present invention there is provided a method for determining the voicing of a speech signal segment, comprising the steps of: dividing a speech signal segment into sub-segments, determining a value relating to the voicing of respective speech signal sub-segments, comparing said values with a predetermined threshold, and making a decision on the voicing of the speech segment based on the number of the values on one side of the threshold.
According to a second aspect of the present invention there is provided a device for determining the voicing of a speech signal segment, comprising means (106) for dividing a speech signal segment into sub-segments, means (110) for determining a value relating to the voicing of respective speech signal sub-segments, means (112) for comparing said values with a predetermined threshold and means (112) for making a decision on the voicing of the speech segment based on the number of the values on one side of the threshold.
The invention provides a method for voicing determination to be used particularly, but not exclusively, in a narrow-band speech coding system. The invention addresses the problems of the prior art by determining the voicing of the speech segment based on the periodicity of its sub-segments. The embodiments of the present invention give an improvement in operation in situations where the properties of the speech signal vary so rapidly that a single parameter set computed over a long window does not provide a reliable basis for voicing determination.
A preferred embodiment of the voicing determination of the present invention divides a segment of speech signal further into sub-segments. Typically the speech signal segment comprises one speech frame. Furthermore, it may optionally include a lookahead, which is a certain portion of the speech signal from the next speech frame. A normalized autocorrelation is computed for each sub-segment. The normalized autocorrelation values of the sub-segments are forwarded to classification logic, which compares these values to a predefined threshold value. In this embodiment, if a certain percentage of the normalized autocorrelation values exceeds the threshold, the segment is classified as voiced.
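A minimal sketch of this embodiment is given below; the function name, the nine-way split (a figure used as an example later in the text) and the assumption that enough signal history precedes the segment to supply the delayed samples are all illustrative, not prescribed by the text:

```python
import numpy as np

def subsegment_voicing_values(y, seg_start, seg_len, tau, n_sub=9):
    """Normalized autocorrelation of each sub-segment at pitch lag tau.

    y holds the whole signal so that samples before seg_start are available
    for the delayed term; seg_len covers the frame plus any lookahead.
    """
    sub_len = seg_len // n_sub
    values = []
    for k in range(n_sub):
        t = seg_start + k * sub_len
        cur = y[t:t + sub_len]                 # current sub-segment
        past = y[t - tau:t - tau + sub_len]    # same sub-segment delayed by tau
        values.append(np.dot(cur, past) /
                      np.sqrt(np.dot(cur, cur) * np.dot(past, past) + 1e-12))
    return values
```

The classification logic then counts how many of these values exceed the threshold; a sketch of that counting rule follows the transient-rule discussion below.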
In one embodiment of the present invention, a normalized autocorrelation is computed for each sub-segment using a window whose length is proportional to the estimated pitch period. This ensures that a suitable number of pitch periods is included in the window.
In addition to the above, a critical design problem in voicing determination algorithms is the correct classification of transient frames. This is especially true in transients from unvoiced to voiced speech, as the energy of the speech signal is usually growing. If no separate algorithm is designed for classifying the transient frames, the voicing determination algorithm is always a compromise between the misclassification rate and the sensitivity to detecting transient frames appropriately.
To improve the performance of the voicing determination algorithm during transient frames without practically increasing the misclassification rate, one embodiment of the present invention provides rules for classifying the speech frame as voiced. This is done by emphasizing the voicing decisions of the last sub-segments in a frame to detect the transients from unvoiced to voiced speech. That is, in addition to being classified as voiced when a certain number of sub-segments have a normalized autocorrelation value exceeding a threshold value, the frame is also classified as voiced if all of a predetermined number of the last sub-segments have a normalized autocorrelation value exceeding the same threshold value. Detection of unvoiced to voiced transients is thus further improved by emphasizing the last sub-segments in the classification logic.
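A sketch of this two-rule decision is given below, using the parameter names K (count rule), Ktr (transient rule) and Ctr (threshold) that appear later in the text; the function itself is an illustrative assumption, not the claimed implementation:

```python
def is_voiced(values, K, K_tr, C_tr):
    """Voiced if at least K sub-segment values exceed C_tr, or if the
    last K_tr values all exceed it (unvoiced-to-voiced transient rule)."""
    if sum(v > C_tr for v in values) >= K:
        return True
    return all(v > C_tr for v in values[-K_tr:])
```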
The frame may be classified as voiced if only the last sub-segment has a normalized autocorrelation value exceeding the threshold value.
Alternatively, the frame may be classified as voiced if a portion of the sub-segments out of the whole speech frame have a normalized autocorrelation value exceeding the threshold. The portion may, for example, be substantially a half, or substantially a third, of the sub-segments of the speech frame.
The voiced/unvoiced decision can be used for two purposes. One option is to allocate bits within the speech codec differently for voiced and unvoiced frames. In general, voiced speech segments are perceptually more important than unvoiced segments. In the case of an A-b-S type of codec, bits can for example be re-allocated from the adaptive codebook (for example from the LTP-gain and LTP-lag parameters) to the excitation signal when the speech frame is classified as unvoiced, to improve the coding of the excitation signal. On the other hand, the adaptive codebook in a speech codec can even be switched off during unvoiced speech frames, which leads to a reduced total bit rate. Because of this on/off switching of the LTP parameters it is especially important that a speech frame is correctly classified as voiced: it has been noticed that if a voiced speech frame is incorrectly classified as unvoiced and the LTP parameters are switched off, the sound quality at the receiving end decreases. Accordingly, the present invention provides a method and device for making a reliable voiced/unvoiced decision, especially so that voiced speech frames are not incorrectly decided to be unvoiced.
Exemplary embodiments of the invention are hereinafter described with reference to the accompanying drawings, in which:
The pitch extraction block 108 minimizes the long-term prediction error, i.e. the cost function

$$E(t) = \sum_{n=0}^{N-1} \left[ y(t+n) - g(t)\, y(t+n-\tau) \right]^2 \qquad (1)$$

where y(t) is the first speech sample belonging to the window of length N, τ is the integer pitch period and g(t) is the gain.

The optimum value of g(t) is found by setting the partial derivative of the cost function (1) with respect to the gain equal to zero. This yields

$$g(t) = \frac{R(t,\tau)}{E_\tau(t)} \qquad (2)$$

where

$$R(t,\tau) = \sum_{n=0}^{N-1} y(t+n)\, y(t+n-\tau) \qquad (3)$$

is the autocorrelation of y(t) with delay τ and

$$E_\tau(t) = \sum_{n=0}^{N-1} y^2(t+n-\tau) \qquad (4)$$

is the energy of the delayed segment. By substituting the optimum gain into equation (1), the pitch period is estimated by maximizing the latter term of

$$E(t) = \sum_{n=0}^{N-1} y^2(t+n) - \frac{R^2(t,\tau)}{E_\tau(t)} \qquad (5)$$

with respect to delay τ.
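As an illustrative sketch of equations (2)-(5), assuming numpy arrays, hypothetical function names, and that t is large enough for the delayed samples to exist:

```python
import numpy as np

def ltp_gain_and_term(y, t, tau, N):
    """Optimal gain of eq. (2) and the term R^2/E maximized in eq. (5)."""
    cur = y[t:t + N]                # y(t+n), n = 0..N-1
    past = y[t - tau:t - tau + N]   # y(t+n-tau); assumes t >= tau
    R = np.dot(cur, past)           # autocorrelation, eq. (3)
    E = np.dot(past, past)          # delayed-segment energy, eq. (4)
    return R / E, R * R / E

def pitch_by_max_term(y, t, N, lag_min=20, lag_max=147):
    """Pitch estimate: the lag maximizing the latter term of eq. (5).
    Lag bounds are assumed values for 8 kHz speech; assumes t >= lag_max."""
    return max(range(lag_min, lag_max + 1),
               key=lambda tau: ltp_gain_and_term(y, t, tau, N)[1])
```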
with respect to delay τ. The pitch extraction block 108 is also arranged to send the above determined estimated open-loop pitch estimate τ at line 113 to the segmentation block 106 and to a value determination block 110. An example of the operation of the segmentation is shown in
The value determination block 110 also receives the speech signal y(t) from the segmentation block 106 at line 107. It is arranged to operate as follows:
To eliminate the effects of the negative values of the autocorrelation function when maximizing the function, a square root of the latter term of equation (5) is taken. The term to be maximized is thus:

$$C(t,\tau) = \frac{R(t,\tau)}{\sqrt{E_\tau(t)}} \qquad (6)$$
During voiced segments the gain g(t) tends to be near unity and thus it is often used for voicing determination. However, during unvoiced and transient regions the gain g(t) fluctuates and also reaches values near unity. A more robust voicing determination is achieved by observing the values of equation (6). To cope with the power variations of the signal, R(t,τ) is normalized to have a maximum value of unity, resulting in:

$$C_1(t,\tau) = \frac{\sum_{n=0}^{N-1} y(t+n)\, y(t+n-\tau)}{\sqrt{\sum_{n=0}^{N-1} y^2(t+n) \sum_{n=0}^{N-1} y^2(t+n-\tau)}} \qquad (7)$$
According to one aspect of the invention, the window length in (7) is set to the found pitch period τ plus some offset M to overcome the problems related to a fixed-length window. The periodicity measure used is thus

$$C_2(t,\tau) = \frac{\sum_{n=0}^{N-1} y(t+n)\, y(t+n-\tau)}{\sqrt{\sum_{n=0}^{N-1} y^2(t+n) \sum_{n=0}^{N-1} y^2(t+n-\tau)}} \qquad (8)$$

where

$$N = \tau + M.$$
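A sketch of the measure (8), assuming t >= tau so the delayed window exists, and using the example offset M = 10 mentioned below:

```python
import numpy as np

def periodicity_c2(y, t, tau, M=10):
    """C2(t, tau) of eq. (8) with the pitch-proportional window N = tau + M."""
    N = tau + M
    cur = y[t:t + N]                 # y(t+n), n = 0..N-1
    past = y[t - tau:t - tau + N]    # y(t+n-tau); assumes t >= tau
    return np.dot(cur, past) / np.sqrt(np.dot(cur, cur) * np.dot(past, past))
```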
The parameter M can be set, e.g., to 10 samples. A voicing decision block 112 is arranged to receive the periodicity measure C2(t,τ) determined above at line 111 from the value determination block 110, together with the parameters K, Ktr and Ctr, to make the voicing decision. The decision logic of the voiced/unvoiced decision is further described in
It should be emphasized that the pitch period used in (8) can also be estimated in other ways than described in equations (1)-(6) above. A common modification is to use pitch tracking in order to avoid pitch multiples, as described in Finnish patent application FI 971976. Another option for the open-loop pitch extraction is to remove the effect of the formant frequencies from the speech signal before pitch extraction. This can be done, for example, by a weighting filter.
Modified signals, for example a residual signal, a weighted residual signal or a weighted speech signal, can also be used for voicing determination instead of the original speech signal. The residual signal is obtained by filtering the original speech signal with a linear prediction analysis filter.
It may also be advantageous to estimate the pitch period from the residual signal of the linear prediction filter instead of the speech signal, because the residual signal is often more clearly periodic.
The residual signal can be further low-pass filtered and down-sampled before the above procedure. Down-sampling reduces the complexity of correlation computation. In one further example, the speech signal is first filtered by a weighting filter before the calculation of autocorrelation is applied as described above.
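A sketch of these preprocessing variants, assuming librosa is available for the linear prediction analysis; scipy's decimate combines the low-pass filtering and down-sampling steps, and the prediction order and decimation factor are illustrative assumptions:

```python
import numpy as np
import scipy.signal as sig
import librosa  # assumed available; used here only for LPC analysis

def residual_for_pitch(y, order=10, decim=2):
    """LP residual of the speech signal, low-pass filtered and down-sampled."""
    a = librosa.lpc(np.asarray(y, dtype=float), order=order)  # A(z), leading 1
    resid = sig.lfilter(a, [1.0], y)    # analysis filtering yields the residual
    return sig.decimate(resid, decim)   # anti-alias low-pass, then down-sample
```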
The exact parameter values Ctr, Ktr and K presented above are not limited to certain values but are dependent on the system in question and can be selected empirically using a large speech database. For example, if the speech segment is divided into 9 sub-segments, suitable values can be, for example, Ctr = 0.6, Ktr = 4 and K = 6. Appropriate values of K and Ktr are proportional to the number of sub-segments.
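With these example figures, the decision sketch given earlier would be invoked over the nine sub-segment values as `is_voiced(values, K=6, K_tr=4, C_tr=0.6)`.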
Alternatively, according to the present invention, the frame is classified as voiced if only the last sub-segment (i.e. Ktr = 1) has a normalized autocorrelation value exceeding the threshold value. According to a further modification, the frame is classified as voiced if substantially half of the sub-segments out of the whole speech frame (e.g. 4 or 5 sub-segments out of 9) have a normalized autocorrelation value exceeding the threshold.
To improve the performance of the voicing determination algorithm in unvoiced to voiced transients, the last sub-segments are emphasized: the frame is also classified as voiced if all of a predetermined number of the last sub-segments have a normalized autocorrelation value exceeding the same threshold value.
In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the present invention.