1. Field of the Invention
The present invention relates to a noise power estimation system, a noise power estimating method, a speech recognition system and a speech recognizing method.
2. Background Art
In order to achieve natural human-robot interaction, a robot should recognize human speech even in the presence of noise and reverberation. In order to avoid performance degradation of automatic speech recognizers (ASR) due to interferences such as background noise, many speech enhancement processes have been applied to robot audition systems [K. Nakadai, et al., "An open source software system for robot audition HARK and its evaluation," in 2008 IEEE-RAS Int'l Conf. on Humanoid Robots (Humanoids 2008). IEEE, 2008; J. Valin, et al., "Enhanced robot audition based on microphone array source separation with post-filter," in IROS2004. IEEE/RSJ, 2004, pp. 2123-2128; S. Yamamoto, et al., "Making a robot recognize three simultaneous sentences in real-time," in IROS2005. IEEE/RSJ, 2005, pp. 897-902; and N. Mochiki, et al., "Recognition of three simultaneous utterance of speech by four-line directivity microphone mounted on head of robot," in 2004 Int'l Conf. on Spoken Language Processing (ICSLP2004), 2004, p. WeA1705o.4.]. Speech enhancement processes require noise spectrum estimation.
For example, the Minima-Controlled Recursive Averaging (MCRA) method [I. Cohen and B. Berdugo, "Speech enhancement for non-stationary noise environments," Signal Processing, vol. 81, pp. 2403-2418, 2001.] is employed for noise spectrum estimation. MCRA tracks the minimum-level spectra and judges whether the current input signal contains speech or only noise based on the ratio of the input energy to the minimum energy, followed by a thresholding operation. This means that MCRA implicitly assumes that the minimum level of the noise spectrum does not change. Therefore, if the noise is not steady-state and the minimum level changes, it is very difficult to set the threshold parameter to a fixed value. Moreover, even if a threshold parameter finely tuned for a particular non-steady-state noise works properly, the process easily fails for other noises, even for ordinary steady-state noises.
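For illustration, the following is a minimal sketch of an MCRA-style decision for a single frequency bin; the names, constants and simplified minimum tracking are assumptions chosen for illustration, not the exact recursions of Cohen and Berdugo:

```python
def mcra_like_decision(p_in, p_min, delta=5.0, gamma=0.998):
    """Sketch of an MCRA-style speech/noise decision for one frequency bin.

    p_in  : current input power of the bin (assumed positive)
    p_min : tracked minimum power of the bin
    delta : fixed ratio threshold -- the parameter that becomes hard
            to set when the minimum noise level itself changes
    gamma : decay controlling how slowly the tracked minimum may rise
    """
    # Track the minimum level: follow drops immediately, rise only slowly.
    p_min = min(p_in, p_min / gamma)
    # Infer speech when the input-to-minimum power ratio exceeds the
    # fixed threshold; otherwise treat the frame as noise.
    speech_present = (p_in / p_min) > delta
    return p_min, speech_present
```

If the noise floor rises, the ratio p_in/p_min stays large even in noise-only frames, so a delta tuned for one environment misclassifies frames in another; this is exactly the fragility described above.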
Thus, it has been difficult to carry out a speech enhancement process with parameters that remain appropriate as the noise environment changes.
In other words, a noise power estimation system, a noise power estimating method, an automatic speech recognition system and an automatic speech recognizing method that do not require a level-based threshold parameter and that are highly robust against noise environment changes have not been developed.
Accordingly, there is a need for a noise power estimation system, a noise power estimating method, an automatic speech recognition system and an automatic speech recognizing method that do not require a level-based threshold parameter and that are highly robust against noise environment changes.
A noise power estimation system according to the first aspect of the present invention is one for estimating the noise power of each frequency spectral component. The noise power estimation system includes a cumulative histogram generating section for generating a cumulative histogram for each frequency spectral component of a time series signal, in which the horizontal axis indicates the index of the power level, the vertical axis indicates the cumulative frequency, and the histogram is weighted by an exponential moving average; and a noise power estimation section for determining an estimated value of the noise power for each frequency spectral component of the time series signal based on the cumulative histogram.
The noise power estimation system according to the present aspect determines an estimated value of the noise power for each frequency spectral component of the time series signal based on the cumulative histogram weighted by an exponential moving average. Accordingly, the system is highly robust against noise environment changes. Further, since the system uses a cumulative histogram weighted by an exponential moving average, it does not require level-based threshold parameters.
A noise power estimation system according to an embodiment of the present invention is a noise power estimation system according to the first aspect of the present invention, in which the noise power estimation section regards, as the estimated value, the value of noise power corresponding to a predetermined ratio of the cumulative frequency to the maximum value of the cumulative frequency.
According to the present embodiment, the noise power corresponding to the cumulative frequency can be easily determined based on a predetermined ratio of the cumulative frequency to the maximum value of the cumulative frequency. The predetermined ratio can be determined in consideration of, for example, how frequently target speech is present.
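For illustration only (hypothetical function names; S is assumed to be the cumulative histogram over power-level indices), the read-out of the estimate at a predetermined ratio x might be sketched as:

```python
import numpy as np

def estimate_from_cumulative_histogram(S, x):
    """Return the first power-level index whose cumulative frequency
    reaches the ratio x (0 < x < 1) of the maximum cumulative frequency."""
    return int(np.searchsorted(S, x * S[-1]))

# Toy example: counts per power-level bin, mostly noise frames with a
# few loud speech frames at the top of the range.
counts = [0, 2, 8, 20, 6, 2, 2]                    # histogram N(i)
S = np.cumsum(counts)                              # cumulative histogram S(i)
print(estimate_from_cumulative_histogram(S, 0.5))  # -> 3 (the noise mode)
```

Because noise-only frames usually dominate the histogram, a moderate ratio picks a level near the noise mode and below the speech bursts at the top of the range.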
In a speech recognition system according to the second aspect of the present invention, spectral subtraction is performed using estimated values of noise power which have been obtained for each frequency spectral component by the noise power estimation system according to the first aspect of the present invention.
The speech recognition system according to the present aspect does not require level-based threshold parameters and is highly robust against noise environment changes.
A noise power estimating method according to the third aspect of the present invention is one for estimating the noise power of each frequency spectral component. The present method includes the steps of generating, by a cumulative histogram generating section, a cumulative histogram for each frequency spectral component of a time series signal, in which the horizontal axis indicates the index of the power level, the vertical axis indicates the cumulative frequency, and the histogram is weighted by an exponential moving average; and determining, by a noise power estimation section, an estimated value of the noise power for each frequency spectral component of the time series signal based on the cumulative histogram. In the present method, the noise power is continuously estimated by repeating the two steps described above.
In the noise power estimating method according to the present aspect, an estimated value of the noise power for each frequency spectral component of the time series signal is determined based on the cumulative histogram weighted by an exponential moving average. Accordingly, the method is highly robust against noise environment changes. Further, since the method uses a cumulative histogram weighted by an exponential moving average, it does not require level-based threshold parameters.
A noise power estimating method according to an embodiment of the present invention is a noise power estimating method according to the third aspect of the present invention, in which the noise power estimation section regards, as the estimated value, the value of noise power corresponding to a predetermined ratio of the cumulative frequency to the maximum value of the cumulative frequency.
According to the present embodiment, the noise power corresponding to the cumulative frequency can be easily determined based on a predetermined ratio of the cumulative frequency to the maximum value of the cumulative frequency. The predetermined ratio can be determined in consideration of, for example, how frequently target speech is present.
In a speech recognition method according to the fourth aspect of the present invention, spectral subtraction is performed using estimated values of noise power which have been obtained for each frequency spectral component by the noise power estimation method according to the third aspect of the present invention.
The speech recognition method according to the present aspect does not require level-based threshold parameters and is highly robust against noise environment changes.
The sound detecting section 100 is a microphone array consisting of a plurality of microphones installed on a robot, for example.
The sound source separating section 200 performs a linear speech enhancement process. The sound source separating section 200 obtains acoustic data from the microphone array and separates the sound sources using a linear separation algorithm called GSS (Geometric Source Separation), for example. In the present embodiment, a method called GSS-AS, which is based on GSS and provided with a step-size adjustment technique, is used [H. Nakajima, et al., "Adaptive step-size parameter control for real world blind source separation," in ICASSP 2008. IEEE, 2008, pp. 149-152.]. The sound source separating section 200 may be realized by any other system capable of separating directional sound sources.
The recursive noise power estimation section 300 performs recursive noise power estimation for each frequency spectral component of sound of each sound source separated by the sound source separating section 200. The structure and function of the recursive noise power estimation section 300 will be described in detail later.
The spectral subtraction section 400 subtracts the noise power estimated for each frequency spectral component by the recursive noise power estimation section 300 from the corresponding frequency spectral component of the sound of each sound source separated by the sound source separating section 200. Spectral subtraction is described in the documents [I. Cohen and B. Berdugo, "Speech enhancement for non-stationary noise environments," Signal Processing, vol. 81, pp. 2403-2418, 2001; M. Delcroix, et al., "Static and dynamic variance compensation for recognition of reverberant speech with dereverberation processing," IEEE Trans. on Audio, Speech, and Language Processing, vol. 17, no. 2, pp. 324-334, 2009; and Y. Takahashi, et al., "Real-time implementation of blind spatial subtraction array for hands-free robot spoken dialogue system," in IROS2008. IEEE/RSJ, 2008, pp. 1687-1692.]. In place of spectral subtraction, the Minimum Mean Square Error (MMSE) estimator may be used [J. Valin, et al., "Enhanced robot audition based on microphone array source separation with post-filter," in IROS2004. IEEE/RSJ, 2004, pp. 2123-2128; and S. Yamamoto, et al., "Making a robot recognize three simultaneous sentences in real-time," in IROS2005. IEEE/RSJ, 2005, pp. 897-902.].
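As a minimal sketch of power-domain spectral subtraction (the function name and the flooring constant are illustrative assumptions, not values from the present embodiment):

```python
import numpy as np

def spectral_subtraction(power_spec, noise_power, floor=0.01):
    """Subtract the estimated noise power from each frequency component
    of the power spectrum, flooring the result so it never goes negative.

    power_spec  : power spectrum of one separated source (array over bins)
    noise_power : estimated noise power per bin
    floor       : small spectral floor, kept proportional to the input
    """
    return np.maximum(power_spec - noise_power, floor * power_spec)
```

The floor prevents the negative residuals that plain subtraction would produce in bins where the noise estimate exceeds the instantaneous power.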
Thus, the recursive noise power estimation section 300 and the spectral subtraction section 400 perform a non-linear speech enhancement process.
The acoustic feature extracting section 500 extracts acoustic features based on output of the spectral subtraction section 400.
The speech recognizing section 600 performs speech recognition based on output of the acoustic feature extracting section 500.
The recursive noise power estimation section 300 will be described below.
In step S010, each frequency spectral component y(t) of the input signal is converted into a power level $Y_L(t)$ in dB and then into a histogram bin index $I_y(t)$:

$$Y_L(t) = 20 \log_{10} |y(t)| \qquad (1)$$

$$I_y(t) = \left\lfloor \frac{Y_L(t) - L_{min}}{L_{step}} \right\rfloor \qquad (2)$$

where $L_{min}$ is the minimum level of the histogram and $L_{step}$ is the width of one level bin.
The conversion from power into index is performed using a conversion table to reduce calculation time.
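A minimal Python sketch of Expressions (1) and (2) might look as follows; the bin settings L_MIN, L_STEP and I_MAX are illustrative assumptions, not values prescribed by the present embodiment:

```python
import math

L_MIN, L_STEP, I_MAX = -100.0, 0.2, 1000   # illustrative bin settings

def power_to_index(y_abs):
    """Expressions (1) and (2): convert the magnitude |y(t)| into a
    level in dB and then into a histogram bin index."""
    y_level = 20.0 * math.log10(max(y_abs, 1e-10))  # Expression (1), avoid log(0)
    idx = math.floor((y_level - L_MIN) / L_STEP)    # Expression (2)
    return min(max(idx, 0), I_MAX)                  # clamp into table range

# As noted above, a table precomputed over quantized input values can
# replace the logarithm at run time to reduce calculation time.
```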
In step S020, the cumulative histogram $S(t, i)$ is updated. Written together with the underlying frequency histogram $N(t, i)$, the update described below corresponds to:

$$N(t, i) = \alpha N(t-1, i) + (1 - \alpha)\,\delta(i - I_y(t)) \qquad (3)$$

$$S(t, i) = \alpha S(t-1, i) + (1 - \alpha)\,u(i - I_y(t)) \qquad (4)$$

where $\delta(\cdot)$ equals 1 at 0 and 0 elsewhere, and $u(\cdot)$ equals 1 for arguments greater than or equal to 0 and 0 elsewhere.
α is the time decay parameter, which is calculated from the time constant Tr and the sampling frequency Fs.
The cumulative histogram thus generated is constructed in such a way that the weights of earlier data become smaller. Such a histogram is called a cumulative histogram weighted by an exponential moving average. In Expression (3), all indices are multiplied by α and (1−α) is added only to the index Iy(t). In actual calculation, Expression (4) is evaluated directly, without evaluating Expression (3), to reduce calculation time; that is, in Expression (4), all indices are multiplied by α and (1−α) is added to the indices from Iy(t) to Imax. Further, in practice, the exponentially incremented value (1−α)α^(−t) is added to the indices from Iy(t) to Imax instead of (1−α), so that the multiplication of all indices by α can be avoided, further reducing calculation time. However, this causes S(t,i) to increase exponentially; therefore, a magnitude normalization of S(t,i) is required when S(t,Imax) approaches the maximum value representable by the variable.
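The two update variants might be sketched as follows, assuming S is a NumPy array of length Imax + 1; since the expression for α is not reproduced above, a standard exponential-smoothing form is assumed purely for illustration:

```python
import math
import numpy as np

def decay_from_time_constant(t_r, rate):
    """Illustration only: the patent's expression for alpha is not
    reproduced above; a standard exponential-smoothing choice for a
    time constant t_r (s) at `rate` histogram updates per second."""
    return math.exp(-1.0 / (t_r * rate))

def update_cumulative_histogram(S, i_y, alpha):
    """Expression (4): multiply every bin by alpha, then add (1 - alpha)
    to all indices from Iy(t) up to Imax."""
    S *= alpha
    S[i_y:] += 1.0 - alpha
    return S

def update_cumulative_histogram_fast(S, i_y, w):
    """Variant described above: add the exponentially growing weight
    w = (1 - alpha) * alpha**(-t) so the full rescale is skipped; S then
    grows exponentially, and the caller must renormalize S (rescaling
    its running weight w by the same factor) before S[-1] approaches
    the floating-point limit."""
    S[i_y:] += w
    return S

# Usage sketch: alpha for a 10 s time constant at 100 updates per second.
S = np.zeros(1001)
alpha = decay_from_time_constant(10.0, 100.0)
S = update_cumulative_histogram(S, i_y=400, alpha=alpha)
```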
In step S030, the noise power estimation section obtains the index $I_x(t)$ whose cumulative frequency is closest to the predetermined ratio $x$ of the maximum cumulative frequency:

$$I_x(t) = \mathop{\arg\min}_{I} \left| S(t, I) - x \cdot S(t, I_{max}) \right| \qquad (5)$$

In the expression, argmin denotes the $I$ that minimizes the value inside the brackets. In place of searching all indices from 1 to $I_{max}$ using Expression (5), the search is performed in one direction from the index $I_x(t-1)$ found at the immediately preceding time, which significantly reduces the calculation time.
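A sketch of this one-directional search, reusing the S array from the previous sketch; the stopping rule below (the smallest index whose cumulative frequency reaches the target) is a simple stand-in for the argmin of Expression (5):

```python
def search_from_previous(S, x, i_prev):
    """One-directional search: starting from the index found at the
    previous time step, move toward the smallest index whose cumulative
    frequency reaches x * S[Imax]. S is nondecreasing, so at most one
    of the two loops runs."""
    target = x * S[-1]
    i = i_prev
    while i < len(S) - 1 and S[i] < target:   # move right while below target
        i += 1
    while i > 0 and S[i - 1] >= target:       # move left while still above
        i -= 1
    return i
```

Because the noise level changes slowly relative to the frame rate, the new index is usually within a few bins of the previous one, which is why this search is much cheaper than a full scan.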
In step S040, the index $I_x(t)$ is converted back into a level $L_x(t)$, which is the estimated noise power:

$$L_x(t) = L_{min} + L_{step} \cdot I_x(t) \qquad (6)$$
The steps described above are repeated for every time step, so that the noise power is estimated continuously.
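Putting steps S010 to S040 together, the per-frame recursion for one frequency bin might be sketched as follows, reusing the hypothetical helpers from the earlier sketches:

```python
def estimate_noise_levels(magnitudes, S, alpha, x, i_prev):
    """Repeat steps S010-S040 for one frequency bin over successive
    frames, so that the noise level estimate Lx(t) tracks the noise
    continuously. Reuses the helpers sketched above."""
    estimates = []
    for y_abs in magnitudes:                   # one frame's |y(t)| at a time
        i_y = power_to_index(y_abs)                        # S010: Expr. (1)-(2)
        S = update_cumulative_histogram(S, i_y, alpha)     # S020: Expr. (4)
        i_prev = search_from_previous(S, x, i_prev)        # S030: Expr. (5)
        estimates.append(L_MIN + L_STEP * i_prev)          # S040: Expr. (6)
    return estimates, S, i_prev
```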
x and α are the primary parameters that influence the estimated noise value. However, if the noise level is stable, the estimated level Lx is not very sensitive to the parameter x.
Likewise, the time constant Tr does not need to be changed according to either the SNR or the frequency. Tr controls the equivalent averaging time of the histogram calculation, and it should be set long enough to cover both noise and speech periods. For typical interaction dialogs, such as question-and-answer dialogs, a typical value of Tr is 10 s, because most speech utterances last less than 10 s.
Thus, the system according to the present invention is markedly more advantageous than other systems in that its parameters can be determined independently of the S/N ratio and of the frequency. By contrast, the conventional MCRA method requires threshold parameters for distinguishing signal from noise, and these must be adjusted according to the S/N ratio, which varies with frequency.
Experiments
Experiments performed to verify the performance of an automatic speech recognition system using the noise power estimating device according to the present invention are described below.
1) Experimental Settings
Table 1 shows the parameters for the sound detecting section 100, for the recursive noise power estimation section 300 according to the embodiment of the present invention, and for the conventional MCRA method. The MCRA parameters were identical to those described in the original MCRA paper (I. Cohen and B. Berdugo, "Speech enhancement for non-stationary noise environments," Signal Processing, vol. 81, pp. 2403-2418, 2001.).
2) Results of the Experiments
a) shows the estimated noise errors obtained for the steady-state condition, and b) shows those obtained for the non-steady-state condition. In both plots, the horizontal and vertical axes show the time (in seconds) and the error level (in dB), respectively.
The recursive noise power estimation section according to the present embodiment was evaluated through a robot audition system [K. Nakadai, et al., "An open source software system for robot audition HARK and its evaluation," in 2008 IEEE-RAS Int'l Conf. on Humanoid Robots (Humanoids 2008). IEEE, 2008.]. The system integrates sound source localization, voice activity detection, speech enhancement and ASR (Automatic Speech Recognition). ATR216 and Julius [A. Lee, et al., "Julius—an open source real-time large vocabulary recognition engine," in 7th European Conf. on Speech Communication and Technology, 2001, vol. 3, pp. 1691-1694.] were used for ASR, and the word correct rate (WCR) was used as the evaluation metric. The acoustic model for ASR was trained on speech enhanced using only the GSS-AS process, applied to a large data corpus, the Japanese Newspaper Article Sentences (JNAS). Three systems were evaluated: the base system, the MCRA system, and the system of the present embodiment. The linear sub-process by GSS-AS was applied in all three systems. The base system has no non-linear enhancement sub-process. The MCRA system uses a non-linear enhancement sub-process based on SS (Spectral Subtraction) and MCRA. The system of the present embodiment is the one described above.
Table 2 shows the noise conditions. WCR scores were evaluated for two noise types, that is, fan (steady noise) and music (non-steady noise). The positions of the speech source and of the noise sources are shown in the corresponding figure.
The input data consisted of 236 isolated utterances, and the noise estimate was initialized for every utterance. Since robot systems start a new estimation when a new speaker emerges and restart the initialization when the speaker vanishes, this setting simulates a dynamic environment in which the speaker changes frequently.
Priority: Japanese Patent Application No. 2010-232979, filed in Japan, October 2010.
Other Publications

P. Loizou, "Speech Enhancement: Theory and Practice," CRC Press, 2007, pp. 446-453.
R. Martin, "Spectral Subtraction Based on Minimum Statistics," Proc. of EUSIPCO, Edinburgh, UK, Sep. 1994, pp. 1182-1185.
K. Nakadai et al., "An Open Source Software System for Robot Audition HARK and Its Evaluation," IEEE-RAS International Conference on Humanoid Robots, Dec. 1-3, 2008, pp. 561-566.
Jean-Marc Valin et al., "Enhanced Robot Audition Based on Microphone Array Source Separation with Post-Filter," IEEE/RSJ, 2004, pp. 2123-2128.
Shun'ichi Yamamoto et al., "Making a Robot Recognize Three Simultaneous Sentences in Real-Time," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 897-902.
Naoya Mochiki et al., "Recognition of Three Simultaneous Utterance of Speech by Four-Line Directivity Microphone Mounted on Head of Robot," International Conference on Spoken Language Processing, 2004, pp. 1-4.
Israel Cohen et al., "Speech Enhancement for Non-Stationary Noise Environments," Signal Processing, vol. 81, 2001, pp. 2403-2418.
Hirofumi Nakajima et al., "Adaptive Step-Size Parameter Control for Real-World Blind Source Separation," ICASSP 2008, IEEE, 2008, pp. 149-152.
Marc Delcroix et al., "Static and Dynamic Variance Compensation for Recognition of Reverberant Speech with Dereverberation Preprocessing," IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 2, 2009, pp. 324-334.
Yu Takahashi et al., "Real-Time Implementation of Blind Spatial Subtraction Array for Hands-Free Robot Spoken Dialogue System," IROS 2008, IEEE/RSJ, 2008, pp. 1687-1692.
Akinobu Lee et al., "Julius—An Open Source Real-Time Large Vocabulary Recognition Engine," 7th European Conference on Speech Communication and Technology, vol. 3, 2001, pp. 1691-1694.
Japanese Office Action for corresponding JP Appln. No. 2010-232979, dated Aug. 20, 2013.