This application is a National Stage application of PCT/US2014/051457, filed on Aug. 18, 2014, and entitled “M
In some speech processing systems, speech enhancement (SE) and automated speech recognition (ASR) are realized by separate engines. An SE module sends an enhanced single-channel audio signal as well as some metadata to an ASR module. The original multi-channel recordings (e.g., originating from a microphone array) contain information that may be useful for speech detection, such as spatial information that enables distinguishing a target speaker from interfering speakers and/or knowledge about a reference signal, which can be useful in echo cancellation. In known systems this data is available only to the speech enhancement module, where it is condensed into a stream of metadata that is sent in parallel with the enhanced single-channel signal.
Embodiments of the invention provide methods and apparatus for a speech enhancement system having speech segmentation using metadata. In adverse acoustic environments, speech recognition should remain robust against interfering speakers or echoes from loudspeakers. In embodiments of the invention, metadata from multiple speech detectors allows a compromise between the aggressiveness of speech enhancement, which might be counterproductive for speech recognition accuracy, and false triggering of the recognizer, which may result in high error rates. The ASR engine consolidates the metadata with its internal detectors for speech segmentation.
Conventional systems that include SE and ASR may have certain processing bottlenecks in ASR speech segmentation. For example, time-spatial information is summarized by the SE module into one value that is generated on a frame-by-frame basis with limited history and without any look-ahead. The speech recognizer, however, processes the audio data on different time scales, e.g., by buffering the audio data until speech activity is detected. Fast detection is required to start buffering the audio stream in order not to miss speech onsets. After speech activity is detected with reasonable certainty, the speech recognition is triggered. These two effects, the different time scales and the contradicting requirements on the detection, are currently not reflected when generating the metadata. In known systems, the metadata is not updated by the SE module while the input stream is buffered in the ASR module. In addition, in conventional systems, the ASR engine has no knowledge about the internal state of the SE module and therefore cannot evaluate the confidence of the metadata. Further, in currently available systems, only the result of one detector is encoded, e.g., either based on the echo canceller or the beamformer.
In embodiments of the invention, metadata for speech segmentation is generated by multiple speech detectors and takes into account the different requirements of ASR regarding latency and confidence of detection. Speech detectors adjusted to the particular tasks can send their metadata in parallel. It is understood that embodiments of the invention are applicable to any speech processing system in which speech segmentation is desirable.
In one aspect of the invention, a method comprises: processing microphone signals by a speech enhancement module to generate an audio stream signal; processing the microphone signals by a first speech detector to generate first metadata; processing the microphone signals by a second speech detector to generate second metadata; performing endpointing of the audio stream signal using the first and second metadata; and performing speech recognition on the audio stream signal using the endpointing including transitioning from a silence state to a maybe speech state, in which data is buffered, based on the first metadata and transitioning to a speech state, in which speech recognition is performed, based upon the second metadata.
The method can further include one or more of the following features: the first metadata has a frame-by-frame time scale, the second metadata has a sequence of frames time scale, performing one or more of barge-in, beamforming, and/or echo cancellation for generating the first and/or second metadata, tuning the parameters of the first speech detector (e.g., a speech detection threshold) for a given latency for the first metadata, adjusting latency for a given confidence level of voice activity detection for the second metadata, controlling computation of the second metadata using the first metadata (and vice-versa), and/or performing one or more of barge-in, beamforming, and/or echo cancellation for generating further metadata.
In another aspect of the invention, an article comprises: a non-transitory computer readable medium having stored instructions that enable a machine to: process microphone signals by a speech enhancement module to generate an audio stream signal; process the microphone signals by a first speech detector to generate first metadata; process the microphone signals by a second speech detector to generate second metadata; perform endpointing of the audio stream signal using the first and second metadata; and perform speech recognition on the audio stream signal using the endpointing including transitioning from a silence state to a maybe speech state, in which data is buffered, based on the first metadata and transitioning to a speech state, in which speech recognition is performed, based upon the second metadata.
The article can further include one or more of the following features: the first metadata has a frame-by-frame time scale, the second metadata has a sequence of frames time scale, instructions to perform one or more of barge-in, beamforming, and/or echo cancellation for generating the first and second metadata, instructions to tune the parameters of the first detector for a given latency for the first metadata, instructions to adjust latency for a given confidence level of voice activity detection for the second metadata, instructions to control computation of the second metadata using the first metadata (and vice-versa), and/or instructions to perform one or more of barge-in, beamforming, and/or echo cancellation for generating further metadata.
In a further aspect of the invention, a system comprises: a speech enhancement module to process microphone signals for generating an audio stream signal, the speech enhancement module comprising: a first speech detector to process the microphone signals for generating first metadata; and a second speech detector to process the microphone signals for generating second metadata; and an automated speech recognition module to receive the audio stream signal from the speech enhancement module, the speech recognition module comprising: an endpointing module to perform endpointing of the audio stream signal using the first and second metadata; and a speech recognition module to perform speech recognition on the audio stream signal using the endpointing including transitioning from a silence state to a maybe speech state, in which data is buffered, based on the first metadata and transitioning to a speech state, in which speech recognition is performed, based upon the second metadata.
The system can further include a further speech detector to perform one or more of barge-in, beamforming, and/or echo cancellation for generating further metadata for use by the endpointing module, the first speech detector is further configured to tune the detector parameters (e.g., the detection threshold) for a given latency for the first metadata, and/or the second speech detector is further configured to adjust latency for a given confidence level of voice activity detection for the second metadata.
The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following description of the drawings in which:
It is understood that the speech enhancement system 102 can include a variety of modules to process information from the microphones 112. Illustrative modules can include echo cancellation, beamforming, noise suppression, wind noise suppression, transient removal, and the like. It is further understood that additional speech detectors can be focused on one or more of echo cancellation, beamforming, noise suppression, wind noise suppression, transient removal, and the like, to generate further metadata that can be used by the endpointing module 122.
In embodiments of the invention, the speech detectors can be synchronous or asynchronous for extracting complementary information from the multi-channel audio stream to compromise latency and confidence of endpointing states for speech recognition, as described more fully below. In addition, while first and second speech detectors are shown in illustrative embodiments, it is understood that any practical number of speech detectors for generating respective metadata using various parameters can be used to meet the needs of a particular application.
A transition from the maybe state 204 to the speech confirmed state 206 triggers speech recognition. High confidence in the endpointing is desired to achieve high detection rates for speech activity and to avoid false triggers (false alarms) of the speech recognizer in case of non-stationary noise. Latency is not such an issue in this context and may be controlled by the metadata generated by the speech detectors in the speech enhancement module.
In general, the contradicting requirements for the state transitions are considered by employing multiple detectors to generate the first and second metadata. It is understood that the metadata from the speech detectors can be generated using a variety of factors. Speech models based on a single frame of the microphone signal may be evaluated to obtain a first speech detector with low latency. The speech detector may be tuned to achieve a low miss rate of speech onsets whereas the false alarm rate should not exceed a predefined value, for example. The confidence of a voice activity detection based on such a speech detector may be limited due to the short temporal context. Another speech detector may rely on more sophisticated speech models based on a larger temporal context. Such a detector can be tuned to yield lower false alarm rates than the first detector. Additional latency of the voice activity detection would be expected for more sophisticated processing. In general, the confidence of early decisions is limited by a given latency. If the latency can be adapted dynamically, a certain confidence level of speech detection can be achieved.
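By way of illustration only, the following sketch shows how such a two-stage endpointer might consume the two metadata streams. The class name, state names, and buffer size are hypothetical and are not part of the disclosed system; the structure merely follows the silence, maybe-speech, and speech transitions described above.

```python
from collections import deque
from enum import Enum, auto


class EndpointState(Enum):
    SILENCE = auto()
    MAYBE_SPEECH = auto()   # fast, low-latency detector fired; buffer audio
    SPEECH = auto()         # slower, high-confidence detector fired; recognize


class TwoStageEndpointer:
    """Hypothetical endpointer driven by two metadata streams:
    `fast_flag` (frame-by-frame, low latency, low miss rate) and
    `confident_flag` (sequence-of-frames, high confidence, higher latency)."""

    def __init__(self, max_buffer_frames=200):
        self.state = EndpointState.SILENCE
        self.buffer = deque(maxlen=max_buffer_frames)

    def process_frame(self, frame, fast_flag, confident_flag):
        recognize = []  # frames to forward to the recognizer
        if self.state == EndpointState.SILENCE:
            if fast_flag:                      # possible speech onset: start buffering
                self.state = EndpointState.MAYBE_SPEECH
                self.buffer.append(frame)
        elif self.state == EndpointState.MAYBE_SPEECH:
            self.buffer.append(frame)
            if confident_flag:                 # speech confirmed: flush buffer to ASR
                recognize = list(self.buffer)
                self.buffer.clear()
                self.state = EndpointState.SPEECH
            elif not fast_flag:                # false alarm: drop buffer, back to silence
                self.buffer.clear()
                self.state = EndpointState.SILENCE
        else:  # SPEECH
            recognize = [frame]
            if not fast_flag and not confident_flag:
                self.state = EndpointState.SILENCE
        return recognize
```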
Different detectors may be employed for special use cases, e.g. barge-in, beamforming or distributed microphones. Instead of using one source of information, the results of several detectors can be weighted by prior probabilities, normalized confidence measures and/or heuristic rules to calculate the metadata. A confidence measure may be based on the detection of speech onsets, a specific group of phonemes (e.g. voiced speech) or noise scenarios (e.g. babble noise). Voiced speech typically has a more distinct characteristic compared to unvoiced speech. Challenging noise scenarios generally degrade the confidence of speech detection. In the case of babble noise, the background noise can comprise a superposition of several non-target speakers and therefore may exhibit speech-like characteristics. In addition, the internal state of the SE module may be incorporated into the confidence measure of the metadata. For example, the metadata may not be reliable when an echo canceller in the SE module has been reinitialized while the ASR module has already buffered audio data. The metadata may be rejected or confirmed by the ASR endpointing module 122.
It is understood that different detectors can extract different types of information that can be based on multi-channel and single-channel microphone data. For example, a beamformer takes spatial information into account to detect target speakers based on the direction of arrival of the sound signal. For barge-in, since a reference signal is available, an adaptive filter can be calculated to estimate the underlying impulse response of the echo path. This estimate allows one to distinguish between speech of the local speaker and echo of the prompt played back over the loudspeaker. As mentioned above, the internal state directly after a reset of the echo canceller cannot be relied upon. Further, distributed microphone setups can use speaker-dedicated microphones. Multi-channel noise reduction can also be used, as well as detectors for speech onsets and voiced speech. Background noise classification can also be implemented for single-channel data. Speech and noise characteristics may be evaluated at all processing stages, including the unprocessed microphone signal and the enhanced output signal. In the case of sample rate conversion, the fullband signal and/or the downsampled speech signal can be evaluated.
It is understood that the first and second metadata can be sent to the speech recognition module in a variety of formats. For example, the data can be sent within the audio stream as attached metadata in an extended data structure. That is, the speech enhancement module sends additional bytes attached to the audio signal buffers/frames. In another embodiment, the metadata is encoded into the audio stream data structure and decoded by the ASR module.
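As a purely illustrative sketch of the "additional bytes attached to the audio signal buffers" option, the following shows one way such an extended frame could be packed and unpacked. The field layout, field names, and byte widths are assumptions and are not specified by the disclosure.

```python
import struct

# Hypothetical extended frame layout: payload length, fast-detector flag,
# confident-detector flag, and a confidence value attached to each buffer.
_HEADER = "<IBBf"


def pack_frame(pcm: bytes, fast_flag: bool, confident_flag: bool,
               confidence: float) -> bytes:
    header = struct.pack(_HEADER, len(pcm), int(fast_flag),
                         int(confident_flag), confidence)
    return header + pcm


def unpack_frame(blob: bytes):
    n, fast, confident, confidence = struct.unpack_from(_HEADER, blob)
    offset = struct.calcsize(_HEADER)
    return blob[offset:offset + n], bool(fast), bool(confident), confidence
```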
In embodiments, speech enhancement can be applied in conjunction with audio and video data acquisition, such as face tracking and lip reading, to improve speech segmentation. For example, video data can be helpful in determining whether there is a person in a room and who the speaker is. PCT Publication No. WO2013058728 A1, which is incorporated herein by reference, discloses using visual information to alter or set operating parameters of an audio signal processor. For example, a digital camera can capture visual information about a scene that includes a human speaker and/or a listener to ascertain information about the acoustics of a room.
In one embodiment, the multi-channel input to the speech enhancement module 102 contains time-spatial information, e.g., about target/non-target speakers or a reference signal in the case of echo cancellation. If the ASR module 104 only receives single-channel data, as in conventional systems, it may react sensitively to aggressive speech enhancement. In accordance with embodiments of the invention, the multiple metadata allows a trade-off between the aggressiveness of signal processing and false triggering of the speech recognizer. In addition, the system can combine signal processing and some metadata for speech segmentation to achieve higher speech recognition accuracy.
It is understood that a variety of factors and configurations can be used to generate the multiple metadata. It is further understood that more than two metadata can be used to meet the needs of a particular application. A first metadata can provide low or no latency (on a frame basis) and a relatively low miss rate, and a second metadata can provide high confidence and a high detection rate for a sequence of frames. For the first metadata, for a given latency, the detection/confidence threshold can be tuned with respect to the detection of speech onsets. For the second metadata, the latency can be enlarged to achieve a certain confidence level for the start of speech recognition. In one embodiment, the calculation of the second metadata can be controlled by the first metadata, as in the sketch below. It is understood that the metadata can be tuned for illustrative applications including barge-in (where a loudspeaker reference channel for AEC (acoustic echo cancellation) is available), beamforming (microphone array), distributed microphones (dedicated microphones for multiple speakers), and the like. For example, in the case of echo cancellation, one could use a standard VAD (voice activity detection) feature until the echo canceller has converged, after which the internal parameters of the echo canceller can be evaluated for the second metadata.
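As a further illustration, the following hypothetical function sketches how a second, high-confidence detector could be gated by the first metadata and could fall back to a standard VAD decision until the echo canceller has converged. The feature names and the ERLE threshold are illustrative assumptions, not part of the disclosure.

```python
def second_metadata(frame_features, fast_flag, aec_converged, erle_db,
                    erle_threshold_db=10.0):
    """Hypothetical second (high-confidence) detector gated by the first metadata.

    Until the acoustic echo canceller has converged, a standard VAD decision
    is used; afterwards sufficient echo attenuation (ERLE) is also required
    so that loudspeaker echo is not mistaken for local speech.
    """
    if not fast_flag:
        return False                      # evaluated only after the onset detector fires
    standard_vad = (frame_features["long_term_flatness"]
                    < frame_features["flatness_threshold"])
    if not aec_converged:
        return standard_vad
    return standard_vad and erle_db > erle_threshold_db
```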
For passenger speech 470 and driver speech 472, a respective microphone signal is shown along with the first metadata 474 from the first speech detector 452 and the second metadata 476 from the second speech detector 454.
It is understood that a variety of suitable speech/voice activity detectors can be used to meet the requirements of a particular application. In one embodiment, a first speech detector to generate the first metadata can be provided in accordance with the spectral-flatness approach of M. H. Moattar and M. M. Homayounpour, “A simple but efficient real-time voice activity detection algorithm,” 17th European Signal Processing Conference (EUSIPCO 2009), 2009, which is incorporated herein by reference, and a second speech detector to generate the second metadata can be provided in accordance with the long-term spectral-flatness approach of Y. Ma and A. Nishihara, “Efficient voice activity detection algorithm using long-term spectral flatness measure,” EURASIP Journal on Audio, Speech, and Music Processing, vol. 2013:21, no. 1, pp. 1-18, 2013, which is incorporated herein by reference.
One particular speech detector for a distributed microphone setup that is useful in providing metadata for embodiments of the invention is described below and disclosed in T. Matheja, M. Buck, and T. Fingscheidt, “Speaker Activity Detection for Distributed Microphone Systems in Cars,” Proc. of the 6th Biennial Workshop on Digital Signal Processing for In-Vehicle Systems, September 2013, which is incorporated herein by reference.
In exemplary embodiments, an energy-based speaker activity detection (SAD) system evaluates a signal power ratio (SPR) in each of M≥2 microphone channels. In embodiments, the processing is performed in the discrete Fourier transform domain with the frame index l and the frequency subband index k at a sampling rate of fs=16 kHz, for example. In one particular embodiment, the time domain signal is segmented by a Hann window with a frame length of K=512 samples and a frame shift of 25%. It is understood that basic fullband SAD is the focus here and that enhanced fullband SAD and frequency selective SAD are not discussed herein.
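For illustration, a minimal analysis front end consistent with the stated parameters (Hann window, K = 512 samples, 25% frame shift, fs = 16 kHz) might look as follows; the function name and interface are assumptions.

```python
import numpy as np


def stft_frames(x, frame_len=512, shift_ratio=0.25):
    """Hann-windowed DFT analysis of a 16 kHz time-domain signal.

    Returns the spectra Y(l, k) with frame index l and subband index k,
    using a frame length of K = 512 samples and a 25% frame shift
    (128 samples), as stated above.
    """
    if len(x) < frame_len:
        return np.empty((0, frame_len // 2 + 1), dtype=complex)
    hop = int(frame_len * shift_ratio)            # 128-sample frame shift
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    spectra = np.empty((n_frames, frame_len // 2 + 1), dtype=complex)
    for l in range(n_frames):
        segment = x[l * hop: l * hop + frame_len] * window
        spectra[l] = np.fft.rfft(segment)         # subbands k = 0 .. K/2
    return spectra
```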
Using the microphone signal spectra $Y_m(l,k)$, the signal power ratio $\mathrm{SPR}_m(l,k)$ and the signal-to-noise ratio (SNR) $\hat{\xi}_m(l,k)$ are computed to determine a basic fullband speaker activity detection $\mathrm{SAD}_m(l)$. As described more fully below, in one embodiment different speakers can be distinguished by analyzing how many positive and negative values occur for the logarithmic SPR in each frame for each channel m, for example.
Before considering the SAD, the system should determine SPRs. Assuming that speech and noise components are uncorrelated and that the microphone signal spectra are a superposition of speech and noise components, the speech signal power spectral density (PSD) estimate $\hat{\Phi}_{SS,m}(l,k)$ in channel m can be determined by

$$\hat{\Phi}_{SS,m}(l,k) = \max\left\{\hat{\Phi}_{YY,m}(l,k) - \hat{\Phi}_{NN,m}(l,k),\, 0\right\}, \qquad (1)$$
where $\hat{\Phi}_{YY,m}(l,k)$ may be estimated by temporal smoothing of the squared magnitude of the microphone signal spectra $Y_m(l,k)$. The noise PSD estimate $\hat{\Phi}_{NN,m}(l,k)$ can be determined by any suitable approach, such as the improved minima controlled recursive averaging approach described in I. Cohen, “Noise Spectrum Estimation in Adverse Environments: Improved Minima Controlled Recursive Averaging,” IEEE Transactions on Speech and Audio Processing, vol. 11, no. 5, pp. 466-475, September 2003, which is incorporated herein by reference. Note that the measure in Equation (1) includes direct speech components originating from the speaker related to the considered microphone, as well as cross-talk components from other sources and speakers. The SPR in each channel m can be expressed, for a system with M≥2 microphones, as
with the small value ϵ, as discussed similarly in T. Matheja, M. Buck, T. Wolff, “Enhanced Speaker Activity Detection for Distributed Microphones by Exploitation of Signal Power Ratio Patterns,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2501-2504, Kyoto, Japan, March 2012, which is incorporated herein by reference.
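The following sketch illustrates Equation (1) and a signal power ratio of the general kind described. Because Equation (2) is not reproduced above, relating each channel's speech power to the strongest competing channel, with the small value ϵ in the denominator, is an assumption consistent with the cited work.

```python
import numpy as np


def speech_psd(phi_yy, phi_nn):
    """Equation (1): spectral-subtraction estimate of the speech PSD."""
    return np.maximum(phi_yy - phi_nn, 0.0)


def signal_power_ratio(phi_ss, eps=1e-10):
    """Per-channel signal power ratio for one frame.

    `phi_ss` has shape (M, K_bins) with the speech PSD estimates of the
    M >= 2 channels.  The ratio against the strongest competing channel is
    an assumption; Equation (2) itself is not reproduced in the text.
    """
    spr = np.empty_like(phi_ss)
    for m in range(phi_ss.shape[0]):
        others = np.delete(phi_ss, m, axis=0)
        spr[m] = phi_ss[m] / (others.max(axis=0) + eps)
    return spr
```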
It is assumed that one microphone always captures the speech best because each speaker has a dedicated microphone close to the speaker's position. Thus, the active speaker can be identified by evaluating the SPR values among the available microphones. Furthermore, the logarithmic SPR quantity enhances differences for lower values and results in
$$\mathrm{SPR}'_m(l,k) = 10\,\log_{10}\!\left(\mathrm{SPR}_m(l,k)\right) \qquad (3)$$
Speech activity in the m-th speaker-related microphone channel can be detected by evaluating whether the occurring logarithmic SPR is larger than 0 dB, in one embodiment. To avoid considering the SPR during periods where the SNR $\hat{\xi}_m(l,k)$ exhibits only small values lower than a threshold $\Theta_{\mathrm{SNR1}}$, a modified quantity for the logarithmic power ratio in Equation (3) is defined by
With a noise estimate $\hat{\Phi}'_{NN,m}(l,k)$ for determination of a reliable SNR quantity, the SNR is determined in a suitable manner as in Equation (5) below, such as that disclosed by R. Martin, “An Efficient Algorithm to Estimate the Instantaneous SNR of Speech Signals,” in Proc. European Conference on Speech Communication and Technology (EUROSPEECH), Berlin, Germany, pp. 1093-1096, September 1993.
$$\hat{\xi}_m(l,k) = \cdots \qquad (5)$$
Using the overestimation factor $\gamma_{\mathrm{SNR}}$, the considered noise PSD results in

$$\hat{\Phi}'_{NN,m}(l,k) = \gamma_{\mathrm{SNR}} \cdot \hat{\Phi}_{NN,m}(l,k) \qquad (6)$$
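For illustration, the logarithmic SPR of Equation (3), an SNR-gated variant in the spirit of Equation (4), and the noise overestimation of Equation (6) might be computed as follows. The exact form of Equation (4) is not reproduced above, so the zeroing behavior and the example value of the overestimation factor are assumptions.

```python
import numpy as np


def log_spr(spr, floor=1e-10):
    """Equation (3): logarithmic signal power ratio in dB (floored to avoid log of zero)."""
    return 10.0 * np.log10(np.maximum(spr, floor))


def masked_log_spr(spr, snr, theta_snr1):
    """SNR-gated logarithmic SPR in the spirit of Equation (4): the log SPR is
    evaluated only where the estimated SNR exceeds Theta_SNR1 and is set to
    0 dB elsewhere (an assumption, since Equation (4) is not reproduced)."""
    out = log_spr(spr)
    out[snr < theta_snr1] = 0.0
    return out


def overestimated_noise_psd(phi_nn, gamma_snr=2.0):
    """Equation (6): noise PSD scaled by the overestimation factor gamma_SNR
    (the value 2.0 is illustrative only)."""
    return gamma_snr * phi_nn
```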
Based on Equation (4), the power ratios are evaluated by observing how many positive (+) or negative (−) values occur in each frame. Hence, the positive counter follows as:
with
Equivalently the negative counter can be determined by
considering
Regarding these quantities, a soft frame-based SAD measure may be written by
where $G_m^{c}(l)$ is an SNR-dependent soft weighting function that pays more attention to high-SNR periods. In order to consider the SNR within certain frequency regions, the weighting function is computed by applying maximum subgroup SNRs:
The maximum SNR across K′ different frequency subgroup SNRs $\hat{\xi}_m^{G}(l,\kappa)$ is given by
The grouped SNR values can each be computed in the range between certain DFT bins $k_\kappa$ and $k_{\kappa+1}$, with $\kappa = 1, 2, \ldots, K'$ and $\{k_\kappa\} = \{28, 53, 78, 103, 128, 153, 178, 203, 228, 253\}$. The mean SNR in the $\kappa$-th subgroup is written as:
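As an illustrative sketch, the grouped SNR values and their maximum could be computed as follows from the listed DFT bin boundaries. Since the corresponding equations are not reproduced above, forming nine subgroups from the ten listed boundaries is an assumption.

```python
import numpy as np

# DFT bin boundaries listed in the text; a subgroup spans bins k_kappa .. k_{kappa+1}.
GROUP_BOUNDS = [28, 53, 78, 103, 128, 153, 178, 203, 228, 253]


def max_subgroup_snr(snr_frame):
    """Mean SNR per frequency subgroup, then the maximum over the subgroups.

    `snr_frame` is assumed to be a 1-D array of per-bin SNR estimates
    (length K/2 + 1 = 257 for K = 512).
    """
    group_means = [np.mean(snr_frame[lo:hi])
                   for lo, hi in zip(GROUP_BOUNDS[:-1], GROUP_BOUNDS[1:])]
    return max(group_means)
```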
The basic fullband SAD is obtained by thresholding using $\Theta_{\mathrm{SAD1}}$:
It is understood that during double-talk situations the evaluation of the signal power ratios is no longer reliable. Thus, regions of double-talk should be detected in order to reduce speaker activity misdetections. Considering the positive and negative counters, for example, a double-talk measure can be determined by evaluating whether $c_m^{+}(l)$ exceeds a limit $\Theta_{\mathrm{DTM}}$ during periods of detected fullband speech activity in multiple channels.
To detect regions of double-talk, this result is held for some frames in each channel. In general, double-talk is detected for frame l if the measure is true for more than one channel. Preferred parameter settings for the realization of the basic fullband SAD can be found in Table 1 below.
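For illustration, the thresholded fullband SAD and the double-talk logic described above might be sketched as follows. The threshold values and the hold mechanism details are assumptions; the preferred settings referenced in Table 1 are not reproduced here.

```python
def fullband_sad(soft_sad_measure, theta_sad=0.5):
    """Basic fullband SAD by thresholding the soft measure with Theta_SAD1.
    The value 0.5 is illustrative only."""
    return soft_sad_measure > theta_sad


def double_talk(pos_counters, sad_flags, theta_dtm, hold_frames, hold_state):
    """Sketch of the double-talk logic described above.

    A channel is marked when its positive counter c_m^+(l) exceeds Theta_DTM
    while fullband speech activity is detected in that channel; the mark is
    held for `hold_frames` frames, and double-talk is flagged when more than
    one channel is marked.
    """
    for m in range(len(pos_counters)):
        if sad_flags[m] and pos_counters[m] > theta_dtm:
            hold_state[m] = hold_frames          # (re)start the per-channel hold
        else:
            hold_state[m] = max(0, hold_state[m] - 1)
    return sum(1 for h in hold_state if h > 0) > 1
```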
Processing may be implemented in hardware, software, or a combination of the two. Processing may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer. Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).
Having described exemplary embodiments of the invention, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may also be used. The embodiments contained herein should not be limited to the disclosed embodiments but rather should be limited only by the scope of the appended claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2014/051457 | 8/18/2014 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/028254 | 2/25/2016 | WO | A |
Entry |
---|
PCT International Preliminary Report and Written Opinion dated Mar. 2, 2017 for International Application No. PCT/US2014/051457; 8 Pages. |
PCT Search Report and Written Opinion of ISA dated Apr. 28, 2015; for PCT App. No. PCT/US2014/051457; 11 pages. |
Moattar et al.; “A Simple but Efficient Real-Time Voice Activity Detection Algorithm”; 17th European Signal Processing Conference, Aug. 24-28, 2009; pp. 2549-2553 (5 pages). |
Ma et al.; “Efficient voice activity detection algorithm using long-term spectral flatness measure”; EURASIP Journal on Audio, Speech, and Music Processing, Jan. 21, 2013; 18 pages. |
Number | Date | Country |
---|---|---|
20170213556 A1 | Jul 2017 | US |