Speech has become an efficient input method for computer systems due to improvements in the accuracy of speech recognition. However, conventional speech recognition technology is unable to perform speech recognition on an audio signal which includes overlapping voices. Accordingly, it may be desirable to extract non-overlapping voices from such a signal in order to perform speech recognition thereon.
In a conferencing context, a microphone array may capture a continuous audio stream including overlapping voices of any number of unknown speakers. Systems are desired to efficiently convert the stream into a fixed number of continuous output signals such that each of the output signals contains no overlapping speech segments. A meeting transcription may be automatically generated by inputting each of the output signals to a speech recognition engine.
The following description is provided to enable any person in the art to make and use the described embodiments. Various modifications, however, will remain apparent to those in the art.
Some embodiments described herein provide a technical solution to the technical problem of low-latency speech separation for a continuous multi-microphone audio signal. According to some embodiments, a multi-microphone input signal may be converted into a fixed number of output signals, none of which includes overlapping speech segments. Embodiments may employ an RNN-CNN hybrid network for generating speech separation Time-Frequency (TF) masks and a set of fixed beamformers followed by a neural post-filter. At every time instance, a beamformed signal from one of the beamformers is determined to correspond to one of the active speakers, and the post-filter attempts to minimize interfering voices from the other active speakers which still exist in the beamformed signal. Some embodiments may achieve separation accuracy comparable to or better than prior methods while significantly reducing processing latency.
Signals 110 are processed with a set of fixed beamformers 120. Each of fixed beamformers 120 may be associated with a particular focal direction. Some embodiments may employ eighteen fixed beamformers 120, each with a distinct focal direction separated by 20 degrees from its neighboring beamformers. Such beamformers may be designed based on the super-directive beamforming approach or the delay-and-sum beamforming approach. Alternatively, the beamformers may be learned from pre-defined training data such that an average loss function, such as the mean squared error between the beamformed and clean signals, is minimized over the training data.
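For illustration, the following is a minimal sketch of such a fixed delay-and-sum beamformer bank with eighteen beams spaced 20 degrees apart. The circular seven-microphone geometry, sampling rate, and FFT size are illustrative assumptions rather than requirements of the embodiments.

```python
# A minimal sketch of a fixed delay-and-sum beamformer bank, assuming a
# hypothetical 7-microphone circular array of radius 4.25 cm; the array
# geometry, sampling rate, and FFT size are illustrative assumptions only.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
FS = 16000              # sampling rate (assumed)
N_FFT = 512             # FFT size (assumed)

def circular_array_positions(num_mics=7, radius=0.0425):
    """Return (num_mics, 2) xy coordinates: one center mic plus a ring."""
    angles = 2 * np.pi * np.arange(num_mics - 1) / (num_mics - 1)
    ring = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.vstack(([0.0, 0.0], ring))

def steering_vector(mic_xy, focal_deg, freqs):
    """Far-field steering vector for each frequency bin, shape (F, M)."""
    direction = np.array([np.cos(np.deg2rad(focal_deg)),
                          np.sin(np.deg2rad(focal_deg))])
    delays = mic_xy @ direction / SPEED_OF_SOUND          # (M,)
    return np.exp(-2j * np.pi * freqs[:, None] * delays)  # (F, M)

def fixed_beamformer_bank(stft, mic_xy, num_beams=18):
    """Apply delay-and-sum beamformers spaced 20 degrees apart.

    stft: complex STFT of the microphone signals, shape (M, T, F).
    Returns beamformed STFTs, shape (num_beams, T, F).
    """
    M, T, F = stft.shape
    freqs = np.fft.rfftfreq(N_FFT, d=1.0 / FS)[:F]
    outputs = np.empty((num_beams, T, F), dtype=complex)
    for b in range(num_beams):
        w = steering_vector(mic_xy, focal_deg=20 * b, freqs=freqs) / M  # (F, M)
        # Align phases toward the focal direction and average across mics.
        outputs[b] = np.einsum('fm,mtf->tf', np.conj(w), stft)
    return outputs
```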
Audio signals 110 are also received by feature extraction component 130. Feature extraction component 130 extracts first features from audio signals 110. According to some embodiments, the first features include a magnitude spectrum of one audio signal of audio signals 110 which was captured by a reference microphone. The extracted first features may also include inter-microphone phase differences computed between the audio signal captured by the reference microphone and the audio signals captured by each of the other microphones.
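A minimal sketch of this first-feature extraction is shown below, assuming the STFT of each microphone channel is already available; the choice of reference channel and the cosine/sine encoding of the phase differences are assumptions.

```python
# A minimal sketch of the first-feature extraction described above: the
# magnitude spectrum of a reference channel plus inter-microphone phase
# differences (IPDs) against that reference. The reference channel index
# and the cos/sin IPD encoding are assumptions.
import numpy as np

def extract_first_features(stft, ref_mic=0):
    """stft: complex STFT of the microphone signals, shape (M, T, F).

    Returns a feature matrix of shape (T, F * (1 + 2 * (M - 1))).
    """
    M, T, F = stft.shape
    ref = stft[ref_mic]                          # (T, F)
    feats = [np.abs(ref)]                        # magnitude spectrum
    for m in range(M):
        if m == ref_mic:
            continue
        ipd = np.angle(stft[m]) - np.angle(ref)  # inter-mic phase difference
        # Encode the wrapped phase difference as cosine and sine so the
        # feature is continuous at the +/- pi boundary.
        feats.extend([np.cos(ipd), np.sin(ipd)])
    return np.concatenate(feats, axis=-1)
```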
The first features are fed to TF mask generation component 140, which generates TF masks, each associated with either of two output channels (Out1 and Out2), based on the extracted features. Each output channel of TF mask generation component 140 represents a different sound source within a short time segment of audio signals 110. System 100 uses two output channels because three or more people rarely speak simultaneously within a meeting, but embodiments may employ three or more output channels.
A TF mask associates each TF point of the TF representations of audio signals 210 with its dominant sound source (e.g., Speaker1, Speaker2). More specifically, for each TF point, the TF mask of Out1 (or Out2) represents a probability from 0 to 1 that the speaker associated with Out1 (or Out2) dominates the TF point. In some embodiments, the TF mask of Out1 (or Out2) can take any number that represents the degree of confidence that the corresponding TF point is dominated by the speaker associated with Out1 (or Out2). If only one speaker is speaking, the TF mask of Out1 (or Out2) may comprise all 1's and the TF mask of Out2 (or Out1) may comprise all 0's. As will be described in detail below, TF mask generation component 140 may be implemented by a neural network trained with a mean-squared error permutation invariant training loss.
Output channels Out1 and Out2 are provided to enhancement components 150 and 160 to generate output signals 155 and 165 representing first and second sound sources (i.e., speakers), respectively. Enhancement component 150 (or 160) treats the speaker associated with Out1 (or Out2) as a target speaker and the speaker associated with Out2 (or Out1) as an interfering speaker, and generates output signal 155 (or 165) in such a way that the output signal contains only the target speaker. In operation, each enhancement component 150 and 160 determines, based on the TF masks generated by TF mask generation component 140, the directions of the target and interfering speakers. Based on the target speaker direction, one of the beamformed signals generated by fixed beamformers 120 is selected. Each enhancement component 150 and 160 then extracts second features from audio signals 110, the selected beamformed signal, and the target and interference speaker directions, and generates an enhancement TF mask based on the extracted second features. The enhancement TF mask is applied to (e.g., multiplied with) the selected beamformed signal to generate a substantially non-overlapped audio signal (155, 165) associated with the target speaker. The non-overlapped audio signals may then be submitted to a speech recognition engine to generate a meeting transcription.
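The beam selection and mask application steps might be sketched as follows; the 20-degree beam spacing follows the earlier example, and the helper names are hypothetical.

```python
# A minimal sketch of the per-channel enhancement step: pick the fixed beam
# whose focal direction is closest to the estimated target direction, then
# apply the enhancement TF mask to that beamformed signal.
import numpy as np

def select_beam(beamformed, target_deg, beam_spacing_deg=20):
    """beamformed: (num_beams, T, F). Returns the beam whose focal
    direction is closest to the estimated target direction."""
    num_beams = beamformed.shape[0]
    focal_dirs = beam_spacing_deg * np.arange(num_beams)
    diff = np.abs((focal_dirs - target_deg + 180) % 360 - 180)  # circular distance
    return beamformed[int(np.argmin(diff))]                     # (T, F)

def enhance(beamformed, target_deg, enhancement_mask):
    """Apply the enhancement mask to the selected beam.

    enhancement_mask: real-valued mask in [0, 1], shape (T, F), produced by
    the enhancement mask network from the second features.
    Returns a masked STFT; an inverse STFT then yields the output waveform.
    """
    selected = select_beam(beamformed, target_deg)
    return enhancement_mask * selected
```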
Each component of system 100 and otherwise described herein may be implemented by one or more computing devices (e.g., computer servers), storage devices (e.g., hard or solid-state disk drives), and other hardware as is known in the art. The components may be located remote from one another and may be elements of one or more cloud computing platforms, including but not limited to a Software-as-a-Service, a Platform-as-a-Service, and an Infrastructure-as-a-Service platform. According to some embodiments, one or more components are implemented by one or more dedicated virtual machines.
In some embodiments, TF mask generation component 140 is realized using a neural network trained with permutation invariant training (PIT). One advantage of implementing component 140 as a PIT-trained neural network, in comparison to other speech separation mask estimation schemes such as spatial clustering, deep clustering, and deep attractor networks, is that a PIT-trained network does not require prior knowledge of the number of active speakers. If only one speaker is active, a PIT-trained network yields zero-valued TF masks from any extra output channels. However, implementations of TF mask generation component 140 are not necessarily limited to a neural network trained with PIT.
A neural network trained with PIT can not only separate speech signals within each short time frame but can also maintain a consistent order of output signals across short time frames. This is because the network is penalized during training if it changes the output signal order midway through an utterance.
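A minimal sketch of a two-output, utterance-level PIT loss with mean squared error is shown below; tensor shapes and the use of magnitude-domain targets are assumptions.

```python
# A minimal sketch of a two-output permutation invariant training (PIT)
# loss with mean squared error, computed at the utterance level so the
# output order is penalized if it flips mid-utterance.
import torch

def pit_mse_loss(est1, est2, ref1, ref2):
    """est*/ref*: estimated and reference magnitude spectra, (B, T, F).

    The loss is the minimum over the two possible output-to-speaker
    assignments, evaluated over the whole utterance.
    """
    perm_a = torch.mean((est1 - ref1) ** 2 + (est2 - ref2) ** 2, dim=(1, 2))
    perm_b = torch.mean((est1 - ref2) ** 2 + (est2 - ref1) ** 2, dim=(1, 2))
    # Choosing the better permutation per utterance (not per frame) is what
    # encourages a consistent output order across short time frames.
    return torch.minimum(perm_a, perm_b).mean()
```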
The above-described PIT-trained network assigns an output channel to each separated speech frame consistently across short time frames, but this ordering may break down over longer time spans. For example, the network is trained on mixed speech segments of up to TTR (=10) seconds during the learning phase, so the resultant model does not necessarily keep the output order consistent beyond TTR seconds. In addition, an RNN's state values tend to saturate when exposed to a long feature vector stream. Therefore, some embodiments refresh the state values periodically in order to keep the RNN working.
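The periodic state refresh might look like the following sketch, in which a streaming LSTM drops its accumulated state every fixed number of frames; the refresh interval and layer sizes are assumptions.

```python
# A minimal sketch of periodically refreshing an RNN's state values during
# streaming inference so the states do not saturate on a long feature
# stream. The refresh interval and LSTM dimensions are assumptions.
import torch
import torch.nn as nn

class StreamingMaskRNN(nn.Module):
    def __init__(self, feat_dim, hidden=600, refresh_frames=1000):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.refresh_frames = refresh_frames
        self.state = None
        self.frames_since_refresh = 0

    def step(self, features):
        """features: (1, chunk_len, feat_dim) chunk of the input stream."""
        if self.frames_since_refresh >= self.refresh_frames:
            self.state = None          # refresh: drop the accumulated state
            self.frames_since_refresh = 0
        out, self.state = self.lstm(features, self.state)
        self.frames_since_refresh += features.shape[1]
        return out
```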
Feature extraction component 154 extracts features from original audio signals 110 based on the determined directions and the beamformed signal selected at beam selection component 153. TF mask generation component 156 generates a TF mask based on the extracted features. TF mask application component 158 applies the generated TF mask to the beamformed signal selected at beam selection component 153, corresponding to the determined target speaker direction, to generate output audio signal 155.
Sound source localization components 151 and 152 estimate the target and interference speaker directions every NS frames, or 0.016 NS seconds when the frame shift is 0.016 seconds, according to some embodiments. For each of the target and interference directions, sound source localization may be performed based on audio signals 110 and the TF masks of frames (n−NW, n], where n refers to the current frame index. The estimated directions are used for processing the frames in (n−NM−NS, n−NM], resulting in a delay of NM frames. A "margin" of length NM may be introduced so that sound source localization leverages a small amount of future context. In some embodiments, NM, NS, and NW are set at 20, 10, and 50, respectively.
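The update schedule described above might be organized as in the following sketch; the helper estimate_directions is a placeholder for the maximum likelihood search described below, and the frame indexing is one possible reading of the intervals.

```python
# A minimal sketch of the localization update schedule: directions are
# re-estimated every NS frames from the masks of frames (n - NW, n] and
# applied to frames (n - NM - NS, n - NM], giving a margin of NM frames.
N_M, N_S, N_W = 20, 10, 50

def localization_schedule(n, masks, signals, estimate_directions):
    """Called once per frame; returns updated directions every NS frames."""
    if n % N_S != 0:
        return None
    obs_frames = range(max(0, n - N_W + 1), n + 1)                 # (n - NW, n]
    target_dir, interf_dir = estimate_directions(signals, masks, obs_frames)
    apply_frames = range(max(0, n - N_M - N_S + 1), n - N_M + 1)   # (n - NM - NS, n - NM]
    return target_dir, interf_dir, apply_frames
```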
Sound source localization may be performed with maximum likelihood estimation using the TF masks as observation weights. It is hypothesized that each magnitude-normalized multi-channel observation vector, zt,f, follows a complex angular Gaussian distribution as follows:
$$p(\mathbf{z}_{t,f}\mid\omega) = \frac{(M-1)!}{2\pi^{M}\,\lvert \mathbf{B}_{f,\omega}\rvert}\left(\mathbf{z}_{t,f}^{\mathsf{H}}\,\mathbf{B}_{f,\omega}^{-1}\,\mathbf{z}_{t,f}\right)^{-M}$$
where $\omega$ denotes an incident angle, $M$ the number of microphones, and $\mathbf{B}_{f,\omega} = \mathbf{h}_{f,\omega}\mathbf{h}_{f,\omega}^{\mathsf{H}} + \varepsilon\mathbf{I}$, with $\mathbf{h}_{f,\omega}$, $\mathbf{I}$, and $\varepsilon$ being the steering vector for angle $\omega$ at frequency $f$, an $M$-dimensional identity matrix, and a small flooring value, respectively. Given a set of observations, $Z=\{\mathbf{z}_{t,f}\}$, the following log likelihood function is to be maximized with respect to $\omega$:

$$\mathcal{L}(\omega) = \sum_{t,f} m_{t,f}\,\log p(\mathbf{z}_{t,f}\mid\omega)$$

where $\omega$ can take a discrete value between 0 and 360 degrees and $m_{t,f}$ denotes the TF mask provided by the separation network. It can be shown that the log likelihood function reduces to the following simple form:
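One reconstruction of this simplified form, obtained by applying the Sherman-Morrison formula to $\mathbf{B}_{f,\omega}^{-1}$ under the assumptions that the steering vectors have unit norm and that terms independent of $\omega$ are dropped, is:

$$\mathcal{L}(\omega) = -M \sum_{t,f} m_{t,f}\,\log\!\left(1 - \frac{\lvert \mathbf{h}_{f,\omega}^{\mathsf{H}}\mathbf{z}_{t,f}\rvert^{2}}{1+\varepsilon}\right) + \text{const.}$$

Under these assumptions, $\lvert\mathbf{B}_{f,\omega}\rvert$ does not depend on $\omega$, so only the projection term $\lvert\mathbf{h}_{f,\omega}^{\mathsf{H}}\mathbf{z}_{t,f}\rvert^{2}$ drives the search over candidate directions.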
L(ω) is computed for every possible discrete direction. For example, in some embodiments, it is computed for every 5 degrees. The ω value that results in the highest score is then determined as the target speaker's direction.
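A minimal sketch of this search is shown below, scoring every candidate direction in 5-degree steps with the mask-weighted log likelihood; it uses the reconstructed simplified score above, and the steering_vectors lookup table of unit-norm steering vectors is an assumed precomputed input.

```python
# A minimal sketch of the masked maximum likelihood direction search: score
# each candidate direction with the mask-weighted log likelihood and pick
# the best. Constant factors (M) and additive constants are dropped since
# they do not change the argmax.
import numpy as np

def localize(stft, mask, steering_vectors, eps=1e-2, step_deg=5):
    """stft: (M, T, F) complex observations; mask: (T, F) TF mask;
    steering_vectors: dict mapping angle -> (F, M) unit-norm steering vectors.
    Returns the angle (in degrees) with the highest mask-weighted score.
    """
    # Magnitude-normalize the observation vectors: z_{t,f} = x_{t,f} / ||x_{t,f}||.
    z = stft / (np.linalg.norm(stft, axis=0, keepdims=True) + 1e-12)  # (M, T, F)
    best_angle, best_score = None, -np.inf
    for angle in range(0, 360, step_deg):
        h = steering_vectors[angle]                                   # (F, M)
        proj = np.abs(np.einsum('fm,mtf->tf', np.conj(h), z)) ** 2    # |h^H z|^2
        score = -np.sum(mask * np.log(1.0 - proj / (1.0 + eps) + 1e-12))
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```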
For each of the target and interference beamformer directions, feature extraction component 154 calculates a directional feature for each TF bin as a sparsified version of the cosine distance between the direction's steering vector and the multi-channel microphone array signal 110. Also extracted are the inter-microphone phase difference of each microphone for the direction, and a TF representation of the beamformed signal associated with the direction. The extracted features are input to TF mask generation component 156.
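One plausible form of the sparsified directional feature is sketched below; the specific sparsification rule (thresholding the cosine similarity) and the threshold value are assumptions rather than the method's definitive definition.

```python
# A minimal sketch of one plausible directional feature: the cosine
# similarity between a direction's steering vector and the observed
# multi-channel STFT vector at each TF bin, "sparsified" here by zeroing
# values below a threshold.
import numpy as np

def directional_feature(stft, steering, threshold=0.5):
    """stft: (M, T, F) complex STFT; steering: (F, M) unit-norm steering
    vectors for the direction of interest. Returns a (T, F) feature map.
    """
    z = stft / (np.linalg.norm(stft, axis=0, keepdims=True) + 1e-12)
    # Cosine similarity between the steering vector and the observation.
    cos_sim = np.abs(np.einsum('fm,mtf->tf', np.conj(steering), z))   # in [0, 1]
    # Sparsify: keep only TF bins that clearly agree with this direction.
    return np.where(cos_sim >= threshold, cos_sim, 0.0)
```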
TF mask generation component 156 may utilize a direction-informed target speech extraction method such as that proposed by Z. Chen, X. Xiao, T. Yoshioka, H. Erdogan, J. Li, and Y. Gong in "Multi-channel overlapped speech recognition with location guided speech extraction network," Proc. IEEE Worksh. Spoken Language Tech., 2018. The method uses a neural network that accepts the features computed based on the target and interference directions, so as to focus on the target direction and give less attention to the interference direction. According to some embodiments, component 156 consists of four unidirectional LSTM layers, each with 600 units, and is trained to minimize the mean squared error between the clean and TF mask-processed signals.
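A minimal sketch of such a mask network, with four unidirectional LSTM layers of 600 units and a mean squared error training objective, might look as follows; the input and output dimensions are assumptions.

```python
# A minimal sketch of an enhancement mask network with four unidirectional
# LSTM layers of 600 units each, producing a sigmoid TF mask and trained
# with a mean squared error between the clean and mask-processed spectra.
import torch
import torch.nn as nn

class EnhancementMaskNet(nn.Module):
    def __init__(self, feat_dim, num_freq_bins, hidden=600):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=4, batch_first=True)
        self.proj = nn.Linear(hidden, num_freq_bins)

    def forward(self, features):
        """features: (B, T, feat_dim) second features; returns a (B, T, F) mask."""
        out, _ = self.lstm(features)
        return torch.sigmoid(self.proj(out))

def mask_mse_loss(mask, beamformed_mag, clean_mag):
    """MSE between the mask-processed beamformed magnitude and the clean one."""
    return torch.mean((mask * beamformed_mag - clean_mag) ** 2)
```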
Initially, a first plurality of audio signals is received at S810. The first plurality of audio signals is captured by an audio capture device equipped with multiple microphones. For example, S810 may comprise reception of a multi-channel audio signal from a system such as system 220.
At S820, a second plurality of beamformed signals is generated based on the first plurality of audio signals. Each of the second plurality of beamformed signals is associated with a respective one of a second plurality of beamformer directions. S820 may comprise processing of the first plurality of audio signals using a set of fixed beamformers, with each of the fixed beamformers corresponding to a respective direction toward which it steers the beamforming directivity.
First features are extracted based on the first plurality of audio signals at S830. The first features may include, for example, inter-microphone phase differences with respect to a reference microphone and a spectrogram of one channel of the multi-channel audio signal. TF masks, each associated with one of two or more output channels, are generated at S840 based on the extracted features.
Next, at S850, a first direction corresponding to a target speaker and a second direction corresponding to a second speaker are determined based on the TF masks generated for the output channels. At S855, one of the second plurality of beamformed signals which corresponds to the first direction is selected.
Second features are extracted from the first plurality of audio signals at S860 for each output channel based on the first and second directions determined for the output channel. An enhancement TF mask is then generated at S870 for each output channel based on the second features extracted for the output channel. The enhancement TF mask of each output channel is applied at S880 to the selected beamformed signal. The enhancement TF mask is intended to de-emphasize an interfering sound source which might be present in the selected beamformed signal to which it is applied.
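Tying the steps together, a high-level sketch of S810 through S880 might look as follows; all helper and model names are illustrative placeholders from the earlier sketches, and streaming buffers and error handling are omitted.

```python
# A minimal end-to-end sketch of the processing flow S810-S880, using the
# hypothetical helpers sketched earlier. mask_net, enh_net, and
# extract_second_features are placeholder callables returning arrays.
import numpy as np

def separate(stft, mic_xy, mask_net, enh_net, steering_vectors):
    """stft: (M, T, F) multi-channel STFT received at S810."""
    beams = fixed_beamformer_bank(stft, mic_xy)                  # S820
    first_feats = extract_first_features(stft)                   # S830
    tf_masks = mask_net(first_feats)                             # S840: (2, T, F)
    outputs = []
    for ch, other in [(0, 1), (1, 0)]:
        target_dir = localize(stft, tf_masks[ch], steering_vectors)      # S850
        interf_dir = localize(stft, tf_masks[other], steering_vectors)
        selected = select_beam(beams, target_dir)                        # S855
        second_feats = extract_second_features(stft, selected,
                                                target_dir, interf_dir)  # S860
        enh_mask = enh_net(second_feats)                                  # S870
        outputs.append(enh_mask * selected)                               # S880
    return outputs  # two masked STFTs with substantially non-overlapping speech
```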
As shown, transcription service 910 may be implemented as a cloud service providing transcription of multi-channel audio signals received over cloud 920. The transcription service may implement speech separation to separate overlapping speech signals from the multi-channel audio signals according to some embodiments.
One of client devices 930, 932 and 934 may capture a multi-channel directional audio signal as described herein and request transcription of the audio signal from transcription service 910. Transcription service 910 may perform speech separation and perform speech recognition on the separated signals to generate a transcript. According to some embodiments, the client device specifies a type of capture system used to capture the multi-channel directional audio signal in order to provide the geometry and number of capture devices to transcription service 910. Transcription service 910 may in turn access transcript storage service 940 to store the generated transcript. One of client devices 930, 932 and 934 may then access transcript storage service 940 to request a stored transcript.
System 1000 includes processing unit 1010 operatively coupled to communication device 1020, persistent data storage system 1030, one or more input devices 1040, one or more output devices 1050 and volatile memory 1060. Processing unit 1010 may comprise one or more processors, processing cores, etc. for executing program code. Communication interface 1020 may facilitate communication with external devices, such as client devices, and data providers as described herein. Input device(s) 1040 may comprise, for example, a keyboard, a keypad, a mouse or other pointing device, a microphone, a touch screen, and/or an eye-tracking device. Output device(s) 1050 may comprise, for example, a display (e.g., a display screen), a speaker, and/or a printer.
Data storage system 1030 may comprise any number of appropriate persistent storage devices, including combinations of magnetic storage devices (e.g., magnetic tape and hard disk drives), flash memory, optical storage devices, Read Only Memory (ROM) devices, etc. Memory 1060 may comprise Random Access Memory (RAM), Storage Class Memory (SCM) or any other fast-access memory.
Transcription service 1032 may comprise program code executed by processing unit 1010 to cause system 1000 to receive multi-channel audio signals and provide two or more output audio signals consisting of non-overlapping speech as described herein. Node operator libraries 1034 may comprise program code to execute functions of trained nodes of a neural network to generate TF masks as described herein. Audio signals 1036 may include both received multi-channel audio signals and two or more output audio signals consisting of non-overlapping speech. Beamformed signals 1038 may comprise signals generated by fixed beamformers based on input multi-channel audio signals as described herein. Data storage device 1030 may also store data and other program code for providing additional functionality and/or which are necessary for operation of system 1000, such as device drivers, operating system files, etc.
Each functional component described herein may be implemented at least in part in computer hardware, in program code and/or in one or more computing systems executing such program code as is known in the art. Such a computing system may include one or more processing units which execute processor-executable program code stored in a memory system.
The foregoing diagrams represent logical architectures for describing processes according to some embodiments, and actual implementations may include more or different components arranged in other manners. Other topologies may be used in conjunction with other embodiments. Moreover, each component or device described herein may be implemented by any number of devices in communication via any number of other public and/or private networks. Two or more of such computing devices may be located remote from one another and may communicate with one another via any known manner of network(s) and/or a dedicated connection. Each component or device may comprise any number of hardware and/or software elements suitable to provide the functions described herein as well as any other functions. For example, any computing device used in an implementation of a system according to some embodiments may include a processor to execute program code such that the computing device operates as described herein.
All systems and processes discussed herein may be embodied in program code stored on one or more non-transitory computer-readable media. Such media may include, for example, a hard disk, a DVD-ROM, a Flash drive, magnetic tape, and solid state Random Access Memory (RAM) or Read Only Memory (ROM) storage units. Embodiments are therefore not limited to any specific combination of hardware and software.
Those in the art will appreciate that various adaptations and modifications of the above-described embodiments can be configured without departing from the claims. Therefore, it is to be understood that the claims may be practiced other than as specifically described herein.
Number | Name | Date | Kind |
---|---|---|---|
20150071455 | Tzirkel-Hancock | Mar 2015 | A1 |
20190043491 | Kupryjanow | Feb 2019 | A1 |
20200027451 | Cantu | Jan 2020 | A1 |
Entry |
---|
Chen, et al., “Efficient Integration of Fixed Beamformers and Speech Separation Networks for Multi-Channel Far-Field Speech Separation”, In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr. 15, 2018, pp. 5384-5388. |
“International Search Report and Written Opinion Issued in PCT Application No. PCT/US2020/019851”, dated May 11, 2020, 37 Pages. |
Wang, et al., “Supervised Speech Separation Based on Deep Learning: An Overview”, In Journal of IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, Issue 10, Oct. 2018, pp. 1702-1726. |
Yoshioka, et al., “Low-Latency Speaker-Independent Continuous Speech Separation”, In Journal of Computing Research Repository, Apr. 13, 2019, 5 Pages. |
Wang, Zhong-Qiu et al., “Integrating spectral and spatial features for multi-channel speaker separation”, In Proceedings of Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, Sep. 2, 2018, (pp. 2718-2722, 5 total pages). |
Boeddeker, Christoph et al., “Exploring practical aspects of neural mask-based beamforming for far-field speech recognition”, In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 17, 2018, 5 Pages. |
Cetin, Ozgur et al., “Analysis of overlaps in meetings by dialog factors, hot spots, speakers, and collection site: Insights for automatic speech recognition”, In Proceedings of Ninth International Conference on Spoken Language Processing, Sep. 17, 2006, (pp. 293-296, 4 total pages). |
Chen, Zhuo et al., “Deep attractor network for single-microphone speaker separation”, In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 5, 2017, (pp. 246-250, 5 total pages). |
Chen, Zhuo et al., “Multi-channel overlapped speech recognition with location guided speech extraction network”, In Proceedings of 2018 IEEE Spoken Language Technology Workshop, SLT 2018, Athens, Greece, Dec. 18, 2018, 8 Pages. |
Drude, Lukas et al., “Source counting in speech mixtures using a variational em approach for complex watson mixture models”, In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, May 4, 2014, (pp. 6834-6838, 5 total pages). |
Drude, Lukas et al., “Tight integration of spatial and spectral features for BSS with deep clustering embeddings”, In Proceedings of 18th Annual Conference of the International Speech Communication Association, Aug. 20, 2017, (pp. 2650-2654, 5 total pages). |
Fiscus, Jonathan G. et al., “Multiple dimension Levenshtein edit distance calculations for evaluating automatic speech recognition systems during simultaneous speech”, In Proceedings of the International Conference on Language Resources and Evaluation, May 2006, (pp. 803-808, 6 total pages). |
Hershey, John R. et al., “Deep clustering: Discriminative embeddings for segmentation and separation”, In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 20, 2016, (pp. 31-35, 5 total pages). |
Heymann, Jahn et al., “BLSTM supported GEV beamformer front-end for the 3rd CHiME challenge”, In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding, Dec. 13, 2015, (pp. 444-451, 8 total pages). |
Ito, Nobutaka et al., “Complex angular central Gaussian mixture model for directional statistics in mask-based microphone array signal processing”, In Proceedings of European Signal Processing Conference (EUSIPCO), 2016, (pp. 1153-1157, 5 total pages). |
Kolbaek, Morten et al., “Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks”, In Journal of IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 25, Issue 10, Oct. 2017, (pp. 1901-1913, 13 total pages). |
Oord, Aaron et al., “Wavenet: A Generative Model for Raw Audio”, Published in arXiv preprint, arXiv:1609.03499, Sep. 19, 2016, 15 Pages. |
Ozerov, Alexey, et al., “Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation”, In Proceedings of IEEE Trans. Audio, Speech, Language Process., vol. 18, No. 3, 2010, (pp. 550-563, 14 total pages). |
Sawada, Hiroshi et al., “Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment”, In Proceedings of IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, No. 3, Mar. 2011, (pp. 516-527, 12 total pages). |
Wang, Zhong-Qiu et al., “Multi-channel deep clustering: discriminative spectral and spatial embeddings for speaker-independent speech separation”, In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2018, Calgary, AB, Canada, Apr. 15, 2018, pp. 1-5. |
Yoshioka, Takuya et al., “Generalization of Multi-Channel Linear Prediction Methods for Blind MIMO Impulse Response Shortening”, In Journal of IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, Issue 10, Dec. 2012, (pp. 2707-2720, 14 total pages). |
Yoshioka, Takuya et al., “Multi-Microphone Neural Speech Separation for far-field multi-talker Speech Recognition”, In Proceedings of ICASSP, Apr. 17, 2018, (pp. 5739-5743, 5 total pages). |
Yoshioka, Takuya, et al., “Recognizing overlapped speech in meetings: A multichannel separation approach using neural networks”, In Proceedings of Interspeech, 2018, (pp. 3038-3042, 5 total pages). |
Yoshioka, Takuya et al., “The NTT CHiME-3 system: advances in speech enhancement and recognition for mobile multi-microphone devices”, In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding, Dec. 13, 2015, (pp. 436-443, 8 total pages). |
Chang, Shuo-Yiin et al., “Temporal modeling using dilated convolution and gating for voice-activity-detection”, In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2018, Apr. 15, 2018, (pp. 5549-5553, 5 total pages). |
Ito, Nobutaka et al., “Relaxed disjointness based clustering for joint blind source separation and dereverberation”, In Proceedings of 14th International Workshop on Acoustic Signal Enhancement, IWAENC 2014, Juan-les-Pins, France, Sep. 8, 2014, (pp. 268-272, 5 total pages). |
Makino, S. et al., “Blind speech separation”, In Publication of Springer Netherlands, 2007, (Parts 1 to 6, 439 total pages). |
Kolbaek, Morten et al., “Multi-talker Speech Separation and Tracing with Permutation Invariant Training of Deep Recurrent Neural Networks”, arXiv:1703.06284v1 [cs.SD], Mar. 18, 2017, 10 Pages. |