The present invention relates generally to speech processing, and more particularly, to distinguishing the individual speech of simultaneous speakers.
Despite many years of intensive effort by a large research community, automatic separation of competing or simultaneous speakers remains an unsolved, outstanding problem. Such competing or simultaneous speech commonly occurs in telephony or broadcast situations, where either two speakers, or a speaker and some other sound (such as ambient noise), are simultaneously received over the same channel. To date, efforts that exploit speech-specific information to reduce the effects of multiple-speaker interference have been largely unsuccessful. For example, the assumptions of past blind signal separation approaches often do not hold in normal speaking and telephony environments.
The extreme difficulty that automated systems face in dealing with competing sound sources stands in stark contrast to the remarkable ease with which humans and most animals perceive and parse complex, overlapping auditory events in their surrounding world of sounds. This facility, known as auditory scene analysis, has recently been the focus of intensive research and mathematical modeling, which has yielded fascinating insights into the properties of the acoustic features and cues that humans automatically utilize to distinguish between simultaneous speakers.
A related yet more general problem occurs when the competing sound source is not speech, but is instead arbitrary yet distinct from the desired sound source. For example, when on location recording for a movie or news program, the sonic environment is often not as quiet as would be ideal. During sound production, it would be useful to have available methods that allow for the reduction of undesired background or ambient sounds, while maintaining desired sounds, such as dialog.
The problem of speaker separation is also called “co-channel speech interference.” One prior art approach to the co-channel speech interference problem is blind signal separation (BSS), which approximately recovers unknown signals or “sources” from their observed mixtures. Typically, such mixtures are acquired by a number of sensors, where each sensor receives a different combination of the source signals. The term “blind” is employed, because the only a priori knowledge of the signals is their statistical independence. An article by J. Cardoso (“Blind Signal Separation: Statistical Principles” IEEE Proceedings, Vol. 86, No 10, October 1998, pp. 2009-2025) describes the technique.
In general, BSS is based on the hypothesis that the source signals are stochastically mutually independent. The article by Cardoso noted above, and a related article by S. Amari and A. Cichocki ("Adaptive Blind Signal Processing-Neural Network Approaches," IEEE Proceedings, Vol. 86, No. 10, October 1998, pp. 2026-2048), provide heuristic algorithms for BSS of speech. Such algorithms have originated from traditional signal processing theory and from various other backgrounds, such as neural networks, information theory, statistics, and system theory. However, most such algorithms deal with the instantaneous mixture of sources, and only a few methods examine the situation of convolutive mixtures of speech signals. The case of instantaneous mixture is the simplest case of BSS and can be encountered when multiple speakers are talking simultaneously in an anechoic room with no reverberation effects and sound reflections. However, when dealing with real room acoustics (i.e., in a broadcast studio, over a speakerphone, or even in a phone booth), the effect of reverberation is significant. Depending upon the amount and type of the room noise, and the strength of the reverberation, the resulting speech signals received by the microphones may be highly distorted, which will significantly reduce the effectiveness of such prior art speech separation algorithms.
To quote a recent experimental study: “ . . . reverberation and room noise considerably degrade the performance of BSSD (blind source separation and deconvolution) algorithms. Since current BSSD algorithms are so sensitive to the environments in which they are used, they will only perform reliably in acoustically treated spaces devoid of persistent noises.” (A. Westner and V. M. Bove, Jr., “Applying Blind Source Separation and Deconvolution to Real-World Acoustic Environments,” Proc. 106th Audio Engineering Society (AES) Convention, 1999.)
Thus, BSS techniques, while representing an area of active research, have not produced successful results when applied to speech recognition under co-channel speech interference. In addition, BSS requires more than one microphone, which often is not practical in most broadcast and telephony speech recognition applications. It would be desirable to provide a technique capable of solving the problem of simultaneous speakers, which requires only one microphone, and which is inherently less sensitive to non-ideal room reverberation and noise.
Therefore, neither the currently popular single-microphone approaches nor the known multiple-microphone approaches, which have proven successful in addressing mild acoustic distortion, have provided satisfactory solutions for dealing with difficult co-channel speech interference and long-delay acoustic reverberation problems. This shortcoming stems in part from the inherent infrastructure of existing state-of-the-art speech recognizers, which requires relatively short, fixed-frame feature inputs or prior statistical information about the interference sources.
If automatic speech recognition (ASR) systems, speakerphones, or enhancement systems for the hearing impaired are to become truly comparable to human performance, they must be able to segregate multiple speakers and focus on one among many, to “fill in” missing speech information interrupted by brief bursts of noise, and to tolerate changing patterns of reverberation due to different room acoustics. Humans with normal hearing are often able to accomplish these feats through remarkable perceptual processes known collectively as auditory scene analysis. The mechanisms that give rise to such an ability are an amalgam of relatively well-known bottom-up sound processing stages in the early and central auditory system, and less understood top-down attention phenomena involving whole brain function. It would be desirable to provide ASR techniques capable of solving the simultaneous speaker problem noted above. It would further be desirable to provide ASR techniques capable of solving the simultaneous speaker problem modeled at least in part, on auditory scene analysis.
Preferably, such techniques should be usable in conjunction with existing ASR systems. It would thus be desirable to provide enhancement preprocessors that can be used to process input signals into existing ASR systems. Such techniques should be language independent and capable of separating different, non-speech sounds, such as multiple musical instruments, in a single channel.
The present invention is directed to a method for recovering an audio signal produced by a desired source from an audio channel in which audio signals from a plurality of different sources are combined. The method includes the steps of processing the audio channel with a joint acoustic modulation frequency algorithm to separate audio signals from the plurality of different sources into distinguishable components. Next, each distinguishable component corresponding to any source that is not desired in the audio channel is masked, so that the distinguishable component corresponding to the desired source remains unmasked. The distinguishable component that is unmasked is then processed with an inverse joint acoustic modulation frequency algorithm, to recover the audio signal produced by the desired source.
The step of processing the audio channel with the joint acoustic modulation frequency algorithm preferably includes the steps of applying a base acoustic transform to the audio channel and applying a second modulation transform to the result.
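By way of illustration only, the following MATLAB sketch outlines one plausible implementation of this two-stage analysis, assuming a Fourier basis for both the base acoustic transform and the second modulation transform; the function name, Hamming windowing, frame length, and hop size are illustrative assumptions rather than details taken from the appendices.

% Sketch of the forward joint acoustic/modulation frequency analysis.
% A Fourier basis is assumed for both stages; the window, frame length,
% and hop size are illustrative choices, not prescribed values.
function [magPlane, phasePlane] = jointTransform(x, frameLen, hop)
    x = x(:);                                          % force column vector
    win = 0.54 - 0.46 * cos(2 * pi * (0:frameLen - 1)' / (frameLen - 1));
    nFrames = floor((length(x) - frameLen) / hop) + 1;
    spec = zeros(frameLen, nFrames);
    for k = 1:nFrames
        seg = x((k - 1) * hop + (1:frameLen));         % extract one frame
        spec(:, k) = fft(seg(:) .* win);               % base acoustic transform
    end
    magSpec   = abs(spec);                             % magnitude spectrogram
    phaseSpec = angle(spec);                           % phase spectrogram
    % Second (modulation) transform: transform each acoustic-frequency bin
    % along the frame (time) axis.
    magPlane   = fft(magSpec,   [], 2);                % magnitude joint frequency plane
    phasePlane = fft(phaseSpec, [], 2);                % phase joint frequency plane
end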
The step of processing the distinguishable component that is unmasked with an inverse joint acoustic modulation frequency algorithm includes the steps of applying an inverse second modulation transform to the distinguishable component that is unmasked and applying an inverse base acoustic transform to the result.
The base acoustic transform separates the audio channel into a magnitude spectrogram and a phase spectrogram. Accordingly, the second modulation transform converts the magnitude spectrogram and the phase spectrogram into a magnitude joint frequency plane and a phase joint frequency plane. Masking each distinguishable component is implemented by providing a magnitude mask and a phase mask for each distinguishable component corresponding to any source that is not desired. Using each magnitude mask, a point-by-point multiplication is performed on the magnitude joint frequency plane, producing a modified magnitude joint frequency plane. Similarly, using each phase mask, a point-by-point addition on the phase joint frequency plane is performed, producing a modified phase joint frequency plane. Note that while a point-by-point operation is performed on both the magnitude joint frequency plane and the phase joint frequency plane, different types of operations are performed.
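As a concrete, non-limiting illustration of this masking step, the following MATLAB fragment assumes the joint frequency planes and the per-source masks have already been computed; the variable names are hypothetical.

% Point-by-point masking of the joint frequency planes. magMask and
% phaseMask are assumed to be the same size as the planes they modify;
% regions attributed to an undesired source carry 0 in magMask, and a
% phaseMask of zeros leaves the phase of retained regions unchanged.
magPlaneMod   = magPlane .* magMask;      % multiplicative magnitude mask
phasePlaneMod = phasePlane + phaseMask;   % additive phase mask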
The step of processing the distinguishable component that is unmasked with an inverse joint acoustic modulation frequency algorithm includes the step of performing an inverse second modulation transform on the modified magnitude joint frequency plane, producing a magnitude spectrogram. An inverse second modulation transform is then applied on the modified phase joint frequency plane, producing a phase spectrogram, and an inverse base acoustic transform is applied on the magnitude spectrogram and the phase spectrogram, to recover the audio signal produced by the desired source. Preferably, all of the transforms are executed by a computing device.
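A corresponding MATLAB sketch of the inverse processing is given below, again assuming a Fourier basis and the illustrative framing used in the forward sketch; overlap-add synthesis is an assumption, not a requirement of the invention.

% Sketch of the inverse joint acoustic/modulation frequency processing.
function y = inverseJointTransform(magPlaneMod, phasePlaneMod, hop)
    % Inverse second (modulation) transform along the frame (time) axis.
    magSpec   = real(ifft(magPlaneMod,   [], 2));      % magnitude spectrogram
    phaseSpec = real(ifft(phasePlaneMod, [], 2));      % phase spectrogram
    [frameLen, nFrames] = size(magSpec);
    % Inverse base acoustic transform, frame by frame, with overlap-add.
    y = zeros((nFrames - 1) * hop + frameLen, 1);
    for k = 1:nFrames
        frame = real(ifft(magSpec(:, k) .* exp(1i * phaseSpec(:, k))));
        idx = (k - 1) * hop + (1:frameLen)';
        y(idx) = y(idx) + frame;
    end
end

In use, the masked planes produced as described above would be passed to such a routine to recover an approximation of the audio signal produced by the desired source.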
In some applications of the present invention, the method will include the step of automatically selecting each distinguishable component corresponding to any source that is not desired. In addition, it may be desirable to enable a user to listen to the audio signal that was recovered, to determine if additional processing is desired. As a further option, the method may include the step of displaying the distinguishable components, and enabling a user to select the distinguishable component that corresponds to the audio signal from the desired source.
As yet another option, before the step of processing the audio channel with the joint acoustic modulation frequency algorithm, the method may include the step of separating the audio channel into a plurality of different analysis windows, such that each portion of the audio channel in an analysis window has relatively constant spectral characteristics. The plurality of different analysis windows are preferably selected such that vocalic and fricative sounds are not present in the same analysis window.
In one application of the present invention, the steps of the method will be implemented as a preprocessor in an automated speech recognition system, so that the audio signal produced by the desired source is recovered for automated speech recognition.
Another aspect of the present invention is directed to a memory medium storing machine instructions for carrying out the steps of the method.
Yet another aspect of the present invention is directed to a system for recovering an audio signal produced by a desired source from an audio channel in which audio signals from a plurality of different sources are combined. The system includes a memory in which are stored a plurality of machine instructions defining a single channel audio separation program. A processor is coupled to the memory, to access the machine instructions, and executes the machine instructions to carry out functions that are generally consistent with the steps of the method discussed above.
Still another aspect of the present invention is directed at processing the audio channel of a hearing aid to recover an audio signal produced by a desired source from undesired background sounds, so that only the audio signal produced by a desired source is amplified by the hearing aid. The steps of such a method are generally consistent with the steps of the method discussed above. A related aspect of the invention is directed to a hearing aid that is configured to execute functions that are generally consistent with the steps of the method discussed above, such that only an audio signal produced by a desired source is amplified by the hearing aid, avoiding the masking effects of undesired sounds.
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
Major features of the present invention include: (1) the ability to separate sounds from only a single channel of data, where this channel has a combination of all sounds to be separated; (2) employing joint acoustic/modulation frequency representations that enable speech from different speakers to be separated into separate regions; (3) the use of high fidelity filtering (analysis/synthesis) in joint acoustic/modulation frequencies to achieve speaker separation preprocessors, which can be integrated with current ASR systems; and (4) the ability to separate audio signals in a single channel that arise from multiple sources, even when such sources are other than human speech.
Joint acoustic/modulation frequency analysis and display tools that localize and separate sonorant portions of multiple-speakers' speech into distinct regions of two-dimensional displays are preferably employed. The underlying representation of these displays will be invertible after arbitrary modification. For example, and most commonly, if the regions representing one of the speakers are set to zero, then the inverted modified display should maintain the speech of only the other speaker. This approach should also be applicable to situations where speech interference can come from music or other non-speech sounds in the background.
In one preferred embodiment, the above technique is implemented using hardware manually controlled by a user. In another preferred embodiment, the technique is implemented using software that automatically controls the process. A working embodiment of a software implementation has been achieved using the signal processing language MATLAB.
Those of ordinary skill in the art will recognize that a joint acoustic/modulation frequency transform can simultaneously show signal energy as a function of acoustic frequency and modulation rate. Since it is possible to arbitrarily modify and invert this transform, the clear separability of the regions of sonorant sounds from different simultaneous speakers can be used to design speaker-separation mask filters.
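Purely as a hypothetical illustration of such a mask filter, a binary region selection over the joint plane might be constructed as follows; the bin ranges are placeholders, not values prescribed by the invention.

% Hypothetical binary mask that zeroes a rectangular region of the
% magnitude joint frequency plane attributed to an undesired speaker.
magMask = ones(size(magPlane));
acousticBins   = 10:40;                  % placeholder acoustic-frequency rows
modulationBins = 5:15;                   % placeholder modulation-frequency columns
magMask(acousticBins, modulationBins) = 0;
phaseMask = zeros(size(phasePlane));     % leave retained phase unchanged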
Thus, the representation of
Once the transforms of blocks 10 and 12 of
One crucial step preceding the computation of this new speech representation based on the concept of modulation frequency is to track the relatively stationary portions of the speech spectrum over the entire sentence. This tracking provides appropriate analysis windows over which the representation will be minimally "smeared" by speech acoustics with varying spectral characteristics. For example, it is preferable not to mix vocalic and fricative sounds in the same analysis window.
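The invention does not prescribe a particular tracking algorithm; one plausible sketch, offered only as an assumption, places analysis window boundaries wherever a frame-to-frame spectral change measure (here, spectral flux) exceeds a threshold, so that spectrally dissimilar sounds are not mixed within one window.

% Hypothetical boundary detection for analysis windows based on spectral
% flux; the threshold is arbitrary and would be tuned in practice.
function boundaries = findAnalysisBoundaries(magSpec, threshold)
    diffSpec = diff(magSpec, 1, 2);             % change between adjacent frames
    flux = sqrt(sum(diffSpec .^ 2, 1));         % spectral flux per frame transition
    boundaries = find(flux > threshold) + 1;    % frame indices starting new windows
end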
As noted above, the present invention facilitates the separation and removal of undesired noise interference from speech recordings. Empirical data indicates that the present invention provides superior noise reduction when compared to existing, conventional techniques.
The prior art has focused on the separation of multiple talkers for automatic speech recognition, but not on direct enhancement of an audio signal for human listening. Significantly, prior art techniques do not explicitly maintain any phase information. Further, such prior techniques do not utilize an analysis/synthesis formulation, nor do they employ filtering to allow explicit removal of the undesired sound or speaker while allowing playback of the desired sound or speaker. Finally, prior techniques have been intended to be applied to synthetic speech, a substantially simpler problem than natural speech.
Specific implementations of the present invention are shown in the accompanying figures and described below.
Also included in processing unit 832 are a random access memory (RAM) 836 and non-volatile memory 838, which typically includes read only memory (ROM) and some form of memory storage, such as a hard drive, optical drive, etc. These memory devices are bi-directionally coupled to CPU 834. Such storage devices are well known in the art. Machine instructions and data are temporarily loaded into RAM 836 from non-volatile memory 838. Also stored in memory are operating system software and ancillary software. While not separately shown, it should be understood that a power supply is required to provide the electrical power needed to energize computing system 830.
Preferably, computing system 830 includes speakers 837. While these components are not strictly required in a functional computing system, their inclusion facilitates use of computing system 830 in connection with implementing many of the features of the present invention. Speakers enable a user to listen to changes in an audio signal resulting from the single channel sound separation techniques of the present invention. A modem 835 is often available in computing systems and is useful for importing or exporting data via a network connection or telephone line. As shown, modem 835 and speakers 837 are components internal to processing unit 832; however, such units can be, and often are, provided as external peripheral devices.
Input device 820 can be any device or mechanism that enables input to the operating environment executed by the CPU. Such input devices include, but are not limited to, a mouse, keyboard, microphone, pointing device, or touchpad. Although, in a preferred embodiment, human interaction with input device 820 is necessary, it is contemplated that the present invention can be modified to receive input electronically. Output device 822 generally includes any device that produces output information perceptible to a user, but will most typically comprise a monitor or computer display designed for human perception of output. However, it is contemplated that the present invention can be modified so that the system's output is an electronic signal, or adapted to interact with external systems. Accordingly, the conventional computer keyboard and computer display of the preferred embodiments should be considered exemplary, rather than limiting in regard to the scope of the present invention.
As noted above, it is contemplated that the methods of the present invention can be beneficially applied as a preprocessor for existing ASR systems.
It is contemplated that the present invention can also be beneficially applied to hearing aids. A well-known problem with analog hearing aids is that they amplify sound over the full frequency range of hearing, so low frequency background noise often masks higher frequency speech sounds. To alleviate this problem, manufacturers provided externally accessible "potentiometers" on hearing aids, which, rather like a graphic equalizer on a stereo system, provided the ability to reduce or enhance the gain in different frequency bands, enabling the wearer to distinguish conversations that would otherwise be at least partially obscured by background noise. Subsequently, programmable hearing aids were developed that included analog circuitry with automatic equalization. More "potentiometers" could be included, enabling better signal processing to occur. Yet another, more recent advance has been the replacement of analog circuitry in hearing aids with digital circuits. Hearing instruments incorporating digital signal processing (DSP), referred to as digital hearing aids, enable even more complex and effective signal processing to be achieved.
It is contemplated that the present invention can beneficially be incorporated into hearing aids to pre-process audio signals, removing portions of the audio signal that do not correspond to speech, and/or removing portions of the audio signal corresponding to an undesired speaker.
Once the audio signal from microphone 906 has been processed by pre-processor 908 in accord with the present invention, further processing and amplification are performed on the audio signal by amplifier 910. It should be understood that the functions performed by amplifier 910 correspond to the amplification and signal processing performed by corresponding circuitry in conventional hearing aids, which implement signal processing to enhance the performance of the hearing aid. Block 912, which encompasses pre-amplifier 907, pre-processor 908, and amplifier 910, indicates that in some embodiments, a single component, such as an ASIC, can execute all of the functions provided by each of the individual components.
The fully processed audio signal is sent to an output transducer 914, which generates an audio output that is transmitted to the eardrum/ear canal of the user. Note that hearing aid 900 includes a battery 916, operatively coupled with each of pre-amplifier 907, pre-processor 908, and amplifier 910. A housing 904, generally plastic, substantially encloses microphone 906, pre-amplifier 907, pre-processor 908, amplifier 910, output transducer 914, and battery 916. While housing 904 schematically corresponds to an in-the-ear (ITE) type hearing aid, it should be understood that the present invention can be included in other types of hearing aids, including behind-the-ear (BTE), in-the-canal (ITC), and completely-in-the-canal (CIC) hearing aids.
It is expected that sound separation techniques in accord with the present invention will be particularly well suited for integration into hearing aids that already use DSP. In principle, however, such sound separation techniques could be used as an add-on to any other type of electronic hearing aid, including analog hearing aids.
With respect to how the sound separation techniques of the present invention can be used in hearing aids, the following applications are contemplated. It should be understood, however, that such applications are merely exemplary, and are not intended to limit the scope of the present invention. The present invention can be employed to separate different speakers, such that for multiple speakers, all but the highest intensity speech sources are masked. For example, when a hearing impaired person who is wearing hearing aids has dinner in a restaurant (particularly a restaurant with a large amount of hard surfaces, such as windows), all of the conversations in the restaurant are amplified to some extent, making it very difficult for the hearing impaired person to comprehend the conversation at his or her table. Using the techniques of the present invention, all speech except the highest intensity speech sources can be masked, dramatically reducing the background noise due to conversations at other tables and amplifying the conversation in the immediate area (i.e., the highest intensity speech). Another hearing aid application is the use of the present invention to improve the intelligibility of speech from a single speaker (i.e., a single source) by masking modulation frequencies in the voice of the speaker that are less important for comprehending speech.
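As a hedged sketch of the restaurant scenario described above, all but the highest intensity components might be masked by retaining only the joint-plane regions whose energy lies within some fraction of the maximum; the retention fraction below is an arbitrary illustrative value, not one specified by the invention.

% Hypothetical energy-based mask: retain only joint-plane components whose
% magnitude is at least keepFraction of the plane's maximum, so that the
% highest intensity speech dominates the recovered signal.
keepFraction = 0.25;                               % arbitrary illustrative value
energy = abs(magPlane);
magMask = double(energy >= keepFraction * max(energy(:)));
phaseMask = zeros(size(phasePlane));               % leave retained phase unchanged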
The following appendices provide exemplary coding to automatically execute the transforms required to achieve the present invention. Appendix A provides exemplary coding that computes the two-dimensional transform of a given one-dimensional input signal. A Fourier basis is used for the base transform and the modulation transform. Appendix B provides exemplary coding that computes the inverse transforms required to invert the filtered and masked representation to generate a one-dimensional signal that includes the desired audio signal. Finally, Appendix C provides exemplary coding that enables a user to separate combined audio signals in accord with the present invention, including executing the transforms and masking steps described in detail above.
Although the present invention has been described in connection with the preferred form of practicing it and modifications thereto, those of ordinary skill in the art will understand that many other modifications can be made to the invention. Accordingly, it is not intended that the scope of the invention in any way be limited by the above description, but instead be determined entirely by reference to the claims that follow.
This application is based on a prior copending provisional application Ser. No. 60/369,432, filed on Apr. 2, 2002, the benefit of the filing date of which is hereby claimed under 35 U.S.C. § 119(e).