The disclosure herein relates generally to noise suppression. In particular, this disclosure relates to noise suppression systems, devices, and methods for use in acoustic applications.
The ability to correctly identify voiced and unvoiced speech is critical to many speech applications including speech recognition, speaker verification, noise suppression, and many others. In a typical acoustic application, speech from a human speaker is captured and transmitted to a receiver in a different location. In the speaker's environment there may exist one or more noise sources that pollute the speech signal, the signal of interest, with unwanted acoustic noise. This makes it difficult or impossible for the receiver, whether human or machine, to understand the user's speech.
Typical methods for classifying voiced and unvoiced speech have relied mainly on the acoustic content of single-microphone data, which is plagued by problems with noise and the corresponding uncertainties in signal content. This is especially problematic with the proliferation of portable communication devices like mobile telephones. There are methods known in the art for suppressing the noise present in speech signals, but these normally require a robust method of determining when speech is being produced. Non-acoustic methods have been employed successfully in commercial products such as the Jawbone headset produced by AliphCom, Inc., San Francisco, California (Aliph), but an acoustic-only solution is desired in some cases (e.g., for reduced cost, as a supplement to the non-acoustic sensor, etc.).
Each patent, patent application, and/or publication mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual patent, patent application, and/or publication was specifically and individually indicated to be incorporated by reference.
Acoustic Voice Activity Detection (AVAD) methods and systems are described herein. The AVAD methods and systems, which include algorithms or programs, use microphones to generate virtual directional microphones which have very similar noise responses and very dissimilar speech responses. The ratio of the energies of the virtual microphones is then calculated over a given window size and the ratio can then be used with a variety of methods to generate a VAD signal. The virtual microphones can be constructed using either a fixed or an adaptive filter. The adaptive filter generally results in a more accurate and noise-robust VAD signal but requires training. In addition, restrictions can be placed on the filter to ensure that it is training only on speech and not on environmental noise.
In the following description, numerous specific details are introduced to provide a thorough understanding of, and enabling description for, embodiments. One skilled in the relevant art, however, will recognize that these embodiments can be practiced without one or more of the specific details, or with other components, systems, etc. In other instances, well-known structures or operations are not shown, or are not described in detail, to avoid obscuring aspects of the disclosed embodiments.
The PSAD algorithm as described herein calculates the ratio of the energies of two directional microphones M1 and M2:
R=[ΣiM1(zi)²/ΣiM2(zi)²]^1/2
where the “z” indicates the discrete frequency domain and “i” ranges from the beginning of the window of interest to the end, but the same relationship holds in the time domain. The summation can occur over a window of any length; 200 samples at a sampling rate of 8 kHz has been used to good effect. Microphone M1 is assumed to have a greater speech response than microphone M2. The ratio R depends on the relative strength of the acoustic signal of interest as detected by the microphones.
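A minimal sketch of this windowed ratio computation, assuming time-domain frames and NumPy (the norm form matches the norm-based ratio used with the virtual microphones below; names are illustrative, not from the original):

```python
import numpy as np

def energy_ratio(m1, m2, eps=1e-12):
    """Windowed ratio of two microphone frames.

    m1, m2: equal-length frames (e.g., 200 samples at 8 kHz).
    Returns R = ||m1|| / ||m2||; by Parseval's theorem the same value is
    obtained from the discrete-frequency representation of the window.
    """
    return np.linalg.norm(m1) / (np.linalg.norm(m2) + eps)
```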
For matched omnidirectional microphones (i.e., microphones that have the same response to acoustic signals for all spatial orientations and frequencies), the size of R can be calculated for speech and noise by approximating the propagation of speech and noise waves as spherically symmetric sources. For these the energy of the propagating wave decreases as 1/r², so that
R≈d2/d1
The distance d1 is the distance from the acoustic source to M1, d2 is the distance from the acoustic source to M2, and d=d2−d1 (see
RS=(d1+d)/d1
RN≈1
where the “S” subscript denotes the ratio for speech sources and “N” the ratio for noise sources. Because d is comparable to d1 for nearby speech but negligible compared to d1 for distant noise, there is not a significant amount of separation between noise and speech sources in this case, and it would therefore be difficult to implement a robust solution using simple omnidirectional microphones.
A better implementation is to use directional microphones where the second microphone has minimal speech response. As described herein, such microphones can be constructed using omnidirectional microphones O1 and O2:
V1(z)=−β(z)α(z)O2(z)+O1(z)z−γ
V2(z)=α(z)O2(z)−β(z)O1(z)z−γ
where α(z) is a calibration filter used to compensate O2's response so that it is the same as O1's, β(z) is a filter that describes the relationship between O1 and calibrated O2 for speech, and γ is a fixed delay that depends on the size of the array. There is no loss of generality in defining α(z) as above, as either microphone may be compensated to match the other. For this configuration, V1 and V2 have very similar noise response magnitudes and very dissimilar speech response magnitudes if
γ=d/c
where again d=2d0 and c is the speed of sound in air, which is temperature dependent and approximated by
c=331.3+(0.606T) m/s
where T is the temperature of the air in Celsius.
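For illustration, the fixed delay γ can be computed directly from these relations; a short sketch (the 8 kHz sampling rate and 2 cm array are example values, not requirements of the original):

```python
def speed_of_sound(temp_c):
    """Speed of sound in air (m/s) at a temperature in degrees Celsius."""
    return 331.3 + 0.606 * temp_c

def gamma_in_samples(d0, temp_c=20.0, fs=8000):
    """Fixed delay gamma = d / c expressed in samples, with d = 2*d0 in meters."""
    return 2.0 * d0 / speed_of_sound(temp_c) * fs

# A 2 cm array (d0 = 1 cm) at 20 C gives roughly 0.47 samples at 8 kHz.
print(gamma_in_samples(0.01))
```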
The filter β(z) can be calculated using wave theory to be
β(z)=d1/d2
where again dk is the distance from the user's mouth to Ok.
The filter β(z) can also be determined experimentally using an adaptive filter.
The adaptive process varies β̃(z) to minimize the output of V2 when only speech is being received by O1 and O2. A small amount of noise may be tolerated with little ill effect, but it is preferred that only speech be received when the coefficients of β̃(z) are calculated. Any adaptive process may be used; a normalized least-mean-squares (NLMS) algorithm was used in the examples below.
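A minimal NLMS sketch for adapting the taps of β̃(z) to minimize the V2 output during speech (sample-by-sample form; the step size and variable names are illustrative assumptions, not values from the original):

```python
import numpy as np

def nlms_step(beta, o1_delayed, o2_cal, mu=0.1, eps=1e-9):
    """One NLMS update of the adaptive filter estimating beta(z).

    beta:       current filter taps, shape (L,)
    o1_delayed: the L most recent samples of O1 delayed by gamma
    o2_cal:     the current sample of the calibrated O2 signal
    The error is the V2 output, which the adaptation drives toward zero
    when only speech is being received.
    """
    v2 = o2_cal - np.dot(beta, o1_delayed)
    beta = beta + mu * v2 * o1_delayed / (np.dot(o1_delayed, o1_delayed) + eps)
    return beta, v2
```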
V1 can be constructed using the current value for β̃(z), or the fixed filter β(z) can be used for simplicity.
Now the ratio R is
R=||V1(z)||/||V2(z)||
where the double bars indicate the norm and again any size window may be used. If β̃(z) has been accurately calculated, the ratio for speech should be relatively high (e.g., greater than approximately 2) and the ratio for noise should be relatively low (e.g., less than approximately 1.1). The ratio calculated will depend on both the relative energies of the speech and noise as well as the orientation of the noise and the reverberance of the environment. In practice, either the adapted filter β̃(z) or the static filter β(z) may be used for V1(z) with little effect on R, but it is important to use the adapted filter β̃(z) in V2(z) for best performance. Many techniques known to those skilled in the art (e.g., smoothing, etc.) can be used to make R more amenable to use in generating a VAD, and the embodiments herein are not so limited.
The ratio R can be calculated for the entire frequency band of interest, or can be calculated in frequency subbands. One effective subband discovered was 250 Hz to 1250 Hz; another was 200 Hz to 3000 Hz. Many others are possible and useful.
Once generated, the vector of the ratio R versus time (or the matrix of R versus time if multiple subbands are used) can be used with any detection system (such as one that uses fixed and/or adaptive thresholds) to determine when speech is occurring. While many detection systems and methods are known to those skilled in the art and may be used, the method described herein for generating an R so that the speech is easily discernible is novel. It is important to note that R does not depend on the type of noise or its orientation or frequency content; R simply depends on the V1 and V2 spatial response similarity for noise and spatial response dissimilarity for speech. In this way it is very robust and can operate smoothly in a variety of noisy acoustic environments.
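As one illustration, a simple fixed-threshold detector with smoothing and hysteresis might look like the following sketch; the 2.0 and 1.1 thresholds echo the speech and noise ratios quoted above, while the smoothing constant is an assumption:

```python
import numpy as np

def vad_from_ratio(r_track, speech_thresh=2.0, noise_thresh=1.1, alpha=0.7):
    """Generate a binary VAD signal from a time series of R values.

    Uses exponential smoothing and two thresholds with hysteresis:
    VAD turns on above speech_thresh and off below noise_thresh.
    """
    vad = np.zeros(len(r_track), dtype=int)
    smoothed, active = 0.0, False
    for i, r in enumerate(r_track):
        smoothed = alpha * smoothed + (1.0 - alpha) * r
        if not active and smoothed > speech_thresh:
            active = True
        elif active and smoothed < noise_thresh:
            active = False
        vad[i] = int(active)
    return vad
```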
The accuracy of the adaptation to the β(z) of the system is a factor in determining the effectiveness of the AVAD. A more accurate adaptation to the actual β(z) of the system leads to lower energy of the speech response in V2, and a higher ratio R. The noise (far-field) magnitude response is largely unchanged by the adaptation process, so the ratio R for noise will remain near unity when beta is accurately adapted. For purposes of accuracy, the system can be trained on speech alone, or the noise should be low enough in energy so as not to affect, or to have only a minimal effect on, the training.
To make the training as accurate as possible, the coefficients of the filter β(z) of an embodiment are generally updated under the following conditions, but the embodiment is not so limited: speech is being produced (requires a relatively high SNR or other method of detection such as an Aliph Skin Surface Microphone (SSM) as described in U.S. patent application Ser. No. 10/769,302, filed Jan. 30, 2004, which is incorporated by reference herein in its entirety); no wind is detected (wind can be detected using many different methods known in the art, such as examining the microphones for uncorrelated low-frequency noise); and the current value of R is much larger than a smoothed history of R values (this ensures that training occurs only when strong speech is present). These procedures are flexible and others may be used without significantly affecting the performance of the system. These restrictions can make the system relatively more robust.
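A minimal sketch of this update gating (the SNR threshold, the rise factor, and the wind and speech detectors are placeholders for whatever methods an implementation actually uses):

```python
def should_adapt(snr_db, wind_detected, r_current, r_smoothed,
                 snr_min_db=10.0, r_factor=2.0):
    """Gate beta updates on the conditions described above: speech present
    (approximated here by a high SNR; an SSM or similar detector could be
    substituted), no wind detected, and the current R well above its
    smoothed history."""
    return (snr_db > snr_min_db
            and not wind_detected
            and r_current > r_factor * r_smoothed)
```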
Even with these precautions, it is possible that the system accidentally trains on noise (e.g., there may be a higher likelihood of this without use of a non-acoustic VAD device such as the SSM used in the Jawbone headset produced by Aliph, San Francisco, Calif.). Thus, an embodiment includes a further failsafe system to preclude accidental training from significantly disrupting the system. The adaptive β is limited to certain values expected for speech. For example, values for d1 for an ear-mounted headset will normally fall between 9 and 14 centimeters, so using an array length of 2d0=2.0 cm and Equation 2 above,
9/11≤|β(z)|≤14/16
which means that
0.82<|β(z)|<0.88.
The magnitude of the β filter can therefore be limited to between approximately 0.82 and 0.88 to preclude problems if noise is present during training. Looser limits can be used to compensate for inaccurate calibrations (the response of omnidirectional microphones is usually calibrated to one another so that their frequency response is the same to the same acoustic source—if the calibration is not completely accurate the virtual microphones may not form properly).
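In sketch form, such a magnitude limit might be applied directly to the adapted taps; here the sum of absolute tap values is used as a stand-in for |β(z)|, which is an assumption of this sketch rather than a detail from the original:

```python
import numpy as np

def clamp_beta_magnitude(beta, lo=0.82, hi=0.88):
    """Rescale the adapted filter so its overall gain stays within the
    range expected for speech sources; looser limits may be used to
    allow for imperfect calibration."""
    gain = np.abs(beta).sum()  # proxy for |beta(z)|
    if gain < lo:
        beta = beta * (lo / gain)
    elif gain > hi:
        beta = beta * (hi / gain)
    return beta
```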
Similarly, the phase of the β filter can be limited to be what is expected from a speech source within ±30 degrees from the axis of the array. As described herein, and with reference to
γ=(√(dS²+2dSd0cosθ+d0²)−√(dS²−2dSd0cosθ+d0²))/c
where dS is the distance from the midpoint of the array to the speech source and θ is the angle between the axis of the array and the line to the speech source. Varying dS from 10 to 15 cm and allowing θ to vary between 0 and ±30 degrees, the maximum difference in γ results from the difference of γ at 0 degrees (58.8 μsec) and γ at ±30 degrees for dS=10 cm (50.8 μsec). This means that the maximum expected phase difference is 58.8−50.8=8.0 μsec, or 0.064 samples at an 8 kHz sampling rate. Since
φ(f)=2πft=2πf(8.0×10−6) rad
the maximum phase difference realized at 4 kHz is only 0.2 rad, or about 11.4 degrees, a small amount but not a negligible one. Therefore the β filter should be almost linear phase, but some allowance should be made for differences in position and angle. In practice a slightly larger amount was used (0.071 samples at 8 kHz) in order to compensate for poor calibration and diffraction effects, and this worked well. The limit on the phase in the example below was implemented as the ratio of the central tap energy to the combined energy of the other taps:
where β̃ is the current estimate. This limits the phase by restricting the effects of the non-center taps. Other ways of limiting the phase of the beta filter are known to those skilled in the art, and the algorithm presented here is not so limited.
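The exact restriction is not reproduced above, but one possible realization of a center-tap energy constraint is sketched below; the ratio threshold and the rescaling of the off-center taps are assumptions of this sketch, not values from the original:

```python
import numpy as np

def limit_phase(beta, min_center_ratio=10.0):
    """Keep beta close to linear phase by restricting the energy of the
    non-center taps relative to the center tap."""
    c = len(beta) // 2
    center_e = beta[c] ** 2
    other_e = np.dot(beta, beta) - center_e
    if other_e > 0 and center_e / other_e < min_center_ratio:
        # Shrink off-center taps so the energy ratio meets the limit.
        scale = np.sqrt(center_e / (min_center_ratio * other_e))
        beta = beta.copy()
        beta[:c] *= scale
        beta[c + 1:] *= scale
    return beta
```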
Embodiments are presented herein that use both a fixed β(z) and an adaptive β(z), as described in detail above. In both cases, R was calculated using frequencies between 250 and 3000 Hz using a window size of 200 samples at 8 kHz. The results for V1 (top plot), V2 (middle plot), R (bottom plot, solid line, windowed using a 200 sample rectangular window at 8 kHz) and the VAD (bottom plot, dashed line) are shown in
Results using the adaptive beta filter are shown in
Systems and methods for discriminating voiced and unvoiced speech from background noise are now described, including a Pathfinder Speech Activity Detection (PSAD) system, referenced above, and a Non-Acoustic Sensor Voiced Speech Activity Detection (NAVSAD) system. The noise removal and reduction methods provided herein, while allowing for the separation and classification of unvoiced and voiced human speech from background noise, address the shortcomings of typical systems known in the art by cleaning acoustic signals of interest without distortion.
Note that the detection subsystems 1250 and denoising subsystems 1240 of both the NAVSAD and PSAD systems of an embodiment are algorithms controlled by the processor 1230, but are not so limited. Alternative embodiments of the NAVSAD and PSAD systems can include detection subsystems 1250 and/or denoising subsystems 1240 that comprise additional hardware, firmware, software, and/or combinations of hardware, firmware, and software. Furthermore, functions of the detection subsystems 1250 and denoising subsystems 1240 may be distributed across numerous components of the NAVSAD and PSAD systems.
The NAVSAD and PSAD systems support a two-level commercial approach in which (i) a relatively less expensive PSAD system supports an acoustic approach that functions in most low- to medium-noise environments, and (ii) a NAVSAD system adds a non-acoustic sensor to enable detection of voiced speech in any environment. Unvoiced speech is normally not detected using the sensor, as it normally does not sufficiently vibrate human tissue. However, in high noise situations detecting the unvoiced speech is not as important, as it is normally very low in energy and easily washed out by the noise. Therefore, in high noise environments the unvoiced speech is unlikely to affect the voiced speech denoising. Unvoiced speech information is most important in the presence of little to no noise and, therefore, the unvoiced detection should be highly sensitive in low noise situations, and insensitive in high noise situations. This is not easily accomplished, and comparable acoustic unvoiced detectors known in the art are incapable of operating under these environmental constraints.
The NAVSAD and PSAD systems include an array algorithm for speech detection that uses the difference in frequency content between two microphones to calculate a relationship between the signals of the two microphones. This is in contrast to conventional arrays that attempt to use the time/phase difference of each microphone to remove the noise outside of an “area of sensitivity”. The methods described herein provide a significant advantage, as they do not require a specific orientation of the array with respect to the signal.
Further, the systems described herein are sensitive to noise of every type and every orientation, unlike conventional arrays that depend on specific noise orientations. Consequently, the frequency-based arrays presented herein are unique as they depend only on the relative orientation of the two microphones themselves with no dependence on the orientation of the noise and signal with respect to the microphones. This results in a robust signal processing system with respect to the type of noise, microphones, and orientation between the noise/signal source and the microphones.
The systems described herein use the information derived from the Pathfinder noise suppression system and/or a non-acoustic sensor described in the Related Applications to determine the voicing state of an input signal, as described in detail below. The voicing state includes silent, voiced, and unvoiced states. The NAVSAD system, for example, includes a non-acoustic sensor to detect the vibration of human tissue associated with speech. The non-acoustic sensor of an embodiment is a General Electromagnetic Movement Sensor (GEMS) as described briefly below and in detail in the Related Applications, but is not so limited. Alternative embodiments may use any sensor that is able to detect human tissue motion associated with speech and is unaffected by environmental acoustic noise.
The GEMS is a radio frequency device (2.4 GHz) that allows the detection of moving human tissue dielectric interfaces. The GEMS includes an RF interferometer that uses homodyne mixing to detect small phase shifts associated with target motion. In essence, the sensor sends out weak electromagnetic waves (less than 1 milliwatt) that reflect off of whatever is around the sensor. The reflected waves are mixed with the original transmitted waves and the results analyzed for any change in position of the targets. Anything that moves near the sensor will cause a change in phase of the reflected wave that will be amplified and displayed as a change in voltage output from the sensor. A similar sensor is described by Gregory C. Burnett (1999) in “The physiological basis of glottal electromagnetic micropower sensors (GEMS) and their use in defining an excitation function for the human vocal tract”; Ph.D. Thesis, University of California at Davis.
Consideration was given to a number of multi-dimensional factors in developing the detection algorithm 1250. The biggest consideration was maintaining the effectiveness of the Pathfinder denoising technique, described in detail in the Related Applications and reviewed herein. Pathfinder performance can be compromised if the adaptive filter training is conducted on speech rather than on noise. It is therefore important not to exclude any significant amount of speech from the VAD, to keep such disturbances to a minimum.
Consideration was also given to the accuracy of the characterization between voiced and unvoiced speech signals, and distinguishing each of these speech signals from noise signals. This type of characterization can be useful in such applications as speech recognition and speaker verification.
Furthermore, the systems using the detection algorithm of an embodiment function in environments containing varying amounts of background acoustic noise. If the non-acoustic sensor is available, this external noise is not a problem for voiced speech. However, for unvoiced speech (and voiced if the non-acoustic sensor is not available or has malfunctioned) reliance is placed on acoustic data alone to separate noise from unvoiced speech. An advantage inheres in the use of two microphones in an embodiment of the Pathfinder noise suppression system, and the spatial relationship between the microphones is exploited to assist in the detection of unvoiced speech. However, there may occasionally be noise levels high enough that the speech will be nearly undetectable and the acoustic-only method will fail. In these situations, the non-acoustic sensor (or hereafter just the sensor) will be required to ensure good performance.
In the two-microphone system, the speech source should be relatively louder in one designated microphone when compared to the other microphone. Tests have shown that this requirement is easily met with conventional microphones when the microphones are placed on the head, as any noise should result in an H1 with a gain near unity.
Regarding the NAVSAD system, and with reference to
For the sensor, the SD is akin to the energy of the signal, which normally corresponds quite accurately to the voicing state, but may be susceptible to movement noise (relative motion of the sensor with respect to the human user) and/or electromagnetic noise. To further differentiate sensor noise from tissue motion, the XCORR can be used. The XCORR is only calculated to 15 delays, which corresponds to just under 2 milliseconds at 8000 Hz.
The XCORR can also be useful when the sensor signal is distorted or modulated in some fashion. For example, there are sensor locations (such as the jaw or back of the neck) where speech production can be detected but where the signal may have incorrect or distorted time-based information. That is, they may not have well-defined features in time that will match with the acoustic waveform. However, XCORR is more susceptible to errors from acoustic noise, and in high-noise (<0 dB SNR) environments it is almost useless. Therefore it should not be the sole source of voicing information.
The sensor detects human tissue motion associated with the closure of the vocal folds, so the acoustic signal produced by the closure of the folds is highly correlated with the closures. Therefore, sensor data that correlates highly with the acoustic signal is declared as speech, and sensor data that does not correlate well is termed noise. The acoustic data is expected to lag behind the sensor data by about 0.1 to 0.8 milliseconds (or about 1-7 samples) as a result of the delay time due to the relatively slower speed of sound (around 330 m/s). However, an embodiment uses a 15-sample correlation, as the acoustic wave shape varies significantly depending on the sound produced, and a larger correlation width is needed to ensure detection.
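A sketch of this short cross-correlation check between sensor and acoustic frames (equal-length frames and 15 candidate delays as described above; the normalization by whole-frame norms is a simplifying assumption):

```python
import numpy as np

def sensor_acoustic_xcorr(sensor, acoustic, max_delay=15):
    """Cross-correlation for delays 0..max_delay-1, where the acoustic
    signal is assumed to lag the sensor signal (sound travels at roughly
    330 m/s from the vocal folds to the microphone).

    Returns the peak correlation across the candidate delays, normalized
    by the frame norms so it can be compared against a threshold.
    """
    n = len(sensor)
    norms = np.linalg.norm(sensor) * np.linalg.norm(acoustic) + 1e-12
    corr = [np.dot(sensor[: n - d], acoustic[d:]) / norms
            for d in range(max_delay)]
    return max(corr)
```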
The SD and XCORR signals are related, but are sufficiently different so that the voiced speech detection is more reliable. For simplicity, though, either parameter may be used. The values for the SD and XCORR are compared to empirical thresholds, and if both are above their threshold, voiced speech is declared. Example data is presented and described below.
The NAVSAD can determine when voiced speech is occurring with high degrees of accuracy due to the non-acoustic sensor data. However, the sensor offers little assistance in separating unvoiced speech from noise, as unvoiced speech normally causes no detectable signal in most non-acoustic sensors. If there is a detectable signal, the NAVSAD can be used, although the SD method should then be relied upon, as unvoiced speech is normally poorly correlated. In the absence of a detectable signal, the system and methods of the Pathfinder noise removal algorithm are used to determine when unvoiced speech is occurring. A brief review of the Pathfinder algorithm is given below, while a detailed description is provided in the Related Applications.
With reference to
M1(z)=S(z)+N2(z)
M2(z)=N(z)+S2(z)
with
N2(z)=N(z)H1(z)
S2(z)=S(z)H2(z)
so that
M1(z)=S(z)+N(z)H1(z)
M2(z)=N(z)+S(z)H2(z)
This is the general case for all two-microphone systems. There is always going to be some leakage of noise into Mic 1, and some leakage of signal into Mic 2. Equation 1 has four unknowns and only two relationships and cannot be solved explicitly.
However, there is another way to solve for some of the unknowns in Equation 1. Examine the case where the signal is not being generated—that is, where the GEMS signal indicates voicing is not occurring. In this case, s(n)=S(z)=0, and Equation 1 reduces to
M1n(z)=N(z)H1(z)
M2n(z)=N(z)
where the n subscript on the M variables indicates that only noise is being received. This leads to
H1(z)=M1n(z)/M2n(z)
H1(z) can be calculated using any of the available system identification algorithms and the microphone outputs when only noise is being received. The calculation can be done adaptively, so that if the noise changes significantly H1(z) can be recalculated quickly.
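In sketch form, H1(z) can be identified with the same NLMS machinery used elsewhere herein, with Mic 2 as the reference input and Mic 1 as the desired signal during noise-only periods (names and step size are illustrative):

```python
import numpy as np

def update_h1(h1, m2_taps, m1_sample, mu=0.1, eps=1e-9):
    """One NLMS step toward H1(z) = M1n(z) / M2n(z): during noise-only
    periods, filter Mic 2 with h1 and drive the error against Mic 1.

    h1:        current FIR estimate of H1, shape (L,)
    m2_taps:   the L most recent Mic 2 samples
    m1_sample: the current Mic 1 sample
    """
    err = m1_sample - np.dot(h1, m2_taps)
    h1 = h1 + mu * err * m2_taps / (np.dot(m2_taps, m2_taps) + eps)
    return h1, err
```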
With a solution for one of the unknowns in Equation 1, a solution can be found for another, H2(z), by using the amplitude of the GEMS or similar device along with the amplitude of the two microphones. When the GEMS indicates voicing, but the recent (less than 1 second) history of the microphones indicates low levels of noise, assume that n(s)=N(z)≈0. Then Equation 1 reduces to
M1s(z)=S(z)
M2s(z)=S(z)H2(z)
which in turn leads to
H2(z)=M2s(z)/M1s(z)
which is the inverse of the H1(z) calculation, but note that different inputs are being used.
After calculating H1(z) and H2(z) above, they are used to remove the noise from the signal. Rewrite Equation 1 as
S(z)=M1(z)−N(z)H1(z)
N(z)=M2(z)−S(z)H2(z)
S(z)=M1(z)−[M2(z)−S(z)H2(z)]H1(z),
S(z)[1−H2(z)H1(z)]=M1(z)−M2(z)H1(z)
and solve for S(z) as:
S(z)=[M1(z)−M2(z)H1(z)]/[1−H2(z)H1(z)]
In practice H2(z) is usually quite small, so that H2(z)H1(z)<<1, and
S(z)≈M1(z)−M2(z)H1(z),
obviating the need for the H2(z) calculation.
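A time-domain sketch of this simplified subtraction, assuming an FIR approximation of H1(z) and equal-length microphone frames (illustrative only):

```python
import numpy as np

def denoise(m1, m2, h1):
    """Approximate clean speech S ~= M1 - M2*H1 (the simplified form),
    valid when speech leakage into Mic 2 is small (H2 ~= 0)."""
    noise_estimate = np.convolve(m2, h1)[: len(m1)]
    return m1 - noise_estimate
```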
With reference to
where ΔM is the difference in gain between Mic 1 and Mic 2 and therefore H1(z), as above in Equation 2. The variable d1 is the distance from Mic 1 to the speech or noise source.
If the “noise” is the user speaking, and Mic 1 is closer to the mouth than Mic 2, the gain increases. Since environmental noise normally originates much farther away from the user's head than speech, noise will be found during the time when the gain of H1(z) is near unity or some fixed value, and speech can be found after a sharp rise in gain. The speech can be unvoiced or voiced, as long as it is of sufficient volume compared to the surrounding noise. The gain will stay somewhat high during the speech portions, then descend quickly after speech ceases. The rapid increase and decrease in the gain of H1(z) should be sufficient to allow the detection of speech under almost any circumstances. The gain in this example is calculated by the sum of the absolute values of the filter coefficients. This sum is not equivalent to the gain, but the two are related in that a rise in the sum of the absolute values reflects a rise in the gain.
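In sketch form, the gain statistic and the rise-based detection might look like this (the rise factor and the noise-floor tracking are assumptions of the sketch):

```python
import numpy as np

def h1_gain(h1):
    """Proxy for the gain of H1(z): the sum of absolute tap values rises
    and falls with the true gain."""
    return np.abs(h1).sum()

def psad_speech(gain, noise_floor, rise_factor=1.5):
    """Declare speech when the H1 gain rises sharply above the level
    observed for (distant) noise alone."""
    return gain > rise_factor * noise_floor
```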
As an example of this behavior,
What is not clear from this plot 2100 is that the PSAD system functions as an automatic backup to the NAVSAD. This is because the voiced speech (since it has the same spatial relationship to the mics as the unvoiced) will be detected as unvoiced if the sensor or NAVSAD system fail for any reason. The voiced speech will be misclassified as unvoiced, but the denoising will still not take place, preserving the quality of the speech signal.
However, this automatic backup of the NAVSAD system functions best in an environment with low noise (approximately 10 dB SNR or greater), as high amounts of acoustic noise (approximately 10 dB SNR or less) can quickly overwhelm any acoustic-only unvoiced detector, including the PSAD. This is evident in the difference in the voiced signal data 1702 and 2102 shown in plots 1700 and 2100 of
Regarding hardware considerations, and with reference to
A number of configurations are possible using the NAVSAD and PSAD systems to detect voiced and unvoiced speech. One configuration uses the NAVSAD system (non-acoustic only) to detect voiced speech along with the PSAD system to detect unvoiced speech; the PSAD also functions as a backup to the NAVSAD system for detecting voiced speech. An alternative configuration uses the NAVSAD system (non-acoustic correlated with acoustic) to detect voiced speech along with the PSAD system to detect unvoiced speech; the PSAD also functions as a backup to the NAVSAD system for detecting voiced speech. Another alternative configuration uses the PSAD system to detect both voiced and unvoiced speech.
While the systems described above have been described with reference to separating voiced and unvoiced speech from background acoustic noise, there is no reason more complex classifications cannot be made. For more in-depth characterization of speech, the system can bandpass the information from Mic 1 and Mic 2 so that it is possible to see which bands in the Mic 1 data are more heavily composed of noise and which are more weighted with speech. Using this knowledge, it is possible to group the utterances by their spectral characteristics, similar to conventional acoustic methods; this method would work better in noisy environments.
As an example, the “k” in “kick” has significant frequency content from 500 Hz to 4000 Hz, but a “sh” in “she” only contains significant energy from 1700 Hz to 4000 Hz. Voiced speech could be classified in a similar manner. For instance, an /i/ (“ee”) has significant energy around 300 Hz and 2500 Hz, and an /a/ (“ah”) has energy at around 900 Hz and 1200 Hz. This ability to discriminate unvoiced and voiced speech in the presence of noise is, thus, very useful.
A dual omnidirectional microphone array (DOMA) that provides improved noise suppression is now described. Compared to conventional arrays and algorithms, which seek to reduce noise by nulling out noise sources, the array of an embodiment is used to form two distinct virtual directional microphones, as described in detail above. The two virtual microphones are configured to have very similar noise responses and very dissimilar speech responses. The only null formed by the DOMA is one used to remove the speech of the user from V2. The two virtual microphones of an embodiment can be paired with an adaptive filter algorithm and/or VAD algorithm, as described in detail above, to significantly reduce the noise without distorting the speech, significantly improving the SNR of the desired speech over conventional noise suppression systems. The embodiments described herein are stable in operation, flexible with respect to virtual microphone pattern choice, and have proven to be robust with respect to speech source-to-array distance and orientation as well as temperature and calibration techniques.
Unless otherwise specified, the following terms used in describing the DOMA of an embodiment have the corresponding meanings in addition to any meaning or understanding they may convey to one skilled in the art.
The term “bleedthrough” means the undesired presence of noise during speech.
The term “denoising” means removing unwanted noise from Mic1, and also refers to the amount of reduction of noise energy in a signal in decibels (dB).
The term “devoicing” means removing/distorting the desired speech from Mic1.
The term “directional microphone (DM)” means a physical directional microphone that is vented on both sides of the sensing diaphragm.
The term “Mic1 (M1)” means a general designation for an adaptive noise suppression system microphone that usually contains more speech than noise.
The term “Mic2 (M2)” means a general designation for an adaptive noise suppression system microphone that usually contains more noise than speech.
The term “noise” means unwanted environmental acoustic noise.
The term “null” means a zero or minima in the spatial response of a physical or virtual directional microphone.
The term “O1” means a first physical omnidirectional microphone used to form a microphone array.
The term “O2” means a second physical omnidirectional microphone used to form a microphone array.
The term “speech” means desired speech of the user.
The term “Skin Surface Microphone (SSM)” means a microphone used in an earpiece (e.g., the Jawbone earpiece available from Aliph of San Francisco, Calif.) to detect speech vibrations on the user's skin.
The term “V1” means the virtual directional “speech” microphone, which has no nulls.
The term “V2” means the virtual directional “noise” microphone, which has a null for the user's speech.
The term “Voice Activity Detection (VAD) signal” means a signal indicating when user speech is detected.
The term “virtual microphones (VM)” or “virtual directional microphones” means a microphone constructed using two or more omnidirectional microphones and associated signal processing.
M1(z)=S(z)+N2(z)
M2(z)=N(z)+S2(z)
with
N2(z)=N(z)H1(z)
S2(z)=S(z)H2(z),
so that
M1(z)=S(z)+N(z)H1(z)
M2(z)=N(z)+S(z)H2(z) Eq. 1
This is the general case for all two microphone systems. Equation 1 has four unknowns and only two known relationships and therefore cannot be solved explicitly.
However, there is another way to solve for some of the unknowns in Equation 1. The analysis starts with an examination of the case where the speech is not being generated, that is, where a signal from the VAD subsystem 2204 (optional) equals zero. In this case, s(n)=S(z)=0, and Equation 1 reduces to
M1N(z)=N(z)H1(z)
M2N(z)=N(z)
where the N subscript on the M variables indicates that only noise is being received. This leads to
H1(z)=M1N(z)/M2N(z)
The function H1(z) can be calculated using any of the available system identification algorithms and the microphone outputs when the system is certain that only noise is being received. The calculation can be done adaptively, so that the system can react to changes in the noise.
A solution is now available for H1(z), one of the unknowns in Equation 1. The final unknown, H2(z), can be determined by using the instances where speech is being produced and the VAD equals one. When this is occurring, but the recent (perhaps less than 1 second) history of the microphones indicates low levels of noise, it can be assumed that n(s)=N(z)≈0. Then Equation 1 reduces to
M1S(z)=S(z)
M2S(z)=S(z)H2(z)
which in turn leads to
H2(z)=M2S(z)/M1S(z)
which is the inverse of the H1(z) calculation. However, it is noted that different inputs are being used (now only the speech is occurring whereas before only the noise was occurring). While calculating H2(z), the values calculated for H1(z) are held constant (and vice versa) and it is assumed that the noise level is not high enough to cause errors in the H2(z) calculation.
After calculating H1(z) and H2(z), they are used to remove the noise from the signal. If Equation 1 is rewritten as
S(z)=M1(z)−N(z)H1(z)
N(z)=M2(z)−S(z)H2(z)
S(z)=M1(z)−[M2(z)−S(z)H2(z)]H1(z)
S(z)[1−H2(z)H1(z)]=M1(z)−M2(z)H1(z),
then N(z) may be substituted as shown to solve for S(z) as
S(z)=[M1(z)−M2(z)H1(z)]/[1−H1(z)H2(z)] Eq. 3
If the transfer functions H1(z) and H2(z) can be described with sufficient accuracy, then the noise can be completely removed and the original signal recovered. This remains true without respect to the amplitude or spectral characteristics of the noise. If there is very little or no leakage from the speech source into M2, then H2(z)≈0 and Equation 3 reduces to
S(z)≈M1(z)−M2(z)H1(z) Eq. 4
Equation 4 is much simpler to implement and is very stable, assuming H1(z) is stable. However, if significant speech energy is in M2(z), devoicing can occur. In order to construct a well-performing system and use Equation 4, consideration is given to the following conditions:
R1. Availability of a perfect (or at least very good) VAD in noisy conditions.
R2. Sufficiently accurate H1(z).
R3. Very small (ideally zero) H2(z).
R4. During speech production, H1(z) cannot change substantially.
R5. During noise, H2(z) cannot change substantially.
Condition R1 is easy to satisfy if the SNR of the desired speech to the unwanted noise is high enough. “Enough” means different things depending on the method of VAD generation. If a VAD vibration sensor is used, as in Burnett U.S. Pat. No. 7,256,048, accurate VAD in very low SNRs (−10 dB or less) is possible. Acoustic-only methods using information from O1 and O2 can also return accurate VADs, but are limited to SNRs of 3 dB or greater for adequate performance.
Condition R5 is normally simple to satisfy because for most applications the microphones will not change position with respect to the user's mouth very often or rapidly. In those applications where it may happen (such as hands-free conferencing systems) it can be satisfied by configuring Mic2 so that H2(z)≈0.
Satisfying conditions R2, R3, and R4 is more difficult, but is possible given the right combination of V1 and V2. Methods are examined below that have proven to be effective in satisfying the above, resulting in excellent noise suppression performance and minimal speech removal and distortion in an embodiment.
The DOMA, in various embodiments, can be used with the Pathfinder system as the adaptive filter system or noise removal. The Pathfinder system, available from AliphCom, San Francisco, Calif., is described in detail in other patents and patent applications referenced herein. Alternatively, any adaptive filter or noise removal algorithm can be used with the DOMA in one or more various alternative embodiments or configurations.
When the DOMA is used with the Pathfinder system, the Pathfinder system generally provides adaptive noise cancellation by combining the two microphone signals (e.g., Mic1, Mic2) by filtering and summing in the time domain. The adaptive filter generally uses the signal received from a first microphone of the DOMA to remove noise from the speech received from at least one other microphone of the DOMA, which relies on a slowly varying linear transfer function between the two microphones for sources of noise. Following processing of the two channels of the DOMA, an output signal is generated in which the noise content is attenuated with respect to the speech content, as described in detail below.
As an example,
In this example system 2500, the output of physical microphone 2301 is coupled to processing component 2502 that includes a first processing path that includes application of a first delay z11 and a first gain A11 and a second processing path that includes application of a second delay z12 and a second gain A12. The output of physical microphone 2302 is coupled to a third processing path of the processing component 2502 that includes application of a third delay z21 and a third gain A21 and a fourth processing path that includes application of a fourth delay z22 and a fourth gain A22. The output of the first and third processing paths is summed to form virtual microphone V1, and the output of the second and fourth processing paths is summed to form virtual microphone V2.
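A sketch of this four-path combination in the frequency domain (the path naming follows the description above; the helper is illustrative, not the patented implementation, and the delays are taken in samples):

```python
import numpy as np

def form_virtual_mics(O1, O2, fs, delays, gains):
    """Combine two omni microphone spectra (np.fft.rfft outputs of
    even-length frames) into V1 and V2 via per-path delay and gain.

    delays = (d11, d21, d12, d22) in samples; gains = (a11, a21, a12, a22).
    """
    n = 2 * (len(O1) - 1)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    shift = lambda X, d: X * np.exp(-2j * np.pi * f * d / fs)
    d11, d21, d12, d22 = delays
    a11, a21, a12, a22 = gains
    V1 = a11 * shift(O1, d11) + a21 * shift(O2, d21)  # first and third paths
    V2 = a12 * shift(O1, d12) + a22 * shift(O2, d22)  # second and fourth paths
    return V1, V2
```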
As described in detail below, varying the magnitude and sign of the delays and gains of the processing paths allows a wide variety of virtual microphones (VMs), also referred to herein as virtual directional microphones, to be realized. While the processing component 2502 described in this example includes four processing paths generating two virtual microphones or microphone signals, the embodiment is not so limited. For example,
The DOMA of an embodiment can be coupled or connected to one or more remote devices. In a system configuration, the DOMA outputs signals to the remote devices. The remote devices include, but are not limited to, at least one of cellular telephones, satellite telephones, portable telephones, wireline telephones, Internet telephones, wireless transceivers, wireless communication radios, personal digital assistants (PDAs), personal computers (PCs), headset devices, head-worn devices, and earpieces.
Furthermore, the DOMA of an embodiment can be a component or subsystem integrated with a host device. In this system configuration, the DOMA outputs signals to components or subsystems of the host device. The host device includes, but is not limited to, at least one of cellular telephones, satellite telephones, portable telephones, wireline telephones, Internet telephones, wireless transceivers, wireless communication radios, personal digital assistants (PDAs), personal computers (PCs), headset devices, head-worn devices, and earpieces.
As an example,
The construction of VMs for the adaptive noise suppression system of an embodiment includes substantially similar noise response in V1 and V2. Substantially similar noise response as used herein means that H1(z) is simple to model and will not change much during speech, satisfying conditions R2 and R4 described above and allowing strong denoising and minimized bleedthrough.
The construction of VMs for the adaptive noise suppression system of an embodiment includes relatively small speech response for V2. The relatively small speech response for V2 means that H2(z)≈0, which will satisfy conditions R3 and R5 described above.
The construction of VMs for the adaptive noise suppression system of an embodiment further includes sufficient speech response for V1 so that the cleaned speech will have significantly higher SNR than the original speech captured by O1.
The description that follows assumes that the responses of the omnidirectional microphones O1 and O2 to an identical acoustic source have been normalized so that they have exactly the same response (amplitude and phase) to that source. This can be accomplished using standard microphone array methods (such as frequency-based calibration) well known to those versed in the art.
Referring to the condition that construction of VMs for the adaptive noise suppression system of an embodiment includes relatively small speech response for V2, it is seen that for discrete systems V2(z) can be represented as:
V2(z)=O2(z)−z−γβO1(z)
with
β=d1/d2
and γ the delay corresponding to the difference in speech travel time to the two microphones, γ=(d2−d1)/c.
The distances d1 and d2 are the distances from O1 and O2 to the speech source (see
It is important to note that the β above is not the conventional β used to denote the mixing of VMs in adaptive beamforming; it is a physical variable of the system that depends on the intra-microphone distance d0 (which is fixed) and the distance d, and angle θ, which can vary. As shown below, for properly calibrated microphones, it is not necessary for the system to be programmed with the exact β of the array. Errors of approximately 10-15% in the actual β (i.e. the β used by the algorithm is not the β of the physical array) have been used with very little degradation in quality. The algorithmic value of β may be calculated and set for a particular user or may be calculated adaptively during speech production when little or no noise is present. However, adaptation during use is not required for nominal performance.
The above formulation for V2(z) has a null at the speech location and will therefore exhibit minimal response to the speech. This is shown in
V1(z) can be formulated using the general form for V1(z):
V1(z)=αAO1(z)·z−dA−αBO2(z)·z−dB
with V2(z) as above:
V2(z)=O2(z)−z−γβO1(z)
and, since for noise in the forward direction
O2N(z)=O1N(z)·z−γ,
then
V2N(z)=O1N(z)·z−γ−z−γβO1N(z)
V2N(z)=(1−β)(O1N(z)·z−γ)
If this is then set equal to V1(z) above, the result is
V1N(z)=αAO1N(z)·z−dA−αBO1N(z)·z−γ·z−dB
thus we may set
dA=γ
dB=0
αA=1
αB=β
to get
V1(z)=O1(z)·z−γ−βO2(z)
The definitions for V1 and V2 above mean that for noise H1(z) is:
H1(z)=V1(z)/V2(z)=(O1(z)·z−γ−βO2(z))/(O2(z)−z−γβO1(z))
which, if the amplitude noise responses are about the same, has the form of an all-pass filter. This has the advantage of being easily and accurately modeled, especially in magnitude response, satisfying R2. This formulation assures that the noise response will be as similar as possible and that the speech response will be proportional to (1−β²). Since β is the ratio of the distances from O1 and O2 to the speech source, it is affected by the size of the array and the distance from the array to the speech source.
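Pulling the pieces together, a sketch of the fixed-filter virtual microphones with β=d1/d2 and γ the speech travel-time difference (a frequency-domain implementation so the fractional-sample delay is exact; the rates and geometry are example values):

```python
import numpy as np

def doma_v1_v2(o1, o2, d1, d2, fs=8000, c=343.0):
    """Fixed-filter virtual microphones V1 = O1*z^-gamma - beta*O2 and
    V2 = O2 - beta*O1*z^-gamma from time-domain frames o1, o2."""
    beta = d1 / d2
    gamma = (d2 - d1) / c * fs                       # delay in samples
    O1, O2 = np.fft.rfft(o1), np.fft.rfft(o2)
    w = 2 * np.pi * np.fft.rfftfreq(len(o1), d=1.0)  # rad/sample
    z_g = np.exp(-1j * w * gamma)                    # z^-gamma
    V1 = O1 * z_g - beta * O2
    V2 = O2 - beta * O1 * z_g
    return np.fft.irfft(V1, len(o1)), np.fft.irfft(V2, len(o1))
```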
The response of V1 to speech is shown in
It should be noted that
The speech null of V2 means that the VAD signal is no longer a critical component. The VAD's purpose was to ensure that the system would not train on speech and then subsequently remove it, resulting in speech distortion. If, however, V2 contains no speech, the adaptive system cannot train on the speech and cannot remove it. As a result, the system can denoise all the time without fear of devoicing, and the resulting clean audio can then be used to generate a VAD signal for use in subsequent single-channel noise suppression algorithms such as spectral subtraction. In addition, constraints on the absolute value of H1(z) (i.e. restricting it to absolute values less than two) can keep the system from fully training on speech even if it is detected. In reality, though, speech can be present due to a mis-located V2 null and/or echoes or other phenomena, and a VAD sensor or other acoustic-only VAD is recommended to minimize speech distortion.
Depending on the application, β and γ may be fixed in the noise suppression algorithm or they can be estimated when the algorithm indicates that speech production is taking place in the presence of little or no noise. In either case, there may be an error in the estimate of the actual β and γ of the system. The following description examines these errors and their effect on the performance of the system. As above, “good performance” of the system indicates that there is sufficient denoising and minimal devoicing.
The effect of an incorrect β and γ on the response of V1 and V2 can be seen by examining the definitions above:
V1(z)=O1(z)·z−γT−βTO2(z)
V2(z)=O2(z)−z−γTβTO1(z)
where βT and γT denote the theoretical estimates of β and γ used in the noise suppression algorithm. In reality, the speech response of O2 is
O2S(z)=βRO1S(z)·z−γR
where βR and γR denote the real β and γ of the physical system. The differences between the theoretical and actual values of β and γ can be due to mis-location of the speech source (it is not where it is assumed to be) and/or a change in air temperature (which changes the speed of sound). Inserting the actual response of O2 for speech into the above equations for V1 and V2 yields
V1S(z)=O1S(z)[z−γT−βTβRz−γR]
V2S(z)=O1S(z)[βRz−γR−βTz−γT]
If the difference in phase is represented by
γR=γT+γD
and the difference in amplitude as
βR=BβT
then
V1S(z)=O1S(z)z−γT[1−BβT²z−γD]
V2S(z)=βTO1S(z)z−γT[Bz−γD−1] Eq. 5
The speech cancellation in V2 (which directly affects the degree of devoicing) and the speech response of V1 will be dependent on both B and D. An examination of the case where D=0 follows.
In
The B factor can be non-unity for a variety of reasons. Either the distance to the speech source or the relative orientation of the array axis and the speech source or both can be different than expected. If both distance and angle mismatches are included for B, then
where again the T subscripts indicate the theorized values and R the actual values. In
An examination follows of the case where B is unity but D is nonzero. This can happen if the speech source is not where it is thought to be or if the speed of sound is different from what it is believed to be. From Equation 5 above, it can be seen that the factor that weakens the speech null in V2 for speech is
N(z)=Bz−γD−1
or in continuous s domain
N(s)=Be−Ds−1
Since γ is the time difference between arrival of speech at V1 compared to V2, a nonzero D can be caused by errors in the estimation of the angular location of the speech source with respect to the axis of the array and/or by temperature changes. Examining the temperature sensitivity first, the speed of sound varies with temperature as
c=331.3+(0.606T) m/s
where T is degrees Celsius. As the temperature decreases, the speed of sound also decreases. Set 20 C as a design temperature and assume a maximum expected temperature range of −40 C to +60 C (−40 F to 140 F). The design speed of sound at 20 C is 343 m/s, the slowest speed of sound will be 307 m/s at −40 C, and the fastest will be 362 m/s at 60 C. Setting the array length (2d0) to 21 mm, for speech sources on the axis of the array the difference in travel time for the largest change in the speed of sound is
Δt=0.021/307−0.021/343≈7.2×10−6 sec
or approximately 7 microseconds. The response for N(s) given B=1 and D=7.2 μsec is shown in
If B is not unity, the robustness of the system is reduced since the effect from non-unity B is cumulative with that of non-zero D.
Another way in which D can be non-zero is when the speech source is not where it is believed to be—specifically, the angle from the axis of the array to the speech source is incorrect. The distance to the source may be incorrect as well, but that introduces an error in B, not D.
Referring to
The V2 speech cancellation response for θ1=0 degrees and θ2=30 degrees and assuming that B=1 is shown in
The description above has assumed that the microphones O1 and O2 were calibrated so that their response to a source located the same distance away was identical for both amplitude and phase. This is not always feasible, so a more practical calibration procedure is presented below. It is not as accurate, but is much simpler to implement. Begin by defining a filter α(z) such that:
O1C(z)=α(z)O2C(z)
where the “C” subscript indicates the use of a known calibration source. The simplest one to use is the speech of the user. Then
O1S(z)=α(z)O2S(z)
The microphone definitions are now:
V1(z)=O1(z)·z−γ−β(z)α(z)O2(z)
V2(z)=α(z)O2(z)−z−γβ(z)O1(z)
The β of the system should be fixed and as close to the real value as possible. In practice, the system is not sensitive to changes in β, and errors of approximately ±5% are easily tolerated. During times when the user is producing speech but there is little or no noise, the system can train α(z) to remove as much speech as possible. This is accomplished by:
A simple adaptive filter can be used for α(z) so that only the relationship between the microphones is well modeled. The system of an embodiment trains only when speech is being produced by the user. A sensor like the SSM is invaluable in determining when speech is being produced in the absence of noise. If the speech source is fixed in position and will not vary significantly during use (such as when the array is on an earpiece), the adaptation should be infrequent and slow to update in order to minimize any errors introduced by noise present during training.
The above formulation works very well because the noise (far-field) responses of V1 and V2 are very similar while the speech (near-field) responses are very different. However, the formulations for V1 and V2 can be varied and still result in good performance of the system as a whole. If the definitions for V1 and V2 are taken from above and new variables B1 and B2 are inserted, the result is:
V1(z)=O1(z)·z−γT−B1βTO2(z)
V2(z)=O2(z)−z−γTB2βTO1(z)
where B1 and B2 are both positive numbers or zero. If B1 and B2 are set equal to unity, the optimal system results as described above. If B1 is allowed to vary from unity, the response of V1 is affected. An examination of the case where B2 is left at 1 and B1 is decreased follows. As B1 drops to approximately zero, V1 becomes less and less directional, until it becomes a simple omnidirectional microphone when B1=0. Since B2=1, a speech null remains in V2, so very different speech responses remain for V1 and V2. However, the noise responses are much less similar, so denoising will not be as effective. Practically, though, the system still performs well. B1 can also be increased from unity and once again the system will still denoise well, just not as well as with B1=1.
If B2 is allowed to vary, the speech null in V2 is affected. As long as the speech null is still sufficiently deep, the system will still perform well. Practically, values down to approximately B2=0.6 have shown sufficient performance, but it is recommended to set B2 close to unity for optimal performance.
Similarly, variables ε and Δ may be introduced so that:
V1(z)=(ε−β)O2N(z)+(1+Δ)O1N(z)z−γ
V2(z)=(1+Δ)O2N(z)+(ε−β)O1N(z)z−γ
This formulation also allows the virtual microphone responses to be varied but retains the all-pass characteristic of H1(z).
In conclusion, the system is flexible enough to operate well at a variety of B1 values, but B2 values should be close to unity to limit devoicing for best performance. Experimental results for a 2d0=19 mm array using a linear β of 0.83 and B1=B2=1 on a Bruel and Kjaer Head and Torso Simulator (HATS) in a very loud (~85 dBA) music/speech noise environment are shown in
Embodiments described herein include a method comprising: forming a first virtual microphone by combining a first signal of a first physical microphone and a second signal of a second physical microphone; forming a filter that describes a relationship for speech between the first physical microphone and the second physical microphone; forming a second virtual microphone by applying the filter to the first signal to generate a first intermediate signal, and summing the first intermediate signal and the second signal; generating an energy ratio of energies of the first virtual microphone and the second virtual microphone; and detecting acoustic voice activity of a speaker when the energy ratio is greater than a threshold value.
The first virtual microphone and the second virtual microphone of an embodiment are distinct virtual directional microphones.
The first virtual microphone and the second virtual microphone of an embodiment have approximately similar responses to noise.
The first virtual microphone and the second virtual microphone of an embodiment have approximately dissimilar responses to speech.
The method of an embodiment comprises applying a calibration to at least one of the first signal and the second signal.
The calibration of an embodiment compensates a second response of the second physical microphone so that the second response is equivalent to a first response of the first physical microphone.
The method of an embodiment comprises applying a delay to the first intermediate signal.
The delay of an embodiment is proportional to a time difference between arrival of the speech at the second physical microphone and arrival of the speech at the first physical microphone.
The forming of the first virtual microphone of an embodiment comprises applying the filter to the second signal.
The forming of the first virtual microphone of an embodiment comprises applying the calibration to the second signal.
The forming of the first virtual microphone of an embodiment comprises applying the delay to the first signal.
The forming of the first virtual microphone by the combining of an embodiment comprises subtracting the second signal from the first signal.
The filter of an embodiment is an adaptive filter.
The method of an embodiment comprises adapting the filter to minimize a second virtual microphone output when only speech is being received by the first physical microphone and the second physical microphone.
The adapting of an embodiment comprises applying a least-mean squares process.
The method of an embodiment comprises generating coefficients of the filter during a period when only speech is being received by the first physical microphone and the second physical microphone.
The forming of the filter of an embodiment comprises generating a first quantity by applying a calibration to the second signal. The forming of the filter of an embodiment comprises generating a second quantity by applying the delay to the first signal. The forming of the filter of an embodiment comprises forming the filter as a ratio of the first quantity to the second quantity.
The generating of the energy ratio of an embodiment comprises generating the energy ratio for a frequency band.
The generating of the energy ratio of an embodiment comprises generating the energy ratio for a frequency subband.
The frequency subband of an embodiment includes frequencies higher than approximately 200 Hertz (Hz).
The frequency subband of an embodiment includes frequencies in a range from approximately 250 Hz to 1250 Hz.
The frequency subband of an embodiment includes frequencies in a range from approximately 200 Hz to 3000 Hz.
The filter of an embodiment is a static filter.
The forming of the filter of an embodiment comprises determining a first distance as distance between the first physical microphone and a mouth of the speaker. The forming of the filter of an embodiment comprises determining a second distance as distance between the second physical microphone and the mouth. The forming of the filter of an embodiment comprises forming a ratio of the first distance to the second distance.
The method of an embodiment comprises generating a vector of the energy ratio versus time.
The first and second physical microphones of an embodiment are omnidirectional microphones.
The method of an embodiment comprises positioning the first physical microphone and the second physical microphone along an axis and separating the first physical microphone and the second physical microphone by a first distance.
A midpoint of the axis of an embodiment is a second distance from a mouth of the speaker, wherein the mouth is located in a direction defined by an angle relative to the midpoint.
Embodiments described herein include a method comprising: forming a first virtual microphone; forming a filter by generating a first quantity by applying a calibration to a second signal of a second physical microphone, generating a second quantity by applying the delay to a first signal of a first physical microphone, and forming the filter as a ratio of the first quantity to the second quantity; forming a second virtual microphone by applying the filter to the first signal to generate a first intermediate signal, and summing the first intermediate signal and the second signal; and generating a ratio of energies of the first virtual microphone and the second virtual microphone and detecting acoustic voice activity using the ratio.
The first virtual microphone and the second virtual microphone of an embodiment have approximately similar responses to noise and approximately dissimilar responses to speech.
The method of an embodiment comprises applying a calibration to at least one of the first signal and the second signal, wherein the calibration compensates a second response of the second physical microphone so that the second response is equivalent to a first response of the first physical microphone.
The method of an embodiment comprises applying a delay to the first intermediate signal, wherein the delay is proportional to a time difference between arrival of the speech at the second physical microphone and arrival of the speech at the first physical microphone.
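A sketch of these two compensation steps, assuming the calibration is available as an FIR response and the delay is a whole number of samples; both assumptions are ours for illustration.

```python
import numpy as np

def calibrate_and_delay(o2, calib_fir, intermediate, delay_samples):
    # Calibration: filter the second mic's signal so its response
    # matches the first mic's (response equalization as above).
    o2_cal = np.convolve(o2, calib_fir)[: len(o2)]
    # Delay: shift the intermediate signal by the speech
    # time-difference-of-arrival, expressed in whole samples.
    delayed = np.concatenate((np.zeros(delay_samples), intermediate))[: len(intermediate)]
    return o2_cal, delayed
```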
The forming of the first virtual microphone of an embodiment comprises applying the filter to the second signal.
The forming of the first virtual microphone of an embodiment comprises applying the calibration to the second signal.
The forming of the first virtual microphone of an embodiment comprises applying the delay to the first signal.
The forming of the first virtual microphone of an embodiment comprises subtracting the second signal from the first signal.
The filter of an embodiment is an adaptive filter.
The method of an embodiment comprises adapting the filter to minimize a second virtual microphone output when only speech is being received by the first physical microphone and the second physical microphone.
The adapting of an embodiment comprises applying a least-mean squares process.
The method of an embodiment comprises generating coefficients of the filter during a period when only speech is being received by the first physical microphone and the second physical microphone.
The generating of the ratio of an embodiment comprises generating the ratio for a frequency band.
The generating of the ratio of an embodiment comprises generating the ratio for a frequency subband.
The method of an embodiment comprises generating a vector of the ratio versus time.
Embodiments described herein include a method comprising: forming a first virtual microphone by generating a first combination of a first signal and a second signal, wherein the first signal is received from a first physical microphone and the second signal is received from a second physical microphone; forming a filter by generating a first quantity by applying a calibration to at least one of the first signal and the second signal, generating a second quantity by applying a delay to the first signal, and forming the filter as a ratio of the first quantity to the second quantity; forming a second virtual microphone by applying the filter to the first signal to generate a first intermediate signal and summing the first intermediate signal and the second signal; and determining a presence of acoustic voice activity of a speaker when an energy ratio of the first virtual microphone and the second virtual microphone is greater than a threshold value.
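Tying the pieces together, the following is a compact end-to-end sketch of the thresholded detection recited above, using the subtractive combination for the first virtual microphone and the filter-plus-sum construction for the second; the inputs are assumed to be NumPy arrays, the 200-sample frame follows the earlier window example, and the threshold value is an illustrative assumption.

```python
import numpy as np

def detect_voice_activity(o1, o2, filt, frame=200, threshold=2.0):
    # First virtual microphone: subtract the second signal from the first.
    v1 = o1 - o2
    # Second virtual microphone: filter the first signal, sum with the second.
    v2 = np.convolve(o1, filt)[: len(o1)] + o2
    # Frame-wise energy ratio against a threshold yields the VAD decision.
    vad = []
    for start in range(0, len(o1) - frame + 1, frame):
        e1 = np.sum(v1[start:start + frame] ** 2)
        e2 = np.sum(v2[start:start + frame] ** 2)
        vad.append(e1 / (e2 + 1e-12) > threshold)
    return np.array(vad)
```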
Embodiments described herein include an acoustic voice activity detection system comprising: a first virtual microphone comprising a first combination of a first signal and a second signal, wherein the first signal is received from a first physical microphone and the second signal is received from a second physical microphone; a filter, wherein the filter is formed by generating a first quantity by applying a calibration to at least one of the first signal and the second signal, generating a second quantity by applying a delay to the first signal, and forming the filter as a ratio of the first quantity to the second quantity; and a second virtual microphone formed by applying the filter to the first signal to generate a first intermediate signal and summing the first intermediate signal and the second signal, wherein acoustic voice activity of a speaker is determined to be present when an energy ratio of the first virtual microphone and the second virtual microphone is greater than a threshold value.
The first virtual microphone and the second virtual microphone of an embodiment have approximately similar responses to noise and approximately dissimilar responses to speech.
A calibration is applied to the second signal of an embodiment, wherein the calibration compensates a second response of the second physical microphone so that the second response is equivalent to a first response of the first physical microphone.
The delay is applied to the first intermediate signal of an embodiment, wherein the delay is proportional to a time difference between arrival of the speech at the second physical microphone and arrival of the speech at the first physical microphone.
The first virtual microphone of an embodiment is formed by applying the filter to the second signal.
The first virtual microphone of an embodiment is formed by applying the calibration to the second signal.
The first virtual microphone of an embodiment is formed by applying the delay to the first signal.
The first virtual microphone of an embodiment is formed by subtracting the second signal from the first signal.
The filter of an embodiment is an adaptive filter.
The filter of an embodiment is adapted to minimize a second virtual microphone output when only speech is being received by the first physical microphone and the second physical microphone.
Coefficients of the filter of an embodiment are generated during a period when only speech is being received by the first physical microphone and the second physical microphone.
The energy ratio of an embodiment comprises an energy ratio for a frequency band.
The energy ratio of an embodiment comprises an energy ratio for a frequency subband.
Embodiments described herein include a device comprising: a first physical microphone generating a first signal; a second physical microphone generating a second signal; and a processing component coupled to the first physical microphone and the second physical microphone, the processing component forming a first virtual microphone, the processing component forming a filter that describes a relationship for speech between the first physical microphone and the second physical microphone, the processing component forming a second virtual microphone by applying the filter to the first signal to generate a first intermediate signal, and summing the first intermediate signal and the second signal, the processing component detecting acoustic voice activity of a speaker when an energy ratio of the first virtual microphone and the second virtual microphone is greater than a threshold value.
The device of an embodiment comprises applying a calibration to at least one of the first signal and the second signal.
The calibration of an embodiment compensates a second response of the second physical microphone so that the second response is equivalent to a first response of the first physical microphone.
The device of an embodiment comprises applying a delay to the first intermediate signal.
The delay of an embodiment is proportional to a time difference between arrival of the speech at the second physical microphone and arrival of the speech at the first physical microphone.
The forming of the first virtual microphone of an embodiment comprises applying the filter to the second signal.
The forming of the first virtual microphone of an embodiment comprises applying the calibration to the second signal.
The forming of the first virtual microphone of an embodiment comprises applying the delay to the first signal.
The forming of the first virtual microphone of an embodiment comprises subtracting the second signal from the first signal.
The filter of an embodiment is an adaptive filter.
The device of an embodiment comprises adapting the filter to minimize a second virtual microphone output when only speech is being received by the first physical microphone and the second physical microphone.
The adapting of an embodiment comprises applying a least-mean squares process.
The device of an embodiment comprises generating coefficients of the filter during a period when only speech is being received by the first physical microphone and the second physical microphone.
The forming of the filter of an embodiment comprises generating a first quantity by applying a calibration to the second signal. The forming of the filter of an embodiment comprises generating a second quantity by applying the delay to the first signal. The forming of the filter of an embodiment comprises forming the filter as a ratio of the first quantity to the second quantity.
The generating of the energy ratio of an embodiment comprises generating the energy ratio for a frequency band.
The generating of the energy ratio of an embodiment comprises generating the energy ratio for a frequency subband.
The frequency subband of an embodiment includes frequencies higher than approximately 200 Hertz (Hz).
The frequency subband of an embodiment includes frequencies in a range from approximately 250 Hz to 1250 Hz.
The frequency subband of an embodiment includes frequencies in a range from approximately 200 Hz to 3000 Hz.
The filter of an embodiment is a static filter.
The forming of the filter of an embodiment comprises determining a first distance as distance between the first physical microphone and a mouth of the speaker. The forming of the filter of an embodiment comprises determining a second distance as distance between the second physical microphone and the mouth. The forming of the filter of an embodiment comprises forming a ratio of the first distance to the second distance.
The device of an embodiment comprises generating a vector of the energy ratio versus time.
The first virtual microphone and the second virtual microphone of an embodiment are distinct virtual directional microphones.
The first virtual microphone and the second virtual microphone of an embodiment have approximately similar responses to noise.
The first virtual microphone and the second virtual microphone of an embodiment have approximately dissimilar responses to speech.
The first and second physical microphones of an embodiment are omnidirectional microphones.
The device of an embodiment comprises positioning the first physical microphone and the second physical microphone along an axis and separating the first physical microphone and the second physical microphone by a first distance.
A midpoint of the axis of an embodiment is a second distance from a mouth of the speaker, wherein the mouth is located in a direction defined by an angle relative to the midpoint.
Embodiments described herein include a device comprising: a headset including at least one loudspeaker, wherein the headset attaches to a region of a human head; a microphone array connected to the headset, the microphone array including a first physical microphone outputting a first signal and a second physical microphone outputting a second signal; and a processing component coupled to the first physical microphone and the second physical microphone, the processing component forming a first virtual microphone, the processing component forming a filter that describes a relationship for speech between the first physical microphone and the second physical microphone, the processing component forming a second virtual microphone by applying the filter to the first signal to generate a first intermediate signal, and summing the first intermediate signal and the second signal, the processing component detecting acoustic voice activity of a speaker when an energy ratio of the first virtual microphone and the second virtual microphone is greater than a threshold value.
The AVAD can be a component of a single system, multiple systems, and/or geographically separate systems. The AVAD can also be a subcomponent or subsystem of a single system, multiple systems, and/or geographically separate systems. The AVAD can be coupled to one or more other components (not shown) of a host system or a system coupled to the host system.
One or more components of the AVAD and/or a corresponding system or application to which the AVAD is coupled or connected includes and/or runs under and/or in association with a processing system. The processing system includes any collection of processor-based devices or computing devices operating together, or components of processing systems or devices, as is known in the art. For example, the processing system can include one or more of a portable computer, portable communication device operating in a communication network, and/or a network server. The portable computer can be any of a number and/or combination of devices selected from among personal computers, cellular telephones, personal digital assistants, portable computing devices, and portable communication devices, but is not so limited. The processing system can include components within a larger computer system.
Aspects of the AVAD and corresponding systems and methods described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits (ASICs). Some other possibilities for implementing aspects of the AVAD and corresponding systems and methods include: microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM)), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the AVAD and corresponding systems and methods may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies like complementary metal-oxide semiconductor (CMOS), bipolar technologies like emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc.
It should be noted that any system, method, and/or other components disclosed herein may be described using computer aided design tools and expressed (or represented) as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the Internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, etc.). When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of the above described components may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.
The above description of embodiments of the AVAD and corresponding systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise forms disclosed. While specific embodiments of, and examples for, the AVAD and corresponding systems and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems and methods, as those skilled in the relevant art will recognize. The teachings of the AVAD and corresponding systems and methods provided herein can be applied to other systems and methods, not only for the systems and methods described above.
The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the AVAD and corresponding systems and methods in light of the above detailed description.
In general, in the following claims, the terms used should not be construed to limit the AVAD and corresponding systems and methods to the specific embodiments disclosed in the specification and the claims, but should be construed to include all systems that operate under the claims. Accordingly, the AVAD and corresponding systems and methods are not limited by the disclosure, but instead the scope is to be determined entirely by the claims.
While certain aspects of the AVAD and corresponding systems and methods are presented below in certain claim forms, the inventors contemplate the various aspects of the AVAD and corresponding systems and methods in any number of claim forms. Accordingly, the inventors reserve the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the AVAD and corresponding systems and methods.
This application is a continuation of U.S. patent application Ser. No. 12/606,146, filed Oct. 26, 2009, which is a continuation-in-part of U.S. patent application Ser. No. 11/805,987, filed May 25, 2007, which is a continuation of U.S. patent application Ser. No. 10/159,770, filed May 30, 2002, and which claims the benefit of U.S. Provisional Application No. 60/368,209, filed Mar. 27, 2002; this application is also a continuation-in-part of U.S. patent application Ser. No. 12/139,333, filed Jun. 13, 2008, which claims the benefit of U.S. Provisional Application No. 61/045,377, filed Apr. 16, 2008; this application also claims the benefit of U.S. Provisional Application No. 61/108,426, filed Oct. 24, 2008, all of which are herein incorporated by reference for all purposes.
Provisional application data:

| Number | Date | Country |
| --- | --- | --- |
| 61/108,426 | Oct. 2008 | US |

Continuation (parent/child) data:

| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 13/669,375 | Nov. 2012 | US |
| Child | 18/130,654 | | US |
| Parent | 12/606,146 | Oct. 2009 | US |
| Child | 13/669,375 | | US |