The present embodiments relate to detection of voice activity in noisy environments.
Robust voice activity detection in noisy sound environments is a difficult problem when using a small device mounted in the ear. Systems that rely on fixed detection thresholds often suffer from false positives and false negatives.
A method of adjusting a gain on a voice operated control system can include receiving a first microphone signal, receiving a second microphone signal, updating a slow time weighted ratio of the filtered first and second signals, and updating a fast time weighted ratio of the filtered first and second signals. The method can further include calculating an absolute difference between the fast time weighted ratio and the slow time weighted ratio, comparing the absolute difference with a threshold, and increasing the gain when the absolute difference is greater than the threshold. In some embodiments the threshold can be fixed. In some embodiments the method can further include band limiting or band pass filtering the first microphone signal to provide a filtered first signal, band limiting or band pass filtering the second microphone signal to provide a filtered second signal, calculating a power estimate of the filtered first signal including a fast time weighted average and a slow time weighted average of the filtered first signal, and calculating a power estimate of the filtered second signal including a fast time weighted average and a slow time weighted average of the filtered second signal. In some embodiments the threshold is dependent on the slow time weighted average. In some embodiments, the threshold value is set to a time averaged value of the absolute difference and in some embodiments the threshold value is set to the time averaged value of the absolute difference using a leaky integrator used for time smoothing. The step of band limiting or band pass filtering can use a weighted fast Fourier transform operation. In some embodiments, the method can further include determining a current voice activity status based on the comparison step. In some embodiments, the method can further include determining a current voice activity status using Singular Value Decomposition, a neural net system, or a bounded probability value.
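The method steps above can be sketched as a per-update function. This is a minimal illustrative sketch, not the claimed implementation: the function name, state layout, smoothing coefficients, threshold value, and gain ramp are all assumptions.

```python
# Sketch of the gain-adjustment method: track fast and slow leaky-integrator
# averages of two filtered microphone power estimates, form the fast and slow
# power ratios, and raise the gain when the ratios diverge past a threshold.
# All names and constants here are illustrative assumptions.

def update_vox_gain(p1, p2, state, a=0.9, b=0.99, threshold=0.1):
    """One update step.

    p1, p2 : power estimates of the filtered first/second microphone signals
    state  : dict holding the running fast/slow averages and the gain
    a, b   : fast and slow smoothing coefficients
    """
    # Leaky-integrator averages of each power estimate.
    state["m1_fast"] = a * state["m1_fast"] + (1 - a) * p1
    state["m2_fast"] = a * state["m2_fast"] + (1 - a) * p2
    state["m1_slow"] = b * state["m1_slow"] + (1 - b) * p1
    state["m2_slow"] = b * state["m2_slow"] + (1 - b) * p2

    # Fast and slow ratios of the second-microphone power to the first.
    ratio_fast = state["m2_fast"] / state["m1_fast"]
    ratio_slow = state["m2_slow"] / state["m1_slow"]

    # Compare the absolute ratio difference against the threshold.
    if abs(ratio_fast - ratio_slow) > threshold:
        state["gain"] = min(1.0, state["gain"] + 0.1)  # ramp toward unity
    return state["gain"]
```

In practice the averages would be seeded with small nonzero values so the ratios are defined before the first update.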
The embodiments can also include an electronic device for adjusting a gain on a voice operated control system which can include one or more processors and a memory having computer instructions. The instructions, when executed by the one or more processors, cause the one or more processors to perform the operations of receiving a first microphone signal, receiving a second microphone signal, updating a slow time weighted ratio of the filtered first and second signals, and updating a fast time weighted ratio of the filtered first and second signals. The one or more processors can further perform the operations of calculating an absolute difference between the fast time weighted ratio and the slow time weighted ratio, comparing the absolute difference with a threshold, and increasing the gain when the absolute difference is greater than the threshold. In some embodiments, adjusting or increasing the gain involves adjusting a gain of an overall system or of a total output. In some embodiments, adjusting the gain involves adjusting the gain from a first microphone, from a second microphone, or from both. In some embodiments, adjusting the gain involves adjusting the gain at the output of a VAD or comparator or other output. In some embodiments, adjusting the gain can involve any combination of the gain adjustments mentioned above.
In some embodiments, the electronic device can further include the memory having instructions that, when executed by the one or more processors, cause the one or more processors to perform the operations of band limiting or band pass filtering the first microphone signal to provide a filtered first signal, band limiting or band pass filtering the second microphone signal to provide a filtered second signal, calculating a power estimate of the filtered first signal including a fast time weighted average and a slow time weighted average of the filtered first signal, and calculating a power estimate of the filtered second signal including a fast time weighted average and a slow time weighted average of the filtered second signal. In some embodiments the threshold is fixed or the threshold is dependent on the slow time weighted average. In some embodiments, the first microphone signal is received by an ambient signal microphone and the second microphone signal is received by an ear canal microphone. The ambient signal microphone and the ear canal microphone can be part of an earphone device having a sound isolating barrier or a partially sound isolating barrier to isolate the ear canal microphone from an ambient environment. The earphone device can be any number of devices including, but not limited to, a headset, earpiece, headphone, ear bud or other type of earphone device. In some embodiments, the sound isolating barrier or partially sound isolating barrier is an inflatable balloon or foam plug. In some embodiments, the memory further includes instructions causing the operation of determining a current voice activity status based on the comparison step. In some embodiments, the memory further includes instructions causing the operation of determining a current voice activity status using Singular Value Decomposition, neural net systems, or a bounded probability value.
In some embodiments, the first microphone signal is optionally processed using an analog or a digital band-pass filter and in some embodiments the second microphone signal is optionally processed using an analog or a digital band-pass filter. In some embodiments, at least one characteristic of the first or second microphone signals includes a short-term power estimate.
The invention may be understood from the following detailed description when read in connection with the accompanying drawing. It is emphasized, according to common practice, that various features of the drawings may not be drawn to scale. On the contrary, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. Moreover, in the drawing, common numerical references are used to represent like features.
A new method and system is presented to robustly determine voice activity using typically two microphones mounted in a small earpiece. The determined voice activity status can be used to control the gain on a voice operated control system to gate the level of a signal directed to a second voice receiving system. This voice receiving system can be a voice communication system (e.g., a radio or telephone system), a voice recording system, a speech to text system, or a voice machine-control system. The gain of the voice operated control system is typically set to zero when no voice activity is detected, and set to unity otherwise. The overall data rate in a voice communication system can therefore be adjusted, and large data rate reductions are possible, thereby increasing the number of voice communication channels and/or increasing the voice quality for each voice communication channel. The voice activity status can also be used to adjust the power used in a wireless voice communication system, thereby extending the battery life of the system.
P_1(t)=W*FFT(M_1(t))
P_2(t)=W*FFT(M_2(t))
Where
P_1(t) is the weighted power estimate of the first microphone signal at time t.
P_2(t) is the weighted power estimate of the second microphone signal at time t.
W is a frequency weighting vector.
FFT( ) is a Fast Fourier Transform operation.
M_1(t) is the signal from the first microphone at time t.
M_2(t) is the signal from the second microphone at time t.
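One plausible reading of P(t) = W*FFT(M(t)) is a frequency-weighted sum over the magnitude spectrum of one signal frame; with W zero outside the voice band, this also acts as the band-limiting step. The frame length and weighting vector below are illustrative assumptions, not values from the specification.

```python
import numpy as np

# Frequency-weighted power estimate of one time-domain frame, as a sketch of
# P(t) = W * FFT(M(t)). The weighting vector W (e.g. emphasizing the voice
# band) and the frame length are illustrative assumptions.

def weighted_power(frame, w):
    """Apply the weighting vector W to the magnitude spectrum of a frame."""
    spectrum = np.abs(np.fft.rfft(frame))  # magnitude spectrum of the frame
    return float(np.dot(w, spectrum))      # weighted sum across frequency bins
```

For a length-8 frame, `np.fft.rfft` yields 5 bins, so W would have 5 entries; selecting only the DC bin of an all-ones frame, for example, returns the frame length.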
A fast-time weighted average of the two band pass filtered power estimates is calculated at 25 and 26 respectively, with a fast time constant which in the preferred embodiment is equal to 45 ms.
AV_M1_fast(t)=a*AV_M1_fast(t−1)+(1−a)*P_1(t)
AV_M2_fast(t)=a*AV_M2_fast(t−1)+(1−a)*P_2(t)
Where
AV_M1_fast(t) is the fast time weighted average of the first band pass filtered microphone signal.
AV_M2_fast(t) is the fast time weighted average of the second band pass filtered microphone signal.
a is a fast time weighting coefficient.
A slow-time weighted average of the two band pass filtered power estimates is calculated at 27 and 28 respectively, with a slow time constant which in the preferred embodiment is equal to 500 ms.
AV_M1_slow(t)=b*AV_M1_slow(t−1)+(1−b)*P_1(t)
AV_M2_slow(t)=b*AV_M2_slow(t−1)+(1−b)*P_2(t)
Where
AV_M1_slow(t) is the slow time weighted average of the first band pass filtered microphone signal.
AV_M2_slow(t) is the slow time weighted average of the second band pass filtered microphone signal.
b is a slow time weighting coefficient, where b>a.
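The two smoothing coefficients can be derived from the stated 45 ms and 500 ms time constants. The sketch below assumes a first-order leaky integrator updated every T seconds, for which the coefficient is exp(−T/τ); the 1 ms update period is an assumption, not from the specification.

```python
import math

# Map a leaky-integrator time constant to its smoothing coefficient,
# assuming one update every `period_s` seconds: coeff = exp(-T / tau).
# The 1 ms update period is an illustrative assumption.

def smoothing_coeff(tau_s, period_s=0.001):
    return math.exp(-period_s / tau_s)

a = smoothing_coeff(0.045)  # fast average, 45 ms time constant
b = smoothing_coeff(0.500)  # slow average, 500 ms time constant
```

Under this convention the slow coefficient is the larger of the two, since a longer time constant retains more of the previous average at each update.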
The ratio of the two fast time weighted power estimates is calculated at 30 (i.e., the fast weighted power of the second microphone divided by the fast weighted power of the first microphone).
ratio_fast(t)=AV_M2_fast(t)/AV_M1_fast(t)
The ratio of the two slow time weighted power estimates is calculated at 29 (i.e., the slow weighted power of the second microphone divided by the slow weighted power of the first microphone).
ratio_slow(t)=AV_M2_slow(t)/AV_M1_slow(t)
The absolute difference of the two above ratio values is then calculated at 31.
diff(t)=abs(ratio_fast(t)−ratio_slow(t))
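A quick numeric illustration of why diff(t) spikes at a voice onset; the values below are illustrative, not measured:

```python
# At a voice onset the second (ear canal) microphone power rises before the
# slow averages catch up, so the fast ratio jumps while the slow ratio lags,
# and their absolute difference spikes. Values are illustrative.
av_m1_fast, av_m2_fast = 1.0, 2.5   # fast averages react quickly
av_m1_slow, av_m2_slow = 1.0, 1.1   # slow averages still near ambient

ratio_fast = av_m2_fast / av_m1_fast   # 2.5
ratio_slow = av_m2_slow / av_m1_slow   # 1.1
diff = abs(ratio_fast - ratio_slow)    # 1.4, well above a small threshold
```

In steady noise both ratios track the same long-run value, so diff(t) stays near zero and the comparator stays below threshold.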
Note that in one embodiment, the slow time weighted ratio is updated from the first filtered signal and the second filtered signal, where those signals are the slow weighted powers of the first and second microphone signals. Similarly, the fast time weighted ratio is updated from the first filtered signal and the second filtered signal, where those signals are the fast weighted powers of the first and second microphone signals. As noted above, the absolute difference between the fast time weighted ratio and the slow time weighted ratio is calculated to provide a value.
This value is then compared with a threshold at 32, and if the value diff(t) is greater than this threshold, then voice activity is determined to be in an active mode at 33, and the VOX gain value is updated at 34, or in this example increased (up to a maximum value of unity).
In one exemplary embodiment the threshold value is fixed.
In a second embodiment the threshold value is dependent on the slow weighted level AV_M1_slow.
In a third embodiment the threshold value is set to be equal to the time averaged value of the diff(t), for example calculated according to the following:
threshold(t)=c*threshold(t−1)+(1−c)*diff(t)
where c is a time smoothing coefficient such that the time smoothing is a leaky integrator type with a smoothing time of approximately 500 ms.
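The self-adapting threshold of this third embodiment can be sketched with the same leaky-integrator form; the 1 ms update period used to derive c below is an assumption, not from the specification.

```python
import math

# Self-adapting threshold: a leaky integrator over diff(t) with roughly
# 500 ms smoothing. The 1 ms update period used to derive c is an assumption.
c = math.exp(-0.001 / 0.5)  # smoothing coefficient for a 500 ms time constant

def update_threshold(threshold, diff):
    """threshold(t) = c * threshold(t-1) + (1 - c) * diff(t)"""
    return c * threshold + (1 - c) * diff
```

With the (1−c) weighting, the threshold settles toward the long-run average of diff(t): sustained noise raises it gradually, while a brief voice-onset spike in diff(t) still exceeds it.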
Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the embodiments claimed.
This application is a continuation of and claims priority to U.S. patent application Ser. No. 14/922,475, filed on Oct. 26, 2015, which claims priority to U.S. Provisional Patent Application No. 62/068,273, filed on Oct. 24, 2014, which are both hereby incorporated by reference in their entireties.
Prior Publication Data: US 2019/0146747 A1, May 2019 (US).
Provisional Application: 62/068,273, Oct. 2014 (US).
Related U.S. Application Data: parent application Ser. No. 14/922,475; child application Ser. No. 16/227,695 (both US).