This application relates to acoustic activity detection (AAD) approaches and voice activity detection (VAD) approaches, and their interfacing with other types of electronic devices.
Voice activity detection (VAD) approaches are important components of speech recognition software and hardware. For example, recognition software constantly scans the audio signal of a microphone searching for voice activity, usually with a MIPS-intensive algorithm. Because the algorithm runs constantly, the power consumed by this voice detection approach is significant.
Microphones are also disposed in mobile device products such as cellular phones. These customer devices have a standardized interface, and if a microphone is not compatible with this interface, it cannot be used with the mobile device product.
Many mobile devices include speech recognition. However, the power usage of these algorithms is taxing enough to the battery that the feature is often enabled only after the user presses a button or wakes up the device. In order to enable this feature at all times, the power consumption of the overall solution must be small enough to have minimal impact on the total battery life of the device. As mentioned, this has not been achieved with existing devices.
Because of the above-mentioned problems, some user dissatisfaction with previous approaches has occurred.
For a more complete understanding of the disclosure, reference should be made to the following detailed description and accompanying drawings wherein:
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity. It will be appreciated further that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein.
Approaches are described herein that integrate voice activity detection (VAD) or acoustic activity detection (AAD) approaches into microphones. At least some of the microphone components (e.g., VAD or AAD modules) are disposed at or on an application specific integrated circuit (ASIC) or other integrated device. The integration of components such as the VAD or AAD modules significantly reduces the power requirements of the system, thereby increasing user satisfaction with the system. An interface is also provided between the microphone and circuitry in an electronic device (e.g., a cellular phone or personal computer) in which the microphone is disposed. The interface is standardized so that its configuration allows placement of the microphone in most if not all such electronic devices (e.g., cellular phones). The microphone operates in multiple modes of operation, including a lower power mode that still detects acoustic events such as voice signals.
In many of these embodiments, analog signals are received at a microphone from a sound transducer. The analog signals are converted into digitized data. A determination is made as to whether voice activity exists within the digitized data. Upon the detection of voice activity, an indication of voice activity is sent to a processing device. The indication is sent across a standard interface, and the standard interface is configured to be compatible with, and coupled to, a plurality of devices from potentially different manufacturers.
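By way of a non-limiting illustration only, the following C sketch outlines that receive, convert, detect, and indicate flow. The function names, frame size, and loop structure are hypothetical placeholders introduced here for illustration and are not elements of the disclosure.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define FRAME_SAMPLES 160  /* e.g., 10 ms at 16 kHz; the value is illustrative only */

/* Hypothetical hardware-facing functions; not part of the disclosure. */
extern int16_t read_transducer_sample(void);                      /* one digitized sample */
extern bool    detect_voice_activity(const int16_t *frame, size_t n);
extern void    send_indication_to_host(void);                     /* drives the interface */

/* Receive the transducer output (already digitized), check for voice
 * activity, and send an indication to the external processing device. */
void microphone_sensing_loop(void)
{
    int16_t frame[FRAME_SAMPLES];

    for (;;) {
        for (size_t i = 0; i < FRAME_SAMPLES; i++)
            frame[i] = read_transducer_sample();

        if (detect_voice_activity(frame, FRAME_SAMPLES))
            send_indication_to_host();
    }
}
```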
In other aspects, the microphone is operated in multiple operating modes, such that the microphone selectively operates in and moves between a first microphone sensing mode and a second microphone sensing mode based upon one or more of whether an external clock is being received from a processing device, or whether power is being supplied to the microphone. Within the first microphone sensing mode, the microphone utilizes an internal clock, receives first analog signals from a sound transducer, converts the first analog signals into first digitized data, determines whether voice activity exists within the first digitized data, and, upon the detection of voice activity, sends an indication of voice activity to the processing device and subsequently switches from using the internal clock to receiving an external clock. Within the second microphone sensing mode, the microphone receives second analog signals from the sound transducer, converts the second analog signals into second digitized data, determines whether voice activity exists within the second digitized data, and, upon the detection of voice activity, sends an indication of voice activity to the processing device and uses the external clock supplied by the processing device.
In some examples, the indication comprises a signal indicating voice activity has been detected or a digitized signal. In other examples, the transducer comprises one of a microelectromechanical system (MEMS) device, a piezoelectric device, or a speaker.
In some aspects, the receiving, converting, determining, and sending are performed at an integrated circuit. In other aspects, the integrated circuit is disposed at one of a cellular phone, a smart phone, a personal computer, a wearable electronic device, or a tablet. In some examples, the receiving, converting, determining, and sending are performed when operating in a single mode of operation.
In some examples, the single mode is a power saving mode. In other examples, the digitized data comprises PDM data or PCM data. In some other examples, the indication comprises a clock signal. In yet other examples, the indication comprises one or more DC voltage levels.
In some examples, subsequent to sending the indication, a clock signal is received at the microphone. In some aspects, the clock signal is utilized to synchronize data movement between the microphone and an external processor. In other examples, a first frequency of the received clock is the same as a second frequency of an internal clock disposed at the microphone. In still other examples, a first frequency of the received clock is different than a second frequency of an internal clock disposed at the microphone.
In some examples, prior to receiving the clock signal, the microphone is in a first mode of operation, and receiving the clock signal is effective to cause the microphone to enter a second mode of operation. In other examples, the standard interface is compatible with any combination of the PDM protocol, the I2S protocol, or the I2C protocol.
In other embodiments, an apparatus includes an analog-to-digital conversion circuit configured to receive analog signals from a sound transducer and convert the analog signals into digitized data. The apparatus also includes a standard interface and a processing device. The processing device is coupled to the analog-to-digital conversion circuit and the standard interface. The processing device is configured to determine whether voice activity exists within the digitized data and, upon the detection of voice activity, to send an indication of voice activity to an external processing device. The indication is sent across the standard interface, and the standard interface is configured to be compatible with, and coupled to, a plurality of devices from potentially different manufacturers.
Referring now to
The charge pump 101 provides a voltage to charge up and bias a diaphragm of the capacitive MEMS sensor 102. For some applications (e.g., when using a piezoelectric device as a sensor), the charge pump may be replaced with a power supply that may be external to the microphone. When a voice or other acoustic signal moves the diaphragm, the capacitance of the capacitive MEMS sensor 102 changes, creating a voltage that becomes the electrical signal. In one aspect, the charge pump 101 and the MEMS sensor 102 are not disposed on the ASIC (but in other aspects, they may be disposed on the ASIC). It will be appreciated that the MEMS sensor 102 may alternatively be a piezoelectric sensor, a speaker, or any other type of sensing device or arrangement.
The clock detector 104 controls which clock goes to the sigma-delta modulator 106 and synchronizes the digital section of the ASIC. If an external clock is present, the clock detector 104 uses that clock; if no external clock signal is present, then the clock detector 104 uses the internal oscillator 103 for data timing/clocking purposes.
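A rough software analogy of that selection logic (the actual clock detector 104 is a hardware block) might look like the following sketch; the function and type names are assumptions introduced here for illustration.

```c
#include <stdbool.h>

typedef enum {
    CLOCK_SOURCE_INTERNAL_OSCILLATOR,  /* internal oscillator 103           */
    CLOCK_SOURCE_EXTERNAL_PIN          /* clock supplied on the clock pin   */
} clock_source_t;

/* Hypothetical detector output; in the ASIC this is a hardware signal. */
extern bool external_clock_present(void);

/* Select the clock that times the sigma-delta modulator 106 and the
 * digital section of the ASIC. */
clock_source_t select_clock_source(void)
{
    return external_clock_present() ? CLOCK_SOURCE_EXTERNAL_PIN
                                    : CLOCK_SOURCE_INTERNAL_OSCILLATOR;
}
```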
The sigma-delta modulator 106 converts the analog signal into a digital signal. The output of the sigma-delta modulator 106 is a one-bit serial stream, in one aspect. Alternatively, the sigma-delta modulator 106 may be any type of analog-to-digital converter.
The buffer 110 stores data and constitutes a running storage of past data. By the time acoustic activity is detected, this past data is already stored in the buffer 110. In other words, the buffer 110 stores a history of past audio activity. When an audio event happens (e.g., a trigger word is detected), the control module 112 instructs the buffer 110 to spool out its data. In one example, the buffer 110 stores approximately the previous 180 ms of data generated prior to the activity detection. Once the activity has been detected, the microphone 100 transmits the buffered data to the host (e.g., electronic circuitry in a customer device such as a cellular phone).
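One common way to keep such a rolling history is a circular (ring) buffer. The sketch below assumes, purely for illustration, 1-bit PDM data packed into bytes at a 512 kHz rate, so approximately 180 ms corresponds to roughly 11.5 kilobytes; the names and sizes are assumptions, not values taken from the disclosure.

```c
#include <stddef.h>
#include <stdint.h>

/* Roughly 180 ms of 1-bit PDM history at an assumed 512 kHz clock,
 * packed eight bits per byte (the exact size and rate are illustrative). */
#define HISTORY_BYTES (512000 / 8 * 180 / 1000)   /* 11520 bytes */

typedef struct {
    uint8_t data[HISTORY_BYTES];
    size_t  head;    /* next write position                     */
    size_t  count;   /* bytes held, saturating at HISTORY_BYTES */
} history_buffer_t;

/* Continuously overwrite the oldest byte with the newest byte. */
void history_push(history_buffer_t *b, uint8_t byte)
{
    b->data[b->head] = byte;
    b->head = (b->head + 1) % HISTORY_BYTES;
    if (b->count < HISTORY_BYTES)
        b->count++;
}

/* On an activity trigger, spool the stored history out, oldest first. */
void history_spool(const history_buffer_t *b, void (*emit)(uint8_t))
{
    size_t start = (b->head + HISTORY_BYTES - b->count) % HISTORY_BYTES;
    for (size_t i = 0; i < b->count; i++)
        emit(b->data[(start + i) % HISTORY_BYTES]);
}
```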
The acoustic activity detection (AAD) module 108 detects acoustic activity. Various approaches can be used to detect such events as the occurrence of a trigger word, trigger phrase, specific noise or sound, and so forth. In one aspect, the module 108 monitors the incoming acoustic signals looking for a voice-like signature (or monitors for other appropriate characteristics or thresholds). Upon detection of acoustic activity that meets the trigger requirements, the microphone 100 transmits a pulse density modulation (PDM) stream to wake up the rest of the system chain to complete the full voice recognition process. Other types of data could also be used.
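The disclosure does not limit the detection algorithm; as one simple illustrative stand-in, a short-term energy threshold of the kind sketched below could serve as a voice-like activity check. The function name and threshold are assumptions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simple stand-in for acoustic activity detection: compare the mean
 * short-term energy of a frame against a threshold. The actual detection
 * approach is not specified herein; the threshold value is arbitrary. */
bool frame_has_activity(const int16_t *frame, size_t n, uint64_t threshold)
{
    if (n == 0)
        return false;

    uint64_t energy = 0;
    for (size_t i = 0; i < n; i++)
        energy += (uint64_t)((int32_t)frame[i] * frame[i]);

    return (energy / n) > threshold;
}
```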
The control module 112 controls when the data is transmitted from the buffer. As discussed elsewhere herein, when activity has been detected by the AAD module 108, the data is clocked out over an interface 119 that includes a VDD pin 120, a clock pin 122, a select pin 124, a data pin 126, and a ground pin 128. The pins 120-128 form the interface 119, which is recognizable and compatible in operation with various types of electronic circuits, for example, those types of circuits that are used in cellular phones. In one aspect, the microphone 100 uses the interface 119 to communicate with circuitry inside a cellular phone. Since the interface 119 is standardized across cellular phones, the microphone 100 can be placed or disposed in any phone that utilizes the standard interface. The interface 119 seamlessly connects to compatible circuitry in the cellular phone. Other interfaces with other pinouts are possible. Different pins could also be used for interrupts.
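For illustration only, the pins of the interface 119 could be named as in the following sketch; the enumerator values simply reuse the drawing reference numerals for readability and do not indicate physical pad assignments.

```c
/* Illustrative naming of the pins of the interface 119. */
typedef enum {
    PIN_VDD    = 120,  /* supply                                              */
    PIN_CLOCK  = 122,  /* external clock input                                */
    PIN_SELECT = 124,  /* select; carries the internal clock in sensing mode  */
    PIN_DATA   = 126,  /* PDM data output                                     */
    PIN_GROUND = 128   /* ground                                              */
} interface119_pin_t;
```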
In operation, the microphone 100 operates in a variety of different modes and several states that cover these modes. For instance, when a clock signal with a frequency falling within a predetermined range is supplied to the microphone 100, the microphone 100 is operated in a standard operating mode. If the frequency is not within that range, the microphone 100 is operated in a sensing mode. In the sensing mode, the internal oscillator 103 of the microphone 100 is used and, upon detection of an acoustic event, data transmissions are aligned with the rising edge of the internal clock.
Referring now to
In addition, the microphone 100 of
The function of the low pass filter 140 is to remove higher frequencies from the charge pump output. The function of the reference 142 is to provide a voltage or other reference value used by components within the system. The function of the decimation/compression module 144 is to compress the data before it is stored, minimizing the required buffer size. The function of the decompression PDM module 146 is to unpack the stored data for the control module. The function of the pre-amplifier 148 is to bring the sensor output signal to a usable voltage level.
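The specific decimation/compression scheme used by the module 144 is not detailed herein; as a simple illustration of how decimation reduces the amount of data to be buffered, a decimate-by-N averaging step is sketched below. The factor and names are assumptions introduced for illustration only.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative decimation only: average each group of DECIM_FACTOR input
 * samples into one output sample, reducing the stored data rate (and hence
 * the buffer size) by that factor. */
#define DECIM_FACTOR 4

size_t decimate(const int16_t *in, size_t n_in, int16_t *out)
{
    size_t n_out = 0;
    for (size_t i = 0; i + DECIM_FACTOR <= n_in; i += DECIM_FACTOR) {
        int32_t sum = 0;
        for (size_t k = 0; k < DECIM_FACTOR; k++)
            sum += in[i + k];
        out[n_out++] = (int16_t)(sum / DECIM_FACTOR);
    }
    return n_out;   /* number of output samples written */
}
```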
The components identified by the label 100 in
Referring now to
In sensing mode, the output of the microphone is tri-stated and an internal clock is applied to the sensing circuit. Once the AAD module triggers (e.g., sends a trigger signal indicating an acoustic event has occurred), the microphone transmits buffered PDM data on the microphone data pin (e.g., data pin 126) synchronized with the internal clock (e.g., a 512 kHz clock). This internal clock will be supplied to the select pin (e.g., select pin 124) as an output during this mode. In this mode, the data will be valid on the rising edge of the internally generated clock (output on the select pin). This operation assures compatibility with existing I2S compatible hardware blocks. The select pin (e.g., select pin 124) and the data pin (e.g., data pin 126) will stop outputting the clock signal and data a set time after activity is no longer detected. The frequency for this mode is defined in the datasheet for the part in question. In other examples, the interface is compatible with the PDM protocol or the I2C protocol. Other examples are possible.
The operation of the microphone described above is shown in
For compatibility with DMIC-compliant interfaces in sensing mode, the clock pin (e.g., clock pin 122) can be driven to clock out the microphone data. The clock must meet the sensing mode requirements for frequency (e.g., 512 kHz). When an external clock signal is detected on the clock pin (e.g., clock pin 122), the data driven on the data pin (e.g., data pin 126) is synchronized with the external clock within two cycles, in one example. Other examples are possible. In this mode, the external clock must be removed when activity is no longer detected in order for the microphone to return to its lowest power mode. Activity detection in this mode may use the select pin (e.g., select pin 124) to determine whether activity is no longer sensed. Other pins may also be used.
This operation is shown in
Referring now to
The state transition diagram of
The microphone off state 402 is where the microphone 400 is deactivated. The normal mode state 404 is the state during the normal operating mode when the external clock is being applied (where the external clock is within a predetermined range). The microphone sensing mode with external clock state 406 is when the mode is switching to the external clock as shown in
As mentioned, transitions between these states are based on and triggered by events. To take one example, if the microphone is operating in normal operating state 404 (e.g., at a clock rate higher than 512 kHz) and the control module detects the clock pin is approximately 512 kHz, then control goes to the microphone sensing mode with external clock state 406. In the external clock state 406, when the control module then detects no clock on the clock pin, control goes to the microphone sensing mode internal clock state 408. When in the microphone sensing mode internal clock state 408, and an acoustic event is detected, control goes to the sensing mode with output state 410. When in the sensing mode with output state 410, a clock of greater than approximately 1 MHz may cause control to return to state 404. The clock may be less than 1 MHz (e.g., the same frequency as the internal oscillator) and is used to synchronize data being output from the microphone to an external processor. No acoustic activity for an OTP programmed amount of time, on the other hand, causes control to return to state 406.
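The transitions just described can be summarized as a small state machine. The sketch below mirrors the states of the state transition diagram using hypothetical event names and the example frequencies mentioned above; other transitions shown in the diagram are omitted.

```c
typedef enum {
    STATE_MIC_OFF,                 /* 402: microphone deactivated             */
    STATE_NORMAL,                  /* 404: normal mode, fast external clock   */
    STATE_SENSING_EXTERNAL_CLOCK,  /* 406: sensing mode, ~512 kHz ext. clock  */
    STATE_SENSING_INTERNAL_CLOCK,  /* 408: sensing mode, internal oscillator  */
    STATE_SENSING_WITH_OUTPUT      /* 410: sensing mode, buffered data output */
} mic_state_t;

typedef enum {
    EVENT_CLOCK_ABOUT_512KHZ,      /* external clock near the sensing rate    */
    EVENT_CLOCK_ABOVE_1MHZ,        /* external clock at the normal-mode rate  */
    EVENT_NO_EXTERNAL_CLOCK,       /* clock removed from the clock pin        */
    EVENT_ACOUSTIC_ACTIVITY,       /* AAD module trigger                      */
    EVENT_ACTIVITY_TIMEOUT         /* no activity for the OTP-programmed time */
} mic_event_t;

/* Sketch of the transitions described above; unhandled events leave the
 * state unchanged. */
mic_state_t next_state(mic_state_t s, mic_event_t e)
{
    switch (s) {
    case STATE_NORMAL:
        if (e == EVENT_CLOCK_ABOUT_512KHZ) return STATE_SENSING_EXTERNAL_CLOCK;
        break;
    case STATE_SENSING_EXTERNAL_CLOCK:
        if (e == EVENT_NO_EXTERNAL_CLOCK)  return STATE_SENSING_INTERNAL_CLOCK;
        break;
    case STATE_SENSING_INTERNAL_CLOCK:
        if (e == EVENT_ACOUSTIC_ACTIVITY)  return STATE_SENSING_WITH_OUTPUT;
        break;
    case STATE_SENSING_WITH_OUTPUT:
        if (e == EVENT_CLOCK_ABOVE_1MHZ)   return STATE_NORMAL;
        if (e == EVENT_ACTIVITY_TIMEOUT)   return STATE_SENSING_EXTERNAL_CLOCK;
        break;
    default:
        break;
    }
    return s;
}
```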
It will be appreciated that the other events specified in
Preferred embodiments are described herein, including the best mode known to the inventors. It should be understood that the illustrated embodiments are exemplary only, and should not be taken as limiting the scope of the appended claims.
This application is a continuation of U.S. patent application Ser. No. 14/533,652, filed Nov. 5, 2014, which is a continuation-in-part of U.S. patent application Ser. No. 14/282,101, filed May 20, 2014, now U.S. Pat. No. 9,745,923, which claims the benefit of and priority to U.S. Provisional Application No. 61/826,587, filed May 23, 2013, and U.S. Provisional Application No. 61/901,832, filed Nov. 8, 2013, the entire contents of each of which are incorporated by reference in their entireties.
Number | Date | Country | |
---|---|---|---|
20180308511 A1 | Oct 2018 | US |
Number | Date | Country | |
---|---|---|---|
61901832 | Nov 2013 | US | |
61826587 | May 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14533652 | Nov 2014 | US |
Child | 16018724 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14282101 | May 2014 | US |
Child | 14533652 | US |