The present embodiments relate generally to audio or acoustic signal processing and more particularly to systems and methods for keyword detection in acoustic signals.
Voice keyword wakeup systems may monitor an incoming acoustic signal to detect keywords used to trigger wakeup of a device. Typical keyword detection methods include determining a score for matching the acoustic signal to a pre-determined keyword. If the score exceeds a pre-defined detection threshold, the keyword is considered to be detected. The pre-defined detection threshold is typically chosen to balance between having correct detections (e.g., detections when the keyword is actually uttered) and having false detections (e.g., detections when the keyword is not actually uttered). However, wakeup systems can miss detecting keyword utterances. This is especially true in difficult environments, for example, those with high noise, mismatched reverberant conditions, or high levels of echo during barge-in (interruptions by other speakers or music). It can also be especially challenging to reduce false alarms (e.g., detections that are actually incorrect) without increasing the false reject rate (e.g., the rate of failing to detect valid keyword utterances).
According to certain general aspects, the present technology relates to systems and methods for keyword detection in acoustic signals. Various embodiments provide methods and systems for facilitating more accurate and reliable keyword recognition when a user attempts to wake up a device or system, to launch an application on the device, and so on. To improve accuracy and reliability, various embodiments recognize that, when a keyword utterance is not recognized, users tend to repeat the keyword within a short time. Thus, within a short interval, there may be two pieces of the acoustic signal for which the confidence score comes close to the detection threshold without exceeding it, so that keyword detection is not triggered. In such situations, to facilitate detection of the keyword, it can be valuable to loosen a criterion for keyword detection within the short interval, and/or to tune the keyword model used, according to various embodiments described herein.
These and other aspects and features of the present embodiments will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments in conjunction with the accompanying figures, wherein:
The present embodiments will now be described in detail with reference to the drawings, which are provided as illustrative examples of the embodiments so as to enable those skilled in the art to practice the embodiments and alternatives apparent to those skilled in the art. Notably, the figures and examples below are not meant to limit the scope of the present embodiments to a single embodiment, but other embodiments are possible by way of interchange of some or all of the described or illustrated elements. Moreover, where certain elements of the present embodiments can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present embodiments will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the present embodiments. Embodiments described as being implemented in software should not be limited thereto, but can include embodiments implemented in hardware, or combinations of software and hardware, and vice-versa, as will be apparent to those skilled in the art, unless otherwise specified herein. In the present specification, an embodiment showing a singular component should not be considered limiting; rather, the present disclosure is intended to encompass other embodiments including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, applicants do not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, the present embodiments encompass present and future known equivalents to the known components referred to herein by way of illustration.
Various embodiments of the present technology can be practiced with any electronic device operable to capture and process acoustic signals. In various embodiments, the electronic device can include smart microphones. The smart microphones may combine into a single device an acoustic sensor (e.g., a micro-electro-mechanical system (MEMS) device), along with a low power application-specific integrated circuit (ASIC) and a low power processor used in conjunction with the acoustic sensor. Various embodiments can be practiced in smart microphones that include voice activity detection and keyword detection for providing a wakeup feature in a more power efficient manner.
In some embodiments, the electronic device can include hand-held devices, such as wired and/or wireless remote controls, notebook computers, tablet computers, phablets, smart phones, smart watches, personal digital assistants, media players, mobile telephones, and the like. In certain embodiments, the electronic device can include personal desktop computers, television sets, car control and audio systems, smart thermostats, and so on.
Referring now to FIG. 1, in various embodiments the smart microphone 110 includes at least an acoustic sensor, for example, a MEMS device 160. In various embodiments, the MEMS device 160 is used to detect acoustic signals, such as, for example, verbal communications from a user 190. The verbal communications can include keywords, key phrases, conversation, and the like. In various embodiments, the MEMS device may be used in conjunction with elements disposed on an application-specific integrated circuit (ASIC) 140. ASIC 140 is described further with regard to the examples in FIG. 3.
In some embodiments, the smart microphone 110 may also include a processor 150 to provide further processing capability. The processor 150 is implemented with circuitry. The processor 150 may be operable to perform certain processing, with regard to the acoustic signal captured by the MEMS device 160, at lower power than such processing could otherwise be performed in the host device 120. For example, the ASIC 140 may be operable to detect voice signals in the acoustic signal captured by the MEMS device 160 and generate a voice activity detection signal based on the detection. In response to the voice detection signal, the processor 150 may be operable to wake up and then proceed to detect one or more pre-determined keywords or key phrases in the acoustic signals. In some embodiments, this detection functionality of the processor 150 may be integrated into the ASIC 140, eliminating the need for a separate processor 150. For the detection functionality, a pre-stored list of keywords or key phrases may be compared to words or phrases in the acoustic signal, as illustrated in the sketch below.
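For illustration only, the following is a minimal Python sketch of this staged, low-power gate. The function names, the simple energy-based voice activity test, and the placeholder scoring function are assumptions made for illustration; an actual smart microphone implements these stages in hardware and firmware on the ASIC 140 and the processor 150.

```python
import numpy as np

# Hypothetical stand-ins for the two detection stages; an actual smart
# microphone implements these on the ASIC 140 and processor 150, not in Python.

def voice_activity(frame: np.ndarray, energy_threshold: float = 1e-3) -> bool:
    """Crude energy-based VAD: flags frames whose mean power exceeds a floor."""
    return float(np.mean(frame ** 2)) > energy_threshold

def keyword_score(frame: np.ndarray) -> float:
    """Placeholder for a keyword model's confidence score in [0, 1]."""
    return 0.0  # a real model (e.g., GMM, HMM, or DNN) would score the frame here

def should_wake_host(frame: np.ndarray, detection_threshold: float = 0.8) -> bool:
    """Two-stage gate: cheap VAD first; keyword scoring runs only if voice is present."""
    if not voice_activity(frame):
        return False  # keyword stage (and host) stay asleep: the lowest-power path
    return keyword_score(frame) >= detection_threshold
```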
Upon detection of the one or more keywords or key phrases, the smart microphone 110 may initiate wakeup of the host device 120 and start sending captured acoustic signals to the host device 120. If no keyword or key phrase is detected, then wakeup of the host device 120 is not initiated. Until being woken up, the processor 150 and the host device 120 may operate in a sleep mode (consuming no power or very small amounts of power). Further details of the environment 100, the smart microphone 110, and the host device 120 in this regard are described below and with respect to the examples in the figures that follow.
Referring to FIG. 1, in some embodiments the environment 100 may also have a regular (e.g., non-smart) microphone 130. The microphone 130 may be operable to capture the acoustic signal and provide the acoustic signal to the smart microphone 110 and/or to the host device 120 for further processing. In some embodiments, the processor 150 of the smart microphone 110 may be operable to perform low power processing of the acoustic signal captured by the microphone 130 while the host device 120 is kept in a lower power sleep mode. In certain embodiments, the processor 150 may continuously perform keyword detection in the obtained acoustic signal. In response to detection of a keyword, the processor 150 may send a signal to the host device 120 to initiate wakeup of the host device to start full operations.
In some embodiments, the host DSP 170 of the host device 120 may be operable to perform low power processing of the acoustic signal captured by the microphone 130 while the main host processor 180 is kept in a lower power sleep mode. In certain embodiments, the host DSP 170 may continuously perform the keyword detection in the obtained acoustic signal. In response to detection of a keyword, the host DSP 170 may send a signal to the host processor 180 to wake up and start full operations of the host device 120.
The acoustic signal (in the form of an electric signal) captured by the microphone 130 may be converted by codec 165 to a digital signal. In some embodiments, the codec 165 includes an analog-to-digital converter. The digital signal can be coded by the codec 165 according to one or more audio formats. In some embodiments, the smart microphone 110 provides the coded digital signal directly to the host processor 180 of the host device 120, such that the host device 120 does not need to include the codec 165.
The host processor 180, which can be an application processor (AP) in some embodiments, may include a system on chip (SoC) configured to run an operating system and various applications of host device 120. In some embodiments, the host device 120 is configured as an SoC that comprises the host processor 180 and host DSP 170. The host processor 180 may be operable to support memory management, graphics processing, and multimedia decoding. The host processor 180 may be operable to execute instructions stored in a memory storage (not shown) of the host device 120. In some embodiments, the host processor 180 is operable to recognize natural language commands received from user 190 using automatic speech recognition (ASR) and perform one or more operations in response to the recognition.
In other embodiments, the host device 120 includes additional or other components used for operations of the host device 120. For example, the host device 120 may include a transceiver to communicate with other devices, such as a smartphone, a tablet computer, and/or a cloud-based computing resource (computing cloud) 195. The transceiver can be configured to communicate with a network such as the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), a cellular network, and so forth, to send and receive data. In some embodiments, the host device 120 may send the acoustic signals to computing cloud 195, request that ASR be performed on the acoustic signal, and receive back the recognized speech.
The smart microphone 310 in the example in FIG. 3 is an embodiment of the smart microphone 110 in FIG. 1. The ASIC 340 is an example embodiment of the ASIC 140 in FIG. 1.
Referring again to FIG. 3, the buffering and control element 360 may provide various buffering, analog-to-digital (A/D) conversion, gain control, buffer control, clock, and amplifier elements for processing acoustic signals captured by the MEMS device, configured for use variously by the voice activity detector 380, the processor 350, and ultimately by the host device 120. Further details regarding elements of an example ASIC of a smart microphone may be found in U.S. Pat. No. 9,113,263, entitled "VAD Detection Microphone and Method of Operating the Same," which is incorporated by reference in its entirety herein.
In various embodiments, the smart microphone 310 may operate in multiple operational modes. The modes can include a voice activity detection (VAD) mode, a signal transmit mode, and a keyword or key phrase detection mode.
While operating in VAD mode, the smart microphone 310 may consume less power than in the other modes. While in VAD mode, the smart microphone 310 may operate for detection of voice activity using voice activity detector 380. In some embodiments, upon detection of voice activity, a signal may be sent to wake up processor 350.
In certain embodiments, the smart microphone 310 detects whether there is voice activity in the received acoustic signal and, in response to the detection, also detects whether the keyword or key phrase is present in the received acoustic signal. In these embodiments, the smart microphone 310 can send a wakeup signal to the host device 120 in response to detecting both the presence of the voice activity and the presence of the keyword or key phrase. For example, the ASIC 340 may detect voice signals in the acoustic signal captured by the MEMS device 160 and generate a voice activity detection signal. In response to the voice detection signal, the keyword or key phrase detector 390 in the processor 350 may be operable to wake up and then proceed to detect whether one or more pre-determined keywords or key phrases are present in the acoustic signals.
The processor 350 is an embodiment of the processor 150 in FIG. 1.
In some embodiments, the functionality of the keyword or key phrase detector 390 may be integrated into the ASIC 340, which may eliminate the need for a separate processor 350.
In other embodiments, the wakeup signal and acoustic signal may be sent to the host device 120 from the smart microphone 310 just in response to the presence of the voice activity detected by the smart microphone 310. The host device 120 may then operate to detect the presence of the keyword or key phrase in the acoustic signal. The host DSP 170 shown in the example in FIG. 1 may be operable to perform this keyword or key phrase detection.
In response to receiving the wakeup signal, the host device 120 in FIG. 1 may start a wakeup process. After the wakeup latency, the host device 120 may provide the smart microphone 310 with a clock signal (for example, 768 kHz). In response to receiving the external clock signal, the smart microphone 310 may enter a signal transmit mode. In the signal transmit mode, the smart microphone 310 may provide buffered audio data to the host device 120. In some embodiments, the buffered audio data may continue to be provided to the host device 120 as long as the host device 120 provides the external clock signal to the smart microphone 110.
The host device 120 and/or the computing cloud 195 may provide additional processing including noise suppression and/or noise reduction and ASR processing on the acoustic data received from the smart microphone 110.
In various embodiments, keyword or key phrase detection may be performed based on a keyword model. The keyword model can be a machine learning model operable to analyze a piece of the acoustic signal and output a score (also referred to as a confidence score or a keyword confidence score). The confidence score may represent the probability that the piece of the acoustic signal matches a pre-determined keyword. In various embodiments, the keyword model may include one or more of a Gaussian mixture model (GMM), a phoneme hidden Markov model (HMM), a deep neural network (DNN), a recurrent neural network, a convolutional neural network, and a support vector machine. In various embodiments, the keyword model may be user-independent or user-dependent. In some embodiments, the keyword model may be pre-trained to run in two or more modes. For example, the keyword model may run in a regular mode in high signal-to-noise ratio (SNR) environments and in a low SNR mode for noisy environments.
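As an illustrative example, the following Python sketch shows one way such a multi-mode keyword model could be wrapped, with a regular-mode scorer and a low SNR scorer selected by an estimated SNR. The class name, the scorer callables, and the 10 dB switch point are assumptions made for illustration and are not specified by the present description.

```python
from dataclasses import dataclass
from typing import Callable

import numpy as np

@dataclass
class KeywordModel:
    """Wraps two pre-trained scorers and selects one based on estimated SNR.

    Each scorer stands in for any of the model types named above (GMM, HMM,
    DNN, etc.) and maps an audio segment to a confidence score in [0, 1]."""
    regular_scorer: Callable[[np.ndarray], float]   # trained for high-SNR conditions
    low_snr_scorer: Callable[[np.ndarray], float]   # trained for noisy conditions
    snr_switch_db: float = 10.0                     # assumed switch point, not from the source

    def confidence(self, segment: np.ndarray, snr_db: float) -> float:
        scorer = self.regular_scorer if snr_db >= self.snr_switch_db else self.low_snr_scorer
        return scorer(segment)

# Example usage with trivial placeholder scorers:
model = KeywordModel(regular_scorer=lambda s: 0.9, low_snr_scorer=lambda s: 0.7)
print(model.confidence(np.zeros(160), snr_db=20.0))  # uses the regular-mode scorer
```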
It should be appreciated that, although the term keyword is used herein in certain examples, for simplicity, without also referring explicitly to key phrases, the user may equally be repeating a key phrase in practicing various embodiments.
As a user 190 speaks a keyword or a key phrase, the confidence score may keep increasing. In some embodiments, the keyword is considered to be present in the piece of the acoustic signal if the confidence score equals or exceeds a pre-determined (keyword) detection threshold. Experiments have shown that, in many cases in which the keyword is not detected even though the user spoke it, the confidence score is close to (but below) the pre-determined threshold. Similarly, usage tests show that users typically repeat the keyword when it is not recognized the first time. These observations indicate that, within a short interval, there may be two pieces of the acoustic signal for which the confidence score comes close to the detection threshold without exceeding it, so that keyword detection is not triggered. In such situations, it is advantageous to loosen a criterion for keyword detection within the short interval.
In some embodiments, if the discrepancy 470 (i.e., the amount by which the confidence score 410 falls below the detection threshold 420) does not exceed a pre-determined first value 440, the threshold 420 may be lowered by a second value 450 for a short time interval 430. In various embodiments, the first value 440 may be set in a range of 10% to 25% of the threshold 420, which experiments have shown to be an acceptable range. In some embodiments, the first value 440 is set to 20% of the threshold 420. If the first value 440 is set too high, false alarms are more likely to occur. If the first value 440 is set too low, the confidence score 410 may not come within the first value 440 of the threshold 420 during the first utterance, preventing the lowering of the threshold from occurring. The second value 450 may be set equal to or larger than the first value 440, so that when the user 190 utters the keyword again during the time interval 430, the confidence score 410 can reach the lowered threshold. Note that, if the threshold is lowered by too large a value, false alarms are more likely to occur after each near detection. If the threshold is lowered by too small a value, the second repetition of the keyword may still not be recognized. In some embodiments, the time interval 430 may be equal to 0.5-5 seconds, as experiments have shown that users typically repeat the keyword within such a short period. Too long an interval may cause additional false alarms, while too short an interval may prevent a successful detection during the repetition of the keyword. The first value 440, the second value 450, and the time interval 430 can be configurable by the user 190 in some embodiments. In some other embodiments, the second value 450 may be a function of the actual value of the discrepancy 470. When the time interval 430 is complete, the detection threshold 420 may be set back to its original value. A minimal sketch of this adaptive threshold logic follows.
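The following Python sketch implements the adaptive threshold logic just described. The class and attribute names are illustrative; the default values follow the ranges given above (first value set to 20% of the threshold, and a time interval within 0.5-5 seconds).

```python
import time

class AdaptiveThresholdDetector:
    """Near-detection threshold lowering, following the description above.

    Defaults follow the ranges in the text (first value 20% of the threshold,
    interval within 0.5-5 s); class and attribute names are illustrative."""

    def __init__(self, threshold: float = 0.8, first_fraction: float = 0.20,
                 second_fraction: float = 0.20, interval_s: float = 3.0):
        self.base_threshold = threshold
        self.first_value = first_fraction * threshold    # near-miss margin (440)
        self.second_value = second_fraction * threshold  # amount to lower by (450)
        self.interval_s = interval_s                     # lowered-threshold window (430)
        self._lowered_until = 0.0                        # end time of the window

    def current_threshold(self, now: float) -> float:
        """Lowered threshold inside the window; the original value otherwise."""
        return (self.base_threshold - self.second_value
                if now < self._lowered_until else self.base_threshold)

    def update(self, confidence: float, now: float = None) -> bool:
        """Feed one confidence score; return True when the keyword is detected."""
        now = time.monotonic() if now is None else now
        if confidence >= self.current_threshold(now):
            self._lowered_until = 0.0   # detected: restore the original threshold
            return True
        if self.base_threshold - confidence <= self.first_value:
            # Near detection (discrepancy 470 within first value 440): lower the
            # threshold for the short interval so a repeat can be accepted.
            self._lowered_until = now + self.interval_s
        return False
```

With the defaults above, a first utterance scoring 0.70 arms the window (discrepancy 0.10, within the first value of 0.16), and a repeated utterance scoring 0.66 within the next 3 seconds is accepted because 0.66 equals or exceeds the lowered threshold 0.8 - 0.16 = 0.64.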
It should be noted that, although the example in FIG. 4 illustrates lowering the detection threshold 420 after a near detection, other approaches to loosening the detection criterion during the time interval 430 are possible.
In other embodiments, after the near detection, the original keyword model can be temporarily replaced, for the time interval 430, by a model tuned to facilitate detection of the keyword. For example, the replacement keyword model can be trained using training data that contain higher levels of noise (e.g., a low SNR environment), or, in the case of GMMs, the model could include more mixtures than the original model or artificially broadened Gaussian variances. Experiments have shown that such tuning of the replacement keyword model may increase the value of the confidence score 410 when the same utterance of a keyword is repeated. The replacement keyword model can be used instead of, or in addition to, the lowering of the detection threshold 420 for the time interval 430. In various embodiments, after a pre-determined time interval has passed, the original keyword model is restored, e.g., by detuning the tuned keyword model or otherwise replacing the tuned keyword model with the original keyword model.
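As an illustrative example, the following Python sketch shows variance broadening for a diagonal-covariance GMM together with the log-likelihood computation such a model would use for scoring. The broadening factor of 1.5 is an assumed value for illustration and is not specified by the present description.

```python
import numpy as np

def broaden_gmm_variances(variances: np.ndarray, factor: float = 1.5) -> np.ndarray:
    """Return artificially broadened variances for a diagonal-covariance GMM.

    Broadening makes the model more tolerant of mismatched (noisy, reverberant)
    features at the cost of a flatter, less selective score; factor=1.5 is an
    assumed value, not taken from the source."""
    return variances * factor

def gmm_log_likelihood(x: np.ndarray, means: np.ndarray,
                       variances: np.ndarray, weights: np.ndarray) -> float:
    """Log-likelihood of one feature vector x (D,) under a diagonal GMM with
    K mixtures: means/variances are (K, D), weights is (K,)."""
    diff2 = (x - means) ** 2 / variances                       # (K, D)
    log_comp = (np.log(weights)
                - 0.5 * np.sum(np.log(2.0 * np.pi * variances) + diff2, axis=1))
    m = np.max(log_comp)                                       # log-sum-exp trick
    return float(m + np.log(np.sum(np.exp(log_comp - m))))
```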
According to various embodiments, if the confidence score 410 equals or exceeds the original threshold 420 during a second utterance of the keyword, then the keyword is considered to be detected.
Both the lowering of the detection threshold and the tuning of the keyword model might otherwise increase the chances of false keyword detection; however, this is compensated for by relying on the uncorrelated nature of false detections within the short window of time in which the keyword is repeated. This uncorrelated nature reduces the likelihood of a false keyword detection being associated with the repetition of a keyword.
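A back-of-envelope computation illustrates this compensation, under the simplifying assumption (made here for illustration, not stated above) that false detections in disjoint short windows are independent, with assumed probabilities:

```python
# Back-of-envelope illustration of the compensation argument above, under the
# assumption (ours, not stated in the source) that false detections in a short
# window are statistically independent events.
p_false_near = 0.01     # assumed probability of a false near detection per window
p_false_confirm = 0.05  # assumed probability that a second false score then
                        # clears the lowered threshold within the same window
# A false wake now requires both events inside one short interval:
p_false_wake = p_false_near * p_false_confirm
print(f"false-wake probability per window: {p_false_wake:.4f}")  # 0.0005
```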
In yet other embodiments, the repeating of a keyword may be a requirement for keyword detection. One reason for requiring the repetition is that it may be useful in certain circumstances (for example, when a user accidentally uses a key phrase in conversation) to avoid unwanted detection and actions triggered therefrom. For example, a user may use the key phrase "find my phone" to trigger the phone to make a sound, play a song, and so forth. Because this key phrase may naturally occur in conversation, some embodiments may require the user to say "find my phone" twice in order to trigger the phone to perform the operation, so that the sound is not made or the song is not played when the phrase merely happens to be used in conversation. A minimal sketch of such a double-utterance gate follows.
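The following Python sketch shows one possible double-utterance gate. The function name and the 3 second window are assumptions made for illustration; the present description does not specify a window for this mode.

```python
import time

def make_double_utterance_gate(window_s: float = 3.0):
    """Return a callback that reports detection only on the second keyword
    detection within window_s seconds; window_s is an assumed value."""
    last_detection = [None]  # closure state: time of the previous detection

    def on_detection(now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        previous, last_detection[0] = last_detection[0], now
        return previous is not None and (now - previous) <= window_s

    return on_detection

# Example: the first "find my phone" is ignored; a repeat within 3 s triggers.
gate = make_double_utterance_gate()
print(gate(now=0.0))   # False -- first utterance only arms the gate
print(gate(now=1.5))   # True  -- second utterance within the window
```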
In some embodiments, the method 500 commences in block 502 with receiving an acoustic signal. The acoustic signal represents at least one captured sound. In block 504, the method 500 includes determining a keyword confidence score for the acoustic signal. In some embodiments, the confidence score can be obtained using a keyword model operable to analyze the acoustic signal and determine the confidence score.
In block 506, the method 500 includes comparing the keyword confidence score to a pre-determined detection threshold. If the confidence score reaches or is above the detection threshold, the method 500 proceeds with confirming that the keyword is detected in block 518. If the confidence score is lower than the detection threshold, then the method 500 includes, in block 508, determining whether the confidence score is within a first value of the detection threshold. In various embodiments, the first value may be set in a range of 10% to 25% of the detection threshold, which experiments have shown to be an acceptable range. In some embodiments, the first value is set to 20% of the detection threshold. If the confidence score is not within the first value of the detection threshold, then the method 500 proceeds with confirming that the keyword is not detected in block 516.
In block 510, if the confidence score is within the first value of the detection threshold, then the method 500 proceeds with lowering the detection threshold for a certain time interval (for example, 0.5-5 seconds). In block 512, the method 500 includes determining a further confidence score for further acoustic signals captured within the certain time interval. In block 514, the method 500 includes determining whether the further confidence score equals or exceeds the lowered detection threshold. If the further confidence score is less than the lowered detection threshold, then the method 500 proceeds with confirming that the keyword is not detected in block 516. If the further confidence score is above or equal to the lowered detection threshold, the method 500 proceeds with confirming that the keyword is detected in block 518.
In block 520, the method 500 in the example in FIG. 5 may include setting the detection threshold back to its original value when the certain time interval is complete. The overall flow of the method 500 is sketched below.
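For illustration only, the following Python sketch walks through the blocks of the method 500 over a stream of audio frames. The function name, the approximation of the time interval as a frame count, and the default parameter values are assumptions made for illustration.

```python
def run_method_500(frames, score_fn, threshold: float = 0.8,
                   first_fraction: float = 0.20, second_fraction: float = 0.20,
                   interval_frames: int = 50) -> bool:
    """Walk the blocks of method 500 over an iterable of audio frames.

    score_fn maps a frame to a keyword confidence score in [0, 1]; the time
    interval is approximated as a frame count. Names and defaults are
    illustrative assumptions."""
    lowered_frames_left = 0                      # > 0 while the threshold is lowered
    for frame in frames:                         # block 502: receive acoustic signal
        score = score_fn(frame)                  # block 504: determine confidence score
        if lowered_frames_left > 0:
            lowered_frames_left -= 1             # inside the interval from block 510
            if score >= threshold - second_fraction * threshold:  # blocks 512/514
                return True                      # block 518: keyword detected
            continue                             # block 520 when the count hits zero:
                                                 # the original threshold applies again
        if score >= threshold:                   # block 506: compare to threshold
            return True                          # block 518: keyword detected
        if threshold - score <= first_fraction * threshold:       # block 508
            lowered_frames_left = interval_frames  # block 510: lower the threshold
    return False                                 # block 516: keyword not detected
```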
Although the present embodiments have been particularly described with reference to preferred ones thereof, it should be readily apparent to those of ordinary skill in the art that changes and modifications in the form and details may be made without departing from the spirit and scope of the present disclosure. It is intended that the appended claims encompass such changes and modifications.
The present application claims priority to U.S. Provisional Patent Application No. 62/379,173 filed Aug. 24, 2016, the contents of which are incorporated herein by reference in their entirety.