The present disclosure generally relates to speech recognition in mobile devices, and more specifically, to processing an input sound for detecting a target keyword in mobile devices.
Recently, the use of mobile devices such as smartphones and tablet computers has become widespread. These devices typically provide voice and data communication functionalities over wireless networks. In addition, such mobile devices typically include other features that provide a variety of functions designed to enhance user convenience.
One of the features that are being used increasingly is a voice assistant function. The voice assistant function allows a mobile device to receive a voice command and run various applications in response to the voice command. For example, a voice command from a user allows a mobile device to call a desired phone number, play an audio file, take a picture, search the Internet, or obtain weather information, without a physical manipulation of the mobile device.
In conventional mobile devices, the voice assistant function is typically activated in response to detecting a target keyword from an input sound. Detection of a target keyword generally involves extracting sound features from the input sound and normalizing the sound features one at a time. However, sequentially normalizing the sound features in such a manner may result in a delay in detecting the target keyword from the input sound. Alternatively, the sound features may be accumulated and normalized all at once. In this case, however, such normalization typically results in a substantial processing load that takes some time to return to a normal level while depleting the limited power resources of the mobile device.
The present disclosure provides methods and apparatus for detecting a target keyword from an input sound in mobile devices.
According to one aspect of the present disclosure, a method of detecting a target keyword from an input sound for activating a function in a mobile device is disclosed. In this method, a first plurality of sound features is received in a buffer, and a second plurality of sound features is received in the buffer. While receiving each of the second plurality of sound features in the buffer, a first number of the sound features are processed from the buffer. The first number of the sound features includes two or more sound features. Further, the method may include determining a keyword score for at least one of the processed sound features and detecting the input sound as the target keyword if at least one of the keyword scores is greater than a threshold score. This disclosure also describes apparatus, a device, a system, a combination of means, and a computer-readable medium relating to this method.
According to another aspect of the present disclosure, a mobile device includes a buffer, a feature processing unit, a keyword score calculation unit, and a keyword detection unit. The buffer is configured to store a first plurality of sound features and a second plurality of sound features. The feature processing unit is configured to process a first number of the sound features from the buffer while the buffer receives each of the second plurality of sound features. The first number of the sound features includes two or more sound features. The keyword score calculation unit is configured to determine a keyword score for each of the processed sound features. The keyword detection unit is configured to detect an input sound as a target keyword if at least one of the keyword scores is greater than a threshold score.
Embodiments of the inventive aspects of this disclosure will be understood with reference to the following detailed description, when read in conjunction with the accompanying drawings.
In response, the user 110 may activate various functions of the mobile device 120 through the voice assistant application 130 by speaking other voice commands. For example, the user may activate a music player 140 by speaking a voice command “PLAY MUSIC.” Although the illustrated embodiment activates the voice assistant application 130 in response to detecting the target keyword, it may also activate any other applications or functions in response to detecting an associated target keyword. In one embodiment, the mobile device 120 may detect the target keyword by retrieving a plurality of sound features from a buffer for processing while generating and receiving a next sound feature into the buffer as will be described in more detail below.
The processor 230 includes a digital signal processor (DSP) 232 and a voice assistant unit 238, and may be an application processor or a central processing unit (CPU) for managing and operating the mobile device 120. The DSP 232 includes a speech detector 234 and a voice activation unit 236. In one embodiment, the DSP 232 is a low power processor for reducing power consumption in processing sound streams. In this configuration, the voice activation unit 236 in the DSP 232 is configured to activate the voice assistant unit 238 when the target keyword is detected in the input sound stream 210. Although the voice activation unit 236 is configured to activate the voice assistant unit 238 in the illustrated embodiment, it may also activate any functions or applications that may be associated with a target keyword.
The sound sensor 220 may be configured to receive the input sound stream 210 and provide it to the speech detector 234 in the DSP 232. The sound sensor 220 may include one or more microphones or any other types of sound sensors that can be used to receive, capture, sense, and/or detect the input sound stream 210. In addition, the sound sensor 220 may employ any suitable software and/or hardware for performing such functions.
In one embodiment, the sound sensor 220 may be configured to receive the input sound stream 210 periodically according to a duty cycle. In this case, the sound sensor 220 may determine whether the received portion of the input sound stream 210 is greater than a threshold sound intensity. When the received portion of the input sound stream 210 is greater than the threshold sound intensity, the sound sensor 220 activates the speech detector 234 and provides the received portion to the speech detector 234 in the DSP 232. Alternatively, without determining whether the received portion exceeds a threshold sound intensity, the sound sensor 220 may receive a portion of the input sound stream periodically and activate the speech detector 234 to provide the received portion to the speech detector 234.
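By way of illustration only, the duty-cycled intensity check described above may be sketched as follows; the function name mic_read, the RMS intensity measure, and the threshold value are assumptions introduced for the example and are not part of this disclosure.

```python
import numpy as np

def duty_cycled_capture(mic_read, intensity_threshold=0.01):
    """Periodically capture a portion of the input sound stream and gate it on intensity.

    mic_read() is assumed to return one captured portion as a numpy array of
    samples in [-1.0, 1.0]. If the portion's intensity (RMS here, for
    illustration) exceeds the threshold, the portion is returned so that the
    speech detector can be activated with it; otherwise None is returned and
    the sensor remains in its low-power duty cycle.
    """
    portion = np.asarray(mic_read(), dtype=float)
    intensity = float(np.sqrt(np.mean(portion ** 2)))  # simple RMS intensity of the portion
    if intensity > intensity_threshold:
        return portion   # activate the speech detector with this portion
    return None          # stay in the low-power duty cycle
```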
For use in detecting the target keyword, the storage unit 250 stores the target keyword and state information on a plurality of states associated with a plurality of portions of the target keyword. In one embodiment, the target keyword may be divided into a plurality of basic sound units such as phones, phonemes, or subunits thereof, and the plurality of portions representing the target keyword may be generated based on the basic sound units. Each portion of the target keyword is then associated with a state under a Markov chain model such as a hidden Markov model (HMM), a semi-Markov model (SMM), or a combination thereof. The state information may include transition information from each of the states to a next state including itself. The storage unit 250 may be implemented using any suitable storage or memory devices such as a RAM (Random Access Memory), a ROM (Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory, or a solid state drive (SSD).
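By way of illustration only, the state information for a target keyword divided into basic sound units could be organized as in the following sketch; the state names and log-domain transition scores are invented for the example and do not represent any particular keyword model.

```python
# Hypothetical state information for a target keyword under an HMM-style model.
# Each portion of the keyword is associated with a state, and each state stores
# transition scores (log domain) to itself (self-loop) and to the next state.
TARGET_KEYWORD_STATE_INFO = {
    "start": {"self": -0.7, "next": -0.7},
    "s1":    {"self": -0.5, "next": -0.9},   # e.g., first phone of the keyword
    "s2":    {"self": -0.5, "next": -0.9},
    "s3":    {"self": -0.5, "next": -0.9},
    "final": {"self": -0.1, "next": None},   # terminal state of the keyword
}
STATE_ORDER = ["start", "s1", "s2", "s3", "final"]
```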
The speech detector 234 in the DSP 232, when activated, receives the portion of the input sound stream 210 from the sound sensor 220. In one embodiment, the speech detector 234 extracts a plurality of sound features from the received portion and determines whether the extracted sound features indicate sound of interest, such as human speech, by using any suitable sound classification method such as a Gaussian mixture model (GMM) based classifier, a neural network, an HMM, a graphical model, or a Support Vector Machine (SVM). If the received portion is determined to be sound of interest, the speech detector 234 activates the voice activation unit 236, and the received portion and the remaining portion of the input sound stream are provided to the voice activation unit 236. In some other embodiments, the speech detector 234 may be omitted from the DSP 232. In this case, when the received portion is greater than the threshold intensity, the sound sensor 220 activates the voice activation unit 236 and provides the received portion and the remaining portion of the input sound stream 210 directly to the voice activation unit 236.
The voice activation unit 236, when activated, is configured to continuously receive the input sound stream 210 and detect the target keyword from the input sound stream 210. As the input sound stream 210 is received, the voice activation unit 236 may sequentially extract a plurality of sound features from the input sound stream 210. In addition, the voice activation unit 236 may process each of the plurality of extracted sound features and obtain the state information, including the plurality of states and the transition information for the target keyword, from the storage unit 250. For each processed sound feature, an observation score may be determined for each of the states by using any suitable probability model such as a GMM, a neural network, or an SVM.
From the transition information, the voice activation unit 236 may obtain transition scores from each of the states to a next state in a plurality of state sequences that are possible for the target keyword. After determining the observation scores and obtaining the transition scores, the voice activation unit 236 determines scores for the possible state sequences. In one embodiment, the greatest score among the determined scores may be used as a keyword score for the processed sound feature. If the keyword score for the processed sound feature is greater than a threshold score, the voice activation unit 236 detects the input sound stream 210 as the target keyword. In a particular embodiment, the threshold score may be a predetermined threshold score. Upon detecting the target keyword, the voice activation unit 236 generates and transmits an activation signal to turn on the voice assistant unit 238, which is associated with the target keyword.
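By way of illustration only, one way to obtain the greatest score among the possible state sequences is a Viterbi-style recursion over the observation scores and transition scores, as sketched below; the array layout and the left-to-right state topology are assumptions made for the example.

```python
import numpy as np

def keyword_score(observation_scores, self_scores, next_scores):
    """Best state-sequence score for a left-to-right keyword model (Viterbi-style).

    observation_scores: array of shape (num_frames, num_states) holding log
        observation scores (e.g., from a GMM) for each processed sound feature.
    self_scores: log transition scores for remaining in each state.
    next_scores: log transition scores for moving from each state to the next.
    Returns the greatest score among the possible state sequences.
    """
    num_frames, num_states = observation_scores.shape
    best = np.full(num_states, -np.inf)
    best[0] = observation_scores[0, 0]          # the sequence starts in the first state
    for t in range(1, num_frames):
        prev = best.copy()
        for s in range(num_states):
            stay = prev[s] + self_scores[s]     # self-loop transition
            move = prev[s - 1] + next_scores[s - 1] if s > 0 else -np.inf
            best[s] = max(stay, move) + observation_scores[t, s]
    return float(best[-1])                      # best score ending in the final state
```

In this sketch, the input sound may then be detected as the target keyword when the returned keyword score is greater than a threshold score chosen for a desired confidence level.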
The voice assistant unit 238 is activated in response to the activation signal from the voice activation unit 236. Once activated, the voice assistant unit 238 may turn on the voice assistant application 130 to output a message such as “MAY I HELP YOU?” on a touch display unit and/or through a speaker unit of the I/O unit 240. In response, a user may speak voice commands to activate various associated functions of the mobile device 120. For example, when a voice command for Internet search is received, the voice assistant unit 238 may recognize the voice command as a search command and perform a web search via the communication unit 260 through the network 270.
When the speech detector 234 determines an input sound stream 210 to be human speech, the segmentation unit 310 receives and segments the input sound stream 210 into a plurality of sequential frames of an equal time period. For example, the input sound stream 210 may be received and segmented into frames of 10 ms. The feature extractor 320 sequentially receives the segmented frames from the segmentation unit 310 and extracts a sound feature from each of the frames. In one embodiment, the feature extractor 320 may extract the sound features from the frames using any suitable feature extraction method such as the MFCC (Mel-frequency cepstral coefficients) method. For example, in the case of the MFCC method, components of an n-dimensional vector are calculated from each of the segmented frames and the vector is used as a sound feature.
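By way of illustration only, segmenting the input sound stream into equal 10 ms frames could be sketched as follows; the 16 kHz sample rate is an assumption, and the MFCC computation itself (which would produce an n-dimensional vector per frame) is left to a standard signal-processing library.

```python
import numpy as np

def segment_into_frames(samples, sample_rate=16000, frame_ms=10):
    """Segment an input sound stream into sequential frames of equal duration.

    At the assumed 16 kHz sample rate, a 10 ms frame contains 160 samples.
    Each returned frame would then be passed to a feature extractor (e.g.,
    an MFCC routine) that computes the components of an n-dimensional
    vector used as the sound feature for that frame.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    num_frames = len(samples) // frame_len
    return [np.asarray(samples[i * frame_len:(i + 1) * frame_len])
            for i in range(num_frames)]
```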
The feature buffer 330 is configured to sequentially receive the extracted sound features from the feature extractor 320. In the case of 10 ms frames, the feature buffer 330 may receive each of the sound features in a 10 ms interval. In one embodiment, the feature buffer 330 may be a FIFO (first-in first-out) buffer where the sound features are sequentially written to the buffer and are read out in an order that they are received. In another embodiment, the feature buffer 330 may include two or more memories configured to receive and store sound features, and output one or more sound features in the order received. For example, the feature buffer 330 may be implemented using a ping-pong buffer or a dual buffer in which one buffer receives a sound feature while the other buffer outputs a previously written sound feature. In some embodiments, the feature buffer 330 may be implemented in the storage unit 250.
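By way of illustration only, the first-in first-out behavior of the feature buffer 330 could be sketched with a simple queue, as below; this is one possible realization and not the disclosed implementation.

```python
from collections import deque

class FeatureBuffer:
    """FIFO buffer: sound features are written sequentially and read out in arrival order."""

    def __init__(self):
        self._queue = deque()

    def write(self, feature):
        """Sequentially receive one extracted sound feature."""
        self._queue.append(feature)

    def read(self, count):
        """Read up to `count` sound features in the order they were received."""
        out = []
        while self._queue and len(out) < count:
            out.append(self._queue.popleft())
        return out

    def __len__(self):
        return len(self._queue)
```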
The feature statistics generator 340 accesses the sound features received in the feature buffer 330 and generates feature statistics of the sound features. The feature statistics may include at least one of a mean μ, a variance σ2, a maximum value, a minimum value, a noise power, a signal-to-noise ratio (SNR), a signal power, an entropy, a kurtosis, a higher-order moment, etc. that are used in processing the sound features in the feature processing unit 350. In one embodiment, initial feature statistics may be generated for a plurality of sound features initially received in the feature buffer 330 and updated with each of the subsequent sound features received in the feature buffer 330 to generate updated feature statistics. For example, the initial feature statistics may be generated once for the first thirty sound features received in the feature buffer 330 and then updated with each of the subsequent sound features that are received in the feature buffer 330.
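By way of illustration only, generating the initial feature statistics (a per-component mean μ and variance σ2) over the first N received sound features could be sketched as follows; returning the feature count alongside the statistics is an assumption made so that the statistics can later be updated recursively.

```python
import numpy as np

def initial_feature_statistics(first_features):
    """Compute initial statistics over the initially received sound features.

    `first_features` is a list of n-dimensional sound feature vectors (e.g.,
    the first thirty MFCC vectors received in the buffer). Returns the
    per-component mean and variance together with the feature count.
    """
    stacked = np.stack(first_features)        # shape: (N, n_components)
    return {
        "count": stacked.shape[0],
        "mean": stacked.mean(axis=0),
        "var": stacked.var(axis=0),
    }
```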
Once the feature statistics generator 340 generates the initial feature statistics for the plurality of initially received sound features, the feature buffer 330 receives a next sound feature. While the feature buffer 330 receives the next sound feature, the feature processing unit 350 receives a first number of the sound features from the feature buffer 330 in the order received (e.g., first-in first-out) and processes each of the first number of the sound features. In a particular embodiment, the first number of sound features may be a predetermined number of sound features. For example, the first number of sound features may be two or more sound features. In one embodiment, the feature processing unit 350 may normalize each of the first number of sound features based on the associated feature statistics, which include a mean μ and a variance σ2. In other embodiments, the feature processing unit 350 may perform one or more of noise suppression, echo cancellation, etc. on each of the first number of sound features based on the associated feature statistics.
The first number of sound features (e.g., two or more sound features) may be adjusted based on available processing resources. For example, the feature processing unit 350 may process multiple sound features during a single time frame (e.g., a clock cycle) as opposed to processing a single sound feature during the single time frame. In a particular embodiment, the number of sound features processed by the feature processing unit 350 during a single time frame may be determined based on an availability of resources, as described in more detail below.
In some embodiments, since the sound features are processed in the order that they are received in the feature buffer 330, the feature processing unit 350 retrieves and normalizes the first number of the sound features starting from the first sound feature. In this manner, during the time it takes for the feature buffer 330 to receive a next sound feature, the feature processing unit 350 accesses and normalizes the first number of sound features from the feature buffer 330. After the feature processing unit 350 finishes normalizing the initially received sound features based on the initial feature statistics, the feature processing unit 350 normalizes the next sound feature based on the feature statistics updated with the next sound feature. The keyword score calculation unit 360 receives the first number of normalized sound features from the feature processing unit 350 and determines a keyword score for each of the normalized sound features. The keyword score may be determined in the manner described above.
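By way of illustration only, the catch-up processing described above, in which a first number of buffered sound features is normalized while the next sound feature is written into the buffer, could be sketched as follows; the FeatureBuffer and statistics helpers refer to the illustrative sketches above, and the small constant added to the variance is an assumption for numerical safety.

```python
import numpy as np

def normalize(feature, stats):
    """Normalize one sound feature using the associated mean and variance."""
    return (feature - stats["mean"]) / np.sqrt(stats["var"] + 1e-8)

def process_frame_period(buffer, next_feature, stats, first_number=2):
    """Work performed during one frame period under the scheme described above.

    While the buffer receives `next_feature`, up to `first_number` sound
    features (two or more) are retrieved from the buffer, oldest first, and
    normalized using the feature statistics associated with them. The list
    of normalized sound features produced in this period is returned and can
    then be passed to the keyword score calculation.
    """
    buffer.write(next_feature)                        # input side: receive the next feature
    retrieved = buffer.read(first_number)             # output side: retrieve the first number
    return [normalize(f, stats) for f in retrieved]   # normalize in the order received
```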
The keyword detection unit 370 receives the keyword score for each of the first number of the normalized sound features and determines whether any one of the keyword scores is greater than a threshold score. As a non-limiting example, the threshold score may be a predetermined threshold score. In one embodiment, the keyword detection unit 370 may detect the input sound stream 210 as the target keyword if at least one of the keyword scores is greater than the threshold score. The threshold score may be set to a minimum keyword score for detecting the target keyword within a desired confidence level. When any one of the keyword scores exceeds the threshold score, the keyword detection unit 370 generates the activation signal to turn on the voice assistant unit 238.
As each of the plurality of frames R1 to RM is generated, the feature extractor 320 sequentially receives the frames R1 to RM, and extracts the plurality of sound features F1 to FM from the frames R1 to RM, respectively. In one embodiment, the sound features F1 to FM may be extracted in the form of MFCC vectors. The extracted sound features F1 to FM are then sequentially provided to the feature buffer 330 for storage and processing.
During the time periods T1 through TN, the feature buffer 330 sequentially receives and stores the sound features F1 to FN, respectively. Once the feature buffer 330 receives the N number of sound features F1 to FN, the feature statistics generator 340 accesses the sound features F1 to FN from the feature buffer 330 to generate the initial feature statistics SN. In the illustrated embodiment, the feature processing unit 350 does not normalize any sound features from the feature buffer 330 during the time periods T1 through TN.
During the time period TN+1, the feature processing unit 350 retrieves and normalizes a number of sound features (e.g., a predetermined number of sound features) from the feature buffer 330 while the feature buffer 330 receives the sound feature FN+1. In the illustrated embodiment, the feature processing unit 350 retrieves and normalizes the first two sound features F1 and F2 from the feature buffer 330 based on the initial feature statistics SN during the time period TN+1. Alternatively, the feature processing unit 350 may be configured to normalize the sound features F1 and F2 based on the initial feature statistics SN during the time period TN. The sound features in the feature buffer 330 that are retrieved and normalized by the feature processing unit 350 are indicated as a box with a dotted line.
Since the feature processing unit 350 normalizes the sound features F1 and F2 during the time period TN+1, time delays between receiving and normalizing the sound features F1 and F2 are approximately N time periods and N−1 time periods, respectively. When the feature buffer 330 receives the sound feature FN+1, the feature statistics generator 340 accesses the sound feature FN+1 from the feature buffer 330 and updates the initial feature statistics SN with the sound feature FN+1 during the time period TN+1 to generate updated feature statistics SN+1. Alternatively, the feature statistics generator 340 may update the initial feature statistics SN with the sound feature FN+1 to generate the updated feature statistics SN+1 at any time before the feature processing unit 350 normalizes the sound feature FN+1.
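By way of illustration only, the recursive update of the feature statistics with each newly received sound feature (SN to SN+1, SN+1 to SN+2, and so on) could be sketched with an incremental mean and variance update; the Welford-style formulation below is an assumption about one reasonable way to perform the update, not a formula recited by this disclosure.

```python
def update_feature_statistics(stats, new_feature):
    """Update running per-component statistics with one newly received sound feature.

    `stats` holds the count, per-component mean, and per-component variance of
    the sound features seen so far (e.g., SN); the returned dictionary holds
    the updated statistics (e.g., SN+1).
    """
    n = stats["count"] + 1
    delta = new_feature - stats["mean"]
    new_mean = stats["mean"] + delta / n
    # Incremental (Welford-style) update of the population variance.
    new_var = (stats["var"] * stats["count"] + delta * (new_feature - new_mean)) / n
    return {"count": n, "mean": new_mean, "var": new_var}
```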
During the time period TN+2, the feature processing unit 350 retrieves and normalizes the next two sound features F3 and F4 from the feature buffer 330 based on the initial feature statistics SN while the feature buffer 330 receives a sound feature FN+2. When the feature buffer 330 receives the sound feature FN+2, the feature statistics generator 340 accesses the sound feature FN+2 from the feature buffer 330 and updates the previous feature statistics SN+1 with the sound feature FN+2 during the time period TN+2 to generate updated feature statistics SN+2. In this manner, the feature processing unit 350 normalizes each of the sound features F1 to FN based on the initial feature statistics SN, and normalizes each of the subsequent sound features, including FN+1, based on the recursively updated feature statistics.
In time periods TN+3 through TM−1, the number of sound features stored in the feature buffer 330 is reduced by one at each time period since one sound feature is written into the feature buffer 330 while two sound features are retrieved and normalized. During these time periods, the feature statistics generator 340 accesses sound features FN+3 to FM−1 and updates the previous feature statistics with the sound features FN+3 to FM−1 to generate updated feature statistics SN+3 to SM−1, respectively. For example, during the time period TN+3, the feature statistics generator 340 accesses the sound feature FN+3 and updates the feature statistics SN+2 with the sound feature FN+3 to generate updated feature statistics SN+3. In the illustrated embodiment, during the time period TM−1, the feature processing unit 350 retrieves and normalizes the sound features FM−3 and FM−2 from the feature buffer 330 based on the feature statistics SM−3 and SM−2, respectively, while the feature buffer 330 receives the sound feature FM−1.
When the first plurality of sound features has been received in the feature buffer 330, the feature statistics generator 340, at 604, generates the initial feature statistics SN for the first plurality of sound features, e.g., a mean μ and a variance σ2. For sound features extracted in the form of MFCC vectors, each sound feature includes a plurality of components. In this case, the feature statistics may include a mean μ and a variance σ2 for each of the components of the sound features. In one embodiment, the feature statistics generator 340 may access the first plurality of sound features after the feature buffer 330 has received the first plurality of sound features. In another embodiment, the feature statistics generator 340 may access each of the first plurality of sound features as the feature buffer 330 receives the sound features.
In the illustrated method, during a time period T, the feature processing unit 350 receives and normalizes a first number of sound features from the output of the feature buffer 330 at 610 and 612 while a next sound feature of a second plurality of sound features is written into the feature buffer 330 at 606. On the input side, the feature buffer 330 receives the next sound feature (e.g., FN+1) of the second plurality of sound features at 606. As the next sound feature (e.g., FN+1) is received in the feature buffer 330, the feature statistics generator 340 accesses, at 608, the next sound feature (e.g., FN+1) from the feature buffer 330 and updates the previous feature statistics (e.g., SN) with the next sound feature (e.g., FN+1) to generate updated feature statistics (e.g., SN+1). For example, the feature statistics generator 340 generates the updated feature statistics SN+1 by calculating a new mean μ and a new variance σ2 of the sound features F1 to FN+1.
On the output side of the feature buffer 330, the feature processing unit 350 retrieves the first number of sound features that includes two or more sound features from the feature buffer 330 at 610. The feature processing unit 350 then normalizes the retrieved first number of sound features (e.g., F1 and F2) based on the feature statistics (e.g., SN) at 612. In one embodiment, the feature processing unit 350 may normalize each of the retrieved sound features based on the initial feature statistics if the retrieved sound feature is from the first plurality of sound features. For subsequent sound features (e.g., FN+1), the feature processing unit 350 may normalize each of the retrieved sound features based on the recursively updated feature statistics (e.g., SN+1). In the case of sound features extracted by using the MFCC method, the sound features may be in the form of MFCC vectors, and normalized based on mean values and variance values of each component of the MFCC vector.
At 614, the keyword score calculation unit 360 receives the normalized sound features and determines a keyword score for each of the normalized sound features as described above.
On the other hand, if none of the keyword scores is greater than the threshold score, the method proceeds to 620 to determine whether the feature buffer 330 includes less than the first number of sound features. If the feature buffer 330 includes less than the first number of sound features, the method proceeds to 622 and 626.
After the feature processing unit 350 has normalized the sound feature at 628, the keyword score calculation unit 360 receives the normalized sound feature and determines a keyword score for the normalized sound feature at 630, as described above.
If the current resources of the mobile device 120 are insufficient to normalize the first number of sound features, the feature processing unit 350 decreases the first number at 740. On the other hand, if the current resources of the mobile device 120 are sufficient, the feature processing unit 350 determines whether the current resources of the mobile device 120 are sufficient to normalize more sound features at 750. If the resources of the mobile device 120 are insufficient to normalize more sound features, the feature processing unit 350 maintains the first number at 760. Otherwise, the feature processing unit 350 can normalize more sound features and proceeds to 770 to increase the first number.
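By way of illustration only, the adjustment of the first number based on current resources could be sketched as follows; how resource sufficiency is measured (here, an available_capacity value compared against a per-feature normalization cost) is an assumption introduced for the example.

```python
def adjust_first_number(first_number, available_capacity, cost_per_feature):
    """Adjust how many sound features are normalized per frame period.

    If the current resources cannot cover the present first number, it is
    decreased; if they can cover at least one additional feature, it is
    increased; otherwise it is maintained. A lower bound of one feature per
    period is assumed so that processing always makes progress.
    """
    required = first_number * cost_per_feature
    if available_capacity < required:
        return max(1, first_number - 1)                      # insufficient: decrease
    if available_capacity >= required + cost_per_feature:
        return first_number + 1                              # headroom: increase
    return first_number                                      # sufficient: maintain
```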
During a time period TN+1, the feature processing unit 350 retrieves the first number of sound features from the feature buffer 330 and normalizes one or more sound features of the first number of sound features while the feature buffer 330 receives the sound feature FN+1. As shown, the feature processing unit 350 retrieves the first three sound features F1, F2, and F3 from the feature buffer 330, skips normalization of the sound feature F3, and normalizes two sound features F1 and F2 based on the initial feature statistics SN. The sound features in the feature buffer 330 that are retrieved by the feature processing unit 350 are indicated as a box with a dotted line and the sound features in the feature processing unit 350 that are received but not normalized are also indicated as a box with a dotted line. Alternatively, the skipping of the sound feature F3 may be implemented by the feature processing unit 350 retrieving only the sound features that are to be normalized, i.e., F1 and F2, from the feature buffer 330.
In one embodiment, the keyword score calculation unit 360 calculates a keyword score for a normalized sound feature of the sound feature F3 by using the normalized sound feature of the sound feature F2 as the normalized sound feature of the sound feature F3. The skipping process may be repeated for subsequent sound features (e.g., F6) that are received from the feature buffer 330. Thus, the process load may be reduced substantially by using a normalized sound feature and observation scores of the previous sound feature as a normalized sound feature and observation scores of a skipped sound feature. Further, since the difference between a skipped sound feature and a previous sound feature, which is used instead of the skipped sound feature for determining the keyword score, is generally not substantial, the skipping may not significantly degrade the performance in detecting the target keyword.
If the difference is determined to be less than a threshold difference at 1020, the feature processing unit 350 skips normalization of the current sound feature and uses a previous normalized sound feature as a current normalized sound feature at 1030. For example, if the difference between a current sound feature F3 and a previous sound feature F2 is less than a threshold difference, the feature processing unit 350 may skip normalization of the sound feature F3 and use a normalized sound feature of the sound feature F2 as a current normalized sound feature of the sound feature F3.
If the difference is determined to be equal to or greater than the threshold difference at 1020, the feature processing unit 350 normalizes the current sound feature based on associated feature statistics at 1040. The feature processing unit 350 then provides the current normalized sound feature to the keyword score calculation unit 360 for determining a keyword score for the current sound feature. By adaptively skipping normalization of a sound feature when a difference between the sound feature and a previous sound feature is not substantial, the process load may be reduced significantly without substantially degrading the performance in detecting the target keyword.
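By way of illustration only, the adaptive skipping described above could be sketched as follows; the Euclidean distance measure, the threshold value, and the small constant added to the variance are assumptions made for the example.

```python
import numpy as np

def normalize_or_reuse(current, previous, previous_normalized, stats, diff_threshold=1.0):
    """Normalize the current sound feature, or reuse the previous normalized feature.

    If the current sound feature differs from the previous sound feature by
    less than the threshold (Euclidean distance here, for illustration),
    normalization is skipped and the previous normalized sound feature is
    used as the current normalized sound feature, reducing the processing load.
    """
    if np.linalg.norm(np.asarray(current) - np.asarray(previous)) < diff_threshold:
        return previous_normalized   # skip normalization of the current sound feature
    return (np.asarray(current) - stats["mean"]) / np.sqrt(stats["var"] + 1e-8)
```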
The feature processing unit 350 receives current resource information of the mobile device 120 at 1130. The feature processing unit 350 then determines based on the received resource information, at 1140, whether the current resources of the mobile device 120 are sufficient to normalize the number of sound features among the first number of sound features during the time period in which a sound feature is received in the feature buffer 330. If the current resources of the mobile device 120 are insufficient to normalize the number of sound features, the feature processing unit 350 decreases the number of sound features that are to be normalized at 1150. That is, the number of sound features that are retrieved from the feature buffer 330 but not normalized by the feature processing unit 350 is increased such that the process load is reduced.
On the other hand, if the current resources of the mobile device 120 are determined to be sufficient at 1140, the feature processing unit 350 determines whether the current resources of the mobile device 120 are sufficient to normalize more sound features at 1160. If the resources of the mobile device 120 are insufficient to normalize more sound features, the feature processing unit 350 maintains the number of sound features that are to be normalized at 1170. Otherwise, the mobile device 120 can normalize more sound features and proceeds to 1180 to increase the number of sound features that are to be normalized such that the performance in detecting the target keyword is enhanced.
Then, during the second time period P2, the available resources of the mobile device 120 increase to allow normalization of four sound features. Thus, the number of sound features that are to be normalized is adjusted to four and the feature processing unit 350 proceeds to normalize all four sound features. At the next time period P3, the available resources of the mobile device 120 decrease to allow normalization of three sound features. Accordingly, the number of sound features that are normalized is adjusted to three and the feature processing unit 350 proceeds to skip normalization of one sound feature.
The mobile device 1300 may be capable of providing bidirectional communication via a receive path and a transmit path. On the receive path, signals transmitted by base stations are received by an antenna 1312 and are provided to a receiver (RCVR) 1314. The receiver 1314 conditions and digitizes the received signal and provides the conditioned and digitized signal to a digital section 1320 for further processing. On the transmit path, a transmitter (TMTR) 1316 receives data to be transmitted from the digital section 1320, processes and conditions the data, and generates a modulated signal, which is transmitted via the antenna 1312 to the base stations. The receiver 1314 and the transmitter 1316 may be part of a transceiver that supports CDMA, GSM, W-CDMA, LTE, LTE Advanced, and so on.
The digital section 1320 includes various processing, interface, and memory units such as, for example, a modem processor 1322, a reduced instruction set computer/digital signal processor (RISC/DSP) 1324, a controller/processor 1326, an internal memory 1328, a generalized audio encoder 1332, a generalized audio decoder 1334, a graphics/display processor 1336, and/or an external bus interface (EBI) 1338. The modem processor 1322 performs processing for data transmission and reception, e.g., encoding, modulation, demodulation, and decoding. The RISC/DSP 1324 performs general and specialized processing for the mobile device 1300. The controller/processor 1326 controls the operation of various processing and interface units within the digital section 1320. The internal memory 1328 stores data and/or instructions for various units within the digital section 1320.
The generalized audio encoder 1332 performs encoding for input signals from an audio source 1342, a microphone 1343, and so on. The generalized audio decoder 1334 performs decoding for coded audio data and provides output signals to a speaker/headset 1344. It should be noted that the generalized audio encoder 1332 and the generalized audio decoder 1334 are not necessarily required for interface with the audio source 1342, the microphone 1343, and the speaker/headset 1344, and thus may be omitted from the mobile device 1300. The graphics/display processor 1336 performs processing for graphics, videos, images, and texts, which are presented to a display unit 1346. The EBI 1338 facilitates the transfer of data between the digital section 1320 and a main memory 1348.
The digital section 1320 may be implemented with one or more processors, DSPs, microprocessors, RISCs, etc. The digital section 1320 may also be fabricated on one or more application specific integrated circuits (ASICs) and/or some other type of integrated circuits (ICs).
In general, any device described herein is indicative of various types of devices, such as a wireless phone, a cellular phone, a laptop computer, a wireless multimedia device, a wireless communication personal computer (PC) card, a PDA, an external or internal modem, a device that communicates through a wireless channel, and so on. A device may have various names, such as access terminal (AT), access unit, subscriber unit, mobile station, client device, mobile unit, mobile phone, mobile, remote station, remote terminal, remote unit, user device, user equipment, handheld device, etc. Any device described herein may have a memory for storing instructions and data, as well as hardware, software, firmware, or combinations thereof.
The techniques described herein may be implemented by various means. For example, these techniques may be implemented in hardware, firmware, software, or combinations thereof. Those of ordinary skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
For a hardware implementation, the processing units used to perform the techniques may be implemented within one or more ASICs, DSPs, digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, a computer, or a combination thereof.
Thus, the various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
If implemented in software, the functions may be stored on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Further, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. For example, a computer-readable storage medium may be a non-transitory computer-readable storage device that includes instructions that are executable by a processor. Thus, a computer-readable storage medium may not be a signal.
The previous description of the disclosure is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although exemplary implementations are referred to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices may include PCs, network servers, and handheld devices.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The present application claims priority from U.S. Provisional Patent Application No. 61/820,464, filed May 7, 2013, entitled “ADAPTIVE AUDIO FRAME PROCESSING FOR KEYWORD DETECTION,” and U.S. Provisional Patent Application No. 61/859,048, filed Jul. 26, 2013, entitled “ADAPTIVE AUDIO FRAME PROCESSING FOR KEYWORD DETECTION,” the contents of which are incorporated by reference in their entirety.