The present disclosure relates generally to audio processing, and more specifically, to detecting a target keyword from an audio input in an electronic device.
In recent years, the use of electronic devices having sound capturing capabilities such as smartphones, tablet computers, and smart TVs has been increasing steadily. Such electronic devices may allow one or more applications or functions to be activated in response to a voice keyword. Since users typically invoke such voice activation features at unpredictable times, these devices often operate in an active state continuously so that input sound can be captured and processed.
Operating such devices in a continuously active state generally ensures that an entire keyword is received when detecting the keyword from an input sound. However, using such devices to continuously capture and process the input sound for detecting a keyword typically leads to increased power consumption or reduced battery life in the case of mobile devices. Some devices have implemented a duty cycle to reduce the time during which an input sound stream is received and processed. However, if the beginning of the keyword from a user falls in an inactive state of the duty cycle, the keyword may fail to be detected, requiring the user to repeat the keyword.
In addition, when different users speak the same keyword to activate a function in an electronic device, they typically speak the keyword at different speeds such that different time periods may be devoted to each of the portions of the keyword. To accommodate such differences among users, some electronic devices have processed all combinations of all possible time periods for detecting the keyword. However, such a sound processing method generally increases computational complexity and requires substantial computing resources.
The present disclosure provides methods and apparatus for detecting a target keyword to activate a function or an application.
According to one aspect of the present disclosure, a method of detecting a target keyword, which includes a plurality of portions, for activating a function in an electronic device is disclosed. The method receives an input sound starting from one of the plurality of portions of the target keyword and extracts a plurality of sound features from the input sound. Further, the method obtains data descriptive of a state network, where the state network includes a single starting state, multiple entry states, and transitions from the single starting state to each of the multiple entry states. Based on the extracted sound features and state network, the input sound may be detected as the target keyword. This disclosure also describes apparatus, a device, a system, a combination of means, and a computer-readable medium relating to this method.
According to another aspect of the present disclosure, an electronic device for detecting a target keyword to activate a function includes a sound sensor and a voice activation unit. The target keyword includes a plurality of portions. The sound sensor is configured to receive an input sound starting from one of the plurality of portions of the target keyword. The voice activation unit is configured to extract a plurality of sound features from the input sound, obtain data descriptive of a state network, and determine whether the input sound corresponds to the target keyword based on the extracted sound features and the state network. The state network includes a single starting state, multiple entry states, and transitions from the single starting state to each of the multiple entry states.
According to still another aspect of the present disclosure, a method for detecting a target keyword in an electronic device is disclosed. The target keyword includes a plurality of portions. The method receives an input sound and extracts a plurality of sound features from the input sound. Further, the method obtains state information associated with the plurality of portions of the target keyword. The state information includes a plurality of duration ranges for a plurality of states associated with the portions of the target keyword. Based on the extracted sound features and the state information, the input sound is detected as the target keyword. This disclosure also describes apparatus, a device, a system, a combination of means, and a computer-readable medium relating to this method.
According to yet another aspect of the present disclosure, an electronic device for detecting a target keyword including a plurality of portions is disclosed. The electronic device includes a sound sensor and a voice activation unit. The sound sensor is configured to receive an input sound. The voice activation unit is configured to extract a plurality of sound features from the input sound and obtain state information associated with the plurality of portions of the target keyword. The state information includes a plurality of duration ranges for a plurality of states associated with the portions of the target keyword. The voice activation unit is further configured to detect the input sound as the target keyword based on the extracted sound features and the state information.
Embodiments of the inventive aspects of this disclosure will be understood with reference to the following detailed description, when read in conjunction with the accompanying drawings.
Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be apparent to one of ordinary skill in the art that the present subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, systems, and components have not been described in detail so as not to unnecessarily obscure aspects of the various embodiments.
Upon detecting the target keyword, the mobile device 120 activates the voice assistant application 130, which can perform additional functions in response to other commands from the user 110. In some embodiments, the mobile device 120 may store a plurality of target keywords for use in detecting the target keywords from a user's sound input. Each of the target keywords may be configured to activate an application or a function associated with the target keyword in the mobile device 120.
The processor 250 includes a digital signal processor (DSP) 252 and a voice assistant unit 260, and may be an application processor or a central processing unit (CPU) for managing and operating the electronic device 200. The DSP 252 includes a speech detector 254 and a voice activation unit 256. In one embodiment, the DSP 252 is a low power processor for reducing power consumption in processing sound streams. In this configuration, the voice activation unit 256 in the DSP 252 is configured to activate the voice assistant unit 260 when the target keyword is detected in an input sound stream. Although the voice activation unit 256 is configured to activate the voice assistant unit 260 in the illustrated embodiment, it may also activate any function or application that may be associated with a target keyword.
The sound sensor 210 may be configured to receive an input sound stream and provide it to the speech detector 254 in the DSP 252. The sound sensor 210 may include one or more microphones or any other types of sound sensors that can be used to receive, capture, sense, and/or detect a sound input to the electronic device 200. In addition, the sound sensor 210 may employ any suitable software and/or hardware for performing such functions.
In one embodiment, the sound sensor 210 may be configured to receive the input sound stream periodically according to a duty cycle. In this case, the sound sensor 210 may determine whether the received portion of the input sound stream exceeds a threshold sound intensity. When the received portion of the sound stream exceeds the threshold intensity, the sound sensor 210 activates the speech detector 254 and provides the received portion to the speech detector 254 in the DSP 252. Alternatively, the sound sensor 210 may receive a portion of the input sound stream periodically and, without determining whether the received portion exceeds a threshold sound intensity, activate the speech detector 254 and provide the received portion to it.
For use in detecting the target keyword, the storage unit 230 stores the target keyword and state information on a plurality of states associated with a plurality of portions of the target keyword. In one embodiment, the target keyword may be divided into a plurality of basic units of sound such as phones, phonemes, or subunits thereof, and the plurality of portions representing the target keyword may be generated based on the basic units of sound. Each portion of the target keyword is then associated with a state under a Markov chain model such as a hidden Markov model (“HMM”), a semi-Markov model (“SMM”), or a combination thereof. The state information may include a predetermined number of entry states among the states associated with the target keyword and transition information from each of the states to a next state including itself. The storage unit 230 may be implemented using any suitable storage or memory devices such as a RAM (Random Access Memory), a ROM (Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory, or an SSD (solid state drive).
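For illustration only, the stored state information may be organized as in the following Python sketch, in which a keyword "start" is divided into phone-like portions. The state names, layout, and self-transition scores are assumptions made for this example rather than recited values; the entry-state transition scores follow the exemplary values discussed later in connection with Table 1.

```python
# A hypothetical layout of the state information stored in the storage unit
# 230 for a target keyword "start". The two "T" phones are written as "T1"
# and "T2" to keep keys distinct; self-transition scores are assumed values.
state_info = {
    "states": ["F", "S", "T1", "A", "R", "T2"],  # "F" is the non-keyword (filler) state
    "entry_states": ["S", "T1", "A", "R"],       # states reachable from the starting state
    "transition_scores": {
        ("F", "F"): 0.5,
        ("F", "S"): 0.8, ("F", "T1"): 0.8, ("F", "A"): 0.7, ("F", "R"): 0.9,
        ("S", "S"): 0.6, ("S", "T1"): 0.7,
        ("T1", "T1"): 0.6, ("T1", "A"): 0.7,
        ("A", "A"): 0.6, ("A", "R"): 0.7,
        ("R", "R"): 0.6, ("R", "T2"): 0.7,
        ("T2", "T2"): 0.6,
    },
}
```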
The speech detector 254 in the DSP 252, when activated, receives the portion of the input sound stream from the sound sensor 210. In one embodiment, the speech detector 254 extracts a plurality of sound features from the received portion and determines whether the extracted sound features indicate sound of interest such as speech by using any suitable sound classification method such as a Gaussian mixture model (GMM) based classifier, a neural network, the HMM, a graphical model, and a Support Vector Machine (SVM) technique. If the received portion is determined to be sound of interest, the speech detector 254 activates the voice activation unit 256 and the received portion and the remaining portion of the input sound stream are provided to the voice activation unit 256. In some other embodiments, the speech detector 254 may be omitted in the DSP 252. In this case, when the received portion exceeds the threshold intensity, the sound sensor 210 activates the voice activation unit 256 and provides the received portion and the remaining portion of the input sound stream directly to the voice activation unit 256.
The voice activation unit 256, when activated, is configured to continuously receive the input sound stream and detect the target keyword from the input sound stream. As the input sound stream is received, the voice activation unit 256 may sequentially extract a plurality of sound features from the input sound stream. In addition, the voice activation unit 256 may obtain the state information including the plurality of states, the predetermined number of entry states, and transition information for the target keyword from the storage unit 230. For each sound feature, an observation score may be determined for each of the states by using any suitable probability model such as the GMM, the neural network, and the SVM.
From the transition information, the voice activation unit 256 may obtain transition scores from each of the states to a next state in a plurality of state sequences that are possible for the target keyword. The transition information may also include transition scores for the predetermined number of entry states to allow the voice activation unit 256 to detect the target keyword in the input sound stream based on the multiple entry states associated with the basic units of sound of the target keyword. By providing such multiple entry states into the target keyword, the voice activation unit 256 may detect the target keyword by processing the input sound stream that starts after the beginning of the target keyword.
After determining the observation scores and obtaining the transition scores, the voice activation unit 256 determines keyword scores for the possible state sequences. In one embodiment, if the greatest keyword score among the determined keyword scores exceeds a predetermined threshold score, the voice activation unit 256 detects the input sound stream as the target keyword. Upon detecting the target keyword, the voice activation unit 256 generates and transmits an activation signal to turn on the voice assistant unit 260, which is associated with the target keyword.
The voice assistant unit 260 is activated in response to the activation signal from the voice activation unit 256. Once activated, the voice assistant unit 260 may perform a voice assistant function by outputting a message such as “MAY I HELP YOU?” on a touch screen and/or through a speaker of the I/O unit 220. In response, a user may speak voice commands to activate various associated functions of the electronic device 200. For example, when a voice command for Internet search is received, the voice assistant unit 260 may recognize the voice command as a search command and perform a web search via the communication unit 240 through the network 270.
The sound detector 320 may analyze signal characteristics of the received portion of the input sound stream and determine whether the received portion exceeds a threshold sound intensity. For example, the sound detector 320 may analyze sound energy values or amplitude values of the received portion to determine the sound intensity of the received portion. If the received portion is determined to be sound exceeding the threshold sound intensity, the sound detector 320 may provide an activation signal and the received portion to the speech detector 254.
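As one possible realization, the intensity check may be implemented as a root-mean-square energy comparison. The following Python sketch assumes floating-point samples and an assumed threshold value not recited in this disclosure.

```python
import numpy as np

def exceeds_threshold(samples: np.ndarray, threshold_db: float = -40.0) -> bool:
    """Return True if the received portion exceeds a threshold sound intensity.

    A minimal sketch: intensity is estimated as RMS energy in dB relative to
    full scale, assuming float samples in [-1.0, 1.0]; the -40 dBFS default
    is an assumed value.
    """
    rms = np.sqrt(np.mean(np.square(samples.astype(np.float64))))
    level_db = 20.0 * np.log10(max(rms, 1e-12))  # guard against log10(0)
    return level_db > threshold_db
```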
The speech detector 254, when activated, receives the portion of the input sound stream from the sound detector 320. The speech detector 254 may then extract one or more sound features from the received portion by using any suitable signal processing scheme such as MFCC (Mel-frequency cepstral coefficients), LPC (linear predictive coding), or LSP (line spectral pair) method. Using the extracted sound features, the speech detector 254 classifies the received portion of the input sound stream to determine whether the received portion includes speech.
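For example, sound features may be extracted with an off-the-shelf MFCC implementation. The sketch below assumes 16 kHz mono float audio and commonly used, assumed frame and hop sizes.

```python
import librosa

def extract_mfcc(portion, sr=16000, n_mfcc=13):
    """Extract one MFCC vector per hop from the received portion.

    A sketch assuming 16 kHz mono float audio; the 25 ms window and 10 ms
    hop are assumed, commonly used values.
    """
    features = librosa.feature.mfcc(
        y=portion, sr=sr, n_mfcc=n_mfcc,
        n_fft=int(0.025 * sr), hop_length=int(0.010 * sr),
    )
    return features.T  # shape: (num_frames, n_mfcc), one sound feature per row
```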
In the illustrated embodiment, if the received portion of the input sound stream is determined to exceed the threshold sound intensity and include speech, the duty cycle function may be disabled to provide the remaining portion of the input sound stream to the voice activation unit 256 continuously (e.g., using a full duty cycle or a 100% duty cycle). According to one embodiment, if the sound detector 320 determines that the received portion exceeds a threshold sound intensity, it disables the duty cycle function of the sound sensor 210 so that the microphone 310 and the sound detector 320 may continue to operate in an active state to receive and transmit the remaining portion of the input sound stream to the speech detector 254. If the speech detector 254 determines that the initial portion received during the active state of the duty cycle includes speech, the speech detector 254 provides the initial portion and the remaining portion of the input sound stream to the voice activation unit 256. On the other hand, if the speech detector 254 determines that the initial portion does not include speech, the speech detector 254 generates an activation signal, which is provided to the sound sensor 210 to activate the duty cycle function for the microphone 310 and the sound detector 320.
In another embodiment, if the sound detector 320 determines that the received portion exceeds a threshold sound intensity, it activates the speech detector 254 and provides the received portion to the speech detector 254. If the speech detector 254 determines that the received portion includes speech, the speech detector 254 provides the microphone 310 and the sound detector 320 with a signal to deactivate the duty cycle function. Upon receiving the deactivation signal, the microphone 310 and the sound detector 320 may continue to operate in an active state to receive and transmit the remaining portion of the input sound stream to the speech detector 254 so that the speech detector 254 provides the portions to the voice activation unit 256. On the other hand, if the speech detector 254 determines that the initial portion received during the active state of the duty cycle does not include speech, the speech detector 254 does not provide the deactivation signal, such that the microphone 310 and the sound detector 320 continue to operate according to the duty cycle function.
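The gating logic of the two embodiments above may be summarized by the following simplified sketch, in which every function name is a hypothetical stand-in for the corresponding component (sound detector, speech detector, and duty cycle controls).

```python
# A simplified sketch of the duty cycle gating described above. All function
# names are hypothetical stand-ins, not actual components of this disclosure.
def on_active_period(portion):
    if not exceeds_threshold(portion):        # sound detector 320 check
        return                                # remain duty-cycled
    disable_duty_cycle()                      # keep microphone 310 active
    if includes_speech(portion):              # speech detector 254 check
        forward_to_voice_activation(portion)  # plus the remaining stream
    else:
        enable_duty_cycle()                   # resume duty-cycled capture
```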
In the illustrated diagram, a period between the starts of one portion (e.g., the portion 410) and a next portion (e.g., the portion 420) represents a duty cycle period T1. A period T2 represents the duration of the active state of the duty cycle, and the remaining period T1 − T2 corresponds to the inactive state, during which the input sound stream is not received. Because the input sound stream is received according to a duty cycle, the initial portion 430 of the input sound stream that is provided to the voice activation unit 256 for detecting a target keyword may correspond to a portion of the target keyword subsequent to the beginning portion of the target keyword.
The segmentation unit 510 receives the input sound from the speech detector 254 and segments the received input sound into a plurality of sequential frames of an equal time period. The feature extractor 520 sequentially receives the frames from the segmentation unit 510 and extracts a sound feature from each of the frames. In one embodiment, the feature extractor 520 may extract the sound features from the frames using any suitable feature extraction method such as the MFCC method. For example, in the case of the MFCC method, components in an n-dimensional vector are calculated from each of the segmented frames and the vector is used as a sound feature.
In the score determination unit 530, the observation score determination unit 550 sequentially receives the sound features and receives state information for the target keyword from the storage unit 230. According to one embodiment, the state information for the target keyword may include a plurality of states associated with a plurality of portions of the target keyword and a probability model (e.g., probability function) such as the GMM that is provided for each of the states. As described above, the target keyword may be divided into a plurality of basic units of sound and the plurality of portions representing the target keyword may be generated based on the basic units of sound corresponding to the states. In some embodiments, the states for the target keyword may also include a non-keyword state (e.g., a “filler” state), which is not associated with any of the plurality of portions representing the target keyword and which may be used as a starting state. For example, in the case of a target keyword including a predetermined number of states, each of which corresponds to a basic unit of sound such as a phone, a non-keyword state may represent basic units of sound other than those that are included in the target keyword.
As each sound feature is received, the observation score determination unit 550 determines an observation score for each of the states associated with the target keyword based on the sound feature and the state information received from the storage unit 230. In one embodiment, the observation score for each of the states is determined for the received sound feature by calculating a probability value according to the probability model for the associated state. Each of the probability values thus calculated may be used as the observation score for the associated state. A high observation score for a state indicates a high probability that the sound feature corresponds to a basic unit of sound for the state. The observation score determination unit 550 provides the observation scores for each of the received sound features to the maximum keyword score determination unit 560 for determining keyword scores for a plurality of state sequences, which are possible for the target keyword.
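As an illustration of such observation scoring, the sketch below assumes one Gaussian mixture model per state, fit in advance on features of that state's basic unit of sound, with log-likelihoods serving as scores.

```python
from sklearn.mixture import GaussianMixture

def observation_scores(feature, gmms_by_state):
    """Determine an observation score for each state given one sound feature.

    A sketch assuming `gmms_by_state` maps each state name to a
    GaussianMixture already fit on features of that state's basic unit of
    sound; the log-likelihood under each model serves as the score.
    """
    return {
        state: float(gmm.score_samples(feature.reshape(1, -1))[0])
        for state, gmm in gmms_by_state.items()
    }
```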
The maximum keyword score determination unit 560 receives the observation scores for each of the sound features and obtains the state information from the storage unit 230. In this configuration, the state information may include the plurality of states for the target keyword, including the non-keyword state serving as a single starting state, a predetermined number of entry states among the plurality of states, and transition information from each of the states to a next state including itself. An entry state represents a first state to which the non-keyword state (or starting state) may transition in each of the possible state sequences for the target keyword.
The transition information in the state information includes a transition score from each of the states to a next state in each of the possible state sequences for the target keyword. The transition score may represent a probability value that each of the states transitions to the next state in each possible state sequence. The transition information also includes transition scores from the non-keyword state to each of the predetermined number of entry states.
Based on the received observation scores and the transition scores, the maximum keyword score determination unit 560 calculates a keyword score for each of the possible state sequences. In this case, a state sequence may start from a non-keyword state (i.e., a starting state) since the non-keyword state is assigned before the input sound is received. As such, the transition scores include a transition score from the non-keyword state to any one of the entry states, and also include a transition score from the non-keyword state to itself in the state sequence. As a set of observation scores for each sound feature is received from the observation score determination unit 550, the maximum keyword score determination unit 560 adds a next state to each state sequence and determines a keyword score for each of the updated state sequences in the manner as described above.
As the keyword scores are calculated for the state sequences, the maximum keyword score determination unit 560 selects the greatest keyword score among them. The keyword scores may be calculated to determine the greatest keyword score by using any suitable method such as a Viterbi algorithm. After determining the greatest keyword score, the maximum keyword score determination unit 560 provides it to the keyword detection unit 540. In one embodiment, a greatest keyword score is provided to the keyword detection unit 540 only when the last state of the state sequence having the greatest keyword score corresponds to the last basic unit of sound (e.g., the last phone) of the target keyword.
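A minimal sketch of such a Viterbi-style search follows, assuming log-domain scores that combine by addition and a transition matrix in which missing transition lines carry a large negative score.

```python
import numpy as np

def greatest_keyword_score(obs, trans, entry_states, start=0):
    """Viterbi-style search for the greatest keyword score.

    A sketch assuming log-domain scores: obs[t][s] is the observation score
    of state s for sound feature t, trans[s_prev, s] is the transition score
    (a large negative number where no transition line exists), and `start`
    is the single starting (non-keyword) state.
    """
    num_states = trans.shape[0]
    cand = np.full(num_states, -np.inf)
    # The starting state may transition to each entry state or to itself.
    for s in list(entry_states) + [start]:
        cand[s] = trans[start, s] + obs[0][s]
    # Extend every state sequence by one state per sound feature.
    for t in range(1, len(obs)):
        cand = np.array([
            np.max(cand + trans[:, s]) + obs[t][s] for s in range(num_states)
        ])
    return float(np.max(cand))
```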
Upon receiving the greatest keyword score from the maximum keyword score determination unit 560, the keyword detection unit 540 detects the target keyword in the input sound based on the greatest keyword score. For example, the keyword detection unit 540 may receive a threshold score for detecting the target keyword from the storage unit 230 and detect the target keyword if the greatest keyword score is greater than the received threshold score. In this case, the threshold score may be set to a minimum keyword score for detecting the target keyword within a desired confidence level.
In some embodiments, the maximum keyword score determination unit 560 determines a non-keyword score for a non-keyword state sequence. The non-keyword score may be obtained from the keyword scores for the possible state sequences including the non-keyword state sequence and provided to the keyword detection unit 540. The keyword detection unit 540 determines a confidence value based on a difference between the greatest keyword score and the non-keyword score, and detects the target keyword in the input sound. In this case, the keyword detection unit 540 may receive a threshold confidence value from the storage unit 230 and detect the target keyword if the confidence value is greater than the threshold confidence value. Using the difference between the greatest keyword score and the non-keyword score in detecting the target keyword may improve detection accuracy particularly when the input sound includes ambient sound such as noise that may affect keyword scores. Once the target keyword is detected, the keyword detection unit 540 generates and provides an activation signal to turn on the voice assistant unit 260, which is associated with the target keyword.
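The confidence-based decision may be sketched as follows, assuming log-domain scores so that the score difference behaves like a log-likelihood ratio.

```python
def detect_keyword(greatest_score, non_keyword_score, threshold_confidence):
    """A sketch of the confidence-based decision, assuming log-domain scores
    so that the score difference behaves like a log-likelihood ratio."""
    confidence = greatest_score - non_keyword_score
    return confidence > threshold_confidence
```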
Then at 630, the voice activation unit 256 obtains state information associated with the initial portion and the subsequent portions of the target keyword from the storage unit 230, as described above.
In the illustrated embodiment, when the observation score determination unit 550 receives each of the sound features F1, F2, F3, F4, and F5, which are received sequentially in predetermined time intervals, a set of observation scores for the states is determined. For example, in the case of the sound feature F1, a set of the observation scores 710 is determined for the states (i.e., states “F,” “S,” “T,” “A,” “R,” “T,” etc.). After determining the set of the observation scores 710 for the sound feature F1, the observation score determination unit 550 may sequentially determine a plurality of sets of observation scores 720 to 750 for the sound features F2 to F5, respectively. The observation score determination unit 550 may sequentially provide the sets of observation scores 710 to 750 for the sound features F1 to F5, respectively, to the maximum keyword score determination unit 560 for detecting the target keyword.
Initially, a sound feature F0 in the diagram 800 indicates that an input sound has not been received. Once the input sound is received, the sound features F1 to F5 are sequentially extracted from the received input sound. As such, the non-keyword state “F” is only assigned to the sound feature F0 and serves as a single starting state for all the state sequences. The diagram 800 also shows transition lines from the starting state “F” for the sound feature F0 to each of the possible next states associated with the next sound feature F1, i.e., the entry states “S,” “T,” “A,” and “R.” Then at the sound feature F1, a transition line from each of the states for the sound feature F1 to each of the states for the next sound feature F2 (i.e., a next state) is shown for each of the possible state sequences. Such transition lines may be predetermined for the target keyword and applied to each of the remaining sound features F2, F3, F4, and F5 of the received input sound in the same manner, as shown in the diagram 800.
In this configuration, each transition line is associated with a transition score. Some of the states may not have transition lines to some of the next states. For example, a transition line from the state “S” for a sound feature to states “A,” “R,” and “T” for a next sound feature is not provided. In one embodiment, if there is no transition line from a current state to a next state, a state sequence from the current state to the next state may not be generated. In another embodiment, where a transition line is not provided from one state to a next state, a transition score may be set to be a large negative number (e.g., −10) to ensure that a state sequence including such a transition score may not have a greatest keyword score for use in detecting the target keyword.
Based on the diagram 800, an exemplary table of transition scores for state pairs with and without transition lines may be provided as follows.
As shown in Table 1, in the case of no transition line from a current state (e.g., the state “S”) to a next state (e.g., the state “A”), an associated transition score of −10 is assigned. Initially, since the current state “F” has four transition lines to the entry states “S,” “T,” “A,” and “R,” transition scores of 0.8, 0.8, 0.7, and 0.9 are assigned to those transition lines, respectively. By providing the plurality of entry states, the target keyword may be detected even when the voice activation unit 256 receives an input sound that does not start from the beginning portion of the target keyword (i.e., the state “S”).
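A transition-score lookup consistent with Table 1 may be built as below. Only the four entry-state scores and the −10 default are recited above; the remaining entries would be filled in per keyword.

```python
# Building a transition-score lookup consistent with Table 1: state pairs
# without a transition line default to the large negative score -10. The
# two "T" phones are written "T1"/"T2" to keep keys distinct.
NO_TRANSITION_LINE = -10.0
states = ["F", "S", "T1", "A", "R", "T2"]
trans = {(a, b): NO_TRANSITION_LINE for a in states for b in states}
trans.update({
    ("F", "S"): 0.8, ("F", "T1"): 0.8, ("F", "A"): 0.7, ("F", "R"): 0.9,
})
```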
In this embodiment, among the keyword scores determined for possible state sequences that end in each of the states associated with each sound feature, a greatest keyword score is determined. Such greatest keyword scores are then used as candidate keyword scores in detecting the target keyword, for example, by selecting a greatest keyword score among the candidate keyword scores and comparing the selected greatest keyword score with a threshold score. Further, the candidate keyword scores may be used in determining greatest keyword scores for possible state sequences that end in each of the states associated with each next sound feature (i.e., next candidate keyword scores). The candidate keyword scores and the selected greatest keyword score may be determined by using the Viterbi algorithm based on the observation scores and transition scores described above.
In the diagram 900, in the case of the sound feature F1, each of the illustrated states “F,” “S,” “T,” “A,” “R,” and “T” has one possible state sequence from the starting state “F” for the sound feature F0, and greatest scores (i.e., candidate scores) for the states are determined as 1.0, 1.6, 1.7, 0.8, 1.0, and −9.9, respectively. The candidate keyword scores may be determined by summing a transition score from the non-keyword state “F” to each of the states for the sound feature F1 and an observation score of the associated state for the sound feature F1. For example, the candidate score of 1.7 for the first state “T,” which is included in the set of entry states 810, is determined by summing a transition score from the non-keyword state “F” to the state “T” and an observation score for the state “T.” In the illustrated example, a large negative number (e.g., −10) is assigned as the transition score to the second state “T,” which is not included in the set of entry states 810, such that any consecutive state sequence including that state “T” at the point of the sound feature F1 cannot have a greatest keyword score for use in detecting the target keyword.
In the case of the sound feature F2, a candidate keyword score is determined for each state by summing the candidate keyword score for state sequences having each of the states for the sound feature F1 as a last state, the transition score from that state for the sound feature F1 to the state for the sound feature F2, and the observation score of the state for the sound feature F2. For each of the states associated with the sound feature F2, the greatest sum among these sums is selected as the candidate keyword score for possible state sequences having the state as a last state. For example, keyword scores for possible state sequences having the state “A” for the sound feature F2 as a last state are provided in Table 2 as follows.
In Table 2 above, the greatest keyword score of 3.2 among the sums is selected as the candidate keyword score for possible state sequences having the state “A” associated with the sound feature F2 as a last state. In the same manner, the candidate keyword scores for the other illustrated states, i.e., “F,” “S,” “T,” “R,” and “T,” may be determined as 1.9, 2.9, 2.7, 2.0, and 1.8, respectively. The candidate keyword score of 3.2 for the state “A” may then be selected as the greatest keyword score for the sound feature F2, as shown in the diagram 900.
For each of the remaining sound features F3, F4, and F5, a set of candidate keyword scores for possible state sequences having as a last state each of the states associated with each of the remaining sound features F3, F4, and F5 may be calculated in the same manner as for the sound feature F2. Among the candidate keyword scores, the greatest keyword score for each of the sound features F3, F4, and F5 may also be determined in a similar manner. The greatest keyword score for each of the sound features F1 to F5 may be used to detect the target keyword.
The maximum keyword score determination unit 560 determines keyword scores for the possible state sequences by using the observation scores and the transition scores at 1030. In this process, the greatest keyword scores may be selected among the determined keyword scores and provided to the keyword detection unit 540 in the manner described above.
In one embodiment, the score determination unit 530 receives the extracted sound features from the storage unit 230 and determines the greatest keyword score for all possible state sequences in the manner described above.
In one embodiment, the score determination unit 530 may receive a plurality of reference input sounds for the target keyword, each of which is captured based on a full duty cycle. For each of the reference input sounds, the score determination unit 530 determines entry states according to an initial inactive period in a duty cycle by determining a reference state sequence and state time periods for a plurality of states in the reference state sequence in a similar manner as described above with respect to the reference state sequence 1102. The numbers of the entry states in the reference state sequences may then be averaged to determine entry states for use in detecting the target keyword from an input sound.
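One possible realization of this averaging, with illustrative names, is sketched below; each reference utterance contributes the count of leading states whose time periods may be skipped during the inactive period T1 − T2.

```python
def average_entry_states(ref_state_durations, inactive_ms):
    """Average the entry-state counts over reference state sequences.

    A speculative sketch: `ref_state_durations` holds, for each reference
    input sound, the state time periods (in ms) obtained by backtracking its
    reference state sequence; a state that may still be in progress when the
    inactive period T1 - T2 ends is counted as a candidate entry state.
    """
    counts = []
    for durations in ref_state_durations:
        elapsed, count = 0.0, 0
        for duration in durations:
            count += 1
            elapsed += duration
            if elapsed >= inactive_ms:
                break
        counts.append(count)
    return round(sum(counts) / len(counts))
```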
At 1230, a plurality of reference state sequences for the reference input sounds are determined based on the state information. At 1240, the voice activation unit 256 determines a plurality of state time periods for a plurality of states in the reference state sequences by backtracking the reference state sequences, as described in detail above.
The voice activation unit 256 may be configured to segment an input sound into portions of variable durations to detect a target keyword in the input sound. In some embodiments, a possible range of such durations for each of the states associated with the target keyword may be predetermined. For each duration associated with each state, a score (e.g., a probability value) may be assigned for determining a greatest keyword score.
The variable segmentation unit 1310 receives the input sound from a sound sensor (e.g., the sound sensor 210) and segments the received input sound into a plurality of frames having a plurality of durations. For example, if an input sound of a 30 ms duration is received, it may be segmented into a first frame of a 10 ms duration, located from 0 ms to 10 ms in the input sound. In a similar manner, the input sound may be segmented into a second frame of a 10 ms duration from 10 ms to 20 ms, a third frame of a 10 ms duration from 20 ms to 30 ms, a fourth frame of a 20 ms duration from 0 ms to 20 ms, and a fifth frame of a 20 ms duration from 10 ms to 30 ms.
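The enumeration of variable-duration frames may be sketched as follows; the 10 ms base duration is taken from the example above.

```python
def variable_segments(total_ms, base_ms=10):
    """Enumerate (start_ms, duration_ms) frames of variable duration.

    A sketch matching the example above: a 30 ms input yields three 10 ms
    frames, two 20 ms frames, and the full 30 ms input sound frame.
    """
    frames = []
    for duration in range(base_ms, total_ms + 1, base_ms):
        for start in range(0, total_ms - duration + 1, base_ms):
            frames.append((start, duration))
    return frames

# variable_segments(30) -> [(0, 10), (10, 10), (20, 10), (0, 20), (10, 20), (0, 30)]
```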
The variable segmentation unit 1310 may provide the plurality of segmented frames, together with the entire input sound as an input sound frame (e.g., the 30 ms frame in the above example), to the feature extractor 1320. In this process, the variable segmentation unit 1310 also provides the feature extractor 1320 with frame information on the duration and the location of each frame. Upon receiving the frames and the frame information, the feature extractor 1320 extracts and outputs a sound feature from each of the frames using any suitable feature extraction method. In the above example, the feature extractor 1320 receives a total of six frames and extracts a total of six sound features from the frames.
The score determination unit 1330 is configured to generate a greatest keyword score to be provided to the keyword detection unit 1340 based on the extracted sound features. The observation score determination unit 1350 in the score determination unit 1330 receives the sound features and the frame information from the feature extractor 1320. The observation score determination unit 1350 also receives state information for the target keyword from the storage unit 230. The state information includes a plurality of states associated with the target keyword and a probability model for each of the states, as described above.
In this embodiment, the state information further includes duration information on a range of durations that is predetermined for each of the states. The predetermined range of durations for each state may be set to a possible time range for the state. For example, a user may speak a sound corresponding to the state “S” in a relatively short duration while another user may take longer to speak the sound. Accordingly, the predetermined range of durations for a state may be set to include a range of durations that users may typically take in speaking a sound associated with the state.
For each received sound feature associated with a specified duration, the observation score determination unit 1350 may decide whether to determine an observation score for each of the states based on the received state information. In one embodiment, the observation score determination unit 1350 determines whether the duration falls within the predetermined range of durations for each state based on the frame information, which includes the duration. If the duration is not within the predetermined range of durations, the observation score determination unit 1350 does not determine the observation score. Otherwise, the observation score determination unit 1350 proceeds to determine the observation score.
In determining the observation score, a probability value for each of the states may be calculated according to a probability model for the associated state. The calculated probability value may be used as an observation score for the associated state. The observation score determination unit 1350 then provides the determined observation scores for each sound feature to the maximum keyword score determination unit 1360 for determining keyword scores for a plurality of state sequences, which are possible for the target keyword.
The maximum keyword score determination unit 1360 then receives the observation scores for each of the sound features and the frame information including the durations and the locations of the frames associated with the sound features, respectively. In addition, the maximum keyword score determination unit 1360 receives the state information, including the transition scores described above, and the duration information.
The duration information further includes, for each of the states, a duration score determined for each of the durations. The duration score for each duration of a state may be set in association with a probability value that the state spans that duration. By calculations using the received observation scores, the transition scores, and the duration scores based on the frame information and the predetermined ranges of durations, the maximum keyword score determination unit 1360 determines keyword scores for the possible state sequences. Among the determined keyword scores, the maximum keyword score determination unit 1360 determines the greatest keyword score for use in detecting the target keyword.
The maximum keyword score determination unit 1360 may determine the keyword scores according to a duration-restricted semi-Markov model scheme. For example, if one of the possible state sequences includes a state whose duration is not within a predetermined range of durations for the state, the maximum keyword score determination unit 1360 may not determine a keyword score for the state sequence while determining keyword scores for other state sequences not including such a state. In this example, the maximum keyword score determination unit 1360 may select the greatest keyword score among the determined keyword scores by using any suitable method such as a Viterbi algorithm, as follows.
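Consistent with the variable definitions that follow, Equation 1 may be expressed as the duration-restricted recursion:

$$V(t,s) \;=\; \max_{d_{\min}(s)\,\le\,d\,\le\,d_{\max}(s)} \;\; \max_{s'} \Big[\, V(t-d,\,s') \;+\; T(s,s') \;+\; O(t,d,s) \;+\; D(d,s) \,\Big] \qquad \text{(Equation 1)}$$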
where t indicates a frame size of a current input sound, s represents a current state, V(t,s) indicates the greatest keyword score for all possible state sequences having the state s as a last state that starts from one of the time frames 1 to t, d represents the duration of the state s, dmin(s) represents the smallest duration of the state s among a predetermined range of durations for the state s, dmax(s) represents the greatest duration of the state s among the predetermined range of durations for the state s, s′ represents the previous state of the state s, T(s,s′) indicates a transition score from the previous state s′ to the current state s, O(t,d,s) represents observation scores for the time frames from frame t−d+1 to frame t for the state s, and D(d,s) indicates a duration score for s when a duration of the state s is d.
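A direct, simplified Python sketch of this recursion is shown below. The initialization and indexing conventions are assumptions; in practice only entry states may follow the single starting state.

```python
import numpy as np

def smm_greatest_score(O, T, D, dmin, dmax, num_frames, num_states):
    """Duration-restricted semi-Markov Viterbi search, a sketch of Equation 1.

    Assumed conventions: O(t, d, s) returns the summed observation scores of
    frames t-d+1..t for state s; T[sp][s] is the transition score from a
    previous state sp to state s; D[d][s] is the duration score of state s
    spanning d frames; dmin[s]/dmax[s] bound the durations considered.
    """
    V = np.full((num_frames + 1, num_states), -np.inf)
    V[0, :] = 0.0  # simplified: empty prefix before the first frame
    for t in range(1, num_frames + 1):
        for s in range(num_states):
            # Durations outside the predetermined range are never scored,
            # which keeps the search computationally tractable.
            for d in range(dmin[s], min(dmax[s], t) + 1):
                for sp in range(num_states):
                    score = V[t - d, sp] + T[sp][s] + O(t, d, s) + D[d][s]
                    if score > V[t, s]:
                        V[t, s] = score
    return float(V[num_frames].max())
```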
As shown in Equation 1 under the SMM scheme, the greatest keyword score for the target keyword is calculated based on the restricted range of durations for each state. By using such restricted ranges of durations, the maximum keyword score determination unit 1360 may be implemented with relatively low computational complexity. Further, since the restricted ranges of durations are predetermined taking into consideration different speaking speeds of users, the keyword detection performance is not degraded substantially.
After determining the greatest keyword score, the maximum keyword score determination unit 1360 provides the greatest keyword score to the keyword detection unit 1340. Upon receiving the greatest keyword score from the maximum keyword score determination unit 1360, the keyword detection unit 1340 detects the target keyword in the input sound based on the greatest keyword score. For example, the keyword detection unit 1340 may use a threshold score for detecting the target keyword from the storage unit 230, as described above.
In one embodiment, for determining the range of durations for each of the states, the score determination unit 1330 may receive a plurality of reference input sounds for the target keyword. For each of the reference input sounds, the score determination unit 1330 determines a reference state sequence and, by backtracking the reference state sequence, also determines durations associated with each of the states in the reference state sequence, in a similar manner as described above.
In the illustrated embodiment, traverse times of transition lines from a previous state to a current state may indicate the durations in the predetermined range for each of the states. For example, in the case of the state “A” at times T4 and T5, transition lines from the previous state “T” at time T2 to the current state “A” at times T4 and T5 are traversed in 20 ms and 30 ms, respectively. In this case, the predetermined range of durations of the state “A” is from 20 ms to 30 ms in the state sequences. Accordingly, for the state “A,” an observation score may be determined and a duration score may be predetermined only for segmented features of 20 ms or 30 ms. The observation score and the predetermined duration score may then be used to determine keyword scores, and the greatest keyword score among them, to detect the target keyword. As such, since the keyword scores for use in detecting the target keyword are calculated based on the restricted ranges of durations, the computational time for detecting the target keyword may be significantly reduced under the SMM scheme.
For each of the sound features, the score determination unit 1330 determines an observation score of each of the states based on the probability model of each state, at 1530. Based on the transition information, the score determination unit 1330 obtains a transition score from each of the states to a next state in the possible state sequences, at 1540. At 1550, the score determination unit 1330 obtains the predetermined range of durations and the duration score determined for each of the durations from the duration information. By calculations using the received observation scores, the transition scores, and the duration scores based on the frame information and the predetermined range of durations, the score determination unit 1330 determines keyword scores for the possible state sequences and selects the greatest keyword score among the determined keyword scores, at 1560.
The mobile device 1600 may be capable of providing bidirectional communication via a receive path and a transmit path. On the receive path, signals transmitted by base stations are received by an antenna 1612 and are provided to a receiver (RCVR) 1614. The receiver 1614 conditions and digitizes the received signal and provides the conditioned and digitized signal to a digital section 1620 for further processing. On the transmit path, a transmitter (TMTR) 1616 receives data to be transmitted from the digital section 1620, processes and conditions the data, and generates a modulated signal, which is transmitted via the antenna 1612 to the base stations. The receiver 1614 and the transmitter 1616 are part of a transceiver that may support CDMA, GSM, W-CDMA, LTE, LTE Advanced, and so on.
The digital section 1620 includes various processing, interface, and memory units such as, for example, a modem processor 1622, a reduced instruction set computer/digital signal processor (RISC/DSP) 1624, a controller/processor 1626, an internal memory 1628, a generalized audio encoder 1632, a generalized audio decoder 1634, a graphics/display processor 1636, and/or an external bus interface (EBI) 1638. The modem processor 1622 performs processing for data transmission and reception, e.g., encoding, modulation, demodulation, and decoding. The RISC/DSP 1624 performs general and specialized processing for the mobile device 1600. The controller/processor 1626 controls the operation of various processing and interface units within the digital section 1620. The internal memory 1628 stores data and/or instructions for various units within the digital section 1620.
The generalized audio encoder 1632 performs encoding for input signals from an audio source 1642, a microphone 1643, and so on. The generalized audio decoder 1634 performs decoding for coded audio data and provides output signals to a speaker/headset 1644. It should be noted that the generalized audio encoder 1632 and the generalized audio decoder 1634 are not necessarily required for interface with the audio source 1642, the microphone 1643, and the speaker/headset 1644, and thus may be omitted from the mobile device 1600. The graphics/display processor 1636 performs processing for graphics, videos, images, and text, which are presented to a display unit 1646. The EBI 1638 facilitates the transfer of data between the digital section 1620 and a main memory 1648.
The digital section 1620 may be implemented with one or more processors, DSPs, microprocessors, RISCs, and the like. The digital section 1620 may also be fabricated on one or more application specific integrated circuits (ASICs) and/or some other types of integrated circuits (ICs).
In general, any device described herein may be indicative of various types of devices, such as a wireless phone, a cellular phone, a laptop computer, a wireless multimedia device, a wireless communication personal computer (PC) card, a PDA, an external or internal modem, a device that communicates through a wireless channel, and so on. A device may have various names, such as access terminal (AT), access unit, subscriber unit, mobile station, client device, mobile unit, mobile phone, mobile, remote station, remote terminal, remote unit, user device, user equipment, handheld device, etc. Any device described herein may have a memory for storing instructions and data, as well as hardware, software, firmware, or combinations thereof.
The present application claims priority from commonly owned U.S. Provisional Patent Application No. 61/820,498 filed on May 7, 2013 and claims priority from commonly owned U.S. Provisional Patent Application No. 61/859,058 filed on Jul. 26, 2013, the content of each of which is expressly incorporated herein by reference in its entirety.