Exemplary embodiments of the present invention are explained in detail below with reference to the accompanying drawings.
A hard disk drive (HDD) 6, a compact disc ROM (CD-ROM) drive 8, a communication controlling unit 10, an input unit 11, and a displaying unit 12 are connected to the bus 5 via respective input/output (I/O) interfaces (not shown). The HDD 6 stores therein computer programs and the like. The CD-ROM drive 8 is configured to read a CD-ROM 7. The communication controlling unit 10 controls communication between the speech recognition device 1 and a network 9. The input unit 11 includes a keyboard or a mouse. The speech recognition device 1 receives operational instructions from a user via the input unit 11. The displaying unit 12 is configured to display information thereon, and includes a cathode ray tube (CRT), a liquid crystal display (LCD), or the like.
The CD-ROM 7 is a recording medium that stores therein computer software such as an operating system (OS) or a computer program. When the CD-ROM drive 8 reads a computer program stored in the CD-ROM 7, the CPU 2 installs the computer program on the HDD 6.
Incidentally, instead of the CD-ROM 7, it is possible to use, for example, an optical disk such as a digital versatile disk (DVD), a magneto-optical disk, a magnetic disk such as a flexible disk (FD), or a semiconductor memory. Furthermore, instead of using a physical recording medium such as the CD-ROM 7, the communication controlling unit 10 can be configured to download a computer program from the network 9 via the Internet, and the downloaded computer program can be stored in the HDD 6. In such a configuration, a transmitting server needs to include a storage unit, such as the recording medium described above, to store therein the computer program. The computer program can be activated by using a predetermined OS, and the OS can perform some of the processes. The computer program can also be included in a group of computer program files that includes predetermined application software and the OS.
The CPU 2 controls operations of the entire speech recognition device 1, and performs each process based on the computer program loaded on the HDD 6.
Among the functions that the computer program installed on the HDD 6 causes the CPU 2 to execute, the function characteristic of the speech recognition device 1 is described in detail below.
An input signal (not shown) is input to the feature extracting unit 103. The feature extracting unit 103 analyzes the input signal, extracts from it a feature to be used for speech recognition, and outputs the extracted feature to the self-optimized acoustic model 100. Various types of acoustic features can be used as the feature. Alternatively, it is possible to use high-order features such as the gender of a speaker or a phonemic context. For example, the thirty-nine dimensional acoustic feature used in the conventional speech recognition method, which combines static Mel frequency cepstrum coefficients (MFCCs) or static perceptual linear predictive (PLP) features with delta (primary differentiation) parameters, delta-delta (secondary differentiation) parameters, and energy parameters, can be used together with high-order features such as a gender class and a class of the signal-to-noise ratio (SNR) of the input signal.
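As a concrete illustration, the following is a minimal Python sketch of such a feature extraction, assuming the third-party librosa library (not part of the embodiment); the sampling rate and the number of coefficients are hypothetical choices.

```python
import librosa
import numpy as np

def extract_features(signal: np.ndarray, sr: int = 16000) -> np.ndarray:
    # 13 static MFCCs per frame; the first coefficient plays the role of the energy term
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)            # primary differentiation
    delta2 = librosa.feature.delta(mfcc, order=2)  # secondary differentiation
    # stacking statics, deltas, and delta-deltas yields 39 dimensions per frame
    return np.vstack([mfcc, delta, delta2]).T
```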
The self-optimized acoustic model 100 includes a hidden Markov model (HMM) 101 and a decision tree 102. The decision tree 102 is a tree structure that is hierarchized at each branch. The HMM 101 is identical to that used in the conventional speech recognition method. One or more decision trees 102 correspond to the Gaussian mixture models (GMMs) that model each state of the HMM in the conventional speech recognition method. The self-optimized acoustic model 100 is used to calculate the likelihood of a state of the HMM 101 with respect to a speech feature input from the feature extracting unit 103. The likelihood denotes the plausibility of a model, i.e., how well the model explains a phenomenon, or how likely the phenomenon is to occur under the model.
The language model 105 is a stochastic model for estimating the types of contexts in which each word is used. The language model 105 is identical to that used in the conventional speech recognition method.
The decoder 104 calculates the likelihood of each word, and determines a word having a maximum likelihood (see
The HMM 101 and the decision tree 102 are described in detail below.
In the HMM 101, the feature time-series data output from the feature extracting unit 103 and the label of each phoneme are recorded in association with each other.
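For illustration only, such an association might be represented as in the following Python sketch; the structure and field names are hypothetical and not part of the embodiment.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class LabeledUtterance:
    # feature time-series data from the feature extracting unit 103: one vector per frame
    features: np.ndarray
    # the phoneme label associated with each frame
    labels: List[str]
```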
An operation of the decision tree 102 is described in detail below with reference to
Parameters such as the number of nodes and leaves of the decision tree 102, the features and questions used in each node, and the likelihood output from each leaf are determined by a learning process based on learning data. These parameters are optimized to obtain the maximum likelihood and the maximum recognition rate. If the learning data is sufficient and the speech signal is obtained in the actual place where speech recognition is executed, the decision tree 102 is also optimized for the actual environment.
Processes performed by the self-optimized acoustic model 100 for calculating the likelihood of each state of the HMM 101 with respect to received features are described in detail below with reference to
First, the decision tree 102 corresponding to a certain state of the HMM 101 that indicates a target phoneme is selected (step S1).
Subsequently, the root node 300 is set to be the active node, i.e., the node that can ask a question, while the nodes 301 and the leaves 302 are set to be non-active nodes (step S2). Then, a feature that corresponds to the settings made at steps S1 and S2 is retrieved from the feature extracting unit 103 (step S3).
By using the retrieved feature, the root node 300 calculates an answer to the question that is stored in the root node 300 in advance (step S4). It is determined whether the answer to the question is “Yes” (step S5). If the answer is “Yes” (Yes at step S5), a child node indicating “Yes” is set to be an active node (step S6). If the answer is “No” (No at step S5), a child node indicating “No” is set to be an active node (step S7).
Then, it is determined whether the active node is a leaf 302 (step S8). If the active node is a leaf 302 (Yes at step S8), the likelihood stored in the leaf 302 is output, because the leaf 302 does not branch into any further nodes (step S9). If the active node is not a leaf 302 (No at step S8), the system control proceeds to step S3.
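To make steps S1 to S9 concrete, here is a minimal Python sketch of the traversal; the Node structure is a hypothetical stand-in for the nodes 301 and leaves 302.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    question: Optional[Callable[[dict], bool]] = None  # None marks a leaf 302
    yes_child: Optional["Node"] = None
    no_child: Optional["Node"] = None
    likelihood: float = 0.0  # meaningful only at a leaf

def evaluate(root: Node, get_feature: Callable[[], dict]) -> float:
    active = root                       # step S2: the root node 300 is the active node
    while active.question is not None:  # step S8: stop when the active node is a leaf
        feature = get_feature()         # step S3: retrieve a feature
        if active.question(feature):    # steps S4 and S5: answer the stored question
            active = active.yes_child   # step S6: the "Yes" child becomes active
        else:
            active = active.no_child    # step S7: the "No" child becomes active
    return active.likelihood            # step S9: output the likelihood stored in the leaf
```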
As described above, the features, the questions about the features, and the likelihoods, all of which depend on an input, are written into the acoustic model by using the decision tree 102. Therefore, the decision tree 102 can effectively optimize the acoustic features, the questions relating to high-order features, and the likelihood depending on the input signal or the state of recognition. The optimization is achieved by the learning process that is explained in detail below.
A learning sample of a target state corresponding to the decision tree 102 is input, and a decision tree 102 consisting of only the root node 300 is created (step S11). In the decision tree 102, the root node 300 branches into nodes, and the nodes further branch into child nodes.
Then, a target node to be branched is selected (step S12). Incidentally, the node 301 needs to include a certain number of learning samples (for example, a hundred or more), and the learning samples need to be composed of a plurality of classes.
It is determined whether the target node fulfills the above conditions (step S13). If the result of the determination is “No” (No at step S13), the system control proceeds to step S17. If the result is “Yes” (Yes at step S13), all available questions about all the features (learning samples) input to the target node 301 are asked, and all the branches into child nodes that are obtained from the answers to the questions are evaluated (step S14). The evaluation at step S14 is performed based on the increase in the likelihood caused by branching the node. The questions about the features differ depending on the type of feature. For example, a question about an acoustic feature concerns a magnitude, whereas a question about the gender or the type of noise concerns a class. Namely, if the feature is expressed as a magnitude, the question is whether the feature exceeds a threshold; if the feature is expressed as a class, the question is whether the feature belongs to a certain class.
Then, a suitable question that optimizes the evaluation is selected (step S15). In other words, all the available questions are evaluated against all the learning samples, and the question that maximizes the increase in the likelihood is selected.
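A minimal Python sketch of steps S14 and S15 follows, assuming learning samples that carry a label and a features dictionary (hypothetical names); the class log-likelihood used here as the evaluation criterion is one common instantiation of "the increase in the likelihood", not necessarily the one the embodiment uses.

```python
import math

def class_log_likelihood(group):
    # log-likelihood of a group of samples under its empirical class distribution
    counts = {}
    for s in group:
        counts[s.label] = counts.get(s.label, 0) + 1
    return sum(n * math.log(n / len(group)) for n in counts.values())

def select_question(samples, questions):
    # step S14: evaluate the branch produced by every available question
    # step S15: keep the question whose branch increases the likelihood the most
    base = class_log_likelihood(samples)
    best_q, best_gain = None, float("-inf")
    for q in questions:
        yes = [s for s in samples if q(s.features)]
        no = [s for s in samples if not q(s.features)]
        if not yes or not no:
            continue  # skip questions that leave one branch empty
        gain = class_log_likelihood(yes) + class_log_likelihood(no) - base
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q, best_gain
```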
In accordance with the selected question, the learning sample is branched into two leaves 302: “Yes” and “No”. Then, the likelihood of each of the leaves 302 is calculated from the learning samples belonging to each branched leaf, and the result of the calculation is stored in the leaf (step S16). The likelihood of a leaf L is calculated by the following equation:

Likelihood stored at leaf L = P(true class|L) / P(true class),

where P(true class|L) denotes the posterior probability of the true class in the leaf L, and P(true class) denotes the prior probability of the true class.
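In Python, this calculation might look like the following sketch, assuming samples with a hypothetical label attribute:

```python
def leaf_likelihood(leaf_samples, all_samples, true_class):
    # P(true class | L): posterior probability of the true class inside the leaf L
    posterior = sum(1 for s in leaf_samples if s.label == true_class) / len(leaf_samples)
    # P(true class): prior probability of the true class over all learning samples
    prior = sum(1 for s in all_samples if s.label == true_class) / len(all_samples)
    return posterior / prior  # the value stored at leaf L
```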
Then, the system control returns to step S12, and the learning process is performed on a new leaf. The decision tree 102 grows each time steps S12 to S16 are repeated. Eventually, when no target node fulfills the conditions (No at step S13), pruning target nodes are pruned (steps S17 and S18). The target nodes are pruned (deleted) from the bottom up, i.e., from the lowest-order node toward the highest-order node. Specifically, every node having two child nodes is evaluated for the decrease in the likelihood that would be caused by deleting its child nodes, and the node whose deletion causes the smallest decrease is pruned (step S18). This is repeated until the number of nodes drops below a predetermined value (step S17). When the number of nodes drops below the predetermined value (No at step S17), the first round of the learning process for the decision tree 102 is terminated.
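Reusing the hypothetical Node class from the traversal sketch above, the bottom-up pruning of steps S17 and S18 might be sketched as follows; the likelihood_drop function, which estimates the likelihood lost by collapsing a node, is assumed rather than specified by the embodiment.

```python
def all_nodes(node):
    if node is None:
        return []
    return [node] + all_nodes(node.yes_child) + all_nodes(node.no_child)

def prune(root, max_nodes, likelihood_drop):
    while len(all_nodes(root)) > max_nodes:       # step S17: check the node count
        # candidates: nodes whose two children are both leaves (lowest-order nodes)
        candidates = [n for n in all_nodes(root)
                      if n.question is not None
                      and n.yes_child.question is None
                      and n.no_child.question is None]
        victim = min(candidates, key=likelihood_drop)  # smallest likelihood decrease
        victim.question = None                         # step S18: collapse into a leaf
        victim.yes_child = victim.no_child = None      # its likelihood is then recomputed
```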
When the learning process for the decision tree 102 is terminated, forced alignment is performed on a speech sample for learning by using the learned acoustic model, thereby updating the learning samples. The likelihood of each leaf of the decision tree 102 is then updated by using the updated learning samples. These processes are repeated a predetermined number of times, or until the increase in the entire likelihood drops below a threshold, and then the learning process is completed.
In this manner, the parameters of the features and the acoustic models can be dynamically self-optimized depending on the level of the input signal or the state of speech recognition. In other words, it is possible to optimize the parameters of the acoustic models, for example, the types and number of features (including not only acoustic features but also high-order features), the number of shared structures and the degree of sharing, the number of states, and the number of context-dependent models, depending on the conditions and states of the input speech, phonemic recognition, and speech recognition. As a result, high recognition performance can be achieved.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2006-255549 | Sep 2006 | JP | national |