Embodiments of the present invention are related to speech technology, and more specifically to a text-dependent speaker recognition method and system based on Functional Data Analysis and Mel-Frequency Cepstral Coefficient features.
In recent years, there has been increasing interest in the field of speaker recognition. Speaker recognition, also called voice recognition, is the process of automatically recognizing who is speaking from characteristics of an individual's voice. It has been developed for use in consumer electronic devices, such as mobile telephones, games platforms, personal computers and personal digital assistants. There are two major applications of speaker recognition technologies. Speaker verification involves determining whether a given voice belongs to a certain speaker. It is usually used to control access to restricted services, for example, access to computer networks, websites, online or telephone banking systems, online purchasing or voice mail, and access to secure equipment. Speaker identification involves matching a given voice to one of a set of known voices. For either application, the goal of a speaker recognition system is to extract, characterize and recognize the information in the speech signal that conveys speaker identity.
Speaker recognition technologies may be divided into two categories, text-dependent speaker recognition (TDSR) and text-independent speaker recognition (TISR). TDSR requires the speakers to provide utterances of the same text for both training and testing.
This text, known as a “pass phrase,” can be a piece of information such as a name, birth city, favorite color or a sequence of numbers. TISR recognizes a speaker without requiring a specific pass phrase. TDSR systems generally provide better recognition performance than TISR systems, especially for short training and testing utterances.
A TDSR system typically includes a computer or other electronic device equipped with a source of sound input, such as a microphone, to collect and interpret human speech. The collected speech waveform is converted into digital data representing the signal at discrete time intervals. The digitized speech data is processed to extract voice features that convey speaker information. For example, information about the speaker's vocal tract shape via the resonances and glottal source via the pitch harmonics may be included in the speech spectrum. The voice features are usually in the form of a sequence of acoustic vectors. In training sessions, the voice features extracted from the speech signal are used to create a model or template stored in a database. In testing sessions, the features extracted from the utterance are compared to the reference features in the database obtained from previous training sessions to find an optimal match for the given features. As an example, dynamic time warping (DTW) is one of the common modeling techniques used to align and measure the similarity between the test phrase and the templates in the database.
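By way of illustration and not limitation, the dynamic time warping alignment mentioned above may be sketched as follows. This is a minimal sketch only: the function name and the choice of a Euclidean frame cost are assumptions made for illustration, not part of any described embodiment.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two feature sequences.

    a, b: 2-D arrays of shape (num_frames, num_features).
    Returns the accumulated cost of the optimal warping path, using an
    illustrative Euclidean distance between frames as the local cost.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # Extend the cheapest of the three allowed warping moves.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]
```

Because the warping path may stretch or compress time, a sequence compared against a time-dilated copy of itself still aligns with zero cost.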
Mel-Frequency Cepstral Coefficient (MFCC) analysis is one of the known methods for extracting a parametric representation of acoustic signals. It offers a compact representation of the speech spectral envelope, i.e., the impact of the vocal tract shape in rendering a particular sound. However, it captures only a highly local portion of the significant temporal dynamics and thus cannot reflect overall statistical characteristics hidden behind the sentence.
Some research and development has focused on Functional Data Analysis (FDA). FDA concerns the analysis of information on curves, surfaces, or anything else varying over a continuum. It provides both visual and quantitative results. In recent years, FDA has been shown to perform well in speech feature analysis and pitch re-synthesis.
It is within this context that embodiments of the present invention arise.
Embodiments of the present invention can be readily understood by referring to the following detailed description in conjunction with the accompanying drawings.
Application of embodiments of the present invention described herein to the particular case of recognition algorithms, such as speech recognition, image recognition, or pattern recognition can be seen from the flow diagram of algorithm 100 of
By way of example and without limitation of the embodiments of the invention, the components x0 . . . xn may be cepstral coefficients of a speech signal. A cepstrum (pronounced “kepstrum”) is the result of taking the Fourier transform (FT) of the decibel spectrum as if it were a signal. The cepstrum of a time domain speech signal may be defined verbally as the Fourier transform of the log (with unwrapped phase) of the Fourier transform of the time domain signal. The cepstrum of a time domain signal S(t) may be represented mathematically as FT(log(FT(S(t))) + j2πq), where q is the integer required to properly unwrap the angle or imaginary part of the complex log function. Algorithmically, the cepstrum may be generated by the sequence of operations: signal→FT→log→phase unwrapping→FT→cepstrum.
There is a complex cepstrum and a real cepstrum. The real cepstrum uses the logarithm function defined for real values, while the complex cepstrum uses the complex logarithm function, which is defined for complex values as well. The complex cepstrum holds information about the magnitude and phase of the initial spectrum, allowing reconstruction of the signal. The real cepstrum uses only the magnitude information of the spectrum. By way of example and without loss of generality, the algorithm 100 may use the real cepstrum.
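By way of illustration only, the real cepstrum may be computed as the inverse Fourier transform of the log magnitude spectrum; since the real cepstrum discards phase, the unwrapping step of the complex cepstrum is not needed. The following minimal sketch (the function name is illustrative) follows that chain:

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FT of the log magnitude spectrum.

    Follows the chain signal -> FT -> log|.| -> inverse FT; the real
    cepstrum uses only spectral magnitude, so no phase unwrapping occurs.
    """
    spectrum = np.fft.fft(signal)
    # A small floor keeps log() finite when a spectral bin is zero.
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)
    return np.fft.ifft(log_magnitude).real
```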
The cepstrum can be seen as information about the rate of change in the different spectrum bands. For speech recognition applications, the spectrum is usually first transformed using the mel frequency bands. The result is called the Mel Frequency Cepstral Coefficients or MFCCs. A frequency f in hertz (cycles per second) may be converted to a dimensionless mel pitch m according to: m = 1127.01048 ln(1 + f/700). Similarly, a mel pitch may be converted to a frequency in hertz using: f = 700(e^(m/1127.01048) − 1).
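The two conversion formulas above may be implemented directly. The following sketch mirrors the two equations (the function and constant names are illustrative only):

```python
import numpy as np

MEL_SCALE = 1127.01048  # constant from the conversion formulas above

def hz_to_mel(f):
    """m = 1127.01048 * ln(1 + f/700)"""
    return MEL_SCALE * np.log(1.0 + f / 700.0)

def mel_to_hz(m):
    """f = 700 * (exp(m/1127.01048) - 1), the inverse mapping."""
    return 700.0 * np.expm1(m / MEL_SCALE)
```

Since the two functions are exact inverses, converting hertz to mel and back recovers the original frequency up to floating point error.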
In the case of speech recognition, certain patterns of combinations of features x0 . . . xn may correspond to units of speech (e.g., words) or sub-units, such as syllables, phonemes or other sub-units of words. The features may also contain information characteristic of the source of the signal, e.g., characteristic of the speaker in the case of speech recognition. In accordance with aspects of the present invention, the system may represent the discrete data for each of the test features by a corresponding fitting function, as indicated at 104. Each fitting function may be defined in terms of a finite number of continuous basis functions and a corresponding finite number of expansion coefficients. The fitting functions may be compressed through Functional Principal Component Analysis (FPCA) to generate corresponding sets of principal components of the fitting functions for each test feature, as indicated at 106. Each principal component for a given test feature is uncorrelated with every other principal component for that test feature. The system may then calculate a distance between a set of principal components for the given test feature and a set of principal components for one or more training features, as indicated at 108. The test feature may then be classified according to the distance calculated, as indicated at 110. A state of the system may then be adjusted according to a classification of the test feature determined from the distance calculated, as indicated at 112.
The Basis Functions
As mentioned above in connection with 104 of
In equation (2) the functions φk, k=1, . . . , K are a set of basis functions and the parameters ci1, ci2, . . . , ciK are the coefficients of the expansion. By way of example, and not by way of limitation, the basis functions may be Fourier basis functions that simulate the MFCC features. The Fourier basis functions may be defined as: φ0(t)=1, φ2r−1(t)=sin rωt, φ2r(t)=cos rωt. These basis functions are uniquely determined by specifying the number of basis functions K and the period ω.
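By way of illustration, the Fourier basis φ0(t)=1, φ2r−1(t)=sin rωt, φ2r(t)=cos rωt may be evaluated at a set of sample times as follows. This is a sketch; the function name and the row-per-sample array layout are assumptions for illustration:

```python
import numpy as np

def fourier_basis(t, K, omega):
    """Evaluate K Fourier basis functions phi_0..phi_{K-1} at times t.

    phi_0 = 1, phi_{2r-1} = sin(r*omega*t), phi_{2r} = cos(r*omega*t).
    Returns an array of shape (len(t), K), one row per sample time.
    """
    t = np.asarray(t, dtype=float)
    Phi = np.empty((t.size, K))
    Phi[:, 0] = 1.0  # constant basis function phi_0
    for k in range(1, K):
        r = (k + 1) // 2  # harmonic index for this column
        if k % 2 == 1:
            Phi[:, k] = np.sin(r * omega * t)
        else:
            Phi[:, k] = np.cos(r * omega * t)
    return Phi
```

At t = 0 every sine column evaluates to 0 and every cosine column to 1, which gives a quick sanity check on the column ordering.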
The Solution to the Calculation of the Expansion Coefficients
After the basis functions are chosen, each xi(t) is defined by the coefficients ci1, ci2, . . . , ciK. The quality of the data fit may be measured by the sum of squared errors (SSE), or residual, between the discrete data for a feature and the corresponding fitting function. The SSE or residual may be defined as in equation (3) below.
By way of example and not by way of limitation, the classic least squares method shown in Eq. (3) above may be used to solve this minimization problem.
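By way of illustration, the least squares minimization of Eq. (3) has a standard closed-form solution. The sketch below (names illustrative) assumes the basis functions have already been evaluated into a matrix Phi with one row per data sample and one column per basis function:

```python
import numpy as np

def fit_coefficients(Phi, y):
    """Ordinary least squares: choose c to minimize ||y - Phi c||^2.

    Phi: (num_samples, K) matrix of basis function values.
    y:   (num_samples,) vector of discrete feature data.
    """
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return c
```

When the data lie exactly in the span of the basis, the residual is zero and the coefficients are recovered exactly.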
Roughness Penalty Method
When the number of basis functions K is too large or too small, the least squares method may suffer from overfitting or underfitting, respectively. A roughness penalty method may be applied to improve the functional fit. The roughness penalty method addresses the fitting problem by balancing closeness of fit against overfitting, i.e., by ensuring there are no dramatic changes within a local range.

Closeness of fit may be achieved by minimizing the squared errors. The integral of the square of the second derivative, on the other hand, measures the degree of overfitting (roughness), and may be expressed as:
PEN₂(x) = ∫{D²x(s)}² ds = ‖D²x‖²  (4)
Since these two goals oppose each other, a middle ground between the SSE and the roughness penalty must be found. The criterion can therefore be constructed as:
where λ is a smoothing parameter that controls the balance between the SSE and the roughness penalty. When λ is small, the estimate is driven primarily by the SSE. As the smoothing parameter λ becomes larger, the roughness penalty carries more weight and the fitted curve becomes smoother.
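By way of illustration, the penalized criterion may be minimized in closed form as c = (ΦᵀΦ + λP)⁻¹Φᵀy. The sketch below approximates the second-derivative penalty by squared second differences of the fitted curve at the sample points; this discrete approximation, and the assumption of equally spaced samples, are simplifications for illustration, not the exact penalty of the embodiment:

```python
import numpy as np

def penalized_fit(Phi, y, lam):
    """Penalized least squares: ||y - Phi c||^2 + lam * ||D2(Phi c)||^2.

    The integral of the squared second derivative is approximated by
    second differences of the fitted values at equally spaced samples.
    """
    n = Phi.shape[0]
    # Second-difference operator: each row applies the [1, -2, 1] stencil.
    D2 = np.diff(np.eye(n), n=2, axis=0)
    A = Phi.T @ Phi + lam * Phi.T @ D2.T @ D2 @ Phi
    return np.linalg.solve(A, Phi.T @ y)
```

A straight-line fit has zero second difference, so penalizing roughness leaves such a fit unchanged no matter how large λ becomes; only wiggly fits are pulled toward smoothness.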
Choosing of Smoothing Parameter λ
Use of the roughness penalty method raises a new issue: the selection of the number of basis functions K and the smoothing parameter λ. In one example, the Generalized Cross-Validation (GCV) measure may be used to locate the best values of these parameters for defining the basis functions and the residual criterion. Details of discussions on Generalized Cross-Validation may be found in M. Gubian, F. Cangemi and L. Boves, “Automatic and Data Driven Pitch Contour Manipulation with Functional Data Analysis,” Speech Prosody, 2010, Chicago, which is fully incorporated herein by reference for all purposes. Generally, the smaller the GCV value, the better the fit. The GCV value may be expressed as:
This GCV value indicates which values of λ and the number of basis functions K give a better fit. Details of discussions on GCV values may be found in J. O. Ramsay and B. W. Silverman, “Applied Functional Data Analysis—Methods and Case Studies,” Springer, 2002, which is fully incorporated herein by reference for all purposes.
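By way of illustration, the GCV value for a candidate λ may be computed from the trace of the smoothing ("hat") matrix that maps the data to its fitted values. The sketch below uses the common form GCV = n·SSE/(n − df)², with the second-derivative penalty approximated by second differences; both choices are assumptions made for illustration rather than the embodiment's exact formula:

```python
import numpy as np

def gcv_score(Phi, y, lam):
    """Generalized cross-validation score for one (K, lambda) choice.

    GCV = n * SSE / (n - df)^2, where df is the trace of the smoothing
    matrix S (the effective degrees of freedom of the fit). Smaller GCV
    indicates a better fit.
    """
    n = Phi.shape[0]
    D2 = np.diff(np.eye(n), n=2, axis=0)          # second-difference penalty
    A = Phi.T @ Phi + lam * Phi.T @ D2.T @ D2 @ Phi
    S = Phi @ np.linalg.solve(A, Phi.T)           # hat matrix: y -> fitted
    fitted = S @ y
    sse = float(np.sum((y - fitted) ** 2))
    df = float(np.trace(S))                       # effective degrees of freedom
    return n * sse / (n - df) ** 2
```

With λ = 0 and K basis functions, S is an ordinary projection and df reduces to K, so the score can be checked by hand against a plain least squares fit.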
Functional Principal Component Analysis
As mentioned above in association with 106 of
In equation (7), fi is the i-th principal component of the data. Each succeeding component in turn has the highest variance possible under the constraint that it is uncorrelated with the preceding components. In FPCA, the continuous function xi(t), t ∈ [0,T], may be treated as a single variable, as in equation (8) below:
fi′ = ∫0T β(s)xi(s) ds = ∫βxi  (8)
In equation (8), the function β(s) corresponds to the linear weighting coefficients (β1, β2, . . . , βp), and f′i is the i-th functional principal component for the functional data xi(t).
The problem of finding the principal components fi′ may be abstractly expressed as set forth in equation (9) below.
Equation (9) explains how to calculate the weighting function β(s) that is used to obtain the principal components fi′ in FPCA. More specifically, equation (9) describes a criterion that must be optimized in order to determine the best weight function β(s). The first equation of (9) is the optimization target and the second is a constraint condition.
The principal components fi′ contain, in compressed form, the information carried by the fitting functions xi(t) for the test features from the original functional data. Thus, by compressing the fitting functions xi(t) using FPCA, one may generate corresponding sets of principal components fi′ for each test feature.
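By way of illustration, when the basis functions are orthonormal, FPCA reduces to ordinary PCA on the matrix of expansion coefficients. The sketch below relies on that orthonormality assumption (and illustrative names) to extract uncorrelated principal component scores via the singular value decomposition:

```python
import numpy as np

def functional_pca(coeffs, n_components):
    """Sketch of FPCA via PCA on basis-expansion coefficients.

    coeffs: array (num_curves, K) of expansion coefficients ci1..ciK.
    For an orthonormal basis, PCA on the coefficients is equivalent to
    FPCA on the curves. Returns (scores, components), where scores are
    the principal component values f'_i and components represent the
    weight function beta in coefficient space.
    """
    centered = coeffs - coeffs.mean(axis=0)
    # SVD orders components by explained variance; scores are uncorrelated.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]
    scores = centered @ components.T
    return scores, components
```

Because the score columns come from orthogonal singular vectors, any two columns have zero dot product, matching the statement above that each principal component is uncorrelated with the others.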
Distance Measures
As mentioned above in association with 108 of
It should be noted that other distance measures or similarity measurements may be applied. Details of discussions on distance measures may be found in M. Gubian, F. Cangemi and L. Boves, “Joint analysis of F0 and speech rate with functional data analysis,” ICASSP 2011, Prague, which is fully incorporated herein by reference for all purposes.
Other distance measures that can be used include, but are not limited to:
It can be seen that the Minkowski Distance (p) is a generalization of the Manhattan Distance (p=1) and the Chebyshev Distance (p→∞).
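By way of illustration, the distance and similarity measures discussed above may be sketched as follows (function names are illustrative only):

```python
import numpy as np

def minkowski(a, b, p):
    """Minkowski distance; p=1 gives Manhattan, p -> infinity tends to Chebyshev."""
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def chebyshev(a, b):
    """Chebyshev distance: the largest coordinate-wise difference."""
    return np.max(np.abs(a - b))

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1 means identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Note that cosine similarity is a similarity rather than a distance: larger values indicate closer matches, so a system using it selects the maximum rather than the minimum over templates.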
Application to Speech Recognition
As noted above in connection with 110 and 112 of
Text Dependent Speaker Recognition System
Embodiments of the invention may be implemented on a suitably configured computer apparatus.
The memory 205 may be in the form of an integrated circuit, e.g., RAM, DRAM, ROM, and the like. The memory 205 may also be a main memory that is accessible by all of the processor modules. In some embodiments, the processor module 201 may have local memories associated with each core. A program 203 may be stored in the main memory 205 in the form of processor readable instructions that can be executed on the processor modules. The program 203 may be configured to perform text-dependent speaker recognition methods as discussed above with respect to
The apparatus 200 may also include well-known support functions 209, such as input/output (I/O) elements 211, power supplies (P/S) 213, a clock (CLK) 215, and a cache 217. The apparatus 200 may optionally include a mass storage device 219 such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The device 200 may optionally include a display unit 221, audio speakers unit 222, and user interface unit 225 to facilitate interaction between the apparatus and a user. The display unit 221 may be in the form of a cathode ray tube (CRT) or flat panel screen that displays text, numerals, graphical symbols or images. The user interface 225 may include a keyboard, mouse, joystick, light pen, or other device that may be used in conjunction with a graphical user interface (GUI). The apparatus 200 may also include a network interface 223 to enable the device to communicate with other devices over a network, such as the internet.
In some embodiments, the system 200 may include an optional microphone 229, which may be a single microphone or a microphone array. The microphone 229 can be coupled to the processor 201 via the I/O elements 211. By way of example, and not by way of limitation, the input human utterances may be recorded using the microphone 229.
The components of the system 200, including the processor 201, memory 205, support functions 209, mass storage device 219, user interface 225, network interface 223, and display 221 may be operably connected to each other via one or more data buses 227. These components may be implemented in hardware, software or firmware or some combination of two or more of these.
Non-Transitory Computer-Readable Storage Medium
According to another embodiment, instructions for text-dependent speaker recognition based on FDA and MFCC features may be stored in a computer readable storage medium. By way of example, and not by way of limitation,
The storage medium 300 contains text-dependent speaker recognition instructions 301 configured for text-dependent speaker recognition based on FDA and MFCC features in accordance with the method described above with respect to
The instructions 301 may also include compressing fitting function through FPCA instructions 307 that compress the fitting functions through FPCA to generate corresponding sets of principal components of the fitting functions for each test feature. Then the calculation instructions 309 calculate distance between principal components for test features and training features. The classification instructions 311 in turn classify test features based on the calculation. The state change instructions 313 may adjust a state of the system according to the classification.
Experiments and Results
A number of experiments were performed to test text-dependent speaker recognition based on FDA and MFCC features in accordance with an embodiment of the present invention against prior art speaker recognition techniques. In the experiments, there were five different speakers. Each speaker uttered about 240 different short words and each word was repeated three times. Every utterance was recorded for training purposes and all three recordings for the same word were used for verification. The length of each utterance was about 2 seconds on average, and every word was sampled at a 16 kHz sampling rate with 16-bit width. Verification passed only when the same speaker uttered the same word.
The 16-dimensional MFCC features were extracted from the utterances with 30 triangular mel filters used in the MFCC calculation. For each frame, the MFCC coefficients and their first derivative formed a 32-dimensional feature vector. The Fourier basis functions were chosen to smooth the MFCC features.
The simulation results were compared to similar results for a Dynamic Time Warping system with MFCC features. This system is provided as an example of a classic technique for text-dependent speaker recognition. TABLE I shows the performances of the prior art system and an experiment that used the FDA coefficients as the features without FPCA compression and used traditional Euclidean Distance as the distance measure. The Equal Error Rate (EER) was used to evaluate the system performance.
From TABLE I, the performance of the system using the FDA coefficients without FPCA compression was not as good as that of the classic MFCC-DTW system. This may result from redundant information contained in the coefficients.
From the 240 words uttered by each speaker, the first fifty words (i.e., words 1-50) and words 100-150 were separately selected to run experiments for the purpose of testing the stability of FPCA. TABLE II and TABLE III show these results below, where nharm represents the number of harmonics or principal components to compute.
From TABLE II and TABLE III, the MFCC-FPCA system showed improvements in equal error rate over the system without FPCA compression described above in connection with TABLE I. The MFCC-FPCA system effectively reduced the redundant information, and the MFCC-FPCA system with Euclidean Distance as the distance measure achieved performance equivalent to the classic MFCC-DTW TDSR system.
Finally, experiments on an MFCC-FPCA system with different similarity measures were performed. Words 100-150 were chosen for the experiments. The number of harmonics or principal components to compute (nharm) was 5. TABLE IV shows the results.
As shown in TABLE IV, an MFCC-FPCA system with cosine similarity as the distance measure had the best performance.
While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications, and equivalents. Therefore, the scope of the present invention should be determined not with reference to the above description, but should, instead, be determined with reference to the appended claims, along with their full scope of equivalents. Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “A” or “An” refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. In the claims that follow, the word “or” is to be interpreted as a non-exclusive or, unless otherwise specified. The appended claims are not to be interpreted as including means-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for”.
This application is a nonprovisional of and claims the priority benefit of commonly owned, co-pending U.S. Provisional Patent Application No. 61/621,810, to Zhang et al., filed Apr. 9, 2012, and entitled “TEXT DEPENDENT SPEAKER RECOGNITION WITH LONG-TERM FEATURE BASED ON FUNCTIONAL DATA ANALYSIS”, the entire disclosure of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4956865 | Lennig et al. | Sep 1990 | A |
4977598 | Doddington et al. | Dec 1990 | A |
RE33597 | Levinson et al. | May 1991 | E |
5031217 | Nishimura | Jul 1991 | A |
5050215 | Nishimura | Sep 1991 | A |
5129002 | Tsuboka | Jul 1992 | A |
5148489 | Erell et al. | Sep 1992 | A |
5222190 | Pawate et al. | Jun 1993 | A |
5228087 | Bickerton | Jul 1993 | A |
5345536 | Hoshimi et al. | Sep 1994 | A |
5353377 | Kuroda et al. | Oct 1994 | A |
5438630 | Chen et al. | Aug 1995 | A |
5455888 | Iyengar et al. | Oct 1995 | A |
5459798 | Bailey et al. | Oct 1995 | A |
5473728 | Luginbuhl et al. | Dec 1995 | A |
5502790 | Yi | Mar 1996 | A |
5506933 | Nitta | Apr 1996 | A |
5509104 | Lee et al. | Apr 1996 | A |
5535305 | Acero et al. | Jul 1996 | A |
5581655 | Cohen et al. | Dec 1996 | A |
5602960 | Hon et al. | Feb 1997 | A |
5608840 | Tsuboka | Mar 1997 | A |
5615296 | Stanford et al. | Mar 1997 | A |
5617486 | Chow et al. | Apr 1997 | A |
5617509 | Kushner et al. | Apr 1997 | A |
5627939 | Huang et al. | May 1997 | A |
5649056 | Nitta | Jul 1997 | A |
5649057 | Lee et al. | Jul 1997 | A |
5655057 | Takagi | Aug 1997 | A |
5677988 | Takami et al. | Oct 1997 | A |
5680506 | Kroon et al. | Oct 1997 | A |
5680510 | Hon et al. | Oct 1997 | A |
5719996 | Chang et al. | Feb 1998 | A |
5745600 | Chen et al. | Apr 1998 | A |
5758023 | Bordeaux | May 1998 | A |
5787396 | Komori et al. | Jul 1998 | A |
5794190 | Linggard et al. | Aug 1998 | A |
5799278 | Cobbett et al. | Aug 1998 | A |
5812974 | Hemphill et al. | Sep 1998 | A |
5825978 | Digalakis et al. | Oct 1998 | A |
5860062 | Taniguchi et al. | Jan 1999 | A |
5880788 | Bregler | Mar 1999 | A |
5890114 | Yi | Mar 1999 | A |
5893059 | Raman | Apr 1999 | A |
5903865 | Ishimitsu et al. | May 1999 | A |
5907825 | Tzirkel-Hancock | May 1999 | A |
5930753 | Potamianos et al. | Jul 1999 | A |
5937384 | Huang et al. | Aug 1999 | A |
5943647 | Ranta | Aug 1999 | A |
5956683 | Jacobs et al. | Sep 1999 | A |
5963903 | Hon et al. | Oct 1999 | A |
5963906 | Turin | Oct 1999 | A |
5983178 | Naito et al. | Nov 1999 | A |
5983180 | Robinson | Nov 1999 | A |
6009390 | Gupta et al. | Dec 1999 | A |
6009391 | Asghar et al. | Dec 1999 | A |
6023677 | Class et al. | Feb 2000 | A |
6061652 | Tsuboka et al. | May 2000 | A |
6067520 | Lee | May 2000 | A |
6078884 | Downey | Jun 2000 | A |
6092042 | Iso | Jul 2000 | A |
6112175 | Chengalvarayan | Aug 2000 | A |
6138095 | Gupta et al. | Oct 2000 | A |
6138097 | Lockwood et al. | Oct 2000 | A |
6141641 | Hwang et al. | Oct 2000 | A |
6148284 | Saul | Nov 2000 | A |
6151573 | Gong | Nov 2000 | A |
6151574 | Lee et al. | Nov 2000 | A |
6188982 | Chiang | Feb 2001 | B1 |
6223159 | Ishii | Apr 2001 | B1 |
6226612 | Srenger et al. | May 2001 | B1 |
6236963 | Naito et al. | May 2001 | B1 |
6246980 | Glorion et al. | Jun 2001 | B1 |
6253180 | Iso | Jun 2001 | B1 |
6256607 | Digalakis et al. | Jul 2001 | B1 |
6292776 | Chengalvarayan | Sep 2001 | B1 |
6405168 | Bayya et al. | Jun 2002 | B1 |
6418412 | Asghar et al. | Jul 2002 | B1 |
6629073 | Hon et al. | Sep 2003 | B1 |
6662160 | Wu et al. | Dec 2003 | B1 |
6671666 | Ponting et al. | Dec 2003 | B1 |
6671668 | Harris | Dec 2003 | B2 |
6671669 | Garudadri et al. | Dec 2003 | B1 |
6681207 | Garudadri | Jan 2004 | B2 |
6691090 | Laurila et al. | Feb 2004 | B1 |
6801892 | Yamamoto | Oct 2004 | B2 |
6832190 | Junkawitsch et al. | Dec 2004 | B1 |
6868382 | Shozakai | Mar 2005 | B2 |
6901365 | Miyazawa | May 2005 | B2 |
6907398 | Hoege | Jun 2005 | B2 |
6934681 | Emori et al. | Aug 2005 | B1 |
6980952 | Gong | Dec 2005 | B1 |
7003460 | Bub et al. | Feb 2006 | B1 |
7133535 | Huang et al. | Nov 2006 | B2 |
7139707 | Sheikhzadeh-Nadjar et al. | Nov 2006 | B2 |
7454341 | Pan et al. | Nov 2008 | B1 |
7457745 | Kadambe et al. | Nov 2008 | B2 |
7941313 | Garudadri et al. | May 2011 | B2 |
7970613 | Chen | Jun 2011 | B2 |
8527223 | AbuAli et al. | Sep 2013 | B2 |
20040220804 | Odell | Nov 2004 | A1 |
20050010408 | Nakagawa et al. | Jan 2005 | A1 |
20100211391 | Chen | Aug 2010 | A1 |
20110137648 | Ljolje et al. | Jun 2011 | A1 |
Number | Date | Country |
---|---|---|
0866442 | Sep 1998 | EP |
09290617 | Nov 1997 | JP |
2000338989 | Aug 2000 | JP |
2291499 | Jan 2007 | RU |
Entry |
---|
Bocchieri, “Vector Quantization for the efficient Computation of Continuous Density Likelihoods”, Apr. 1993, International conference on Acoustics, Speech, and signal Processing, IEEE, pp. 692-695. |
G. David Forney, Jr., “The Viterbi Algorithm”—Proceedings of the IEEE, vol. 61, No. 3, p. 268-278, Mar. 1973. |
Hans Werner Strube, “Linear Prediction on a Warped Frequency Scale,”—The Journal of the Acoustical Society of America, vol. 68, No. 4, p. 1071-1076, Oct. 1980. |
International Search Report and Written Opinion of the International Searching Authority for PCT/US2013/03342 mailed Jul. 11, 2013. |
J.O. Ramsay and B.W. Silverman, “Applied Functional Data Analysis—Methods and Case Studies,” Springer, 2002. |
Kai-Fu Lee et al., “Speaker-Independent phone Recognition Using Hidden Markov Models”—IEEE Transaction in Acoustics, Speech, and Signal Processing, vol. 37, No. 11, p. 1641-1648, Nov. 1989. |
Lawrence Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition”—Proceedings of the IEEE, vol. 77, No. 2, Feb. 1989. |
Leonard E. Baum et al., “A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains,”—The Annals of Mathematical Statistics, vol. 41, No. 1, p. 164-171, Feb. 1970. |
Li Lee et al., “Speaker Normalization Using Efficient Frequency Warping Procedures” 1996 IEEE, vol. 1, pp. 353-356. |
M. Gubian, F. Cangemi and L. Boves, “Automatic and Data Driven Pitch Contour Manipulation with Functional Data Analysis,” Speech Prosody, 2010, Chicago. |
M. Gubian, F. Cangemi and L. Boves, “Joint analysis of F0 and speech rate with functional data analysis,” ICASSP 2011, Prague. |
Mullensiefen Daniel, Statistical techniques in music psychology: An update [online] [retrieved on Jun. 7, 2013]. Retrieved from the Internet: <URL: http://www.doc.gold.ac.uk/˜mas03dm/papers/SchneiderFest09—MupsyStats.pdf>, p. 13, paragraph 3, p. 14, paragraph 1. |
Rohit Sinha et al., “Non-Uniform Scaling Based Speaker Normalization” 2002 IEEE, May 13, 2002, vol. 4, pp. I-589-I-592. |
Steven B. Davis et al., “Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences”—IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP 28, No. 4, p. 357-366, Aug. 1980. |
U.S. Appl. No. 61/621,810, entitled “Text Dependent Speaker Recognition With Long-Term Feature Based on Functional Data Analysis” to Zhang et al., filed Apr. 9, 2012. |
Vasilache, “Speech recognition Using HMMs With Quantized Parameters” , Oct. 2000, 6th International Conference on Spoken Language Processing (ICSLP 2000), pp. 1-4. |
Number | Date | Country | |
---|---|---|---|
20130268272 A1 | Oct 2013 | US |
Number | Date | Country | |
---|---|---|---|
61621810 | Apr 2012 | US |