The present invention generally relates to automatic speech recognition, and relates in particular to noise robust automatic speech recognition.
Embedded noise robust automatic speech recognition (ASR) systems need to conserve memory due to the small size and limited resources of devices such as cell phones, car navigation systems, digital TVs, and home appliances. However, ASR systems are notorious for consuming large amounts of computational resources, including Random Access Memory (RAM). This tendency can be especially problematic in embedded devices, which must also allocate such resources to other functions that often run concurrently with ASR functions. Yet, reducing the amount of memory consumed by a noise robust ASR system typically degrades recognition accuracy and/or robustness to noise.
What is needed is a way to reduce the memory requirements of embedded noise robust ASR systems with reduced impact on recognition accuracy and/or robustness to noise. The present invention fulfills this need by making several changes to a noise robustness system employing a model domain method.
In accordance with the present invention, model compression is combined with model compensation. Model compression is needed in embedded ASR to reduce the size and the computational complexity of the acoustic models. Model compensation is used to adapt in real time to changing noise environments. The present invention allows for the design of smaller ASR engines (memory consumption reduced to as little as one-sixth of its original value) with reduced impact on recognition accuracy and/or robustness to noise.
Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The present invention will become more fully understood from the detailed description and the accompanying drawings.
The following description of the preferred embodiments is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
In some embodiments, the present invention combines subspace tying for model compression with alpha-Jacobian model compensation for noise robustness to achieve a compact noise robust speech recognition system. Unfortunately, this combination cannot be accomplished directly, because the subspace tying structure does not allow for model compensation. The difficulty arises because the distortion function used in model compensation requires a full-space transformation (full dimensionality) of the acoustic models, which invalidates the tying structure.
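The exact tying scheme is not reproduced in this section; the Python sketch below shows one common form of subspace tying, in which each Gaussian mean is split into subvectors and each subvector is replaced by a shared codeword. All array sizes, names, and the simplified k-means training are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def tie_subspaces(means, slices, codebook_size=64, n_iter=10, seed=0):
    """Quantize the subvectors of every mean against one shared codebook
    per subspace, so many Gaussians point at the same stored subvector."""
    rng = np.random.default_rng(seed)
    tied = means.copy()
    for sl in slices:
        sub = means[:, sl]
        codebook = sub[rng.choice(len(sub), codebook_size, replace=False)].copy()
        for _ in range(n_iter):                      # simplified k-means
            dist = ((sub[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            assign = dist.argmin(axis=1)
            for k in range(codebook_size):
                if (assign == k).any():
                    codebook[k] = sub[assign == k].mean(axis=0)
        tied[:, sl] = codebook[assign]               # share codewords across states
    return tied

means = np.random.randn(1000, 39)                    # toy set of Gaussian means
slices = [slice(0, 13), slice(13, 26), slice(26, 39)]
tied_means = tie_subspaces(means, slices)
```

After a full-space transform (a dense 39 x 39 matrix in this toy setting), each transformed subvector depends on all three original subspaces, so Gaussians that shared a codeword no longer do. That is the incompatibility the changes described next are designed to remove.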
One area of interest in the present invention is the solution to this issue. Specifically, a model compensation distortion function is designed that does not invalidate the tying structure, thus allowing subspace tying and model compensation to coexist. This can be accomplished by making coordinated changes to the following modules of a noise robust ASR system (a minimal sketch follows this list):
(a) front-end analysis: the front-end whitening matrix can be block-diagonal, isolating a set of independent subspaces (block-diagonal covariance matrix);
(b) model compensation: the model compensation distortion function can operate independently on the same subspaces identified by the front-end analysis (and cannot be a full-space transformation); and
(c) subspace model compression: the subspaces used for the tying can be aligned with the independent subspaces defined in the front-end.
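The sketch below illustrates items (a) and (b) under the same 13-dimensional toy subspaces as before. A simple log-add distortion stands in for the alpha-Jacobian compensation, whose exact form is not given in this excerpt; what matters here is only that the compensation never mixes subspaces.

```python
import numpy as np
from scipy.linalg import block_diag

def subspace_whitening(cov, slices):
    """(a) Build a block-diagonal whitening matrix: each subspace is
    whitened independently, so no transform term crosses subspaces."""
    blocks = []
    for sl in slices:
        w, v = np.linalg.eigh(cov[sl, sl])
        blocks.append(v @ np.diag(1.0 / np.sqrt(w)) @ v.T)
    return block_diag(*blocks)

def compensate_subspace(clean_sub, noise_sub):
    """(b) Toy per-subspace distortion (log-add); a stand-in for the
    alpha-Jacobian function, which must likewise stay subspace-local."""
    return np.log(np.exp(clean_sub) + np.exp(noise_sub))

slices = [slice(0, 13), slice(13, 26), slice(26, 39)]
cov = np.cov(np.random.randn(39, 500))               # toy feature covariance
W = subspace_whitening(cov, slices)                  # block-diagonal by design

noise_mean = np.random.randn(39)
# (c) With tying aligned to the same slices, compensating the shared
# codewords of a subspace compensates every Gaussian that references them.
codebook = np.random.randn(64, 13)                   # one subspace codebook
codebook_noisy = compensate_subspace(codebook, noise_mean[slices[0]])
```

Because the distortion is subspace-local, two states that share a codeword still share it after compensation, so only the codebooks need to be compensated rather than every Gaussian.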
One key ingredient of this method is the definition of the subspaces corresponding to the block-diagonal whitening matrix in the front-end. These subspaces need to be large enough to give good coverage of the speech signal correlation structure in the front-end and in the model compensation step, but small enough to keep the distortion error of the subspace tying step low. In general, the subspace definition is an NP-complete problem for which no efficient exact solution is known, but for which an iterative converging algorithm can be provided.
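The iterative algorithm itself is not spelled out in this text; the greedy sketch below is one plausible reading of the trade-off described above. It reassigns each dimension to the subspace whose members it correlates with most strongly, under a size cap, and stops once assignments no longer change. The scoring rule and all names are assumptions.

```python
import numpy as np

def iterate_subspaces(corr, n_blocks, max_size, n_iter=50):
    """corr: (dim, dim) feature correlation matrix."""
    dim = corr.shape[0]
    assign = np.arange(dim) % n_blocks            # arbitrary initial partition
    for _ in range(n_iter):
        changed = False
        for d in range(dim):
            scores = []
            for b in range(n_blocks):
                members = np.flatnonzero((assign == b) & (np.arange(dim) != d))
                size_ok = (assign[d] == b) or (len(members) + 1 <= max_size)
                scores.append(np.abs(corr[d, members]).sum() if size_ok else -np.inf)
            best = int(np.argmax(scores))
            if best != assign[d]:
                assign[d] = best
                changed = True
        if not changed:                           # converged: no dimension moved
            break
    return [np.flatnonzero(assign == b) for b in range(n_blocks)]
```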
The whitening matrix or matrices can take various forms depending on the characteristics of the independent subspaces. For example, in some embodiments, the independent subspaces can span different time frames, and the whitening matrices include decorrelation across a 2-dimensional time-frequency plane. Also, in additional or alternative embodiments, such 2-D decorrelation matrices are decomposable as a discrete cosine transform along the frequency axis and a time derivative along the time axis.
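In the decomposable case, the 2-D decorrelation reduces to the familiar front-end recipe: a DCT across the frequency axis (yielding cepstra) and regression deltas across the time axis. A minimal sketch, assuming log mel filterbank input; the window size and coefficient count are illustrative defaults.

```python
import numpy as np
from scipy.fft import dct

def cepstra_with_deltas(log_fbank, n_ceps=13, delta_win=2):
    """log_fbank: (frames, mel_bins) log filterbank energies."""
    # DCT along the frequency axis decorrelates within each frame
    ceps = dct(log_fbank, type=2, norm='ortho', axis=1)[:, :n_ceps]
    # time derivative (delta) via the standard regression formula
    pad = np.pad(ceps, ((delta_win, delta_win), (0, 0)), mode='edge')
    num = sum(t * (pad[delta_win + t:len(ceps) + delta_win + t]
                   - pad[delta_win - t:len(ceps) + delta_win - t])
              for t in range(1, delta_win + 1))
    deltas = num / (2 * sum(t * t for t in range(1, delta_win + 1)))
    return np.hstack([ceps, deltas])              # time-frequency decorrelated

features = cepstra_with_deltas(np.random.randn(100, 24))   # toy 24-bin input
```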
The model compensation distortion technique of the present invention allows reduction of the acoustic model size to as little as one-sixth of its initial value, and of the computational load to as little as one-third, while the use of model compensation preserves strong robustness to noise. The complexity of the model compensation step is also reduced, because only the smaller, tied set of distributions needs to be compensated.
It is envisioned that a similar approach can be employed for speaker adaptation, with subspace-constrained transformations (such as MLLR constrained to subspaces). For example, subspace-tied acoustic model whitening can be employed with model compensation, and an additional subspace tying can be applied to the compensated acoustic models for update purposes (storage to RAM or flash ROM, etc.).
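A speculative sketch of this variant follows: an MLLR-style affine transform constrained to each subspace is applied directly to the shared codebooks, so the adapted model keeps the tying structure and can be written back compactly. Transform estimation is omitted, and all names and shapes are assumptions.

```python
import numpy as np

def adapt_tied_model(codebooks, transforms):
    """codebooks: {subspace_id: (K, d_b) array of shared codewords};
    transforms: {subspace_id: (A_b, b_b)} per-subspace MLLR-style affines."""
    adapted = {}
    for sid, cb in codebooks.items():
        A, b = transforms[sid]                    # subspace-local transform only
        adapted[sid] = cb @ A.T + b               # mu' = A mu + b, per codeword
    return adapted                                # tying preserved: K entries each

codebooks = {0: np.random.randn(64, 13)}
transforms = {0: (np.eye(13), np.zeros(13))}      # identity transform as a stub
adapted = adapt_tied_model(codebooks, transforms)
```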
The description of the invention is merely exemplary in nature and, thus, variations that do not depart from the gist of the invention are intended to be within the scope of the invention. Such variations are not to be regarded as a departure from the spirit and scope of the invention.
This application claims the benefit of U.S. Provisional Application No. 60/659,054, filed on Mar. 4, 2005. The disclosure of the above application is incorporated herein by reference in its entirety for any purpose.