Impulsivity estimates of mixtures of the power exponential distributions in speech modeling

Information

  • Patent Grant
  • 6804648
  • Patent Number
    6,804,648
  • Date Filed
    Thursday, March 25, 1999
  • Date Issued
    Tuesday, October 12, 2004
Abstract
A parametric family of multivariate density functions, formed by mixture models from univariate functions of the type exp(−|x|^β), is used to model acoustic feature vectors in automatic recognition of speech. The parameter β is used to measure the non-Gaussian nature of the data. β is estimated from the input data using a maximum likelihood criterion. There is a balance between β and the number of data points that must be satisfied for efficient estimation.
Description




BACKGROUND OF THE INVENTION




FIELD OF THE INVENTION




The present invention generally relates to the technology of speech recognition and, more particularly, to a parametric family of multivariate density functions formed by mixture models from univariate functions for modeling acoustic feature vectors used in automatic recognition of speech.




BACKGROUND DESCRIPTION




Most pattern recognition problems require modeling the probability density of feature vectors in feature space. Specifically, in the problem of speech recognition, it is necessary to model the probability density of acoustic feature vectors in the space of phonetic units. Purely Gaussian densities have been known to be inadequate for this purpose due to the heavy-tailed distributions exhibited by speech feature vectors. See, for example, Frederick Jelinek,


Statistical Methods for Speech Recognition


, MIT Press (1997). As an intended remedy to this problem, practically all speech recognition systems attempt modeling by using a mixture model with Gaussian densities for the mixture components. Variants of the standard K-means clustering algorithm are used for this purpose. The classical version of the K-means algorithm as described by John Hartigan in


Clustering Algorithms


, John Wiley & Sons (1975), and Anil Jain and Richard Dubes in


Algorithms for Clustering Data


, Prentice Hall (1988), can also be viewed as a special case of the expectation-maximization (EM) algorithm (see A. P. Dempster, N. M. Laird and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm”,


Journal of Royal Statistical Soc


., Ser. B, vol. 39, pp. 1-38, 1977) for mixtures of Gaussians with variances tending to zero. See also Christopher M. Bishop,


Neural Networks for Pattern Recognition


, Oxford University Press (1995), and J. Marroquin and F. Girosi, “Some extensions of the K-means algorithm for image segmentation and pattern classification”, MIT Artificial Intelligence Lab. A. I. Memorandum no. 1390, January 1993. The only attempt to model the phonetic units in speech with non-Gaussian mixture densities is described by H. Ney and A. Noll in “Phoneme modeling using continuous mixture densities”,


Proceedings of IEEE Int. Conf on Acoustics Speech and Signal Processing


, pp. 437-440, 1988, where Laplacian densities were used in a heuristic-based estimation algorithm.




S. Basu and C. A. Micchelli in “Parametric density estimation for the classification of acoustic feature vectors in speech recognition”,


Nonlinear Modeling: Advanced Black


-


Box Techniques


(Eds. J. A. K. Suykens and J. Vandewalle), pp. 87-118, Kluwer Academic Publishers, Boston (1998), attempted to model speech data by building probability densities from a given univariate function h(t) for t≧0. Specifically, Basu and Micchelli considered mixture models from component densities of the form










\[
p(x\mid\mu,\Sigma)=\frac{\rho_d}{\sqrt{\det\Sigma}}\,\exp\bigl(-h(Q(x))\bigr),\qquad x\in R^d,\tag{1}
\]

where

\[
Q(x)=\gamma_d\,(x-\mu)^t\,\Sigma^{-1}(x-\mu),\qquad x\in R^d,\tag{2}
\]

\[
m_\beta=\int_{R_+}t^\beta e^{-h(t)}\,dt\tag{3}
\]

(when the integral is finite; R_+ denotes the positive real axis),

\[
\rho_d=\frac{\Gamma\!\left(\tfrac d2\right)\,m_{d/2}^{\,d/2}}{\pi^{d/2}\,m_{d/2-1}^{\,d/2+1}},\qquad\text{and}\tag{4}
\]

\[
\gamma_d=\frac{m_{d/2}}{d\,m_{d/2-1}}.\tag{5}
\]













If the constants ρ_d and γ_d are positive and finite, then the vector μ∈R^d and the positive definite symmetric d×d matrix Σ are the mean and the covariance of this density. Particular attention was given to the choice h(t)=t^{α/2}, t>0, α>0; the case α=2 corresponds to the Gaussian density, whereas the Laplacian case considered by H. Ney and A. Noll, supra, corresponds to α=1. Smaller values of α correspond to more peaked distributions (α→0 yields the δ function), whereas larger values of α correspond to distributions with flat tops (α→∞ yields the uniform distribution over elliptical regions). For more details about these issues see S. Basu and C. Micchelli, supra. This particular choice of densities has been studied in the literature and referred to in various ways, e.g., as α-stable densities as well as power exponential distributions. See, for example, E. Gómez, M. A. Gómez-Villegas, and J. M. Marín, “A multivariate generalization of the power exponential family of distributions”, Comm. Stat.—Theory Meth. 27(3), pp. 589-600, 1998, and Owen Kenny, Douglas Nelson, John Bodenschatz and Heather A. McMonagle, “Separation of nonspontaneous and spontaneous speech”, Proc. ICASSP, 1998.
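As an illustration (added here, not part of the patent text), the univariate kernel exp(−|x|^α) that generates this family integrates to 2Γ(1+1/α) over the real line; a short numerical check confirms this. The integration width and step size below are arbitrary choices:

```python
from math import exp, gamma

def kernel_mass(alpha, half_width=200.0, n=400_000):
    """Midpoint-rule integral of exp(-|x|**alpha) over the real line
    (the kernel is even, so integrate [0, half_width] and double)."""
    h = half_width / n
    return 2.0 * h * sum(exp(-((i + 0.5) * h) ** alpha) for i in range(n))

# Closed form: the integral equals 2 * Gamma(1 + 1/alpha);
# alpha = 1 gives 2, alpha = 2 gives sqrt(pi).
for a in (0.5, 1.0, 2.0):
    assert abs(kernel_mass(a) - 2.0 * gamma(1.0 + 1.0 / a)) < 1e-3
```

Smaller α puts visibly more mass into the peak and the tails, which is the "impulsivity" the patent exploits.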




In S. Basu and C. Micchelli, supra, an iterative algorithm having the expectation-maximization (EM) flavor for estimating the parameters was obtained and used for a range of fixed values of α (as opposed to the choice of α=1 in H. Ney and A. Noll, supra, and α=2 in standard speech recognition systems). A preliminary conclusion from the study in S. Basu and C. Micchelli was that the distribution of speech feature vectors in the acoustic space is better modeled by mixture models with non-Gaussian mixture components corresponding to α<1. As a consequence of these encouraging results, we became interested in automatically finding the “best” value of α directly from the data. It is this issue that is the subject of the present invention.




SUMMARY OF THE INVENTION




It is therefore an object of the present invention to provide a parametric family of multivariate density functions formed by mixture models from univariate functions of the type exp(−|x|^β) for modeling acoustic feature vectors used in automatic recognition of speech.




According to the invention, the parameter β is used to measure the non-Gaussian nature of the data. In the practice of the invention, β is estimated from the data using a maximum likelihood criterion. Among other things, there is a balance between β and the number of data points N that must be satisfied for efficient estimation. The computer implemented method for automatic machine recognition of speech iteratively refines parameter estimates of densities comprising mixtures of power exponential distributions whose parameters are means (μ), variances (σ), impulsivity numbers (α) and weights (w). The iterative refining process begins by predetermining initial values of the parameters μ, σ and w. Then, {circumflex over (μ)}_l and {circumflex over (σ)}_l are derived from the following equations







\[
\mu_i^l=\frac{\displaystyle\sum_{k=1}^{N}\Bigl(\sum_{j=1}^{d}\frac{(x_j^k-\hat\mu_j^l)^2}{\hat\sigma_j^l}\Bigr)^{\hat\alpha_l/2-1}A_{lk}\,x_i^k}{\displaystyle\sum_{k=1}^{N}\Bigl(\sum_{j=1}^{d}\frac{(x_j^k-\hat\mu_j^l)^2}{\hat\sigma_j^l}\Bigr)^{\hat\alpha_l/2-1}A_{lk}}
\]

and

\[
\sigma_i^l=\frac{\hat\alpha_l\,\gamma_d(\hat\alpha_l)^{\hat\alpha_l/2}}{A_l}\sum_{k=1}^{N}\Bigl(\sum_{j=1}^{d}\frac{(x_j^k-\hat\mu_j^l)^2}{\hat\sigma_j^l}\Bigr)^{\hat\alpha_l/2-1}A_{lk}\,(x_i^k-\hat\mu_i^l)^2
\]

for i=1, . . . ,d and l=1, . . . ,m. Then σ is updated by assuming that θ=(μ,σ,α), {circumflex over (θ)}=({circumflex over (μ)},{circumflex over (σ)},{circumflex over (α)}) and letting H(μ,σ)=E{circumflex over (θ)}(log f(·|θ)), in which case H has a unique global maximum at μ={circumflex over (μ)}, σ=β(α,{circumflex over (α)}){circumflex over (σ)}, where







\[
\beta(\alpha,\hat\alpha)=\left\{\frac{\alpha\,\Gamma\!\left(\frac{\alpha+1}{\hat\alpha}\right)}{\Gamma\!\left(\frac{1}{\hat\alpha}\right)}\right\}^{2/\alpha}\frac{\Gamma\!\left(\frac{3}{\alpha}\right)\Gamma\!\left(\frac{1}{\hat\alpha}\right)}{\Gamma\!\left(\frac{3}{\hat\alpha}\right)\Gamma\!\left(\frac{1}{\alpha}\right)}
\]








The l-th dimension is set by μ_l={circumflex over (μ)}_l, σ_l={circumflex over (σ)}_l, and α_l={circumflex over (α)}_l. Finally, the convergence of a log likelihood function B(α) of the parameters is determined in order to get final values of μ, σ and α. The B(α) is







\[
B(\Lambda,\hat w,\hat\Lambda)=\sum_{l=1}^{m}B_l(\Lambda,\hat w,\hat\Lambda)
\]






where








\[
B_l(\Lambda,\hat w,\hat\Lambda)=\sum_{k=1}^{N}A_{lk}\Bigl(-\frac12\sum_{i=1}^{d}\log\sigma_i^l+\log\rho_d(\alpha_l)-\bigl(\gamma_d(\alpha_l)\bigr)^{\alpha_l/2}\Bigl(\sum_{i=1}^{d}\frac{(x_i^k-\mu_i^l)^2}{\sigma_i^l}\Bigr)^{\alpha_l/2}\Bigr).
\]



















BRIEF DESCRIPTION OF THE DRAWINGS




The file of this patent contains at least one drawing executed in color. Copies of this patent with color drawing(s) will be provided by the Patent and Trademark Office upon request and payment of the necessary fee.




The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings, in which:





FIG. 1

is a graph showing the plot of average log-likelihood for −1<μ<1 and 0<α<5;





FIG. 2

is a graph showing the plot of average log-likelihood for −1<μ<1 and 0<α<0.002;





FIG. 3

is a graph showing the plot of L_σ(x_1,α) for 0<α<0.002;





FIGS. 4 to 42

are plots of optimal α for dimensions 0-38, respectively, of leaf 513;





FIG. 43

is a graph showing a comparison of α update versus nonupdate for α=1;





FIG. 44

is a graph showing a comparison of α update versus nonupdate for α=2;





FIG. 45

is a graph of log-likelihood gains of α update formula as a function of iteration;





FIG. 46

is a block diagram of a computer system of the type on which the subject invention may be implemented; and





FIGS. 47A and 47B

, taken together, are a flow diagram illustrating the logic of the computer implemented process according to the invention.











DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION




For the specific problem of automatic machine recognition of speech as considered by S. Basu and C. Micchelli, supra, preliminary attempts were made to find the “best” value of α by running the speech recognizer for several different values of α and then choosing the value of α corresponding to the lowest word recognition error. The choice of recognition accuracy as the optimality criterion for α is dictated by the practical need for the best recognizer. While other criteria for recognition accuracy (e.g., error rates based on phonemes, syllables or, at a more detailed level of granularity, the classification accuracy of feature vectors) can be thought of, we are primarily concerned with automated methods of finding an optimal value of the parameter α.




In the general context of multivariate mixture densities, this is a difficult task; therefore, the description which follows is restricted to the case of modeling univariate data with one mixture component of this form. In this setting, we present an assortment of iterative formulas for estimating the parameter α from the data, and study their numerical performance. For mixtures of multivariate α-densities, we resort to a technique arising in the context of EM estimation methodology. Recall from S. Basu and C. A. Micchelli, supra, and Christopher M. Bishop, supra, that the increase in the log-likelihood is bounded from below (via Jensen's inequality) by a term (often referred to as the Q-function in the EM literature) that can be conveniently maximized via optimization techniques. In the case of the present invention, we carry out the maximization with respect to the extra parameter α. In this regard, we take advantage of the special dependency of Q on α. Moreover, we extend this method to include modeling data by using mixture components each having different values of α.
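To make the α-selection step concrete, the following sketch (our illustration, not the patent's implementation) scores candidate values of α on a deterministic near-Gaussian sample using the profile log-likelihood of equation (19) derived later in Lemma 4, in which σ has already been maximized out. For near-Gaussian data the Gaussian value α=2 should score highest:

```python
from math import lgamma, log
from statistics import NormalDist

def profile_loglik(xs, mu, a):
    """Per-sample profile log-likelihood of equation (19): sigma has been
    maximized out, leaving the location mu and the impulsivity alpha."""
    m = sum(abs(x - mu) ** a for x in xs) / len(xs)
    return log(a / 2.0) - log(a) / a - lgamma(1.0 / a) - 1.0 / a - log(m) / a

# Deterministic near-Gaussian sample: standard normal quantiles.
xs = [NormalDist().inv_cdf((i + 0.5) / 2000) for i in range(2000)]

# Score candidate impulsivities; alpha = 2 (Gaussian) should win here.
scores = {a: profile_loglik(xs, 0.0, a) for a in (0.5, 1.0, 2.0, 4.0)}
assert scores[2.0] > scores[0.5] and scores[2.0] > scores[4.0]
```

For real acoustic feature data the patent's experiments report the maximum at α<1 instead, which is the non-Gaussianity this invention measures.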




Our experimental results in the context for modeling acoustic feature vectors for the recognition of speech indicate that optimal values of α are smaller than one. In this way, we further substantiate the non-Gaussian form of the mixture components for modeling of speech feature vectors in acoustic space.




Maximum Log-likelihood




Let T={x_1,x_2, . . . ,x_N}⊂R^d be independent samples from a probability density function p. Then, the likelihood function of the data is









\[
\prod_{k=1}^{N}p(x_k).
\]











Assume that the data comes from a parametric family p(·|λ), λ∈Ω⊂R^q, of probability densities. The maximum likelihood estimation (MLE) method demands that the log-likelihood











\[
L_N(\lambda)=\sum_{i=1}^{N}\log p(x_i\mid\lambda)\tag{6}
\]













as a function of λ is made as large as is practical. This specifies a data dependent choice of λ and hence a density function to model the data. When N is large and the data is selected as random samples from the density p(·|θ), the quantity (1/N)L_N approaches

\[
E_\theta\bigl(\log p(\cdot\mid\lambda)\bigr).\tag{7}
\]





By Jensen's inequality






\[
E_\theta\{\log p(\cdot\mid\lambda)\}-E_\theta\{\log p(\cdot\mid\theta)\}=E_\theta\Bigl\{\log\frac{p(\cdot\mid\lambda)}{p(\cdot\mid\theta)}\Bigr\}\le\log\int_{R^d}\frac{p(x\mid\lambda)}{p(x\mid\theta)}\,p(x\mid\theta)\,dx=0.
\]














Therefore, the global maximum of (7) occurs for λ=θ. We add to this fact by observing that if equality occurs above, i.e.,








\[
E_\theta\{\log p(\cdot\mid\lambda)\}=E_\theta\{\log p(\cdot\mid\theta)\},\tag{8}
\]






then p(x|λ)=p(x|θ), a.e., x∈R^d (provided every member of the family vanishes only on sets of measure zero). Thus, whenever any probability density in our family determines its parameter uniquely, we conclude that λ=θ is the unique global maximum of the function in (7). In particular, the family of elliptical densities of the type (1) has this property. We might add that there are parametric families of importance in applications for which the density does not determine its parameters, for example mixture models of Gaussians.




In the remainder of this section, we improve upon these facts for univariate densities of the form








\[
f(t)=\frac{\rho}{\sqrt{\sigma}}\,\exp\Bigl(-h\Bigl(\gamma\,\frac{(t-\mu)^2}{\sigma}\Bigr)\Bigr),\qquad t\in R,
\]










where

\[
\gamma=\frac{\int_{R_+}t^{1/2}\exp(-h(t))\,dt}{\int_{R_+}t^{-1/2}\exp(-h(t))\,dt}\qquad\text{and}\tag{9}
\]

\[
\rho=\frac{\Bigl(\int_{R_+}t^{1/2}\exp(-h(t))\,dt\Bigr)^{1/2}}{\Bigl(\int_{R_+}t^{-1/2}\exp(-h(t))\,dt\Bigr)^{3/2}}.\tag{10}
\]













For this class of densities, our parameter space consists of the mean μ∈R, the variance σ∈R_+ and the function h. We use the notation θ=(μ,σ,h) to denote the parameters that determine this density, which we now denote by f(·|θ).
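As a numerical sanity check that we add here (not part of the patent text), for the particular choice h(t)=t^{α/2}, with the constants γ(α) and ρ(α) given later in equation (13), the density f integrates to one and has variance σ. The parameter values below are arbitrary:

```python
from math import exp, lgamma, sqrt

def gamma_c(a):   # gamma(alpha) = Gamma(3/alpha) / Gamma(1/alpha), cf. eq. (13)
    return exp(lgamma(3.0 / a) - lgamma(1.0 / a))

def rho_c(a):     # rho(alpha) = alpha * Gamma(3/alpha)**0.5 / (2 * Gamma(1/alpha)**1.5)
    return 0.5 * a * exp(0.5 * lgamma(3.0 / a) - 1.5 * lgamma(1.0 / a))

def f(t, mu, sigma, a):
    """Univariate power exponential density with h(t) = t**(alpha/2)."""
    return rho_c(a) / sqrt(sigma) * exp(-(gamma_c(a) * (t - mu) ** 2 / sigma) ** (a / 2.0))

# Midpoint-rule checks: total mass 1 and variance sigma.
mu, sigma, a = 0.5, 2.5, 1.0
h, n = 1e-3, 80_000
pts = [mu + (i + 0.5) * h for i in range(-n, n)]
mass = h * sum(f(t, mu, sigma, a) for t in pts)
var = h * sum((t - mu) ** 2 * f(t, mu, sigma, a) for t in pts)
assert abs(mass - 1.0) < 1e-5 and abs(var - sigma) < 1e-3
```

Setting a=2.0 reproduces the Gaussian density with variance σ, and a=1.0 the Laplacian, matching the special cases discussed in the background.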




Lemma 1 Let θ=(μ,σ,h), {circumflex over (θ)}=({circumflex over (μ)},{circumflex over (σ)},ĥ) and suppose h and ĥ are increasing functions on R_+. Then, the function F(μ)=E{circumflex over (θ)}(log f(·|θ)) has a unique local maximum at μ={circumflex over (μ)}.




Proof: Direct computation confirms the equation.







\[
F(\mu)=\log\rho-\frac{\log\sigma}{2}-\int_{R}h\Bigl(\gamma\,\frac{(t-\mu)^2}{\sigma}\Bigr)\,\frac{\hat\rho}{\sqrt{\hat\sigma}}\,\exp\Bigl(-\hat h\Bigl(\hat\gamma\,\frac{(t-\hat\mu)^2}{\hat\sigma}\Bigr)\Bigr)\,dt.
\]













To prove the claim, we show that F′(μ) is negative for μ>{circumflex over (μ)}. To this end, we observe that









\[
F'(\mu)=-\frac{2\gamma\,\hat\rho}{\sigma\sqrt{\hat\sigma}}\int_{0}^{\infty}t\,h'\Bigl(\gamma\,\frac{t^2}{\sigma}\Bigr)\Bigl[g\bigl((t-\mu+\hat\mu)^2\bigr)-g\bigl((t+\mu-\hat\mu)^2\bigr)\Bigr]\,dt,
\]










where

\[
g(t):=\exp\Bigl(-\hat h\Bigl(\frac{\hat\gamma\,t}{\hat\sigma}\Bigr)\Bigr),\qquad t\in R.
\]












Let us first point out that F′(μ)<0 for μ>{circumflex over (μ)}. This claim follows by noting that (y−μ+{circumflex over (μ)})²<(y+μ−{circumflex over (μ)})² when μ>{circumflex over (μ)} and y>0 and that g(·) is a decreasing function.




Lemma 2 Let θ=(μ,σ,h), {circumflex over (θ)}=({circumflex over (μ)},{circumflex over (σ)},ĥ). Suppose that h(x) and xh′(x) are increasing functions of x for x>0. Then the function F(μ,σ)=E{circumflex over (θ)}(log f(·|θ)) has a unique global maximum at μ={circumflex over (μ)}, σ={circumflex over (σ)}.




Proof: In Lemma 1, we proved that for any σ the function F(·,σ) is maximized for μ={circumflex over (μ)}. Thus, it suffices to consider the function G:=F({circumflex over (μ)},·). This function has the form








\[
G(\sigma)=\log\rho-\frac{\log\sigma}{2}-2\rho\int_{0}^{\infty}h\Bigl(\gamma\,\frac{\hat\sigma}{\sigma}\,y^2\Bigr)\exp\bigl(-h(\gamma y^2)\bigr)\,dy,\qquad\sigma\in R_+.
\]










Alternatively, we have the equation

\[
G(\sigma)=\log\rho+\log x-\frac12\log\hat\sigma-2\rho\int_{0}^{\infty}\tilde h(xy)\exp(-\tilde h(y))\,dy,
\]















where {tilde over (h)}(y):=h(γy²), y∈R_+, and

\[
x=\sqrt{\frac{\hat\sigma}{\sigma}}.
\]











Therefore, we obtain











\[
G'(\sigma)=-\frac{x^3}{2\hat\sigma}\Bigl\{\frac1x-2\rho\int_{0}^{\infty}y\,\tilde h'(xy)\exp(-\tilde h(y))\,dy\Bigr\}.\tag{11}
\]













Moreover, using integration by parts, we get













\[
2\rho\int_{0}^{\infty}y\,\tilde h'(y)\exp(-\tilde h(y))\,dy=-2\rho\int_{0}^{\infty}y\,\frac{d}{dy}\bigl(\exp(-\tilde h(y))\bigr)\,dy=2\rho\int_{0}^{\infty}\exp(-\tilde h(y))\,dy=1.\tag{12}
\]













Combining these two equations, we have that








\[
G'(\sigma)=-\frac{\rho\,x^2}{\hat\sigma}\int_{0}^{\infty}\exp(-\tilde h(y))\Bigl\{y\,\tilde h'(y)-x\,y\,\tilde h'(xy)\Bigr\}\,dy.
\]














By hypothesis, we conclude that (y{tilde over (h)}′(y)−xy{tilde over (h)}′(xy)) is negative if x>1 and positive if x<1. This implies that G′(σ) is negative if σ>{circumflex over (σ)} and G′(σ) is positive if σ<{circumflex over (σ)}, thereby proving the claim.




The next result is restricted to the family of functions h(t)=t^{α/2}, α>0, t∈R_+.


In this case, our parameter vector has the form θ=(μ,σ,α). To state the next result, we observe that the constants in equations (9) and (10) are given by










\[
\gamma(\alpha)=\frac{\Gamma(3/\alpha)}{\Gamma(1/\alpha)}\qquad\text{and}\qquad\rho(\alpha)=\frac{\alpha\,\Gamma^{1/2}(3/\alpha)}{2\,\Gamma^{3/2}(1/\alpha)}.\tag{13}
\]













Lemma 3 Suppose that θ=(μ,σ,α), {circumflex over (θ)}=({circumflex over (μ)},{circumflex over (σ)},{circumflex over (α)}). Let H(μ,σ)=E{circumflex over (θ)}(log f(·|θ)). Then H has a unique global maximum at μ={circumflex over (μ)}, σ=β(α,{circumflex over (α)}){circumflex over (σ)}, where







\[
\beta(\alpha,\hat\alpha)=\left\{\frac{\alpha\,\Gamma\!\left(\frac{\alpha+1}{\hat\alpha}\right)}{\Gamma\!\left(\frac{1}{\hat\alpha}\right)}\right\}^{2/\alpha}\frac{\Gamma\!\left(\frac{3}{\alpha}\right)\Gamma\!\left(\frac{1}{\hat\alpha}\right)}{\Gamma\!\left(\frac{3}{\hat\alpha}\right)\Gamma\!\left(\frac{1}{\alpha}\right)}
\]







Proof: It follows from Lemma 1 that for any σ>0, the function H(·,σ) is maximized for μ={circumflex over (μ)}. Furthermore, by following the proof of Lemma 2, we get







\[
H(\hat\mu,\sigma)=\log\rho(\alpha)-\frac12\log\sigma-\frac{2\rho(\hat\alpha)}{\gamma(\hat\alpha)^{1/2}}\int_{0}^{\infty}\Bigl(\frac{\gamma(\alpha)\,\hat\sigma\,t^2}{\gamma(\hat\alpha)\,\sigma}\Bigr)^{\alpha/2}\exp(-t^{\hat\alpha})\,dt.
\]















Evaluating the above integral, we obtain the equation







\[
H(\hat\mu,\sigma)=\log\rho(\alpha)-\frac12\log\sigma-\frac{2\rho(\hat\alpha)}{\hat\alpha\,\gamma(\hat\alpha)^{1/2}}\Bigl(\frac{\gamma(\alpha)\,\hat\sigma}{\gamma(\hat\alpha)\,\sigma}\Bigr)^{\alpha/2}\Gamma\Bigl(\frac{\alpha+1}{\hat\alpha}\Bigr).
\]













We proceed by differentiating the function H with respect to σ.










\[
\frac{\partial H(\hat\mu,\sigma)}{\partial\sigma}=\frac1\sigma\Bigl\{-\frac12+\frac{\alpha}{\hat\alpha}\,\frac{\rho(\hat\alpha)}{\gamma(\hat\alpha)^{1/2}}\Bigl(\frac{\gamma(\alpha)\,\hat\sigma}{\gamma(\hat\alpha)\,\sigma}\Bigr)^{\alpha/2}\Gamma\Bigl(\frac{\alpha+1}{\hat\alpha}\Bigr)\Bigr\}
\]












It is now easily verified that










\[
\frac{\partial H(\hat\mu,\sigma)}{\partial\sigma}\;\begin{cases}<0,&\text{if }\sigma>\beta(\alpha,\hat\alpha)\,\hat\sigma\\[2pt]>0,&\text{if }\sigma<\beta(\alpha,\hat\alpha)\,\hat\sigma.\end{cases}
\]







Thus, σ=β(α,{circumflex over (α)}){circumflex over (σ)} is the global maximum of the function H({circumflex over (μ)},·).




Remark: Note that we have β({circumflex over (α)},{circumflex over (α)})=1.
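This remark is easy to verify numerically. The sketch below (our addition) implements β(α,{circumflex over (α)}) from Lemma 3 via log-gamma for numerical stability and checks the fixed-point property:

```python
from math import exp, lgamma

def beta(a, ah):
    """beta(alpha, alpha_hat) from Lemma 3, computed via log-gamma."""
    brace = a * exp(lgamma((a + 1.0) / ah) - lgamma(1.0 / ah))
    ratio = exp(lgamma(3.0 / a) + lgamma(1.0 / ah) - lgamma(3.0 / ah) - lgamma(1.0 / a))
    return brace ** (2.0 / a) * ratio

# The fixed-point property of the sigma update: beta(a, a) = 1,
# since a * Gamma(1 + 1/a) / Gamma(1/a) = 1 and the gamma ratio cancels.
for a in (0.3, 0.5, 1.0, 2.0, 4.0):
    assert abs(beta(a, a) - 1.0) < 1e-9
```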




To prove the main result of the section, we need a fact about the gamma function. As we will need to differentiate log Γ(x), we introduce the notation







\[
\Psi(x)=\frac{d}{dx}\bigl(\log\Gamma(x)\bigr)=\frac{\Gamma'(x)}{\Gamma(x)}.
\]











The function Ψ(x) is known as the digamma function and Ψ′(x) as the trigamma function. From M. Abramowitz and I. Stegun,


Handbook of Mathematical Functions


, Dover Publications, New York, Ninth Dover printing (1972), the trigamma function has the representation











\[
\Psi'(x)=\sum_{n=0}^{\infty}\frac{1}{(x+n)^2}\qquad\text{for }x>0\tag{14}
\]













and satisfies Ψ′(x)>0 for x>0 and







\[
\Psi'(x+1)=-\frac{1}{x^2}+\Psi'(x).
\]












We shall use this in proving the following theorem.
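Both the series (14) and the inequality (17) used below can be checked numerically. The following sketch (our addition) truncates the series and adds an integral estimate of the tail:

```python
from math import pi

def trigamma(x, terms=200_000):
    """Trigamma via the series (14), plus a midpoint tail estimate:
    the remainder is approximated by the integral of u**-2 from x+terms-1/2."""
    s = sum(1.0 / (x + n) ** 2 for n in range(terms))
    return s + 1.0 / (x + terms - 0.5)

# Classical value: trigamma(1) = pi**2 / 6.
assert abs(trigamma(1.0) - pi ** 2 / 6.0) < 1e-10

# Inequality (17): trigamma(t) > 1/t + 1/(2 t**2) for t > 0.
for t in (0.1, 0.5, 1.0, 3.0, 10.0):
    assert trigamma(t) > 1.0 / t + 1.0 / (2.0 * t * t)
```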




Theorem 1 Suppose that θ=(μ,σ,α) and {circumflex over (θ)}=({circumflex over (μ)},{circumflex over (σ)},{circumflex over (α)}). Let H(μ,σ,α)=E{circumflex over (θ)}{log f(·|θ)}. Then, H has a unique local maximum for μ={circumflex over (μ)}, σ={circumflex over (σ)} and α={circumflex over (α)}.




Proof: From Lemma 1 and Lemma 3, it suffices to consider μ={circumflex over (μ)} and σ=β(α,{circumflex over (α)}){circumflex over (σ)}.




We shall write H(α) for the expression H({circumflex over (μ)},β(α,{circumflex over (α)}){circumflex over (σ)},α) and get







\[
H(\alpha)=\log\rho(\alpha)-\frac12\log\bigl(\beta(\alpha,\hat\alpha)\,\hat\sigma\bigr)-\frac{2\rho(\hat\alpha)}{\hat\alpha\,\gamma(\hat\alpha)^{1/2}}\Bigl(\frac{\gamma(\alpha)}{\beta(\alpha,\hat\alpha)\,\gamma(\hat\alpha)}\Bigr)^{\alpha/2}\Gamma\Bigl(\frac{\alpha+1}{\hat\alpha}\Bigr).
\]













We define t=1/α, {circumflex over (t)}=1/{circumflex over (α)}, {overscore (H)}(t)=H(α) and observe that








\[
\overline{H}(t)=-\log\Gamma(1+t)-t+t\log t-t\log\Gamma\bigl(\hat t+\hat t/t\bigr)+t\log\Gamma(\hat t)+\tfrac12\log(\hat\gamma/\hat\sigma)-\log 2.
\]






Our goal is to show that for every {circumflex over (t)}>0 the function {overscore (H)} has a unique local maximum for t>0 at t={circumflex over (t)}. Before doing so, let us observe that











\[
\lim_{t\to\infty}\overline{H}(t)=-\infty\tag{15}
\]













For the proof of this claim, we invoke Stirling's formula (see Abramowitz and Stegun, supra)






\[
\log\Gamma(t+1)=\Bigl(t+\frac12\Bigr)\log t-t+\frac12\log 2\pi+O\Bigl(\frac1t\Bigr).
\]






Substituting this relation into the definition of {overscore (H)} we get











\[
\overline{H}(t)=-\Bigl(\Bigl(t+\frac12\Bigr)\log t-t+\frac12\log 2\pi\Bigr)-t+t\log t-\Bigl(t\log\Gamma(\hat t)+t\,\frac{\hat t}{t}\,\Psi(\hat t)\Bigr)+t\log\Gamma(\hat t)+\frac12\log\Bigl(\frac{\hat\gamma}{\hat\sigma}\Bigr)-\log 2+O\Bigl(\frac1t\Bigr)
\]

\[
=-\frac12\log t-\frac12\log 2\pi-\hat t\,\Psi(\hat t)+\frac12\log\Bigl(\frac{\hat\gamma}{\hat\sigma}\Bigr)-\log 2+O\Bigl(\frac1t\Bigr),
\]















from which (15) follows. Returning to our goal to show that {overscore (H)} has a unique local maximum at {circumflex over (t)}, we compute the derivative









\[
\overline{H}'(t)=-\Psi(t+1)+\frac{\hat t}{t}\,\Psi\Bigl(\hat t+\frac{\hat t}{t}\Bigr)+\log\Bigl(\frac{t}{\hat t}\,\frac{\Gamma(\hat t+1)}{\Gamma(\hat t+\hat t/t)}\Bigr)
\]












from which we can see that {overscore (H)}′({circumflex over (t)})=0. The second derivative of {overscore (H)} is given by












\[
\overline{H}''(t)=\frac1t-\Psi'(t+1)-\frac{\hat t^{\,2}}{t^3}\,\Psi'\Bigl(\hat t+\frac{\hat t}{t}\Bigr).\tag{16}
\]













We shall now demonstrate that the second derivative has a unique simple zero for t>0. The first step in this argument is to show that {overscore (H)}″ is strictly negative for t∈[0,2{circumflex over (t)}]. To this end, we require the following inequality












\[
\Psi'(t)>\frac1t+\frac{1}{2t^2},\qquad t>0.\tag{17}
\]













For the proof, we use the convexity of the function t→t^{−2} and the trapezoidal rule, which gives the estimate
















\[
\sum_{n=0}^{\infty}\frac12\Bigl\{\frac{1}{(t+n)^2}+\frac{1}{(t+n+1)^2}\Bigr\}>\sum_{n=0}^{\infty}\int_{t+n}^{t+n+1}\frac{1}{x^2}\,dx=\int_{t}^{\infty}\frac{1}{x^2}\,dx=\frac1t.\tag{18}
\]













Substituting equation (14) into the above inequality yields the desired conclusion. Next, we use (17) and the expression (16) for {overscore (H)}″ to obtain the inequality










\[
\overline{H}''(t)<\frac{t-2\hat t}{2t^2(t+1)},\qquad t>0.
\]











This inequality is insufficient to prove the result. Further estimates for {overscore (H)}″ are required.




Maximum Likelihood Estimation




The preceding results indicate the possibility that μ={circumflex over (μ)}, σ={circumflex over (σ)} and α={circumflex over (α)} may indeed be the global maximum for H(μ,σ,α). We conjecture that this is the case. However, not knowing the true values of {circumflex over (μ)}, {circumflex over (σ)} and {circumflex over (α)} we can, of course, not compute H(μ,σ,α) and are left with maximizing







\[
L(\mu,\sigma,\alpha)=\prod_{i=1}^{N}f(x_i;\mu,\sigma,\alpha).
\]












This is equivalent to maximizing








\[
\frac1N\log L=\log\rho-\frac12\log\sigma-\Bigl(\frac{\gamma}{\sigma}\Bigr)^{\alpha/2}\,\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha.
\]














One would expect that 1/N log L(μ,σ,α)≈H(μ,σ,α) and, therefore, that the maximum would occur at μ={circumflex over (μ)}, σ={circumflex over (σ)} and α={circumflex over (α)} for large values of N. However, as we show later, max_{μ,σ} 1/N log L(μ,σ,α) goes to infinity as α→0. Recall that this was not so for H(μ,σ,α). One must therefore take care that the value of α does not become too small when seeking a local maximum of 1/N log L.

Towards showing this behavior of 1/N log L, we find a lower bound for max_{μ,σ} 1/N log L(μ,σ,α).












Lemma 4 Let

\[
L_\sigma(\mu,\alpha)=\max_{\sigma\ge0}\frac1N\log L(\mu,\sigma,\alpha).
\]




If

\[
\sum_{i=1}^{N}|x_i-\mu|^\alpha>0
\]




then











\[
L_\sigma(\mu,\alpha)=\log\Bigl(\frac\alpha2\Bigr)-\frac1\alpha\log\alpha-\log\Gamma\Bigl(\frac1\alpha\Bigr)-\frac1\alpha-\frac1\alpha\log\Bigl(\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha\Bigr)\tag{19}
\]













Proof: Evaluating log L yields the expression








\[
\frac1N\log L(\mu,\sigma,\alpha)=\log\rho-\frac12\log\sigma-\Bigl(\frac{\gamma}{\sigma}\Bigr)^{\alpha/2}\,\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha.
\]














Clearly,








\[
\frac1N\log L\to-\infty
\]

as σ→0 and σ→∞. Since log L is continuously differentiable with respect to σ for 0<σ<∞, it follows that

















\[
\frac{\partial}{\partial\sigma}\Bigl[\frac1N\log L\Bigr]_{\sigma=\sigma_0}=0
\]










at the maximizing value for σ, namely σ_0. Differentiating, we get:
















\[
\frac{\partial}{\partial\sigma}\Bigl(\frac1N\log L\Bigr)=-\frac{1}{2\sigma}+\frac{\alpha}{2\sigma}\Bigl(\frac{\gamma}{\sigma}\Bigr)^{\alpha/2}\,\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha
\]









Solving

\[
\frac{\partial}{\partial\sigma}\Bigl(\frac1N\log L\Bigr)=0
\]






we get the equations












\[
\Bigl(\frac{\gamma}{\sigma_0}\Bigr)^{\alpha/2}\,\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha=\frac1\alpha\qquad\text{and}\tag{20}
\]







\[
\sigma_0=\gamma\Bigl(\frac{\alpha}{N}\sum_{i=1}^{N}|x_i-\mu|^\alpha\Bigr)^{2/\alpha}\tag{21}
\]













We now have








\[
L_\sigma(\mu,\alpha)=\log\rho-\frac12\log\sigma_0-\Bigl(\frac{\gamma}{\sigma_0}\Bigr)^{\alpha/2}\,\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha
\]














Substituting in (20) we get that








\[
L_\sigma(\mu,\alpha)=\log\rho-\frac12\log\sigma_0-\frac1\alpha
\]











and (21) gives











\[
L_\sigma(\mu,\alpha)=\log\Bigl(\frac{\rho}{\sqrt{\gamma}}\Bigr)-\frac1\alpha-\frac1\alpha\log\Bigl(\frac{\alpha}{N}\sum_{i=1}^{N}|x_i-\mu|^\alpha\Bigr)\tag{22}
\]













Equation (19) is then arrived at by using (13) and (22).
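The algebra of this proof can be machine-checked: evaluating (1/N) log L at the maximizing σ_0 of (21) must reproduce the closed form (19) exactly. The following sketch (our addition, with arbitrary sample data) does this:

```python
from math import exp, lgamma, log

def lsigma_closed(xs, mu, a):
    """Right-hand side of equation (19)."""
    m = sum(abs(x - mu) ** a for x in xs) / len(xs)
    return log(a / 2.0) - log(a) / a - lgamma(1.0 / a) - 1.0 / a - log(m) / a

def lsigma_direct(xs, mu, a):
    """(1/N) log L evaluated at the maximizing sigma_0 of equation (21)."""
    m = sum(abs(x - mu) ** a for x in xs) / len(xs)
    g = exp(lgamma(3.0 / a) - lgamma(1.0 / a))                        # gamma(alpha), eq. (13)
    r = 0.5 * a * exp(0.5 * lgamma(3.0 / a) - 1.5 * lgamma(1.0 / a))  # rho(alpha), eq. (13)
    s0 = g * (a * m) ** (2.0 / a)                                     # eq. (21)
    return log(r) - 0.5 * log(s0) - (g / s0) ** (a / 2.0) * m

xs = [-2.0, -0.7, 0.1, 0.4, 1.3, 2.9]   # arbitrary sample; mu avoids the data points
for a in (0.5, 1.0, 2.0, 3.0):
    assert abs(lsigma_closed(xs, 0.2, a) - lsigma_direct(xs, 0.2, a)) < 1e-12
```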




After maximizing over σ, one needs to maximize over μ. The problem with maximizing over μ, however, is that for α<1 the derivative with respect to μ of 1/N log L(μ,σ_0,α) develops singularities at the data points and the maximum may occur at any of the data points as well as at any value of μ satisfying











\[
\frac{\partial}{\partial\mu}\bigl\{\log L(\mu,\sigma_0,\alpha)\bigr\}=0.
\]










We will show that if μ=x_j for some j∈{1,2, . . . ,N} then log L(x_j,σ_0,α)→∞ as α→0. To achieve this, we must understand what happens to








\[
\frac1\alpha\log\Bigl(\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha\Bigr)\qquad\text{as }\alpha\to0.
\]










Now the arithmetic-geometric inequality becomes an equality as α→0, i.e.,








\[
\lim_{\alpha\to0}\Bigl(\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha\Bigr)^{1/\alpha}=\prod_{i=1}^{N}|x_i-\mu|^{1/N}
\]











if |x_i−μ|>0 for all i=1,2, . . . ,N (see page 15 of G. Hardy, J. E. Littlewood and G. Polya, Inequalities, Cambridge Mathematical Library, 1991).




For small values of α, the approximation








\[
\frac1\alpha\log\Bigl(\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha\Bigr)=\frac1N\sum_{i=1}^{N}\log|x_i-\mu|+O(\alpha)
\]













holds if |x_i−μ|>0 for i=1,2, . . . ,N. Considering the more general case, where μ=x_j for some j∈{1,2, . . . ,N}, we introduce the set S={i: x_i≠μ, i∈{1,2, . . . ,N}}. We then have






\[
\frac1\alpha\log\Bigl(\frac1N\sum_{i=1}^{N}|x_i-\mu|^\alpha\Bigr)=\frac1\alpha\log\Bigl(\frac1N\sum_{i\in S}|x_i-\mu|^\alpha\Bigr)=\frac1\alpha\log\Bigl(\frac{|S|}{N}\Bigr)+\frac{1}{|S|}\sum_{i\in S}\log|x_i-\mu|+O(\alpha).
\]









Using this in addition to Lemma 4 we arrive at Lemma 5.




Lemma 5 For small values of α we have








\[
L_\sigma(\mu,\alpha)=\frac12\log\alpha-\frac1\alpha\log\Bigl(\frac{|S|}{N}\Bigr)-\frac{1}{|S|}\sum_{i\in S}\log|x_i-\mu|-\frac12\log(8\pi)+O(\alpha)
\]




where S={i: x


i


≠μ


i


, i∈{1,2, . . . ,N}}.




Proof: Stirling's formula tells us that

$$\log\Gamma(z)=\left(z-\frac{1}{2}\right)\log(z)-z+\frac{1}{2}\,\log(2\pi)+O\!\left(\frac{1}{z}\right)$$

for large values of z. If α is small, we, therefore, have

$$\log\Gamma\!\left(\frac{1}{\alpha}\right)=\left(\frac{1}{\alpha}-\frac{1}{2}\right)\log\!\left(\frac{1}{\alpha}\right)-\frac{1}{\alpha}+\frac{1}{2}\,\log(2\pi)+O(\alpha)$$



Together with the arithmetic-geometric connection, substituting this expansion of log Γ(1/α) into the expression for the average log-likelihood provides us with

$$L_{\sigma}(\mu,\alpha)=\frac{1}{2}\,\log\alpha-\frac{1}{\alpha}\,\log\!\left(\frac{|S|}{N}\right)+\frac{1}{|S|}\sum_{i\in S}\log\left|x_i-\mu\right|+\frac{1}{2}\,\log\!\left(\frac{\pi}{2}\right)+O(\alpha).$$




Clearly, if μ=x_i for some i∈{1,2,3, . . . ,N} we have |S|&lt;N. Thus, fixing μ=x_j we get








$$L_{\sigma}(\mu,\alpha)\geq\frac{1}{2}\,\log\alpha-\frac{1}{\alpha}\,\log\!\left(1-\frac{1}{N}\right)+\frac{1}{|S|}\sum_{i\in S}\log\left|x_i-\mu\right|+\frac{1}{2}\,\log\!\left(\frac{\pi}{2}\right)+O(\alpha)$$

But

$$\frac{1}{|S|}\sum_{i\in S}\log\left|x_i-\mu\right|$$

is independent of α, and log α is dominated by

$$-\frac{1}{\alpha}\,\log\!\left(1-\frac{1}{N}\right)$$

and so we see that L_σ(μ,α)→∞ for α→0.






This was not the case for H(μ,σ,α). On the other hand we see that

$$-\frac{1}{\alpha}\,\log\!\left(1-\frac{1}{N}\right)$$

starts to dominate

$$\frac{1}{N}\,\log L(\mu,\sigma,\alpha)$$

when

$$\frac{1}{2}\,\log\alpha-\frac{1}{\alpha}\,\log\!\left(1-\frac{1}{N}\right)>0.$$










We would like a rule of thumb that ensures that

$$\frac{1}{\alpha}\,\log\!\left(1-\frac{1}{N}\right)$$

does not dominate. We pick α=1/N and evaluate

$$\left[\frac{1}{2}\,\log\alpha-\frac{1}{\alpha}\,\log\!\left(1-\frac{1}{N}\right)\right]_{\alpha=1/N}=-\frac{1}{2}\,\log N+1+O\!\left(\frac{1}{N}\right).$$

But −½ log N+1&lt;0 if N>exp(2)≈7.3891, so we can safely use the rule of thumb that α>1/N, since in our experiments we will mostly have N≧8.
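The evaluation at α=1/N can be reproduced numerically; a minimal sketch (plain Python, arbitrary values of N):

```python
import math

def penalty(alpha, n):
    # (1/2) log(alpha) - (1/alpha) log(1 - 1/N)
    return 0.5 * math.log(alpha) - (1.0 / alpha) * math.log(1.0 - 1.0 / n)

def approx(n):
    # leading terms of the expansion at alpha = 1/N: -(1/2) log N + 1
    return -0.5 * math.log(n) + 1.0

for n in (4, 8, 100, 10000):
    print(n, penalty(1.0 / n, n), approx(n))
```

The exact value and the two leading terms agree to O(1/N), and the sign flips between N=4 and N=8, matching the N≧8 rule of thumb.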




We conclude this section by noting that the &#8220;optimal&#8221; choice of α is a local maximum of (1/N) log L(μ,σ,α) for which α>1/N.




Numerical Experiments




To verify the theory of the preceding sections, we generated synthetic data x_i, i=1,2, . . . ,N, from a standard Gaussian density. Lemma 5 provides an explicit formula for L_σ(μ,α). Plotting this function for μ and σ for our synthetic data, we expect to see a local maximum at μ=0, σ=2 according to Theorem 1. Also, we expect the log-likelihood to go to infinity for small values of α. We make plots of L_σ(μ,α) for values of μ and σ around 0 and 2 respectively, and for small values of α (with α&lt;1/N).




FIG. 1 and FIG. 2 illustrate these two points. Investigating the value of the average log-likelihood in the figures, we find that the maximum likelihood in FIG. 1 is approximately −1.3629 and is attained for μ=−0.06 and α=2.109. In FIG. 2 the likelihood is −3.9758. This is not according to our intention. The reason for this is that FIG. 2 is generated on a rectangular grid, and the data points happen not to fall on the grid points. If we plot L_σ(x_l,α) the situation is different. FIG. 3 shows L_σ(x_l,α) as a function of α. We see that the likelihood is larger than −3.9758 and −1.3629 and indeed seems to grow towards infinity as α goes to zero. This graph confirms the behavior predicted in Lemma 5. After our theoretical exposition, it seems appropriate to shift gears and consider some real data. We are, in particular, interested in acoustic data of speech. Digitized speech sampled at a rate of 16 KHz is considered. A frame consists of a segment of speech of duration 25 msec and produces thirty-nine dimensional acoustic cepstral vectors via the following process, which is standard in the speech recognition literature. Frames are advanced every 10 msec to obtain succeeding acoustic vectors.




First, magnitudes of the discrete Fourier transform of the samples of speech data in a frame are considered on a logarithmically warped frequency scale. Next, these amplitude values themselves are transformed to a logarithmic scale, and subsequently a rotation in the form of a discrete cosine transform is applied. The first thirteen components of the resulting vector are retained. First and second order differences of the sequence of vectors so obtained are then appended to the original vector to obtain the thirty-nine dimensional cepstral acoustic vectors.
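A minimal sketch of this front end is given below. It is only an illustration of the chain |DFT| → log-warped band energies → log → DCT → append differences; the band layout, filter count and window length are made-up placeholder values, not the recognizer's actual parameters:

```python
import cmath, math

def frame_to_cepstrum(frame, n_ceps=13, n_bands=24):
    """Sketch of the front end: |DFT| -> log-warped band energies
    -> log -> DCT, keeping the first n_ceps coefficients."""
    n = len(frame)
    # magnitudes of the discrete Fourier transform
    mags = [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]
    # pool magnitudes into bands on a logarithmically warped frequency axis
    edges = [int((n // 2) ** (b / n_bands)) for b in range(n_bands + 1)]
    bands = []
    for b in range(n_bands):
        lo, hi = edges[b], max(edges[b] + 1, edges[b + 1])
        bands.append(sum(mags[lo:hi]) + 1e-8)   # guard before the log
    logb = [math.log(e) for e in bands]
    # discrete cosine transform ("rotation"), keep the first n_ceps components
    return [sum(logb[j] * math.cos(math.pi * i * (j + 0.5) / n_bands)
                for j in range(n_bands)) for i in range(n_ceps)]

def append_differences(ceps_seq):
    """Append first and second order differences -> 39-dim vectors."""
    out = []
    for t in range(len(ceps_seq)):
        c = ceps_seq[t]
        p = ceps_seq[max(t - 1, 0)]
        nx = ceps_seq[min(t + 1, len(ceps_seq) - 1)]
        d1 = [(a - b) / 2.0 for a, b in zip(nx, p)]
        d2 = [a - 2 * b + cc for a, b, cc in zip(nx, c, p)]
        out.append(c + d1 + d2)
    return out
```

Each frame of the sequence yields a 13-dimensional cepstrum, and appending the two difference streams gives the 39-dimensional vectors described above.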




As in supervised learning tasks, we assume that these vectors are labeled according to the basic sounds they correspond to. In fact, the set of 46 phonemes is subdivided into a set of one hundred twenty-six different variants, each corresponding to a &#8220;state&#8221; in the hidden Markov model used for recognition. They are further subdivided into more elemental sounds called allophones by using the method of decision trees, depending on the context in which they occur. See Frederick Jelenik, &#8220;Statistical Methods for Speech Recognition&#8221;, MIT Press, 1997; L. R. Bahl, P. V. Desouza, P. S. Gopaikrishnan, M. A. Picheny, &#8220;Context dependent vector quantization for continuous speech recognition&#8221;, Proceedings of IEEE Int. Conf. on Acoustics Speech and Signal Processing, pp. 632-635, 1993; and Leo Breiman, &#8220;Classification and Regression Trees&#8221;, Wadsworth International, Belmont, Calif., 1983.




We chose an allophone at random and made plots of L_σ(μ,α) for each dimension to determine the best value of α. The particular allophone we chose corresponded to the phone AX as in the word &#8220;MAXIMUM&#8221; pronounced M AB K S A MUM. These plots are shown in FIGS. 4 to 42. Looking at the figures, we observe that L_σ(μ,α) is highly unsymmetric. Moreover, it appears that the first and second differences (dimensions 13 to 25, FIGS. 17 to 29, and dimensions 26 to 38, FIGS. 30 to 42) generally prefer lower values of α than the unappended cepstral vector.




Table 1 shows the optimal values of α for each dimension corresponding to the phonemes AX, F, N and T. These phonemes were chosen so as to represent four distinct sounds corresponding to a vowel, a fricative, a nasal and a stop. The optimal values of α were extracted from two-dimensional surface plots of L_σ(μ,α) for the respective allophones and dimensions.












TABLE 1
Optimal choice of α for a vowel, a fricative, a nasal and a stop

                       Allophone
               513      1300      2300      3100
                       Phoneme
dimension      AX        F         N         T
 0            2.01      1.60      1.58      1.39
 1            2.78      1.92      1.77      1.60
 2            1.84      1.92      1.86      1.77
 3            1.80      1.83      1.78      1.83
 4            1.86      1.90      2.04      1.76
 5            2.02      1.80      1.87      1.80
 6            2.59      1.84      1.77      1.86
 7            1.90      1.98      1.89      1.89
 8            1.91      2.05      1.82      1.73
 9            2.64      1.95      2.13      1.73
10            1.86      2.00      1.78      1.92
11            2.26      1.82      2.02      1.94
12            2.22      2.23      1.76      1.70
13            1.40      1.40      1.04      1.50
14            1.58      1.76      1.43      1.46
15            1.52      1.88      1.22      1.59
16            1.88      1.85      1.24      1.56
17            1.48      1.77      1.29      1.66
18            1.40      1.87      1.46      1.78
19            1.58      1.90      1.44      1.86
20            1.48      1.89      1.50      1.87
21            1.44      2.02      1.54      1.76
22            1.54      1.98      1.55      1.94
23            1.53      1.92      1.63      1.86
24            1.61      1.87      1.51      1.73
25            1.30      1.34      0.96      2.26
26            1.46      1.43      0.78      1.48
27            1.66      2.00      1.45      1.97
28            1.57      1.87      1.39      1.93
29            1.33      1.73      1.54      2.14
30            1.42      1.82      1.42      1.94
31            1.46      1.99      1.66      1.90
32            1.49      1.91      1.55      1.90
33            1.54      1.94      1.67      1.83
34            1.46      1.93      1.52      1.82
35            1.52      1.86      1.63      1.74
36            1.55      1.90      1.67      1.85
37            1.57      1.82      1.58      1.87
38            1.13      1.62      1.24      1.84

Numerical Computations of α, μ and σ

As seen from the foregoing discussion, the optimal choice for σ is given by (21) for a fixed μ and α. Once this choice of σ is made, the average log-likelihood is given by (19). Therefore, we need only worry about finding μ and α. Since we are unable to do this analytically, we aim at finding a numerically convergent iteration scheme that converges to the optimal values. For the value of μ, we use the same approach as in Basu and Micchelli, supra. Differentiating L_σ(μ,α) with respect to μ we get











$$\frac{\partial L_{\sigma}(\mu,\alpha)}{\partial\mu}=\frac{\sum_{i=1}^{N}\left|x_i-\mu\right|^{\alpha-2}(x_i-\mu)}{\sum_{i=1}^{N}\left|x_i-\mu\right|^{\alpha}}$$

Equating this to zero provides the stationary equation for μ, which we can rewrite as









$$\mu=\frac{\sum_{i=1}^{N}\left|x_i-\mu\right|^{\alpha-2}x_i}{\sum_{i=1}^{N}\left|x_i-\mu\right|^{\alpha-2}}\qquad(23)$$












Considering the μ's on the right hand side as old values, we compute updated values of μ from (23). Despite the analysis performed in Basu and Micchelli, supra, the iterative formula for computing μ so obtained is seen to converge numerically.
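Equation (23) can be sketched directly as a fixed-point iteration; for α=2 the update reduces to the sample mean in a single step, which gives a convenient sanity check. The small epsilon guard against points with x_i=μ is our own addition, not part of the patent's formula:

```python
def update_mu(xs, mu, alpha, eps=1e-12):
    # one application of equation (23): a weighted mean with
    # weights |x_i - mu|^(alpha - 2)
    ws = [max(abs(x - mu), eps) ** (alpha - 2.0) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def solve_mu(xs, alpha, mu0=0.0, iters=200):
    # treat mu on the right hand side as the old value and iterate
    mu = mu0
    for _ in range(iters):
        mu = update_mu(xs, mu, alpha)
    return mu
```

For α&lt;2 the weights grow as points approach μ, so the iterate is pulled toward median-like locations, which matches the heavier-than-Gaussian tails the α-density is meant to model.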




It remains to find the optimal choice of α. To this end, we consider a portfolio of iterative methods, each of which is tested numerically. Clearly, we know a priori that α>0. Any iteration method considered ought to guarantee that this is so regardless of the starting value of α. Differentiating L_σ(μ,α) with respect to α we obtain the stationary equation for α.











$$\log(\alpha)+\psi\!\left(\frac{1}{\alpha}+1\right)+\log(S(\alpha))-\alpha\,\frac{S'(\alpha)}{S(\alpha)}=0\qquad(24)$$


where

$$S(\alpha)=\frac{1}{N}\sum_{i=1}^{N}\left|x_i-\mu\right|^{\alpha}.$$


Here, we have used the identity

$$\psi\!\left(\frac{1}{\alpha}+1\right)=\psi\!\left(\frac{1}{\alpha}\right)+\alpha.$$


One way of ensuring that α is positive is by isolating log α on one side and exponentiating both sides to get

$$\alpha=\frac{\exp\!\left(\alpha\,\frac{S'(\alpha)}{S(\alpha)}-\psi\!\left(\frac{1}{\alpha}+1\right)\right)}{S(\alpha)}.\qquad(25)$$


Rewriting this gives the alternate iteration

$$\alpha=\alpha^{2}\,S(\alpha)\,\exp\!\left(-\alpha\,\frac{S'(\alpha)}{S(\alpha)}+\psi\!\left(\frac{1}{\alpha}+1\right)\right).\qquad(26)$$













Using the formula ψ(1/α+1)=ψ(1/α)+α, we isolate α as follows:






$$\alpha=\alpha\,\frac{S'(\alpha)}{S(\alpha)}-\log S(\alpha)-\log(\alpha)-\psi\!\left(\frac{1}{\alpha}\right)$$







The right hand side of the above equation is not necessarily positive. To make it positive, we square both sides and divide by α. This gives the iteration formula for α









$$\alpha=\frac{1}{\alpha}\left(\alpha\,\frac{S'(\alpha)}{S(\alpha)}-\log S(\alpha)-\log(\alpha)-\psi\!\left(\frac{1}{\alpha}\right)\right)^{2}.\qquad(27)$$













Modifying this equation, we also get another iteration formula, namely,

$$\alpha=\alpha^{3}\left(\alpha\,\frac{S'(\alpha)}{S(\alpha)}-\log S(\alpha)-\log(\alpha)-\psi\!\left(\frac{1}{\alpha}\right)\right)^{-2}.\qquad(28)$$
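A runnable sketch of iteration (28) needs S(α), its derivative S′(α)=(1/N)Σ|x_i−μ|^α log|x_i−μ|, and the digamma function ψ, which the Python standard library does not provide; the version below uses the standard recurrence-plus-asymptotic-series approximation. This is our illustrative implementation, not the patent's code:

```python
import math

def digamma(x):
    # psi(x): recurrence up to x >= 6, then the usual asymptotic series
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def s_and_deriv(xs, mu, alpha, eps=1e-12):
    # S(alpha) = (1/N) sum |x_i - mu|^alpha and its derivative in alpha
    n = len(xs)
    s = sp = 0.0
    for x in xs:
        a = max(abs(x - mu), eps)
        p = a ** alpha
        s += p
        sp += p * math.log(a)
    return s / n, sp / n

def iterate_alpha(xs, mu, alpha=1.0, iters=60):
    # equation (28): alpha <- alpha^3 (alpha S'/S - log S - log alpha - psi(1/alpha))^(-2)
    for _ in range(iters):
        s, sp = s_and_deriv(xs, mu, alpha)
        g = alpha * sp / s - math.log(s) - math.log(alpha) - digamma(1.0 / alpha)
        alpha = alpha ** 3 / (g * g)
    return alpha
```

Because the update is α³ divided by a square, the iterate stays positive from any positive starting value, which is the point of formulas (25)-(28).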













Finally, we wish to test Newton's method for locating zeros of an equation. However, Newton's method does not guarantee positivity. To circumvent this problem we introduce the temporary variable α=β². We are solving the equation f(α)=0, where







$$f(\alpha)=\psi\!\left(1+\frac{1}{\alpha}\right)+\log(\alpha)+\log S(\alpha)-\alpha\,\frac{S'(\alpha)}{S(\alpha)}$$







Classical Newton's iteration is

$$\hat{\alpha}=\alpha-\frac{f(\alpha)}{f'(\alpha)},$$










where {circumflex over (α)} is the updated estimate of the root. As we introduce α=β² we consider the function g(β)=f(β²). Newton's iteration applied to g(β) gives us

$$\hat{\beta}=\beta-\frac{g(\beta)}{g'(\beta)}.$$












Converting this to a formula involving α we get











$$\hat{\alpha}=\left(\sqrt{\alpha}-\frac{f(\alpha)}{2\,\sqrt{\alpha}\,f'(\alpha)}\right)^{2}\qquad(29)$$













where

$$f'(\alpha)=-\frac{1}{\alpha^{2}}\,\psi'\!\left(1+\frac{1}{\alpha}\right)+\frac{1}{\alpha}-\alpha\left(\frac{S''(\alpha)}{S(\alpha)}-\frac{\left(S'(\alpha)\right)^{2}}{\left(S(\alpha)\right)^{2}}\right)$$














We must now evaluate this portfolio of five formulas on a numerical basis. We initially evaluate the methods using synthetically generated data whose distribution is known to be Gaussian with mean zero and variance one. On such data, the optimal choices of μ, σ and α should approach the values μ=0, σ=1 and α=2 as the number of data points approaches infinity. Using 10,000 data points, we obtained the sequence of approximants in Table 2 from the initial values μ=1.0 and α=1.0. Note that we purposely picked the initial values away from the known optimal values.
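Newton's iteration (29) can be sketched the same way. To keep the sketch short we substitute a central difference for the analytic f′(α) given above; this is our simplification, not the patent's. As noted below, convergence is only local, so the example starts near the optimum:

```python
import math

def digamma(x):
    # psi(x): recurrence to x >= 6, then the asymptotic series
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def f(alpha, xs, mu, eps=1e-12):
    # f(alpha) = psi(1 + 1/alpha) + log(alpha) + log S(alpha) - alpha S'(alpha)/S(alpha)
    n = len(xs)
    s = sp = 0.0
    for x in xs:
        a = max(abs(x - mu), eps)
        p = a ** alpha
        s += p
        sp += p * math.log(a)
    s, sp = s / n, sp / n
    return digamma(1.0 + 1.0 / alpha) + math.log(alpha) + math.log(s) - alpha * sp / s

def newton_alpha(xs, mu, alpha, iters=20, h=1e-5):
    # equation (29): Newton on g(beta) = f(beta^2), so alpha = beta^2 stays positive
    for _ in range(iters):
        fp = (f(alpha + h, xs, mu) - f(alpha - h, xs, mu)) / (2.0 * h)
        if abs(fp) < 1e-12:
            break
        root = math.sqrt(alpha)
        alpha = (root - f(alpha, xs, mu) / (2.0 * root * fp)) ** 2
    return alpha
```

The squared update on β=√α is what keeps the iterate positive even when the raw Newton step would overshoot below zero.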












TABLE 2
Numerical iterates of equations (26) and (28)

                    equation (26)                       equation (28)
iteration     μ         α        L_σ(μ, α)        μ         α        L_σ(μ, α)
 0         1.0000    1.0000    −1.845198       1.0000    1.0000    −1.845198
 1         0.8590    1.1920    −1.737647       0.8590    1.4715    −1.714279
 2         0.6230    1.3924    −1.598179       0.3935    1.7826    −1.490178
 3         0.3607    1.5564    −1.483843       0.0823    1.8616    −1.419558
 4         0.1594    1.6779    −1.431333       0.0095    1.9117    −1.415738
 5         0.0511    1.7750    −1.418282      −0.0015    1.9457    −1.415540
 6         0.0105    1.8504    −1.416146      −0.0027    1.9666    −1.415494
 7        −0.0001    1.9043    −1.415705      −0.0030    1.9789    −1.415479
 8        −0.0022    1.9400    −1.415556      −0.0031    1.9862    −1.415473
 9        −0.0027    1.9625    −1.415501      −0.0031    1.9904    −1.415471
10        −0.0029    1.9762    −1.415481      −0.0032    1.9929    −1.415471
11        −0.0031    1.9854    −1.415474      −0.0032    1.9944    −1.415471
12        −0.0031    1.9894    −1.415472      −0.0032    1.9952    −1.415471
13        −0.0032    1.9923    −1.415471      −0.0032    1.9957    −1.415471
14        −0.0032    1.9940    −1.415471      −0.0032    1.9960    −1.415471
15        −0.0032    1.9950    −1.415471      −0.0032    1.9962    −1.415471
16        −0.0032    1.9956    −1.415471      −0.0032    1.9963    −1.415471
17        −0.0032    1.9959    −1.415471      −0.0032    1.9963    −1.415471
18        −0.0032    1.9961    −1.415471      −0.0032    1.9964    −1.415471
19        −0.0032    1.9962    −1.415471      −0.0032    1.9964    −1.415471
The asymptotic value of the maximum log-likelihood is E(log G(X)) where G(·) is a Gaussian with variance one and mean equal to zero and X is a normal random variable with distribution G(·). Computing this value yields the quantity

$$-\frac{1}{2}\,\log(2\pi e)$$

which equals −1.41893. L_σ(μ,α) may, of course, be larger than this value for a particular set of data, and the deviation will indicate overtraining.




Only two of the five methods converge from the starting point μ=1, α=1. The first twenty approximants are shown in Table 2.




Disappointingly enough, Newton's method did not converge from an arbitrary starting point. However, it converges locally. We initialized μ to be

$$\mu=\frac{1}{N}\sum_{i=1}^{N}x_i$$

and α=1. With this choice, Newton's method converged rapidly. See Table 3.












TABLE 3
Numerical iterates of equations (26), (28) and (29) for a near optimal starting point (synthetic data).

                equation (26)               equation (28)               equation (29)
Iteration    μ        α    L_σ(μ, α)     μ        α    L_σ(μ, α)     μ        α    L_σ(μ, α)
 0        −0.003   1.00   −1.46       −0.003   1.00   −1.46       −0.003   1.00   −1.46
 1        −0.001   1.14   −1.44       −0.001   1.35   −1.43       −0.001   5.30   −1.51
 2        −0.000   1.30   −1.43        0.000   1.59   −1.42       −0.084   2.97   −1.43
 3         0.001   1.46   −1.42        0.000   1.75   −1.41       −0.064   2.19   −1.41
 4         0.002   1.60   −1.42       −0.000   1.85   −1.41       −0.018   2.00   −1.41
 5         0.001   1.72   −1.41       −0.001   1.91   −1.41       −0.003   1.99   −1.41
 6        −0.000   1.81   −1.41       −0.002   1.94   −1.41       −0.003   1.99   −1.41
 7        −0.001   1.88   −1.41       −0.002   1.96   −1.41       −0.003   1.99   −1.41
 8        −0.002   1.92   −1.41       −0.003   1.97   −1.41       −0.003   1.99   −1.41
 9        −0.002   1.95   −1.41       −0.003   1.98   −1.41       −0.003   1.99   −1.41
10        −0.002   1.97   −1.41       −0.003   1.99   −1.41       −0.003   1.99   −1.41
11        −0.003   1.98   −1.41       −0.003   1.99   −1.41       −0.003   1.99   −1.41
12        −0.003   1.98   −1.41       −0.003   1.99   −1.41       −0.003   1.99   −1.41
13        −0.003   1.99   −1.41       −0.003   1.99   −1.41       −0.003   1.99   −1.41
14        −0.003   1.99   −1.41       −0.003   1.99   −1.41       −0.003   1.99   −1.41
The remaining two methods appear not to converge regardless of the proximity of the starting value to the optimum. We demonstrate this by choosing the start value α=2 and

$$\mu=\frac{1}{N}\sum_{i=1}^{N}x_i.$$
Observe that both methods diverge from the optimal value, see Table 4.












TABLE 4
Numerical iterates of equations (25) and (27)

                  equation (25)                       equation (27)
iteration     μ         α         L_σ(μ,α)        μ         α        L_σ(μ,α)
 0         −0.0032    2.0000    −1.415471     −0.0032    2.0000    −1.415471
 1         −0.0032    2.0015    −1.415471     −0.0032    2.0015    −1.415471
 2         −0.0033    2.0036    −1.415472     −0.0033    2.0036    −1.415472
 3         −0.0033    2.0066    −1.415473     −0.0033    2.0066    −1.415473
 4         −0.0033    2.0109    −1.415476     −0.0033    2.0108    −1.415476
 5         −0.0033    2.0169    −1.415481     −0.0033    2.0169    −1.415481
 6         −0.0034    2.0256    −1.415492     −0.0034    2.0254    −1.415492
 7         −0.0034    2.0380    −1.415515     −0.0034    2.0376    −1.415514
 8         −0.0035    2.0559    −1.415560     −0.0035    2.0549    −1.415557
 9         −0.0037    2.0819    −1.415653     −0.0037    2.0796    −1.415644
10         −0.0039    2.1200    −1.415845     −0.0039    2.1151    −1.415817
11         −0.0042    2.1769    −1.416249     −0.0041    2.1662    −1.416163
12         −0.0046    2.2639    −1.417116     −0.0045    2.2404    −1.416854
13         −0.0052    2.4023    −1.419039     −0.0050    2.3493    −1.418230
14         −0.0062    2.6359    −1.423528     −0.0058    2.5116    −1.420964
15         −0.0078    3.0724    −1.434908     −0.0069    2.7583    −1.426381
16         −0.0108    4.0568    −1.468221     −0.0084    3.1443    −1.437055
17         −0.0204    7.4348    −1.596076     −0.0109    3.7725    −1.457929
18         −0.1085   54.4963    −2.057873     −0.0154    4.8505    −1.498251
19          4.0544   15407.7    −INF          −0.0265    6.8322    −1.573973
It is well and good that our methods converge for synthetic data. However, in the real world things are not as nice. We therefore test our remaining working methods on real data. We used the same data we previously used to maximize the log-likelihood as a function of μ and σ via graphical methods. Firstly, we run the three methods on leaf 513, dimension 0, using the starting point α=2 and

$$\mu=\frac{1}{N}\sum_{i=1}^{N}x_i.$$

See Table 5.












TABLE 5
Numerical iterates of equations (26), (28) and (29) for a near optimal starting point (real data)

               equation (26)            equation (28)            equation (29)
Iteration   μ      α    L_σ(μ, α)    μ      α     L_σ(μ, α)   μ      α     L_σ(μ, α)
 0        80.5   2.0   −4.65      80.5   2.00   −4.65      80.5   2.00   −4.65
 1        80.5   1.9   −4.65      80.5   2.01   −4.65      80.5   2.00   −4.65
 2        80.5   1.9   −4.65      80.5   2.01   −4.65      80.5   2.00   −4.65
 3        80.6   1.9   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
 4        80.6   1.9   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
 5        80.6   1.9   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
 6        80.6   1.9   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
 7        80.6   1.8   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
 8        80.6   1.8   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
 9        80.6   1.7   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
10        80.7   1.6   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
11        80.7   1.5   −4.65      80.5   2.01   −4.65      80.5   2.01   −4.65
12        80.8   1.4   −4.66      80.5   2.01   −4.65      80.5   2.01   −4.65
13        80.8   1.2   −4.67      80.5   2.01   −4.65      80.5   2.01   −4.65
14        80.9   1.1   −4.68      80.5   2.01   −4.65      80.5   2.01   −4.65
15        80.9   0.9   −4.70      80.5   2.01   −4.65      80.5   2.01   −4.65
16        81     0.8   −4.73      80.5   2.01   −4.65      80.5   2.01   −4.65
17        81     0.7   −4.75      80.5   2.01   −4.65      80.5   2.01   −4.65
18        81     0.6   −4.79      80.5   2.01   −4.65      80.5   2.01   −4.65
19        81     0.5   −4.82      80.5   2.01   −4.65      80.5   2.01   −4.65
To further test the remaining working methods, we ran the experiments for all dimensions of leaf 513, computed the optimal values of α and compared them with the previous results of the graphical method. All methods seem to converge to nearby values. See Table 6.












TABLE 6
Optimal choice of α for AX via graphical method,
Newton's method (29) and equation (28) using 20 iterations

dim     Graph    Newton    eq. (28)
 0      2.01     2.012     2.012
 1      2.78     2.787     2.790
 2      1.84     1.838     1.838
 3      1.80     1.803     1.803
 4      1.86     1.867     1.867
 5      2.02     2.022     2.022
 6      2.59     2.530     2.596
 7      1.90     1.909     1.909
 8      1.91     1.913     1.913
 9      2.64     2.633     2.643
10      1.86     1.866     1.866
11      2.26     2.265     2.265
12      2.22     2.223     2.223
13      1.40     1.402     1.403
14      1.58     1.591     1.591
15      1.52     1.521     1.521
16      1.88     1.890     1.889
17      1.48     1.480     1.480
18      1.40     1.402     1.402
19      1.58     1.577     1.577
20      1.48     1.488     1.488
21      1.44     1.445     1.445
22      1.54     1.543     1.543
23      1.53     1.535     1.535
24      1.61     1.614     1.614
25      1.30     1.305     1.309
26      1.46     1.474     1.474
27      1.66     1.663     1.663
28      1.57     1.565     1.565
29      1.33     1.328     1.328
30      1.42     1.422     1.422
31      1.46     1.466     1.466
32      1.49     1.488     1.488
33      1.54     1.547     1.547
34      1.46     1.460     1.460
35      1.52     1.526     1.526
36      1.55     1.557     1.557
37      1.57     1.572     1.572
38      1.13     1.153     1.157
Applications to Speech Recognition




We have now considered at length one-dimensional α-densities. For applications to speech recognition in particular, the data is multidimensional and of a nature so complex as not to be accurately described by a single α-density. In Basu and Micchelli, supra, various mixtures of multidimensional α-densities were introduced and successfully applied to speech recognition. However, the value of α was fixed a priori and left constant over the mixture components. We will discuss how the individual mixture components can have differing values of α and how one goes about finding the optimal choices of α.




Let us describe how mixtures of multidimensional α-densities are constructed. The individual components are given by











$$P(x\mid\lambda_l)=\frac{\rho_d(\alpha_l)}{\prod_{i=1}^{d}\sqrt{\sigma_i^l}}\,\exp\!\left\{-\left(\gamma_d(\alpha_l)\sum_{i=1}^{d}\frac{\left(x_i-\mu_i^l\right)^{2}}{\sigma_i^l}\right)^{\alpha_l/2}\right\},\qquad(30)$$

where

$$\rho_d(\alpha)=\frac{\alpha}{2}\,\frac{\Gamma\!\left(\frac{d}{2}\right)\Gamma\!\left(\frac{d+2}{\alpha}\right)^{d/2}}{(d\pi)^{d/2}\,\Gamma\!\left(\frac{d}{\alpha}\right)^{\frac{d}{2}+1}}$$


and

$$\gamma_d(\alpha)=\frac{\Gamma\!\left(\frac{d+2}{\alpha}\right)}{d\,\Gamma\!\left(\frac{d}{\alpha}\right)}$$



and λ_l denotes the collection of parameters α_l, μ^l and σ^l, where l=1, . . . ,m. The mixture density is now given by







$$P(x\mid\Lambda,w)=\sum_{l=1}^{m}w_l\,p(x\mid\lambda_l).$$

The log-likelihood of a data set {x^k}, k=1, . . . ,N, is, thus, given as

$$\log L=\sum_{k=1}^{N}\log\!\left(\sum_{l=1}^{m}w_l\,p(x^k\mid\lambda_l)\right).$$


We are ultimately interested in maximizing log L. A desirable property of an iteration scheme would, therefore, be to increase the value of log L. We denote old parameters by &#8220;hatted&#8221; quantities and mimic the EM philosophy as expounded in Christopher M. Bishop, Neural Networks for Pattern Recognition, Cambridge University Press, 1997. We have











$$\log L-\log\hat{L}=\sum_{k=1}^{N}\log\!\left\{\sum_{l=1}^{m}\frac{\hat{w}_l\,p(x^k\mid\hat{\lambda}_l)}{\sum_{j=1}^{m}\hat{w}_j\,p(x^k\mid\hat{\lambda}_j)}\cdot\frac{w_l\,p(x^k\mid\lambda_l)}{\hat{w}_l\,p(x^k\mid\hat{\lambda}_l)}\right\}\geq\sum_{k=1}^{N}\sum_{l=1}^{m}\frac{\hat{w}_l\,p(x^k\mid\hat{\lambda}_l)}{\sum_{j=1}^{m}\hat{w}_j\,p(x^k\mid\hat{\lambda}_j)}\,\log\!\left\{\frac{w_l\,p(x^k\mid\lambda_l)}{\hat{w}_l\,p(x^k\mid\hat{\lambda}_l)}\right\}\qquad(31)$$









where the well-known Jensen's inequality (see Christopher M. Bishop, supra) log Σ(b_i a_i)≧Σ b_i log a_i, where Σ b_i=1 with b_i≧0 and a_i≧0, arising from the concavity of the logarithmic function, has been used in the last step with

$$b_l=\hat{w}_l\,p(x^k\mid\hat{\lambda}_l)\left(\sum_{j=1}^{m}\hat{w}_j\,p(x^k\mid\hat{\lambda}_j)\right)^{-1}\quad\text{for }l=1,\ldots,m.$$
We regroup equation (31) into three types of terms:

$$\log L-\log\hat{L}\geq A+B+C$$

where

$$A(w,\hat{w},\hat{\Lambda})=\sum_{l=1}^{m}A_l\,\log(w_l)$$

$$B(\Lambda,\hat{w},\hat{\Lambda})=\sum_{k=1}^{N}\sum_{l=1}^{m}A_{lk}\,\log p(x^k\mid\lambda_l)$$

$$C(\hat{w},\hat{\Lambda})=-\sum_{k=1}^{N}\sum_{l=1}^{m}A_{lk}\,\log\!\left(\hat{w}_l\,p(x^k\mid\hat{\lambda}_l)\right)$$

and

$$A_{lk}=\frac{\hat{w}_l\,p(x^k\mid\hat{\lambda}_l)}{\sum_{j=1}^{m}\hat{w}_j\,p(x^k\mid\hat{\lambda}_j)}\quad\text{and}\quad A_l=\sum_{k=1}^{N}A_{lk}.$$


Note that the term C only depends on old parameters, and A depends only on w_l, l=1, . . . ,m and old parameters, whereas B depends on Λ and old parameters. Clearly, log L−log {circumflex over (L)}=0 when the old parameters and the new parameters are equal. Maximizing log L−log {circumflex over (L)} for a particular parameter while the others are fixed guarantees that the log-likelihood does not decrease. This can be done explicitly for w_l, l=1, . . . ,m subject to the constraint

$$\sum_{l=1}^{m}w_l=1.$$








Using Lagrange multipliers we arrive at the following equation.










$$\frac{A_l}{w_l}-\Theta=0,\quad l=1,\ldots,m$$







where Θ is the Lagrange multiplier. Solving for Θ we get

$$\Theta=\sum_{l=1}^{m}A_l=N,$$

which yields w_l=(1/N)A_l. This was done in Basu and Micchelli, supra. Similarly, one may try to maximize with respect to μ^l and σ^l, but this cannot be done explicitly. The stationary equation is available in Basu and Micchelli, supra, and the update formulas
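The bookkeeping behind A_lk and the weight update w_l=(1/N)A_l can be sketched for any component density; a placeholder one-dimensional Gaussian stands in here for the α-density (30), since only the mixture mechanics are being illustrated:

```python
import math

def gauss_pdf(x, lam):
    # placeholder component density; lam = (mean, variance)
    mu, var = lam
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def responsibilities(xs, weights, params, pdf):
    # A[l][k] = w_l p(x_k | lam_l) / sum_j w_j p(x_k | lam_j)
    A = [[0.0] * len(xs) for _ in weights]
    for k, x in enumerate(xs):
        vals = [w * pdf(x, lam) for w, lam in zip(weights, params)]
        tot = sum(vals)
        for l, v in enumerate(vals):
            A[l][k] = v / tot
    return A

def update_weights(A):
    # w_l = (1/N) A_l with A_l = sum_k A_lk
    n = len(A[0])
    return [sum(row) / n for row in A]
```

Because each column of A sums to one, the updated weights automatically satisfy the constraint Σ w_l = 1 without renormalization.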










$$\mu_i^l=\frac{\sum_{k=1}^{N}\left(\sum_{j=1}^{d}\frac{\left(x_j^k-\hat{\mu}_j^l\right)^{2}}{\hat{\sigma}_j^l}\right)^{\hat{\alpha}_l/2-1}A_{lk}\,x_i^k}{\sum_{k=1}^{N}\left(\sum_{j=1}^{d}\frac{\left(x_j^k-\hat{\mu}_j^l\right)^{2}}{\hat{\sigma}_j^l}\right)^{\hat{\alpha}_l/2-1}A_{lk}}\qquad(32)$$

and

$$\sigma_i^l=\frac{\hat{\alpha}_l\,\gamma_d(\hat{\alpha}_l)^{\hat{\alpha}_l/2}\sum_{k=1}^{N}\left(\sum_{j=1}^{d}\frac{\left(x_j^k-\hat{\mu}_j^l\right)^{2}}{\hat{\sigma}_j^l}\right)^{\hat{\alpha}_l/2-1}A_{lk}\left(x_i^k-\hat{\mu}_i^l\right)^{2}}{A_l}\qquad(33)$$

for i=1, . . . ,d and l=1, . . . ,m are suggested. It remains to construct update formulas for α_l, l=1, . . . ,m. We have







$$\log p(x\mid\lambda_l)=-\frac{1}{2}\left(\sum_{i=1}^{d}\log\sigma_i^l\right)+\log\rho_d(\alpha_l)-\left(\gamma_d(\alpha_l)\sum_{i=1}^{d}\frac{\left(x_i-\mu_i^l\right)^{2}}{\sigma_i^l}\right)^{\alpha_l/2}$$

which makes it possible to separate the α_l variables:

$$B(\Lambda,\hat{w},\hat{\Lambda})=\sum_{l=1}^{m}B_l(\Lambda,\hat{w},\hat{\Lambda})$$



where

$$B_l(\Lambda,\hat{w},\hat{\Lambda})=\sum_{k=1}^{N}A_{lk}\left(-\frac{1}{2}\left(\sum_{i=1}^{d}\log\sigma_i^l\right)+\log\rho_d(\alpha_l)-\gamma_d(\alpha_l)^{\alpha_l/2}\left(\sum_{i=1}^{d}\frac{\left(x_i^k-\mu_i^l\right)^{2}}{\sigma_i^l}\right)^{\alpha_l/2}\right).$$

To maximize log L−log {circumflex over (L)} with respect to α_l, it therefore suffices to maximize B_l(Λ,ŵ,{circumflex over (Λ)}) with respect to α_l. This can be done numerically, as was done in the previous section. However, we decide to maximize B_l(Λ,ŵ,{circumflex over (Λ)}) by brute force. Note that this can be done without incurring much computational cost. Assuming that we wish to compute B_l(Λ,ŵ,{circumflex over (Λ)}) for α_l∈{α_min, α_min+Δα, . . . ,α_min+N_α Δα=α_max}, we note that the greatest computational cost is in computing







$$S_{kl} \;=\; \sum_{i=1}^{d}\frac{(x_i^k-\mu_i^l)^2}{\sigma_i^l}$$

for k=1,2, . . . ,N and l=1, . . . ,m.




Once (S_kl)^Δα and (S_kl)^α_min have been computed, the quantities (S_kl)^(α_min+jΔα) can be easily computed from the corresponding value for j−1 by one single multiplication. In any case, as we are maximizing log L − log L̂ over a discrete set of α's that contains the previous value of α, we are guaranteed that the log-likelihood is nondecreasing.
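The incremental-multiplication trick can be sketched as follows. This is an illustrative reading, assuming NumPy; the exponent convention follows the B_l expression (powers of α/2, so each grid step multiplies by S^(Δα/2)), and `rho_d`/`gamma_d` are again placeholder callables for the normalizing constants.

```python
import numpy as np

def best_alpha(X, mu, sigma, A_l, rho_d, gamma_d,
               alpha_min=0.2, delta=0.05, n_steps=60):
    """Brute-force search for the alpha maximizing B_l over the grid
    alpha_min, alpha_min + delta, ..., alpha_min + n_steps*delta.

    The dominant cost, S_kl = sum_i (x_i^k - mu_i^l)^2 / sigma_i^l, is
    computed once; each successive power S^(alpha/2) is then obtained
    from the previous one by a single elementwise multiplication.
    """
    S = np.sum((X - mu) ** 2 / sigma, axis=1)          # S_kl, shape (N,)
    const_sig = -0.5 * np.sum(np.log(sigma))
    power = S ** (alpha_min / 2)                        # S^(alpha_min/2)
    step = S ** (delta / 2)                             # S^(delta/2)
    best, best_a = -np.inf, alpha_min
    for j in range(n_steps + 1):
        a = alpha_min + j * delta
        val = np.sum(A_l * (const_sig + np.log(rho_d(a))
                            - gamma_d(a) ** (a / 2) * power))
        if val > best:
            best, best_a = val, a
        power = power * step                            # advance to next grid point
    return best_a
```

Because the grid can be chosen to contain the previous α, returning the grid maximum never decreases the log-likelihood.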




Numerical Experiments




Two measurable quantities to evaluate the above technique for optimizing α are average log-likelihood and performance of the speech recognizer. We deal with the former first. The data used is again from leaf 513. We computed the log-likelihood after each iteration with and without using the update formula for α. This we did for two different starting values of α. We found that the likelihood gain was considerable for α=2, but not for α=1, which indicates that the update formula for α gives consistent improvement in log-likelihood and is more robust than the other method. See FIGS. 43, 44 and 45.




As the ultimate objective in speech recognition is to differentiate different sounds, we decided to evaluate the discriminatory power of our density estimates. To this end, we evaluated the densities for all allophones (there are approximately 3,500 of them) and compared the density of the “correct” allophone with all of the others. If the correct allophone yielded a higher likelihood value than all the other allophones, we achieved our goal. We produced frequencies for the correct leaf being among the top 1, 10, 100 and 1000 highest densities. These numbers are displayed in Table 7. As can be seen, the discriminatory power of the scheme with updated α's is significantly better than without updating α.
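An outline of this ranking experiment: the helper below assumes a precomputed score matrix of per-leaf log-likelihoods (rows: test vectors, columns: leaves) and is independent of the particular density model; the function name and array layout are illustrative only.

```python
import numpy as np

def topk_counts(scores, correct, ks=(1, 10, 100, 1000)):
    """For each test vector, rank all leaves by likelihood score and count
    how often the correct leaf falls within the top k, for each k.

    scores  : (n_vectors, n_leaves) log-likelihood of each leaf
    correct : (n_vectors,) index of the true leaf for each vector
    """
    # rank of the correct leaf = 1 + number of leaves scoring strictly higher
    correct_scores = scores[np.arange(len(correct)), correct]
    ranks = 1 + np.sum(scores > correct_scores[:, None], axis=1)
    return {k: int(np.sum(ranks <= k)) for k in ks}
```

Applied per leaf to 100 sampled vectors, this yields rows of the kind shown in Table 7.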




For a typical speech recognition system using 46 phonemes and approximately 3,000 basic sounds (leaves from a decision tree), approximately 121,000 α-Gaussians were used. Note that the preferred values of α tend to be less than 1.0, re-confirming on a systematic basis that non-Gaussian mixture components are preferred.




Table 7: Leaf discrimination for initial value α=1, for no update of α versus update of α. Columns with headings “1”, “10”, “100” and “1000” contain the number of vectors for which the correct leaf was among the 1, 10, 100 and 1000 first leaves. Exactly 100 vectors were sampled for each leaf.























Leaf                     1     10    100   1000   Ave.

Without update of α
0                       25     78     96    100   20.5
1                       24     70     96    100   23.3
2                       56     88     98     99   23.2
3                       28     83     97    100   13.3
4                       29     80     96    100   16.2
5                       33     79     97     99   29.1
6                       20     66     95    100   23.8
7                       15     52     88    100   57.1
8                       46     78     95    100   32.4
9                       38     76     95    100   21.4

With update of α
0                       32     74     98    100   16.2
1                       24     73     97    100   13.3
2                       64     92     98     99   22.6
3                       36     88     98    100   804
4                       25     80     97    100   12.9
5                       33     75     98     99   33.2
6                       30     71     96    100   18.6
7                       21     57     90     99   48.0
8                       43     82     94    100   26.3
9                       43     78     95    100   15.4















The invention may be implemented in various forms of hardware, software, firmware or a combination thereof. In particular, the system modules described herein for extracting and estimating densities of acoustic feature vectors are preferably implemented in software, in an application program which is loaded into and executed by a general purpose computer, such as an IBM RS6000 workstation based on the PowerPC processor running an operating system (OS) such as the IBM AIX OS (IBM's version of the UNIX OS), or a personal computer (PC) based on an Intel Pentium or PowerPC processor running a suitable windowing operating system. Such computer platforms include one or more central processing units (CPUs), a random access memory (RAM) and input/output (I/O) interfaces. The various processes and functions described herein may be part of the micro-instruction code or application programs which are executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform, such as additional storage devices and a printing device. The methods described here may also be implemented in portable computing devices, such as personal digital assistants (PDAs), over telephone channels and in client/server platforms.




In general, the hardware platform is represented in FIG. 46. An input unit 11 receives physical speech from a user, then converts it to a digital signal and sends it to CPU 12. The input unit 11 may be a microphone connected to an audio adapter board plugged into a feature bus of the computer system. The audio adapter board typically includes an analog-to-digital converter (ADC) for sampling and converting acoustic signals generated by the microphone to digital signals which are input to the CPU 12. The CPU 12 is controlled by software, which is stored in memory unit 13 (typically a combination of primary memory (e.g., RAM) and secondary memory (e.g., a hard disk storage device)), and calculates μ̂_l, σ̂_l and B_l(α_0+iΔ) using the initial μ, σ and w as the initial values. Furthermore, CPU 12 stores the value of the function B(α) in memory unit 13 in order to compute the value of α̂_l. In this case, σ̂_l can be updated by the above Lemma 3. Therefore, CPU 12 can use the second values of μ, σ and α to compute the second value of the function B(α) and store it in memory unit 13. After the computation, the optimal α may be obtained when the function B(α) converges or the maximum iteration count is reached. After acquiring the final values of μ, σ and α, CPU 12 can use them to match an already saved index table in the memory unit 13 in order to find the correct word corresponding to the input speech. The found word can be output by output unit 14, such as a printer or a monitor.




The process is illustrated in the flow diagram of FIGS. 47A and 47B. The process begins in function block 801, where μ, σ and w are initialized. The iteration number is set to 1 in function block 802, and l is set to 1 in function block 803. The process then enters a computation loop which begins by computing μ_l and σ_l from equations (32) and (33) in function block 804. Then, in function block 805, l is set to l+1, and a test is made in decision block 806 to determine whether l is equal to m. If not, the process loops back to function block 804. If, on the other hand, l is equal to m, then l is set to 1 in function block 808, and i is set to 0 in function block 809. At this point, the process enters a second processing loop. In function block 810, B_l(α_0+iΔ) is computed and stored. A test is then made in decision block 811 to determine if α_0+iΔ is greater than α_1. If not, i is incremented to i+1 in function block 812, and the process loops back to function block 810. If, on the other hand, α_0+iΔ is greater than α_1, then α̂_l is calculated in function block 813 (FIG. 47B). In function block 814, σ is updated, as in Lemma 3. For dimension l, we now have μ_l = μ̂_l, σ_l = σ̂_l and α_l = α̂_l in output block 815. A test is then made in decision block 816 to determine if l is equal to m. If not, l is incremented to l+1 in function block 817, and the process loops back to function block 809 (FIG. 47A). Otherwise, a further test is made in decision block 818 to determine if there is convergence. If not, the iteration number is incremented in function block 819, and the process loops back to function block 803 (FIG. 47A). If there is convergence, as determined in decision block 818, then the final values of μ, σ and α are output at output block 820.
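The iteration of FIGS. 47A and 47B can be condensed into a short Python sketch. The helpers `update_mu_sigma` (standing in for the closed-form updates of equations (32) and (33)) and `component_obj` (standing in for the B_l objective) are hypothetical callables supplied by the caller, and the grid bounds are illustrative defaults, not values from the specification.

```python
def fit_mixture(X, mu, sigma, alpha, w, update_mu_sigma, component_obj,
                alpha0=0.2, delta=0.05, alpha1=4.0, max_iter=50, tol=1e-4):
    """Outline of the iteration in FIGS. 47A/47B.

    Per iteration: (i) update mu_l, sigma_l for every component l via the
    closed-form formulas; (ii) for each l, scan the grid alpha0,
    alpha0 + delta, ... up to alpha1 and keep the maximizing alpha-hat;
    (iii) stop when the objective converges (decision block 818).
    """
    m = len(mu)
    prev = None
    for _ in range(max_iter):
        total = 0.0
        for l in range(m):
            mu[l], sigma[l] = update_mu_sigma(X, l, mu, sigma, alpha, w)
            # brute-force scan of the alpha grid (blocks 810-812)
            best_val, best_a = float("-inf"), alpha[l]
            a = alpha0
            while a <= alpha1:
                val = component_obj(X, l, mu, sigma, a, w)
                if val > best_val:
                    best_val, best_a = val, a
                a += delta
            alpha[l] = best_a
            total += best_val
        if prev is not None and abs(total - prev) < tol:
            break
        prev = total
    return mu, sigma, alpha
```

The inner grid scan is where the incremental-power trick described earlier would be used in practice.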




Conclusion




We have addressed the issue of finding the optimal value of α in probability densities for speech data. Furthermore, the strategy of allowing different mixture components to have different α-values was also examined. The results indicate a clear departure from Gaussian mixture modeling.




It is reasonable to assume that the optimal value of α depends on the number of components in the mixture model used. An extreme situation is when the number of components equals the number of data points; the best modeling is then achieved by a set of delta functions (i.e., α-densities with α=0) coinciding with the data points. On the other hand, if the data is, say, Gaussian distributed and one is forced to use only one component in the mixture model, a set of delta functions is clearly most inappropriate. Thus, a more adequate strategy for finding the “optimal” value of α, or more generally h(t), must take into account the number of mixture components.




While the invention has been described in terms of preferred embodiments, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the appended claims.



Claims
  • 1. A computer implemented method for automatic machine recognition of speech, comprising the steps of:inputting acoustic data; and iteratively refining parameter estimates of densities comprising mixtures of power exponential distributions whose parameters are means (μ), variances (σ), impulsivity numbers (α) and mixture weights (w), wherein the step of iteratively refining comprises the steps of: predetermining initial values of the parameters μ, σ and w; iteratively deriving the parameters μ, σ and α such that for impulsivities equal to that of a speech Gaussian yields updating for an expectation-maximization process; and determining a non-decreasing value of a log likelihood of the parameters in order to get final values of the parameters μ, σ and α.
  • 2. The computer implemented method of claim 1, wherein the log likelihood of the parameters is determined by B⁡(Λ,w^,Λ^)=∑ml=1⁢Bl⁡(Λ,w^,Λ^)where Bl⁡(Λ,w^,Λ^)=∑k=1N⁢ ⁢Alk⁡(12⁢(∑i=1d⁢ ⁢log⁢ ⁢σil)+log⁢ ⁢ρd⁡(αl)-(γd⁡(αl))αl/2⁢(∑i=1d⁢ ⁢(χik-μil)2σil)αl/2).
  • 3. The computer implemented method of claim 2, further comprising: storing the final values of the parameters μ, σ and α.
  • 4. A computer implemented method for automatic machine recognition of speech, comprising the steps of:inputting acoustic data; and iteratively refining parameter estimates of densities comprising mixtures of power exponential distributions whose parameters are means (μ), variances (σ), impulsivity numbers (α) and mixture weights (w), wherein the step of iteratively refining comprises the steps of: predetermining initial values of the parameters μ, σ and w; deriving μl and σl from the following equations μil=∑k=1N⁢(∑j=1d⁢(xjk-μ^jl)2σ^ji)α^l/2-1⁢Alk⁢xik∑k=1N⁢(∑j=1d⁢(xjk-μ^jl)σ^ji)α^l/2-1⁢Alk⁢ ⁢andσii=α^l⁢γd⁡(α^l)α^l/2⁢∑k=1N⁢(∑j=1d⁢(xjk-μ^jl)2σ^ji)α^l/2-1⁢Alk⁡(xik-μ^il)2Alfor i=1, . . . ,d and l=1, . . . ,m;updating σ by assuming that θ=(μ,σ,α), {circumflex over (θ)}=({circumflex over (μ)},{circumflex over (σ)},{circumflex over (α)}) and letting H(μ,σ)=E{circumflex over (θ)}(log f(·|θ)), in which case H has a unique global maximum at μ={circumflex over (μ)}, σ={circumflex over (σ)} where β⁡(α,α^)={α⁢ ⁢Γ⁡(α+1α^)Γ⁡(1α)}2α⁢ ⁢Γ⁡(3α)⁢ ⁢Γ⁡(1α)Γ⁡(3α)⁢ ⁢Γ⁡(1α);setting the l dimension by μl={circumflex over (μ)}l, σl={circumflex over (σ)}l and αl={circumflex over (α)}l; and determining the convergence of a mixture weights function in order to get final values of μ, σ and α.
  • 5. The computer implemented method of claim 4, wherein the mixture weights function is determined by B⁡(Λ,w^,Λ^)=∑l=1m⁢Bl⁡(Λ,w^,Λ^)where Bl⁡(Λ,w^,Λ^)=∑k=1N⁢Alk⁡(12⁢ ⁢(∑i=1d⁢log⁢ ⁢σil)+log⁢ ⁢ρd⁢(αl)-(γd⁢(αl))αl/2⁢(∑i=1d⁢ ⁢(χik-μil)2σil)αl/2).
  • 6. The computer implemented method of claim 5, further comprising; storing final values of the parameters μ, σ and α.
  • 7. A speech recognition device, comprising:an input unit converting acoustic speech to digital signals; a central processing unit (CPU) connected to said input unit for receiving said digital signals, the CPU modeling said digital signals to a series of parameters, which can present a corresponding word by an index table, by iteratively refining parameter estimates of densities comprising mixtures of power exponential distributions whose parameters are means (μ), variances (σ), impulsivity numbers (α) and mixture weights (w); a memory unit connect to said CPU for storing and interchanging said parameters and said index table; and an output unit connected to said CPU for outputting said corresponding word, wherein the CPU iteratively refines parameter estimates by predetermining initial values of the parameters μ, σ and ω, iteratively deriving the parameters μ, σ and α such that for impulsivities equal to that of a speech Gaussian yields updating for an expectation-maximization process, and determining a non-decreasing value of a log likelihood of the parameters in order to get final values of the parameters μ, σ and α.
  • 8. The speech recognition device of claim 7, wherein said memory unit stores programs, said index table and final values of the parameters μ, σ and α.
  • 9. The speech recognition device of claim 7, wherein said memory unit is a Random Access Memory (RAM).
  • 10. The speech recognition device of claim 7, wherein said memory unit is a hard disk storage device.
  • 11. The speech recognition device of claim 7, wherein said input unit is a microphone connected to an audio adapter board plugged into a feature bus of a computer system.
  • 12. The speech recognition device of claim 11 wherein said audio adapter board includes an analog-to-digital converter (ADC) for sampling and converting acoustic signals generated by said microphone to said digital signals.
  • 13. The speech recognition device of claim 7, wherein said output unit is a monitor.
US Referenced Citations (15)
Number Name Date Kind
5193142 Zhao Mar 1993 A
5289562 Mizuta et al. Feb 1994 A
5450523 Zhao Sep 1995 A
5473728 Luginbuhl et al. Dec 1995 A
5522011 Epstein et al. May 1996 A
5715367 Gillick et al. Feb 1998 A
5778341 Zeljkovic Jul 1998 A
5799277 Takami Aug 1998 A
5839105 Ostendorf et al. Nov 1998 A
5895447 Ittycheriah et al. Apr 1999 A
5946656 Rahim et al. Aug 1999 A
6003002 Netsch Dec 1999 A
6009390 Gupta et al. Dec 1999 A
6021387 Mozer et al. Feb 2000 A
6269334 Basu et al. Jul 2001 B1
Non-Patent Literature Citations (1)
Entry
Kenny et al, “Separation of Non-Spontaneous and Spontaneous Speech”, IEEE 1998.