Nongaussian density estimation for the classification of acoustic feature vectors in speech recognition

Information

  • Patent Grant
  • 6269334
  • Patent Number
    6,269,334
  • Date Filed
    Thursday, June 25, 1998
  • Date Issued
    Tuesday, July 31, 2001
Abstract
A statistical modeling paradigm for automatic machine recognition of speech uses mixtures of nongaussian statistical probability densities, which provides improved recognition accuracy. Speech is modeled by building probability densities from functions of the form exp(−t^{α/2}) for t≧0 and α>0. Mixture components are constructed from different univariate functions. The mixture model is used in a maximum likelihood model of speech data.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




The present invention generally relates to speech recognition systems and, more particularly, to the use of EM-type algorithms for the estimation of parameters for a mixture model of nongaussian densities. The present invention was motivated by two objectives. The first was to study maximum likelihood density estimation methods for high dimensional data, and the second was to apply the techniques developed to large vocabulary continuous parameter speech recognition.




2. Background Description




Speech recognition systems require modeling the probability density of feature vectors in the acoustic space of phonetic units. Purely gaussian densities have been known to be inadequate for this purpose due to the heavy tailed distributions observed for speech feature vectors. See, for example, Frederick Jelinek, Statistical Methods for Speech Recognition, MIT Press, 1997. As an intended remedy to this problem, practically all speech recognition systems attempt modeling by using a mixture model with gaussian densities for mixture components. Variants of the standard K-means clustering algorithm are used for this purpose. The classical version of the K-means algorithm (as described by John Hartigan in Clustering Algorithms, John Wiley & Sons, 1975, and Anil Jain and Richard Dubes in Algorithms for Clustering Data, Prentice Hall, 1988) can also be viewed as a special case of the EM algorithm (as described by A. P. Dempster, N. M. Laird and D. B. Rubin in “Maximum likelihood from incomplete data via the EM algorithm”, Journal of the Royal Statistical Society, Ser. B, vol. 39, pp. 1-38, 1977) in the limiting case of gaussian density estimation with variance zero. See, for example, Christopher M. Bishop, Neural Networks for Pattern Recognition, Cambridge University Press, 1997, and J. L. Marroquin and F. Girosi, “Some extensions of the K-means algorithm for image segmentation and pattern classification”, MIT Artificial Intelligence Lab., A.I. Memorandum no. 1390, January 1993. The only known attempt to model the phonetic units in speech with nongaussian mixture densities is described by H. Ney and A. Noll in “Phoneme modeling using continuous mixture densities”, Proceedings of IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 437-440, 1988, where laplacian densities were used in a heuristic based estimation algorithm.




SUMMARY OF THE INVENTION




It is therefore an object of this invention to provide a new statistical modeling paradigm for automatic machine recognition of speech.




According to this invention, novel mixtures of nongaussian statistical probability densities for modeling speech subword units (e.g., phonemes) and further subcategories thereof, including the transition and output probabilities, are used in a Hidden Markov generative model of speech. We model speech data by building probability densities from functions of the form exp(−t^{α/2}) for t≧0, α>0. In this notation, the case α=2 corresponds to the gaussian density, whereas the laplacian case considered in Ney et al. corresponds to α=1.

Furthermore, we focus on a set of four different types of mixture components, each constructed from a different univariate function. For each of them, a mixture model is then used for a maximum likelihood model of speech data. It turns out that our iterative algorithm can be used for a range of values of α (as opposed to the fixed α=1 in Ney et al. or α=2 in standard speech recognition systems).




Our observation is that the distribution of speech feature vectors in the acoustic space is better modeled by mixture models with nongaussian mixture components. In particular, for speech α<1 seems more appropriate; see FIG. 1. To wit, very similar distributions have been noted for the distribution of image gradients by Stuart Geman in “Three lectures on image understanding”, The Center For Imaging Science, Washington State University, video tape, Sep. 10-12, 1997, and also by David Mumford in a lecture on pattern theory, Lecture in Directors Series, IBM Yorktown Heights, Feb. 23, 1998.




The second point to be made is that, from a practical standpoint, estimation of densities in speech data is accompanied by all the difficulties characteristic of high dimensional density estimation problems. See David W. Scott, Multivariate Density Estimation, Wiley Interscience, 1992, and James R. Thompson and Richard A. Tapia, Nonparametric Function Estimation, Modeling and Simulation, SIAM Publications, 1997. Feature vectors of dimension fifty or more are typical in speech recognition systems, and consequently, the data can be considered to be highly sparse. In contrast, the literature (see Scott, supra) on multivariate density estimation puts high emphasis on “exploratory data analysis”, the goal of which is to glean insight about the densities via visualization of the data. This is not feasible for dimensions of the order of fifty or more, even when projections on lower dimensional spaces are considered.




The classification/training step for speech recognition which we use can be cast in a general framework. We begin with a parametric family p(x|λ), x ∈ R^d, λ ∈ Ω ⊂ R^q, of probability densities on R^d with parameters in the manifold Ω in R^q. The method of classification used here begins with k finite subsets T_1, T_2, . . . , T_k of R^d and considers the problem of deciding in which of these sets a given vector x ∈ R^d lies. The method that we employ picks k probability densities p_1=p(.|θ_1), . . . , p_k=p(.|θ_k) from our family and associates with the subset T_l the probability density p_l, l=1, 2, . . . , k. Then, we say x ∈ R^d belongs to the subset T_r if r is the least integer in the set {1, 2, . . . , k} such that

p_r(x) = max{p_l(x) : 1 ≦ l ≦ k}.


To use this method we need to solve the problem of determining, for a given finite subset T ⊂ R^d, a probability density p(.|θ) in our family. This is accomplished by maximum likelihood estimation (MLE). Thus, the likelihood function for the data T is given by

L(λ|T) = ∏_{y ∈ T} p(y|λ),   λ ∈ Ω,

and a vector θ ∈ Ω is chosen which maximizes this function over all λ ∈ Ω (if possible). Generally, a maximum does not exist (see V. N. Vapnik, The Nature of Statistical Learning Theory, Springer Verlag, 1995, p. 24), and thus, typically an iterative method is used to find a stationary point of the likelihood function. As we shall see, the iteration we use takes the form of a variation of the EM algorithm described by Dempster et al., supra, and Richard A. Redner and Homer Walker, “Mixture densities, maximum likelihood and the EM algorithm”, SIAM Review, vol. 26, no. 2, April 1984.











BRIEF DESCRIPTION OF THE DRAWINGS




The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings, in which:





FIG. 1 is a graph showing an optimum mixture component in logarithmic scale;

FIG. 2 is a graph showing a one-dimensional density profile for different values of α;

FIGS. 3A, 3B, 3C, and 3D are graphs of level curves of two-dimensional tensor product α-density for different values of α;

FIG. 4 is a graph of a clipped gaussian curve;

FIG. 5 is a graph of a gaussian curve with double exponential tails;

FIG. 6 is a graph demonstrating the four fixed points of the map V_{3/2};

FIG. 7 is a graph of W_{3/2}(μ) and its minimum as a function of μ;

FIG. 8 is a graph of W_{1/2}(μ) and its minimum as a function of μ;

FIG. 9 is a graph demonstrating the seven fixed points of the map V_{1/2};

FIG. 10 is a graph showing percent word error for different values of α; and

FIG. 11 is a flow diagram illustrating a computer implementation of the process according to the invention.











DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION

We begin with a nonnegative function f defined on the positive real axis R_+ such that all of its moments

m_β = ∫_{R_+} t^β f(t) dt   (1)

are finite for β≧−½. Here, R_+ denotes the positive real axis. We also use PD_d to denote the space of all positive definite matrices of size d.




Lemma 1 We define constants

ρ_d = (Γ(d/2)/π^{d/2}) (m_{d/2})^{d/2} / (m_{d/2−1})^{d/2+1}

and

γ_d = m_{d/2} / m_{d/2−1}.

Then for any vector μ in R^d and any positive definite matrix Σ of size d, the function

p(x|μ,Σ) = ρ_d (1/√(det Σ)) f(γ_d (x−μ)^t Σ^{−1} (x−μ)),   x ∈ R^d,   (2)

is a probability density function with mean μ and covariance Σ.




Proof: The proof uses the following identity, which holds for any non-negative integer k and positive constant δ:

∫_{R^d} ‖x‖^k f(δ‖x‖²) dx = (π^{d/2}/Γ(d/2)) δ^{−(k+d)/2} m_{(k+d−2)/2},

where

‖x‖² = Σ_{l=1}^{d} x_l²,   x = (x_1, x_2, . . . , x_d).

The verification of this equation uses the fact that the volume of the surface of the unit sphere in R^d is

2π^{d/2}/Γ(d/2).
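As a quick numerical sanity check of this identity (not part of the patent), one may take the concrete choice f(t)=exp(−t), for which m_β=Γ(β+1), and compare the right hand side with a Monte Carlo estimate of the left hand side; the helper names below are arbitrary.

import numpy as np
from math import gamma, pi

def rhs(k, d, delta):
    # pi^{d/2}/Gamma(d/2) * delta^{-(k+d)/2} * m_{(k+d-2)/2}, with f(t)=exp(-t),
    # so that m_beta = Gamma(beta+1).
    return pi ** (d / 2) / gamma(d / 2) * delta ** (-(k + d) / 2) * gamma((k + d) / 2)

def lhs_monte_carlo(k, d, delta, n=200_000, seed=0):
    # integral of ||x||^k exp(-delta ||x||^2) dx over R^d,
    # estimated by importance sampling from N(0, I/(2*delta)).
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=np.sqrt(1.0 / (2.0 * delta)), size=(n, d))
    norms = np.linalg.norm(x, axis=1)
    return (pi / delta) ** (d / 2) * np.mean(norms ** k)

# Example: d = 3, k = 2, delta = 0.7 -- the two numbers agree closely (about 20.4).
print(rhs(2, 3, 0.7), lhs_monte_carlo(2, 3, 0.7))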


Additionally, we may combine the method of tensor product with this lemma and construct other densities which we shall employ later. The proof of the next lemma is even simpler than the one above.




Lemma 2 Let x=(x_1, x_2), where x_i ∈ R^{d_i}, i=1,2. Suppose p_i(x_i|μ_i, Σ_i), x_i ∈ R^{d_i}, i=1,2, have the form (2) above. Then

p((x_1, x_2)|μ, Σ) = p_1(x_1|μ_1, Σ_1) p_2(x_2|μ_2, Σ_2),   x_1 ∈ R^{d_1}, x_2 ∈ R^{d_2},

has mean μ=(μ_1, μ_2) and block diagonal covariance Σ = diag(Σ_1, Σ_2).

A special case of this construction is the density

p(x|μ, σ) = ρ_1^d (1/√(∏_{i=1}^{d} σ_i)) ∏_{i=1}^{d} f(γ_1 (x_i−μ_i)²/σ_i)

defined for x=(x_1, x_2, . . . , x_d) ∈ R^d, where μ=(μ_1, μ_2, . . . , μ_d) and σ=(σ_1, σ_2, . . . , σ_d) are the mean and diagonal covariance, respectively.




Let us note some special cases.




EXAMPLE 1

For this example, we choose any α ∈ R_+, define the function

f(t) = exp(−t^{α/2}),   t ∈ R_+,   (3)

and note for β>−1 that

m_β = 2α^{−1} Γ((2β+2)/α).   (4)

Consequently, the function

S_α(x|μ,Σ) = (ρ_d(α)/√(det Σ)) exp(−(γ_d(α)(x−μ)^t Σ^{−1}(x−μ))^{α/2}),   x ∈ R^d,   (5)

where

ρ_d(α) = (α/2) (Γ(d/2)/π^{d/2}) Γ((d+2)/α)^{d/2} / Γ(d/α)^{d/2+1}

and

γ_d(α) = Γ((d+2)/α)/Γ(d/α),

is a probability density with mean μ and covariance Σ for any α>0. We refer to this as a spherical α-density.
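For concreteness, the constants ρ_d(α) and γ_d(α) and the density (5) can be evaluated numerically as in the following Python sketch (illustrative only; the function names are not from the patent, and no care is taken here about overflow of the gamma function for large arguments).

import numpy as np
from math import gamma, pi, log

def rho_gamma(d, alpha):
    # Normalizing constants rho_d(alpha) and gamma_d(alpha) of equation (5).
    g = gamma((d + 2) / alpha) / gamma(d / alpha)
    rho = (alpha / 2.0) * gamma(d / 2.0) / pi ** (d / 2.0) \
          * gamma((d + 2) / alpha) ** (d / 2.0) / gamma(d / alpha) ** (d / 2.0 + 1.0)
    return rho, g

def log_spherical_alpha_density(x, mu, Sigma, alpha):
    # log S_alpha(x | mu, Sigma) for a single point x (1-D numpy array).
    d = len(x)
    rho, g = rho_gamma(d, alpha)
    diff = np.asarray(x, float) - np.asarray(mu, float)
    Q = g * diff @ np.linalg.solve(Sigma, diff)   # gamma_d(alpha) (x-mu)^T Sigma^{-1} (x-mu)
    _, logdet = np.linalg.slogdet(Sigma)
    return log(rho) - 0.5 * logdet - Q ** (alpha / 2.0)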




EXAMPLE 2

As a special case of this construction, we choose d=1 and consider the univariate density

(ρ_1(α)/√σ) exp(−γ_1(α)^{α/2} |t−μ|^α / σ^{α/2}),   t ∈ R.   (6)

We use this univariate density and construct a multivariate density by the formula

T_α(x|μ,σ) = (1/√(det diag σ)) ρ_1(α)^d exp(−γ_1(α)^{α/2} Σ_{l=1}^{d} σ_l^{−α/2} |x_l−μ_l|^α),

where x=(x_1, x_2, . . . , x_d) ∈ R^d. We refer to this as a tensor product α-density.
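A corresponding sketch for the tensor product α-density, again purely illustrative, evaluates its logarithm coordinatewise; with α=2 it reduces to the diagonal gaussian log density.

import numpy as np
from math import gamma, log

def rho1_gamma1(alpha):
    # rho_1(alpha) and gamma_1(alpha): the d = 1 constants of equation (5)
    # (the factor Gamma(1/2)/sqrt(pi) = 1 is omitted).
    g1 = gamma(3.0 / alpha) / gamma(1.0 / alpha)
    rho1 = (alpha / 2.0) * gamma(3.0 / alpha) ** 0.5 / gamma(1.0 / alpha) ** 1.5
    return rho1, g1

def log_tensor_alpha_density(x, mu, sigma, alpha):
    # log T_alpha(x | mu, sigma) with diagonal variances sigma (all 1-D arrays).
    x, mu, sigma = (np.asarray(v, float) for v in (x, mu, sigma))
    rho1, g1 = rho1_gamma1(alpha)
    d = len(x)
    return (d * log(rho1)
            - 0.5 * np.sum(np.log(sigma))
            - g1 ** (alpha / 2.0) * np.sum(np.abs(x - mu) ** alpha / sigma ** (alpha / 2.0)))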




It is interesting to note the limiting form of these distributions when α→∞. (We refer to K. Fukunaga, Statistical Pattern Recognition, second edition, Academic Press, 1990, for a related discussion). Since lim_{x→0+} xΓ(x)=1, we conclude that

lim_{α→∞} γ_d(α) = d/(d+2)

and also

lim_{α→∞} ρ_d(α) = {d/((d+2)π)}^{d/2} Γ(d/2+1).

Hence, we obtain for almost all x ∈ R^d that

lim_{α→∞} S_α(x|μ,Σ) = S_∞(x|μ,Σ),

where

S_∞(x|μ,Σ) := (1/√(det Σ)) f_∞((x−μ)^t Σ^{−1}(x−μ)),   x ∈ R^d,

and for t ∈ R

f_∞(t) := {d/((d+2)π)}^{d/2} Γ((d+2)/2) · { 1 if |t| < (d+2)/d;  0 otherwise }.

Similarly, for the tensor product construction, we have for any x ∈ R^d that

lim_{α→∞} T_α(x|μ,σ) = (1/∏_{i=1}^{d} σ_i) g(‖(x−μ)/σ‖),

where

‖(x−μ)/σ‖ = max_{1≦l≦d} |x_l−μ_l|/σ_l

and for t ∈ R

g(t) = (1/3^{d/2}) · { 1 if |t| ≦ √3;  0 otherwise }.

The behavior of S_α and T_α when α→0+ is equally interesting. Both approach a “delta” function concentrated at their mean. The precise result we record for the spherical case states that

lim_{α→0+} ∫_{R^d} f(x) S_α(x|μ,Σ) dx = f(μ)   (7)

whenever f is continuous on R^d with the property that there exist constants M, N>0 and β ∈ [0,2) such that for ‖x‖>N

|f(x)| ≦ M‖x‖^β.

Observe that the value β=2 must be excluded since for the function

f(x) = (x−μ)^t Σ^{−1}(x−μ),   x ∈ R^d,

the left hand side of equation (7) is one, while the right hand side is zero. The verification of equation (7) follows from a computation of all the moments of S_α. Specifically, we have the following fact.




Lemma 3 For any β ∈ (0,2) and α ∈ R_+, there exist M, N>0 and λ ∈ (0,1) such that

∫_{R^d} ((x−μ)^t Σ^{−1}(x−μ))^{β/2} S_α(x|μ,Σ) dx ≦ M λ^{1/α}

whenever 0<α<N.

Proof: We proceed as in Lemma 1 and observe for any β>0 that

∫_{R^d} ((x−μ)^t Σ^{−1}(x−μ))^{β/2} S_α(x|μ,Σ) dx = Γ((β+d)/α) / (Γ((d+2)/α)^{β/2} Γ(d/α)^{1−β/2}).

An application of Stirling's formula (see M. Abramowitz and I. Stegun, Handbook of Mathematical Functions, Dover Publications, New York, Ninth Dover printing, 1972, p. 254, 6.1.38),

1 < Γ(t+1)/(√(2π) t^{t+½} exp(−t)) < exp(1/(12t)),   t ∈ R_+,

proves the result with

λ := (β+d)^{β+d} / ((d+2)^{β(d+2)/2} d^{d(1−β/2)}).


Note that using the same methods as above, we may conclude that

lim_{α→0+} α² γ_d^α(α) = (d+2)^{d+2}/(e² d^d)

and

ρ̃_d(α) < ρ_d(α) < exp(α/12) ρ̃_d(α),

where ρ̃_d(α) denotes the approximation to ρ_d(α) obtained by replacing the gamma factors Γ((d+2)/α) and Γ(d/α) in its definition by the Stirling approximation √(2π) t^{t−½} exp(−t).

The statement for the tensor product case is exactly the same and is proved in the same way. The univariate α-densities with zero mean and unit variance are graphically shown in FIG. 2. Note that the covariance matrix of the tensor product α-density is the diagonal matrix diag σ. Also, when α is equal to 2 and Σ equals diag σ, then both the spherical α-density and the tensor product α-density are identical with the gaussian density. However, it must be noted that the tensor product α-density is different from the spherical α-density for α≠2 even when Σ is diagonal. In fact, assuming zero means and unit covariances in equations (5) and (6), the contours of constant density for the spherical α-density in equation (5) are given by Σ_{i=1}^{d} x_i² = constant, whereas the contours of constant density for the tensor product α-density in equation (6) are given by Σ_{i=1}^{d} |x_i|^α = constant. The latter set of contours are plotted for different values of α in FIGS. 3A to 3D, thereby illustrating the difference between the two densities even for diagonal covariances.




The next example we shall consider is a clipped gaussian density.




EXAMPLE 3




For every ε>0 and t nonnegative, we set

h(t) = max(0, t−ε).


In this case, for every integer n, the moments of the function f=exp(−h) required for the spherical construction are given by the formula

m_β = ε^{β+1}/(β+1) + exp(ε) ∫_ε^∞ t^β exp(−t) dt,   β>−1, ε>0.

Moreover, for r = [d/2] (i.e., the greatest integer not exceeding d/2), repeated integration by parts gives

∫_ε^∞ t^{d/2} exp(−t) dt = exp(−ε) Σ_{l=0}^{r} (d/2)_l ε^{d/2−l}   for d even,

∫_ε^∞ t^{d/2} exp(−t) dt = exp(−ε) Σ_{l=0}^{r} (d/2)_l ε^{d/2−l} + (d/2)_{r+1} √π (1−erf(√ε))   for d odd,

so that

m_{d/2} = ε^{d/2+1}/(d/2+1) + Σ_{l=0}^{r} (d/2)_l ε^{d/2−l}   for d even,

m_{d/2} = ε^{d/2+1}/(d/2+1) + Σ_{l=0}^{r} (d/2)_l ε^{d/2−l} + (d/2)_{r+1} √π exp(ε)(1−erf(√ε))   for d odd,

where we use the definition (x)_l = Γ(x+1)/Γ(x−l+1). For an example of this density see FIG. 4.




The final example we shall consider is a gaussian density with double exponential tails introduced by P. Huber (see for instance D. M. Titterington, A. F. M. Smith and U. E. Makov, Statistical Analysis of Finite Mixture Distributions, Wiley Interscience, New York, 1985, p. 23).




EXAMPLE 4




For every ε>0 and t nonnegative, we set

h(t) = { t,            t ≦ ε;
         2√(εt) − ε,   t ≧ ε }.
The required moments are given by the formula

m_{d/2} = Γ(d/2+1) − ∫_ε^∞ t^{d/2} exp(−t) dt + exp(ε) ∫_ε^∞ t^{d/2} exp(−2√(εt)) dt,

and the last integral may be evaluated in closed form by the substitution u = 2√(εt) followed by integration by parts, as in Example 3. The corresponding density is a gaussian whose tails have been adjusted to have an exponential decay; see FIG. 5 for its plot.




Maximum Likelihood Estimation




Let T = {x_1, x_2, . . . , x_N} ⊂ R^d be independent samples from a probability density function p. Then the likelihood function of the data is

∏_{k=1}^{N} p(x_k).

Given a function f as above, we discuss the problem of determining the value of the mean μ in R^d and a covariance matrix Σ which maximize the likelihood function for the density

p(x|μ,Σ) = ρ_d (1/√(det Σ)) f(γ_d (x−μ)^t Σ^{−1}(x−μ)),   x ∈ R^d.

It is convenient to express f in the form

f(t) = exp(−h(t)),   t ∈ R_+,

and consider the negative of the log-likelihood function

−log L = −N log ρ_d − (N/2) log det Σ^{−1} + Σ_{k=1}^{N} h(γ_d (x_k−μ)^t Σ^{−1}(x_k−μ)),   (8)

where x_1, x_2, . . . , x_N are the given data. We introduce the positive definite matrix

Γ = γ_d^{½} Σ^{−½},

the vector

φ = Γμ,

and the function

F := F(Γ,φ) := −N log det Γ + Σ_{k=1}^{N} h(‖Γx_k − φ‖²).

Therefore, we can express the log-likelihood in the alternate form

−log L = −N log ρ_d + (Nd/2) log γ_d + F.

Thus, maximizing L over μ ∈ R^d and Σ ∈ PD_d is equivalent to minimizing F over Γ ∈ PD_d and φ ∈ R^d. Our first lemma identifies a class of univariate functions f for which L achieves its maximum on R^d × PD_d. To this end, it is convenient to introduce the linear subspace of R^d defined by

M(T) := {Σ_{k=1}^{N} c_k x_k : Σ_{k=1}^{N} c_k = 0}.   (9)


Lemma 4 Suppose there exist constants a, b and c, with b, c positive, such that

h(t) ≧ a + b t^c,   t ∈ R_+,

and

dim M(T) = d.

Then L has a maximum on R^d × PD_d.




Proof: Since F is a continuous function of Γ and φ (and F is ∞ when Γ is singular), it suffices to prove that F goes to infinity as (Γ,φ)→∞. To this end we observe that the function

Σ_{k=1}^{N} ‖Γx_k − φ‖^c

vanishes if and only if both Γ and φ are zero. We conclude that there exists a positive constant b′ such that

F ≧ −N log det Γ + aN + b′(‖Γ‖^c + ‖φ‖^c),

where ‖Γ‖ denotes the Frobenius norm of Γ. By the Hadamard inequality (see R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1985, p. 477), we obtain that det Γ ≦ ‖Γ‖^d, and so

F ≧ −dN log ‖Γ‖ + aN + b′(‖Γ‖^c + ‖φ‖^c),

which proves the result.




The next lemma identifies a condition on h so that L has a unique maximum.




Lemma 5 Let H(t) := h(t²), t ∈ R_+, and suppose that H is convex on R_+. Then L has at most one maximum.




Proof: Recall that as a function of Γ ∈ PD_d, det Γ is strictly log concave (see R. A. Horn et al., supra). Hence, the function F(Γ,φ) defined above is a strictly convex function of Γ and φ. This proves the result.




When h(t)=t, the corresponding spherical density is a gaussian distribution, and in this case, the MLE of μ and Σ are given by the formulas

μ = (1/N) Σ_{k=1}^{N} x_k;   Σ = (1/N) Σ_{k=1}^{N} (x_k−μ)(x_k−μ)^t.

Except for this case, the optimal value of Σ and μ must be found by some iterative method.




We begin our discussion of the problem with the univariate case of the α-density, a case of practical importance to us here. Thus, we have scalar data {x_1, x_2, . . . , x_N}, and the negative log-likelihood is

−log L = −N log ρ_1(α) + (N/2) log σ + γ_1(α)^{α/2} σ^{−α/2} Σ_{k=1}^{N} |x_k−μ|^α.


Notice here that the minimization over μ ∈ R and σ ∈ R_+ can be performed separately. For this reason, we set

ω = min_μ ((1/N) Σ_{i=1}^{N} |x_i−μ|^α)^{1/α} = ((1/N) Σ_{i=1}^{N} |x_i−μ̂|^α)^{1/α}.


Since the derivative of (−log L) with respect to σ is given (at μ=μ̂) by

(∂/∂σ)(−log L) = (N/(2σ)) (1 − α γ_1(α)^{α/2} ω^α / σ^{α/2}),

the unique value of σ at which the minimum occurs is

σ̂ = α^{2/α} γ_1(α) ω².   (10)









Note that the hypothesis of Lemma 5 is satisfied for α≧1. Since in this case μ̂ is unique, our computation confirms directly that −log L has a unique minimum when α≧1 (but not for 0<α<1, see below). Besides the case α=2, corresponding to the gaussian distribution, we note that when α=1 we have γ_1(α)=2 and so

σ = 2 ((1/N) Σ_{i=1}^{N} |x_i−μ̂|)²,

where μ̂ is the median of the data.




In the general case, the minimum value of −log L has the form

−log L = N{κ + log ω},

where κ is a constant depending only on α and independent of the data. Therefore, a comparison of the likelihoods of two data sets is equivalent to a comparison of their corresponding ω values.

There remains the problem of determining μ̂. Because of its role in the EM algorithm (see below), we accomplish this by the update formula

μ̂ = Σ_{i=1}^{N} |x_i−μ|^{α−2} x_i / Σ_{i=1}^{N} |x_i−μ|^{α−2}.   (11)

For α<2, care must be taken in the use of this formula. If μ=x_l for some l, l=1, . . . , N, then μ̂=x_l. Otherwise, division by zero does not occur in equation (11). The method always converges to the unique minimum μ̃ of W_α, independent of the choice of the initial value, when α≧2. The reason for this is two-fold: first, |μ̂|≦max{|x_i| : i=1, 2, . . . , N}, and secondly, the function

W_α(μ) := Σ_{i=1}^{N} |x_i−μ|^α,   μ ∈ R,   (12)

is a strictly convex function of μ. Thus, μ̂=μ in equation (11) if and only if the derivative of the function of μ in equation (12) vanishes at μ̂. For α<2, the issue of convergence of the iteration is delicate. We divide our comments into two cases. The first case we consider is 1<α<2. The iteration will converge for generically chosen initial data. We illustrate this phenomenon with an example. When α=3/2, N=3, x_1=0, x_2=¼, x_3=1, the function in equation (12), shown in FIG. 7, has its unique minimum near 0.32. As expected, this value of μ is a fixed point of the iteration of equation (11). In fact, FIGS. 6 and 7 demonstrate that there is a value of μ close to 0.1 so that the updated value μ̂ is equal to ¼. Since this is also a fixed point of the mapping in equation (11), all further updates do not change the value of μ. Moreover, starting at any value of μ except for the (countable) pre-images of x_2 (which have zero as their only accumulation point) the iteration of equation (11) will converge geometrically fast to the minimum of equation (12).
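The fixed-point iteration (11) is easy to experiment with; the following Python sketch (not the patent's implementation) reproduces the α=3/2 example above: started from a generic value such as 0.6, the iterates approach the minimizer near 0.32.

import numpy as np

def alpha_mean_update(mu, x, alpha):
    # One step of iteration (11): a weighted mean with weights |x_i - mu|^(alpha-2).
    x = np.asarray(x, dtype=float)
    if np.any(x == mu):            # the update maps a data point to itself
        return mu
    w = np.abs(x - mu) ** (alpha - 2.0)
    return float(np.sum(w * x) / np.sum(w))

def alpha_mean(x, alpha, mu0, tol=1e-10, max_iter=10_000):
    # Iterate (11) toward a stationary point of W_alpha in (12).
    mu = float(mu0)
    for _ in range(max_iter):
        new = alpha_mean_update(mu, x, alpha)
        if abs(new - mu) < tol:
            break
        mu = new
    return mu

# The alpha = 3/2 example from the text: data {0, 1/4, 1}.
print(alpha_mean([0.0, 0.25, 1.0], 1.5, mu0=0.6))   # converges near 0.32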




We do not present a complete analysis of the convergence of this method here. The following observation provides some insight into what might be expected.




Lemma 6 Suppose 1<α<2, and let {μ_n : n=1, 2, . . . } be a sequence generated by the iteration of equation (11) which converges to some μ. Either μ is equal to the unique minimum of the function appearing in equation (12), or there exists an l in the set {1, 2, . . . , N} such that all the elements of the sequence except for a finite number equal x_l.




Proof: If μ is not in the set {x_j : j=1, 2, . . . , N}, then μ must be the unique minimum of the function in equation (12). In the case that μ=x_l for some l=1, 2, . . . , N, there exists a positive number ε such that whenever |x−x_l|<ε it follows that |V_α(x)−x_l| ≧ |x−x_l|, where V_α(x) is defined to be

V_α(x) = Σ_{i=1}^{N} |x_i−x|^{α−2} x_i / Σ_{i=1}^{N} |x_i−x|^{α−2}.   (13)

In other words, all the points x_1, x_2, . . . , x_N are repelling fixed points of the map of equation (13). Assuming that this is the case, it follows that all but a finite number of the elements of this sequence must equal x_l. Thus, the proof is completed by demonstrating the existence of the ε mentioned above. To this end, we merely observe that the right and left derivatives of V_α(x) at any data point x_l, l=1, 2, . . . , N, are infinite.




Let us now comment on the case 0<α<1. In general, the iteration does converge to a minimum of the function W for most initial values. In this regard, we point out that W is strictly concave between consecutive data points and has infinite left and right derivatives at the data points. Hence, there is a unique local maximum between consecutive data points, but in general, there is more than one minimum of W. This means that, in general, the maximum of the likelihood occurs for more than one value of the mean and variance.

Our computational experience seems to indicate that the iteration of equation (11) will converge to the data point lying between the two consecutive local maxima of the function W between which the initial value lies. We illustrate this with an example when α=0.5 and x_i=i/10, i=1, . . . , 10. The function W is shown in FIG. 8 and the function in equation (13) for α=½ is shown in FIG. 9.




Other iterations besides equation (11) can be used to find {circumflex over (μ)} as well. For example, since the function appearing in equation (12) is convex, one can use a search method combined with Newton iteration to provide a fast globally convergent algorithm, locating the minimum of this function.




Returning to the general spherical model, we address the question of iterative methods for finding the MLE. To this end, we compute the variation of log L relative to μ:

(∂/∂μ) log L = 2γ_d Σ^{−1} (Σ_{k=1}^{N} h′(Q(x_k|μ,Σ)) (x_k−μ)),

where

Q(x|μ,Σ) := γ_d (x−μ)^t Σ^{−1}(x−μ),   x ∈ R^d,

and for the variation relative to Σ^{−1} we have that

(∂/∂Σ^{−1}) log L = Σ_{k=1}^{N} {(1/2)Σ − γ_d h′(Q(x_k|μ,Σ)) (x_k−μ)(x_k−μ)^t}.
Thus, (μ,Σ) is a stationary point of L if and only if

μ = Σ_{k=1}^{N} h′(Q(x_k|μ,Σ)) x_k / Σ_{k=1}^{N} h′(Q(x_k|μ,Σ))   (14)

and

Σ = (2γ_d/N) Σ_{k=1}^{N} h′(Q(x_k|μ,Σ)) (x_k−μ)(x_k−μ)^t.   (15)
These formulas suggest the iteration scheme in which the updates μ̂ and Σ̂ for the mean μ and the covariance Σ are respectively given by

μ̂ = Σ_{k=1}^{N} h′(Q(x_k|μ,Σ)) x_k / Σ_{k=1}^{N} h′(Q(x_k|μ,Σ))

and

Σ̂ = (2γ_d/N) Σ_{k=1}^{N} h′(Q(x_k|μ,Σ)) (x_k−μ)(x_k−μ)^t,

which are related to the EM algorithm (see A. P. Dempster, N. M. Laird and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm”, Journal of the Royal Statistical Society, Ser. B, vol. 39, pp. 1-38, 1977, and Richard Redner and Homer Walker, “Mixture densities, maximum likelihood and the EM algorithm”, SIAM Review, vol. 26, no. 2, April 1984), discussed below.




Before getting to this matter, we comment on the convergence of this iteration. All our remarks concern the univariate case. In this case, we rewrite the iteration in terms of the parameters μ and σ corresponding to scalar data {x_1, x_2, . . . , x_N}; thus

μ̂ = Σ_{k=1}^{N} h′(γ_1 (x_k−μ)²/σ) x_k / Σ_{k=1}^{N} h′(γ_1 (x_k−μ)²/σ)   (16)

and

σ̂ = (2γ_1/N) Σ_{k=1}^{N} h′(γ_1 (x_k−μ)²/σ) (x_k−μ)².   (17)
In general, this iteration does not converge. To see this, we specialize it to the case of Example 1 and observe that in this case it has the form

σ̂ = A(μ)/σ^β,   (18)

where

A(μ) := (α γ_1(α)^{α/2}/N) W_α(μ),   β := α/2 − 1,

and

μ̂ = V_α(μ).   (19)

We have already pointed out that the iteration for the mean μ converges independent of the initial guess (see the discussion after equation (11)) when α≧2. To simplify our exposition, we consider the case that the initial value in the iteration for the mean is chosen to be the minimum of the function appearing in equation (12). In this case, the updated value of the mean, and hence also the constant appearing in equation (18), does not change. Let us call this positive constant A. Also, for simplicity, we choose α=6. Thus, the iteration of equation (18) takes the form

σ_{n+1} = A/σ_n²,   n=1, 2, . . . .

It follows directly that the solution of this iteration is given by the formula

σ_{n+1} = A^{1/3} (σ_1/A^{1/3})^{(−1)^n 2^n},   n=1, 2, . . . .

The value of σ we seek is A^{1/3} (see equation (10)). However, this iteration fails to produce this value except when σ_1 is chosen to be A^{1/3}. Of course, in this simple example, it is not recommended to use the update formula (18), since the optimum value of σ can be directly obtained (see equation (10)). This example does suggest a means to modify the iteration of equation (17) under certain conditions on the function h. We present this observation in the following lemma.




LEMMA 7

Suppose the following four conditions hold.

The hypothesis of Lemma 5 holds.

There are nonnegative constants a, b, c such that 0<h′(t)≦a+bt^c, t ∈ R_+.

β := lim_{t→0+} t^c h′(t^{−1}) > 0.

The data set {x_k : k=1, 2, . . . , N} consists of at least two points.

Then there exists a positive constant κ such that, if σ is at most κ, then σ̂ defined by the formulas

μ̂ = Σ_{k=1}^{N} h′(γ_1 (x_k−μ)²/σ) x_k / Σ_{k=1}^{N} h′(γ_1 (x_k−μ)²/σ)

and

σ̂^{1+c} = (2γ_1/N) Σ_{k=1}^{N} h′(γ_1 (x_k−μ)²/σ) (x_k−μ)² σ^c

is also at most κ. Moreover, this iteration converges whenever the initial value of σ is at most κ.




Proof: Set M=max{|x_i| : i=1, 2, . . . , N} and first observe that |μ̂|≦M. Similarly, we obtain that

σ̂^{1+c} ≦ u + v σ^c,

where u and v are positive constants depending on a, b, c, M but not on σ or N. Set

κ = max{2v, (u/v)^{1/c}}.

We claim that if σ≦κ, then σ̂≦κ. If σ̂≦σ, there is nothing to prove. When σ̂≧σ, then the above inequality implies that

σ̂^{1+c} ≦ u + v σ̂^c.

When

σ̂ ≦ (u/v)^{1/c},

then again there is nothing to prove. For

σ̂ ≧ (u/v)^{1/c},

we have that v σ̂^c ≧ u and so

2v σ̂^c ≧ u + v σ̂^c ≧ u + v σ^c ≧ σ̂^{1+c};

that is, σ̂≦2v. Hence, under all circumstances we conclude that σ̂≦κ.




Now, let {(μ_n, σ_n) : n=1, 2, . . .} be a sequence produced by this iteration, where σ_1 is at most κ. Then, by our remarks so far, both the sequence of means and the sequence of variances have a subsequence which converges to some limit, μ_opt and σ_opt respectively. From the variance update formula and conditions three and four, we have that

σ_opt^{1+c} = lim_{n→∞} σ_n^{1+c} = (2γ_1/N) lim_{n→∞} Σ_{k=1}^{N} h′(γ_1 (x_k−μ_n)²/σ_n) (x_k−μ_n)² σ_n^c > 0.

Hence, we conclude that σ_opt>0. This means that μ_opt, σ_opt is a stationary point of the likelihood function. However, the first condition guarantees the uniqueness of the stationary point. Therefore, the iteration converges as long as σ_1 is at most κ.




MIXTURE MODELS

In this section we describe the probability densities which we shall use in our classification experiments. They are built up from mixture models based on the four densities described earlier. Constructing mixture models from a given parametric family is a general procedure. Specifically, starting with a parametric family p(·|λ), λ ∈ Ω ⊂ R^q, of probability densities on R^d, a mixture model with m mixture components and mixture weights ω_i, i=1, 2, . . . , m, has the form

P(x|ω,Λ) := Σ_{i=1}^{m} ω_i p(x|λ_i),   Σ_{i=1}^{m} ω_i = 1,   ω_i ≧ 0,

where Λ=(λ_1, λ_2, . . . , λ_m) is chosen from Ω^m = Ω×Ω× . . . ×Ω, m times, and ω=(ω_1, ω_2, . . . , ω_m) is a vector with nonnegative components which sum to one. The densities p(·|λ_1), p(·|λ_2), . . . , p(·|λ_m) are called the mixture components of P. Note the linear dependence of the mixture model P on the mixture weights.




Given data vectors {x_k : k=1, 2, . . . , N} in R^d, the goal here is to choose the parameter Λ ∈ Ω^m and the mixture weights ω=(ω_1, ω_2, . . . , ω_m) to maximize the log-likelihood function

log L = Σ_{k=1}^{N} log P(x_k|ω,Λ).   (20)

To this end, we first compute the variation of the log-likelihood relative to the mixture weight ω_l, l=1, 2, . . . , m, which is given by

∂ log L/∂ω_l = Σ_{k=1}^{N} p(x_k|λ_l)/P(x_k|ω,Λ),   l=1, 2, . . . , m.

Keeping in mind that the mixture weights sum to unity, we are led to the equation

(1/N) Σ_{k=1}^{N} ω_l p(x_k|λ_l)/P(x_k|ω,Λ) = ω_l,   l=1, 2, . . . , m.   (21)
This equation for the mixture weights holds independent of the form of the mixture components and suggests the importance of the posterior probabilities

P_l(x|ω,Λ) = ω_l p(x|λ_l)/P(x|ω,Λ),   l=1, 2, . . . , m,   x ∈ R^d.   (22)

Thus, equation (21) becomes

(1/N) Σ_{k=1}^{N} P_l(x_k|ω,Λ) = ω_l,   l=1, 2, . . . , m,   (23)

and by the definition of the posterior probability we have that

Σ_{l=1}^{m} P_l(x|ω,Λ) = 1,   x ∈ R^d.   (24)
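Equations (22)-(24) translate directly into code; the sketch below (illustrative only, with per-component log densities supplied by the caller) computes the posteriors in a numerically stable way and the weight update of equation (23).

import numpy as np

def posteriors(log_densities, weights):
    # Posterior probabilities P_l(x | omega, Lambda) of equation (22), from
    # per-component log densities (shape: N x m).
    a = np.log(np.asarray(weights, float))[None, :] + np.asarray(log_densities, float)
    a -= a.max(axis=1, keepdims=True)
    p = np.exp(a)
    return p / p.sum(axis=1, keepdims=True)   # rows sum to one, as in equation (24)

def weight_update(post):
    # Mixture weight update of equation (23): omega_l = (1/N) sum_k P_l(x_k).
    return post.mean(axis=0)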
To proceed further, we specify the mixture components to be of the spherical type as in equation (2). In this case,

log p(x|μ,Σ) = (1/2) log det Σ^{−1} − h(Q(x|μ,Σ)) + log ρ_d,   x ∈ R^d,

where

Q(x|μ,Σ) := γ_d (x−μ)^t Σ^{−1}(x−μ),   x ∈ R^d.

Hence, we conclude that

(∂/∂μ) log p(x|μ,Σ) = 2γ_d h′(Q(x|μ,Σ)) Σ^{−1}(x−μ),   x ∈ R^d,

and

(∂/∂Σ^{−1}) log p(x|μ,Σ) = (1/2)Σ − γ_d h′(Q(x|μ,Σ)) (x−μ)(x−μ)^t,   x ∈ R^d.
Thus, for l=1, 2, . . . , m, the stationary equations for the m mixture means are

μ_l = Σ_{k=1}^{N} P_l(x_k|ω,Ψ) h′(Q(x_k|μ_l,Σ_l)) x_k / Σ_{k=1}^{N} P_l(x_k|ω,Ψ) h′(Q(x_k|μ_l,Σ_l)),   (25)

and for the covariances

Σ_l = 2γ_d Σ_{k=1}^{N} P_l(x_k|ω,Ψ) h′(Q(x_k|μ_l,Σ_l)) (x_k−μ_l)(x_k−μ_l)^t / Σ_{k=1}^{N} P_l(x_k|ω,Ψ),   (26)

where Ψ=(μ_1, Σ_1, . . . , μ_m, Σ_m) is the parameter vector of all the means and covariance matrices. These equations suggest the following update formulas for the means,

μ̂_l = Σ_{k=1}^{N} P_l(x_k|ω,Ψ) h′(Q(x_k|μ_l,Σ_l)) x_k / Σ_{k=1}^{N} P_l(x_k|ω,Ψ) h′(Q(x_k|μ_l,Σ_l)),   (27)

and for the covariances,

Σ̂_l = 2γ_d Σ_{k=1}^{N} P_l(x_k|ω,Ψ) h′(Q(x_k|μ_l,Σ_l)) (x_k−μ_l)(x_k−μ_l)^t / Σ_{k=1}^{N} P_l(x_k|ω,Ψ),   (28)

with l=1, 2, . . . , m.
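Putting equations (22), (23), (27) and (28) together, one pass of the iteration for a mixture of spherical α-densities can be sketched as follows. This is only an illustration under the choice h(t)=t^{α/2} of Example 1: there is no regularization, empty components are not handled, and it is assumed that no data point coincides exactly with a component mean (for α<2 the weight h′(Q) is unbounded there).

import numpy as np
from math import gamma, pi, log

def alpha_constants(d, alpha):
    # rho_d(alpha) and gamma_d(alpha) of equation (5)
    g = gamma((d + 2) / alpha) / gamma(d / alpha)
    rho = (alpha / 2) * gamma(d / 2) / pi ** (d / 2) \
        * gamma((d + 2) / alpha) ** (d / 2) / gamma(d / alpha) ** (d / 2 + 1)
    return rho, g

def em_step(X, weights, means, covs, alpha):
    # One pass of (23), (27), (28) for a mixture of spherical alpha-densities.
    N, d = X.shape
    m = len(weights)
    rho, g = alpha_constants(d, alpha)

    # posteriors P_l(x_k | omega, Psi), equation (22)
    log_num = np.empty((N, m))
    Q = np.empty((N, m))
    for l in range(m):
        diff = X - means[l]
        Q[:, l] = g * np.einsum('ij,ij->i', diff @ np.linalg.inv(covs[l]), diff)
        _, logdet = np.linalg.slogdet(covs[l])
        log_num[:, l] = log(weights[l]) + log(rho) - 0.5 * logdet - Q[:, l] ** (alpha / 2)
    log_num -= log_num.max(axis=1, keepdims=True)
    post = np.exp(log_num)
    post /= post.sum(axis=1, keepdims=True)

    hp = (alpha / 2) * Q ** (alpha / 2 - 1)        # h'(Q) for h(t) = t^(alpha/2)
    new_weights = post.mean(axis=0)                # equation (23)/(35)
    new_means, new_covs = [], []
    for l in range(m):
        w = post[:, l] * hp[:, l]
        new_means.append((w[:, None] * X).sum(axis=0) / w.sum())                # equation (27)
        diff = X - means[l]                                                     # previous mean, as in (28)
        S = 2 * g * np.einsum('i,ij,ik->jk', w, diff, diff) / post[:, l].sum()  # equation (28)
        new_covs.append(S)
    return new_weights, new_means, new_covs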




Since the problem under consideration is an example of an incomplete data estimation problem (see, for example, the recent book of Christopher M. Bishop, Neural Networks for Pattern Recognition, Cambridge University Press, 1997, for details), the general philosophy of the EM algorithm as described by A. P. Dempster et al., supra, applies. We briefly review the details of the EM algorithm here as it applies to our context. In this regard, we find the point of view of L. E. Baum, Ted Petrie, George Soules, and Norman Weiss in their article “A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains”, The Annals of Mathematical Statistics, vol. 41, no. 1, pp. 164-171, 1970, particularly illuminating. We adopt the notation from that article and let X be a totally finite measure space with measure μ. Let Θ be a subset of some Euclidean space and for every θ ∈ Θ, we suppose that p(·, θ) is a positive real valued function on X. Let

P(θ) = ∫_X p(x,θ) dμ(x)   (29)

and

Q(θ,θ̄) = ∫_X p(x,θ) log p(x,θ̄) dμ(x).   (30)

Using the concavity of the logarithm, it was shown in L. E. Baum et al., supra, that P(θ)≦P(θ̄) whenever Q(θ,θ)≦Q(θ,θ̄). Let us suppose that we are given N functions of the form

P_l(θ) = ∫_X p_l(x,θ) dμ(x),   l=1, 2, . . . , N.

We form from these functions the generalized likelihood function

L(θ) = ∏_{l=1}^{N} P_l(θ)

and observe that

L(θ) = ∫_{X^N} p_N(x,θ) dμ_N(x),

where

p_N(x,θ) = ∏_{i=1}^{N} p_i(x_i,θ),   x = (x_1, x_2, . . . , x_N) ∈ X^N,   X^N = X×X× . . . ×X, N times,

and

dμ_N(x) = dμ(x_1) dμ(x_2) . . . dμ(x_N).
Therefore, we see that L(θ) is the type of function considered in equation (29). Moreover, the corresponding function Q(θ,θ̄) of equation (30) in this case becomes

Q(θ,θ̄) = ∫_X Σ_{l=1}^{N} (p_l(x,θ)/P_l(θ)) log p_l(x,θ̄) dμ(x).   (31)
To apply these remarks to the mixture model example, we choose X={1, 2, . . . , m} and dμ to be the counting measure on X. Also, we choose

p_l(i,θ) = ω_i p(x_l|λ_i);   i ∈ X,   l=1, 2, . . . , N,

and θ=(ω,Λ), so that

P_l(θ) = P(x_l|ω,Λ),   l=1, 2, . . . , N.

Thus, we see that the generalized likelihood function agrees with the mixture model likelihood and, moreover, the function in equation (31) becomes

Σ_{l=1}^{N} Σ_{i=1}^{m} P_i(x_l|ω,Λ) log(ω̄_i p(x_l|λ̄_i)).   (32)
Thus, our objective is to maximize the quantity given by equation (32) with respect to the parameters ω̄, μ̄ and Σ̄, subject to the constraint that the components of the mixture weight vector ω̄ are nonnegative and add to unity. Using the same computation as in the derivation of the stationary equations (25) and (26), we see that stationary points of the function (32) are described by the following formulas for i=1, 2, . . . , m:

μ̂_i = Σ_{k=1}^{N} P_i(x_k|ω,Ψ) h′(Q(x_k|μ̂_i,Σ̂_i)) x_k / Σ_{k=1}^{N} P_i(x_k|ω,Ψ) h′(Q(x_k|μ̂_i,Σ̂_i)),   (33)

Σ̂_i = 2γ_d Σ_{k=1}^{N} P_i(x_k|ω,Ψ) h′(Q(x_k|μ̂_i,Σ̂_i)) (x_k−μ̂_i)(x_k−μ̂_i)^t / Σ_{k=1}^{N} P_i(x_k|ω,Ψ),   (34)

and

ω̂_i = (1/N) Σ_{k=1}^{N} P_i(x_k|ω,Ψ).   (35)
Let us provide conditions under which the update equations (33) to (35) have a unique solution. Clearly, equation (35) determines the new mixture weight components from the old values of the mixture weights, means and covariances. The equations (33) and (34) may have multiple solutions. However, any such solution provides a stationary point for the function

G(Ψ̃) = Σ_{i=1}^{m} J_i(μ̃_i, Σ̃_i)   (36)

where

Ψ̃ = (μ̃_1, Σ̃_1, μ̃_2, Σ̃_2, . . . , μ̃_m, Σ̃_m)

and

J_i(μ̃, Σ̃) := −(1/2) Σ_{k=1}^{N} P_i(x_k|ω,Ψ) log det Σ̃^{−1} + Σ_{k=1}^{N} P_i(x_k|ω,Ψ) h(Q(x_k|μ̃, Σ̃)).
Note that each summand in equation (36) only depends on μ̃_i, Σ̃_i and not on μ̃_j, Σ̃_j for any j≠i. This means that the minimization can be done for each i, i=1, 2, . . . , m, separately. For each i, the i-th function J_i has a similar form to the negative log-likelihood of equation (8). Since the constants P_i(x_k|ω,Ψ) are all nonnegative (and are independent of μ̃_i, Σ̃_i), we conclude that if the function h satisfies the hypotheses of Lemmas 4 and 5 and the (potentially) reduced data set

T_i := {x_k : P_i(x_k|ω,Ψ) > 0}

satisfies the condition of Lemma 4, then J_i has a unique minimum. We state this observation formally below.




LEMMA 8

Given means {μ_1, μ_2, . . . , μ_m} and variances {Σ_1, Σ_2, . . . , Σ_m} and mixture weights ω=(ω_1, ω_2, . . . , ω_m), let P_i(x|ω,Ψ), i=1, 2, . . . , m, x ∈ R^d, be the corresponding posterior probabilities for spherical mixture models of equation (24), based on the function h. Suppose h satisfies the conditions of Lemmas 4 and 5 and, for each i=1, 2, . . . , m, the data set

T_i = {x_k : P_i(x_k|ω,Ψ) > 0, k=1, 2, . . . , N}

satisfies the condition of Lemma 4. Then the maximization step of the EM algorithm as given by equations (33) to (35) has a unique solution μ̄_1, μ̄_2, . . . , μ̄_m and Σ̄_1, Σ̄_2, . . . , Σ̄_m.




Proof: We already commented on the uniqueness. The existence follows from the proof of Lemma 4.

Note that in the gaussian case, h′(t)=1 for t ∈ R_+, and as a consequence, the right hand side of equation (33) involves “old” values only. Having computed μ̂_i from equation (33), the right hand sides of equations (34) and (35) can be conveniently computed next. Thus, these provide a set of computable iterations. However, in the nongaussian case, the right hand side of equation (33) depends on the new values. The same comment applies to equation (34) as well. This makes the iterations more complicated in the nongaussian case. However, one strategy is to iterate equation (33) alone with a fixed value of Σ, and when the iteration has produced a satisfactory value for the new mean, one can proceed to equation (34) and iterate with this new value of the mean fixed until satisfactory values for the new covariances are obtained. Other variants of this strategy can be thought of. In all likelihood, such iterative strategies may not increase the likelihood, and, therefore, violate the EM principle. Given this dilemma, we take the simplest approach and merely iterate each of the equations (33) and (34) once to obtain new values for the means and the covariances, as in equations (27) and (28).




It is important to extend the discussion of this section to cover the densities appearing in Lemma 2. Keeping in mind the notation used in this lemma, the stationary equations for the means and covariances respectively become, for l=1, 2, . . . , m and r=1, 2,

μ_r^l = Σ_{k=1}^{N} P_l^r(x_k|ω,Ψ) h′(Q(x_r^k|μ_r^l,Σ_r^l)) x_r^k / Σ_{k=1}^{N} P_l^r(x_k|ω,Ψ) h′(Q(x_r^k|μ_r^l,Σ_r^l))   (37)

and

Σ_r^l = 2γ_{d_r} Σ_{k=1}^{N} P_l^r(x_k|ω,Ψ) h′(Q(x_r^k|μ_r^l,Σ_r^l)) (x_r^k−μ_r^l)(x_r^k−μ_r^l)^t / Σ_{k=1}^{N} P_l^r(x_k|ω,Ψ),   (38)

where P_l^r(x|ω,Ψ) denotes the corresponding posterior probability of the l-th mixture component.


The special cases which we shall deal with always make use of covariance matrices that are diagonal. If the means and variances of the mixture components are μ


1


, μ


2


, . . . , μ


m


and σ


1


, σ


2


, . . . , σ


m


, we have for l=1, . . . , m; r=1, . . . , d the tensor produce densities the iterations and where ψ=(μ


1


, σ


1


, μ


2


, σ


2


, . . . , μ


m


, σ


m


) is the vector of all means and variances.




The forms of the update formulas used in our numerical experiment








μ
r
l

=








k
=
1

N





P

l





r


(
x
&RightBracketingBar;


ω


,
ψ

)




h




(


γ
1



(



(


x
r
k

-

μ
r
l


)

2


σ
r
l


)


)




x
r
k









k
=
1

N





P

l





r


(
x
&RightBracketingBar;


ω


,
ψ

)




h




(


γ
1



(



(


x
r
k

-

μ
r
l


)

2


σ
r
l


)


)





,






σ
r
l

=

2


γ
1










k
=
1

N





P

l





r


(
x
&RightBracketingBar;


ω


,
ψ

)




h




(


γ
1



(



(


x
r
k

-

μ
r
l


)

2


σ
r
l


)


)





(


x
r
k

-

μ
r
l


)

2








k
=
1

N





P

l





r


(
x
&RightBracketingBar;


ω


,
ψ

)














are shown below.




TENSOR PRODUCT α-DENSITY

For l=1, 2, . . . , m and r=1, 2, . . . , d, we have:

μ_r^l = Σ_{k=1}^{N} P_l(x_k|ω,Ψ) |x_r^k−μ_r^l|^{α−2} x_r^k / Σ_{k=1}^{N} P_l(x_k|ω,Ψ) |x_r^k−μ_r^l|^{α−2}

and

σ_r^l = γ_1(α) [α Σ_{k=1}^{N} P_l(x_k|ω,Ψ) |x_r^k−μ_r^l|^α / Σ_{k=1}^{N} P_l(x_k|ω,Ψ)]^{2/α}.
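The two displays above can be written compactly in NumPy; the sketch below is illustrative only (it assumes the posteriors have already been computed, e.g. from equation (22), and that no coordinate of a data point coincides exactly with the current mean when α<2).

import numpy as np
from math import gamma

def tensor_alpha_updates(X, post, means, alpha):
    # Coordinatewise updates above for a mixture of tensor product alpha-densities.
    # X: (N, d) data; post: (N, m) posteriors; means: (m, d) current means.
    g1 = gamma(3.0 / alpha) / gamma(1.0 / alpha)            # gamma_1(alpha)
    N, d = X.shape
    m = post.shape[1]
    new_means = np.empty((m, d))
    new_sigmas = np.empty((m, d))
    for l in range(m):
        diff = np.abs(X - means[l])                         # |x_r^k - mu_r^l|, shape (N, d)
        w = post[:, l][:, None] * diff ** (alpha - 2.0)     # weights of the mean update
        new_means[l] = (w * X).sum(axis=0) / w.sum(axis=0)
        num = (post[:, l][:, None] * diff ** alpha).sum(axis=0)
        den = post[:, l].sum()
        new_sigmas[l] = g1 * (alpha * num / den) ** (2.0 / alpha)
    return new_means, new_sigmas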
SPHERICAL α-DENSITY

For l=1, 2, . . . , m and r=1, 2, . . . , d, the means are

μ_r^l = Σ_{k=1}^{N} P_l(x_k|ω,Ψ) (Σ_{s=1}^{d} σ_s^{−1}(x_s^k−μ_s^l)²)^{(α−2)/2} x_r^k / Σ_{k=1}^{N} P_l(x_k|ω,Ψ) (Σ_{s=1}^{d} σ_s^{−1}(x_s^k−μ_s^l)²)^{(α−2)/2}

and the variances are

σ_r^l = α γ_d(α)^{α/2} Σ_{k=1}^{N} P_l(x_k|ω,Ψ) (Σ_{s=1}^{d} σ_s^{−1}(x_s^k−μ_s^l)²)^{(α−2)/2} (x_r^k−μ_r^l)² / Σ_{k=1}^{N} P_l(x_k|ω,Ψ).

Finally, for the clipped gaussian density case we have:


Finally, for the clipped gaussian density case we have:

CLIPPED GAUSSIAN DENSITY

$$\mu_l = \frac{\sum_{k \in G_l} P_l^r(x_k \mid \omega, \psi)\, x_k}{\sum_{k \in G_l} P_l^r(x_k \mid \omega, \psi)},$$

$$\sigma_l = \frac{2}{\gamma_d(\epsilon)}\, \frac{\sum_{k \in G_l} P_l^r(x_k \mid \omega, \psi)\, (x_k - \mu_l)(x_k - \mu_l)^t}{\sum_{k \in G_l} P_l^r(x_k \mid \omega, \psi)},$$

where

$$G_l := \left\{ x_k : Q(x_k \mid \mu_l, \sigma_l) > \epsilon \right\}.$$

In this formula we set h′(t) = 0 for 0 ≤ t ≤ ε, and h′(t) = 1 for t > ε.
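A minimal sketch of the clipped gaussian update for one component (numpy; eps_threshold is the clipping level ε, gamma_d_eps stands for γ_d(ε), the covariance is kept diagonal as in our special cases, and the names are ours):

```python
import numpy as np

def update_clipped_gaussian(X, post, mu, sigma, eps_threshold, gamma_d_eps):
    """X: (N, d) data; post: (N,) responsibilities; mu, sigma: (d,) parameters."""
    Q = np.sum((X - mu) ** 2 / sigma, axis=1)   # Q(x_k | mu, sigma)
    w = post * (Q > eps_threshold)              # only vectors in G_l contribute (h' = 1 there)
    mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
    sigma_new = (2.0 / gamma_d_eps) * (w[:, None] * (X - mu_new) ** 2).sum(axis=0) / w.sum()
    return mu_new, sigma_new
```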




GAUSSIAN DENSITY WITH DOUBLE EXPONENTIAL TAIL

$$\mu_l = \frac{\displaystyle \sum_{k \notin G_l} P_l^r(x_k \mid \omega, \psi)\, x_k + \sum_{k \in G_l} P_l^r(x_k \mid \omega, \psi)\, \tfrac{1}{2}\sqrt{\epsilon} \left( \sum_{s=1}^{d} \sigma_s^{-1} (x_s^k - \mu_s^l)^2 \right)^{-\frac{1}{2}} x_k}{\displaystyle \sum_{k \notin G_l} P_l^r(x_k \mid \omega, \psi) + \sum_{k \in G_l} P_l^r(x_k \mid \omega, \psi)\, \tfrac{1}{2}\sqrt{\epsilon} \left( \sum_{s=1}^{d} \sigma_s^{-1} (x_s^k - \mu_s^l)^2 \right)^{-\frac{1}{2}}},$$

$$\sigma_l = \frac{2}{\gamma_d(\epsilon)}\, \frac{\sum_{k \notin G_l} P_l^r(x_k \mid \omega, \psi)\, (x_k - \mu_l)(x_k - \mu_l)^t}{\sum_{k \notin G_l} P_l^r(x_k \mid \omega, \psi)}.$$
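The corresponding sketch for the gaussian density with a double exponential tail reflects one reading of the formulas above, which are heavily garbled in our source: constant weights on the gaussian core and √(ε/Q)-type weights on the tail region G_l, with the variance estimated from the core; treat it as illustrative only (numpy, names ours):

```python
import numpy as np

def update_gaussian_double_exp_tail(X, post, mu, sigma, eps_threshold, gamma_d_eps):
    """X: (N, d) data; post: (N,) responsibilities; mu, sigma: (d,) parameters."""
    Q = np.sum((X - mu) ** 2 / sigma, axis=1)
    tail = Q > eps_threshold                                 # the set G_l (exponential tail region)
    w_tail = 0.5 * np.sqrt(eps_threshold / np.maximum(Q, 1e-12))
    w = post * np.where(tail, w_tail, 1.0)                   # core vectors keep plain weight
    mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
    wc = post * ~tail                                        # variance from the gaussian core only
    sigma_new = (2.0 / gamma_d_eps) * (wc[:, None] * (X - mu_new) ** 2).sum(axis=0) / wc.sum()
    return mu_new, sigma_new
```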












NUMERICAL EXPERIMENT




The density estimation schemes described here were used to classify acoustic vectors, with the objective of using the maximum likelihood classification scheme in automatic recognition of continuous speech. In this section we report only on our numerical experience with mixture models having spherical α-densities as mixture components. The parameters are estimated by iterating a variant of the EM algorithm as described above, starting from the initialization (based on the m=1 case) described below.




The invention was implemented on an IBM RS/6000 workstation running the AIX operating system (IBM's version of UNIX); however, the invention may be implemented on a variety of hardware platforms including personal computers (PCs), such as IBM's PS/2 PCs, minicomputers, such as IBM's AS/400 computers, or mainframe computers, such as IBM's ES/9000 computers.

FIG. 11 is a flow diagram illustrating the logic of the implementation. The process starts with the input of acoustic data at input block 1201. The input acoustic data is clustered in function block 1202 prior to prototype initialization in function block 1203. At this point, the process enters a loop beginning with function block 1204 with the parametrization of the density function form using, for example, equations (2) and (5). A test is made in decision block 1205 to determine the acceptability of the density functional form. If the test is negative, the density is rejected in function block 1206, and then, in function block 1207, the specification of nonlinear functional equations for new values of weights, means and variances is made. A test is made in decision block 1208 to determine if the update values meet a prescribed tolerance. If not, the update values are rejected in function block 1209, and the process loops back to function block 1206; otherwise, the update values are accepted in function block 1210. At this point, the process loops back to function block 1204 for the next iteration. When the test in decision block 1205 is positive, the density is accepted in function block 1211, and the final density functional form is stored for decoding in function block 1212.
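A minimal skeleton of this control flow (the function and parameter names are ours; update_step and density_ok stand for the computations of blocks 1207 and 1205, and params is assumed to be a tuple of numpy arrays):

```python
def estimate_density(data, params, update_step, density_ok, tol, max_outer=100, max_inner=100):
    """Iterate parameter updates until they meet the prescribed tolerance
    (block 1208), re-test the density form (block 1205), and return the
    accepted form for storage and decoding (blocks 1211-1212)."""
    for _ in range(max_outer):
        if density_ok(data, params):                 # block 1205: density functional form acceptable
            return params                            # blocks 1211-1212: accept and store for decoding
        for _ in range(max_inner):                   # blocks 1206-1207: reject, compute new values
            new_params = update_step(data, params)   # new weights, means and variances
            change = max(abs(a - b).max() for a, b in zip(new_params, params))
            params = new_params
            if change < tol:                         # block 1208: update values within tolerance
                break
    return params
```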




DESCRIPTION OF THE SPEECH DATA




The density estimation scheme described here was used to classify acoustic vectors associated with speech waveforms, with the objective of incorporating the results into a large vocabulary automatic continuous speech recognizer. Digitized speech sampled at a rate of 16 kHz was considered. The training corpus of speech data consisted of 35 hours of Wall Street Journal read speech from 284 speakers. A frame consists of a segment of speech of duration 25 msec, and produces a 39 dimensional acoustic cepstral vector via the following process, which is standard in the speech recognition literature. Alternative schemes are also used for speech recognition, but we do not use them in the present example. Frames are advanced every 10 msec to obtain succeeding acoustic vectors.




First, the magnitudes of the discrete Fourier transform of the samples of speech data in a frame are considered on a logarithmically warped frequency scale. Next, these amplitude values themselves are transformed to a logarithmic scale, and subsequently a rotation in the form of a discrete cosine transform is applied. The latter two steps are motivated by the logarithmic sensitivity of human hearing to frequency and amplitude. The first 13 components of the resulting vector are retained. The differences between the corresponding components of this 13 dimensional vector and the vector preceding it, as well as the vector succeeding it, are then appended to it to obtain the 39 dimensional cepstral acoustic vector.
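A rough numpy/scipy sketch of a front end of this general shape (log amplitudes on a warped frequency scale, a discrete cosine transform rotation, the first 13 coefficients, and difference features); the filterbank and the handling of edge frames here are illustrative assumptions, not the patent's exact processing:

```python
import numpy as np
from scipy.fftpack import dct

def cepstra_39(frames, warped_filterbank):
    """frames: (T, n_fft) magnitude spectra of 25 msec frames advanced every 10 msec;
    warped_filterbank: (n_filters, n_fft) weights realizing the warped frequency scale.
    Returns (T, 39) vectors: 13 cepstra plus differences to the preceding and
    succeeding frames (edge frames are handled crudely by wrap-around)."""
    log_amplitudes = np.log(frames @ warped_filterbank.T + 1e-10)        # warped scale, then log amplitude
    cepstra = dct(log_amplitudes, type=2, norm='ortho', axis=1)[:, :13]  # DCT rotation, keep first 13
    prev_diff = cepstra - np.roll(cepstra, 1, axis=0)                    # difference with preceding vector
    next_diff = np.roll(cepstra, -1, axis=0) - cepstra                   # difference with succeeding vector
    return np.concatenate([cepstra, prev_diff, next_diff], axis=1)
```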




As in supervised learning tasks, we assume that these vectors are labeled according to the basic sounds they correspond to. In fact, the set of 46 phonemes or phones is subdivided into a set of 126 different variants. These are further subdivided into more elemental sounds or allophones by using the method of decision trees, depending on the context in which they occur (see, for example, Frederick Jelenik, Statistical Methods for Speech Recognition, MIT Press, 1997; L. R. Bahl, P. V. Desouza, P. S. Gopalkrishnan, and M. A. Picheny, "Context dependent vector quantization for continuous speech recognition", Proceedings of IEEE Int. Conf. on Acoustics Speech and Signal Processing, pp. 632-635, 1993; and Leo Breiman, Classification and Regression Trees, Wadsworth International, Belmont, Calif., 1983, for more details). The resulting tree, in our experiment, had a total of approximately 3500 leaves, which determine the class labels of the acoustic vectors mentioned above.




INITIALIZATION




For all the models we consider here, the variant of the EM algorithm we use is initialized in the following manner. The vectors corresponding to a given leaf are grouped into m groups according to the way they are labeled. There is no effort to cluster them as a means to enhance performance. In each group, we find the maximum likelihood estimator for the mixture component densities being used. This is done by using the same variant of the EM iteration we use; i.e., for the whole set of vectors, we set m=1 in this iteration. The maximum likelihood iteration is always initialized, independently of the component densities being used, at the mean and variance of the vectors of the group whose maximum likelihood estimator we are trying to find.
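A minimal sketch of this initialization (numpy, names ours; the mixture weights here are set proportional to group sizes, a choice of ours that the text does not specify):

```python
import numpy as np

def initialize_components(X, labels):
    """X: (N, d) vectors of one leaf; labels: (N,) group index in 0..m-1.
    Each group's sample mean and variance seed the m = 1 EM variant that
    produces the maximum likelihood estimate for that component."""
    groups = np.unique(labels)
    weights = np.array([(labels == g).mean() for g in groups])      # our choice: proportional to group size
    means = np.stack([X[labels == g].mean(axis=0) for g in groups])
    variances = np.stack([X[labels == g].var(axis=0) for g in groups])
    return weights, means, variances
```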




CLASSIFICATION EXPERIMENT




The density estimation schemes were first tested by using all the vectors corresponding to a certain phone, labeled say AA1, consisting of a total of approximately 41,000 vectors, each of 13 dimensions. The decision tree generates a set of 15 leaves for this phone. The parameters μ, Σ, and the mixture weights ω corresponding to each of these classes (i.e., the leaves) are obtained by using the proposed density estimation scheme for a given value of α. The effectiveness of the proposed scheme was first tested by computing the percentage of correctly classified vectors based on the estimated probability densities for the classes (leaves in our case). A vector was assigned to a class if the likelihood for the vector computed from the estimated probability densities was the largest. A subset of 1000 vectors from each leaf, drawn from the vectors used for estimating the probability densities, was also used for this purpose. The percentage of correctly classified vectors for different values of α is shown in Table 1 in the case of spherical α-densities.
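A minimal sketch of this maximum likelihood classification rule (numpy, names ours; log_density is any of the estimated mixture densities above, evaluated per leaf):

```python
import numpy as np

def classify(X, leaf_params, log_density):
    """X: (N, d) test vectors; leaf_params: list of per-leaf parameters;
    log_density(x, params): log mixture likelihood of one vector under one leaf.
    Each vector is assigned to the leaf of largest likelihood."""
    scores = np.array([[log_density(x, p) for p in leaf_params] for x in X])
    return scores.argmax(axis=1)   # index of the most likely leaf for each vector
```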

















TABLE 1

Leaf No.     α = 0.5    α = 1.0    α = 2.0    α = 3.0    α = 4.0
0            928        932        936        935        919
1            738        748        765        732        742
2            886        892        898        874        865
3            887        898        882        880        871
4            872        891        902        894        897
5            671        680        698        711        689
6            857        863        870        871        862
7            763        759        741        727        714
8            734        745        741        727        714
9            839        839        836        843        823
10           789        800        792        777        776
11           837        851        876        861        855
12           422        437        476        458        442
13           816        806        817        812        792
14           689        673        669        670        671
% Overall    78.19      78.76      79.33      78.65      77.80














Similar experiments were also used to investigate the behavior of our EM-type iterative scheme for estimating the parameter vectors describing the densities. For this purpose, the percentage of correctly classified vectors for each leaf is tabulated in Table 2 as a function of the number of iterations for gaussian mixture components.

















TABLE 2

Leaf No.     Iter. 5    Iter. 10    Iter. 20    Iter. 40    Iter. 80
0            936        940         947         947         943
1            765        768         792         800         796
2            898        919         906         907         908
3            882        877         894         906         908
4            902        888         902         902         904
5            698        722         734         744         738
6            870        872         875         881         892
7            741        760         756         771         772
8            741        730         740         743         745
9            836        835         834         848         843
10           792        810         810         799         791
11           876        893         881         881         882
12           476        513         523         535         538
13           817        827         846         845         858
14           669        682         703         709         704
% Overall    79.3       80.2        81.0        81.5        81.5














SPEECH RECOGNITION EXPERIMENT




Similar experiments were performed on the entire Wall Street Journal data base. Here, the entire set of (approximately 3500) leaves were each modeled by a mixture of nongaussian densities by estimating the ωs, μs and Σs associated with them. The system was tested on acoustic vectors collected from a set of 10 test speakers, each uttering 42 sentences. For every test vector presented, a likelihood rank was assigned to each of the 3500 leaves, which can be interpreted as the probability of that elemental sound. The IBM speech decoder used the ranks of these leaves to produce the decoded word string. The percentage word error rates are tabulated in Table 3 for a range of values of α, and are graphically shown in FIG. 10.




















TABLE 3

value of α      0.25    0.375   0.50    0.75    1.00    2.00    3.00    4.00
% word error    8.50    8.18    8.38    8.24    8.56    8.80    8.47    8.92














The fact that a value of α less than 2 (i.e., a nongaussian density) provides fewer recognition errors is clear from this plot. In fact, a relative improvement of about 7% over the gaussian is obtained.




The experiment demonstrates the value of nongaussian models for mixture model density estimation of acoustic feature vectors used in speech recognition. This suggests the potential for improved recognition accuracy when nongaussian parametric models are used.




While the invention has been described in terms of a single preferred embodiment, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the appended claims.



Claims
  • 1. A computer implemented process for automatic machine recognition of speech comprising the steps of: inputting acoustic data; modeling input acoustic data using mixtures of nongaussian statistical probability densities constructed from a univariate function; using a maximum likelihood model of speech data, iteratively generating values of mixture weights, means and variances until an acceptable density is found; and storing a final density function form for decoding.
  • 2. The computer implemented process for automatic machine recognition of speech recited in claim 1 wherein the mixtures of nongaussian statistical probability densities are of exponential type exp(−h(t)).
  • 3. The computer implemented process for automatic machine recognition of speech recited in claim 2 wherein h(t)=t^(α/2), where α is a non-negative real number.
  • 4. The computer implemented process for automatic machine recognition of speech recited in claim 1 wherein multivariate densities are used.
  • 5. The computer implemented process for automatic machine recognition of speech recited in claim 4 wherein the multivariate densities are a tensor product construction.
  • 6. The computer implemented process for automatic machine recognition of speech recited in claim 4 wherein the multivariate densities are a spherical construction.
  • 7. The computer implemented process for automatic machine recognition of speech recited in claim 4 wherein the multivariate densities are a clipped Gaussian density.
  • 8. The computer implemented process for automatic machine recognition of speech recited in claim 4 wherein the multivariate densities are a Gaussian density with a double exponential tail.
US Referenced Citations (12)
Number Name Date Kind
4783804 Juang et al. Nov 1988
5148489 Erell et al. Sep 1992
5271088 Bahler Dec 1993
5473728 Luginbuhl et al. Dec 1995
5694342 Stein Dec 1997
5706402 Bell Jan 1998
5737490 Austin et al. Apr 1998
5754681 Watanabe et al. May 1998
5790758 Streit Aug 1998
5839105 Ostendorf et al. Nov 1998
5857169 Seide Jan 1999
5864810 Digalakis et al. Jan 1999
Non-Patent Literature Citations (10)
Entry
Godsill et al, “Robust Noise Reduction For Speech and Audio Signals”, IEEE, pp. 625-628, 1996.*
Laskey, “A Bayesian Approach to Clustering and Classification”, IEEE pp. 179-183, 1991.*
Tugnait, “Parameter Identifiability of Multichannel ARMA Models of Linear Non-Gaussian Signals Via cumulant Matching”, IEEE, IV 441-444, 1994.*
Frangoulis, “Vector Quantization of the Continuous Distributions of an HMM Speech Recogniser base on Mixtures of Continuous Distributions”, IEEE, pp. 9-12, 1989.*
Pham et al, “Maximum Likelihood Estimation of a Class of Non-Gaussian Densities with Application to Lp Deconvolution”, IEEE transactions on Acoustics, Speech, and Signal Processing, vol. 37, #1, Jan. 1989.*
Basu et al, “Maximum Likelihood Estimates for Exponential Type Density Families”, Acoustics, Speech, and Signal Processing, Mar. 1999.*
Beadle et al, “Parameter Estimation for Non-Gaussian Autoregressive Processes”, IEEE, pp. 3557-3560, 1997.*
Young et al, “The HTK Book”, pp. 3-44, Entropic Cambridge Research Laboratory, Dec. 1997.*
Kuruoglu et al, “Nonlinear Autoregressive Modeling of Non-Gaussian Signals Using Lp Norm Techniques”, IEEE, pp. 3533-3536, 1997.*
Zhuang et al, “Gaussian Mixture Density Modeling, Decomposition, and Applications”, IEEE Transactions on Image Processing, vol. 5, #9, pp. 1293-1302, Sep. 1996.