Penalized maximum likelihood estimation methods, the Baum Welch algorithm and diagonal balancing of symmetric matrices for the training of acoustic models in speech recognition

Information

  • Patent Grant
  • Patent Number
    6,374,216
  • Date Filed
    Monday, September 27, 1999
  • Date Issued
    Tuesday, April 16, 2002
Abstract
A nonparametric family of density functions formed by histogram estimators for modeling acoustic vectors is used in automatic recognition of speech. A Gaussian kernel is set forth in the density estimator. When the densities are found for all the basic sounds in a training stage, an acoustic vector is assigned the phoneme label corresponding to the highest likelihood; this assignment forms the basis for decoding acoustic vectors into text.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




The present invention generally relates to methods of speech recognition and, more particularly, to nonparametric density estimation of high dimensional data for use in training models for speech recognition.




2. Background Description




In the present invention, we are concerned with nonparametric density estimation of high dimensional data. The invention is driven by its potential application to training speech data, where traditionally only parametric methods have been used. Parametric models typically lead to large scale optimization problems associated with a desire to maximize the likelihood of the data. In particular, mixture models of gaussians are used for training acoustic vectors for speech recognition, and the parameters of the model are obtained by using K-means clustering and the EM algorithm, see F. Jelinek, Statistical Methods for Speech Recognition, The MIT Press, Cambridge Mass., 1998. Here we consider the possibility of maximizing the penalized likelihood of the data as a means to identify nonparametric density estimators, see I. J. Good and R. A. Gaskin, "Nonparametric roughness penalties for probability densities," Biometrika 58, pp. 255-77, 1971. We develop various mathematical properties of this point of view, propose several algorithms for the numerical solution of the optimization problems we encounter, and we report on some of our computational experience with these methods. In this regard, we integrate within our framework a technique that is central in many aspects of the statistical analysis of acoustic data, namely the Baum Welch algorithm, which is especially important for the training of Hidden Markov Models, see again the book by F. Jelinek, cited above.




Let us recall the mechanism in which density estimation of high dimensional data arises in speech recognition. In this context, a principal task is to convert acoustic waveforms into text. The first step in the process is to isolate important features of the waveform over small time intervals (typically 25 ms). These features, represented by a vector x∈R^d (where d is usually 39), are then identified with context dependent sounds, for example, phonemes such as "AA", "AE", "K", "H". Strings of such basic sounds are then converted into words using a dictionary of acoustic representations of words. For example, the phonetic spelling of the word "cat" is "K AE T". In an ideal situation the feature vectors generated by the speech waveform would be converted into a string of phonemes "K . . . K AE . . . AE T . . . T" from which we can recognize the word "cat" (unfortunately, a phoneme string seldom matches the acoustic spelling exactly).




One of the important problems associated with this process is to identify a phoneme label for an individual acoustic vector x. Training data is provided for the purpose of classifying a given acoustic vector. A standard approach for classification in speech recognition is to generate initial "prototypes" by K-means clustering and then refine them by using the EM algorithm based on mixture models of gaussian densities, cf. F. Jelinek, cited above. Moreover, in the decoding stage of speech recognition (formation of Hidden Markov Models) the output probability density functions are most commonly assumed to be a mixture of gaussian density functions, cf. L. E. Baum and J. A. Eagon, "An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model of ecology," Bull. Amer. Math. Soc. 73, pp. 360-63, 1967; L. A. Liporace, "Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree," IEEE Trans. on Information Theory 5, pp. 729-34, 1982; R. A. Gopinath, "Constrained maximum likelihood modeling with gaussian distributions," Broadcast News Transcription and Understanding Workshop, 1998.




SUMMARY OF THE INVENTION




According to this invention, we adopt the commonly used approach to classification and think of the acoustic vectors for a given sound as a random variable whose density is estimated from the data. When the densities are found for all the basic sounds (this is the training stage) an acoustic vector is assigned the phoneme label corresponding to the highest scoring likelihood (probability). This information is the basis of the decoding of acoustic vectors into text.




Since in speech recognition x is typically a high dimensional vector and each basic sound has only several thousand data vectors to model it, the training data is relatively sparse. Recent work on the classification of acoustic vectors, see S. Basu and C. A. Micchelli, "Maximum likelihood estimation for acoustic vectors in speech recognition," Advanced Black-Box Techniques For Nonlinear Modeling: Theory and Applications, Kluwer Publishers (1998), demonstrates that mixture models with non-gaussian mixture components are useful for parametric density estimation of speech data. We explore the use of nonparametric techniques. Specifically, we use the penalized maximum likelihood approach introduced by Good and Gaskin, cited above. We combine the penalized maximum likelihood approach with the use of the Baum Welch algorithm, see L. E. Baum, T. Petrie, G. Soules and N. Weiss, "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains," The Annals of Mathematical Statistics 41, No. 1, pp. 164-71, 1970; Baum and Eagon, cited above, often used in speech recognition for training Hidden Markov Models (HMMs). (This algorithm is a special case of the celebrated EM algorithm as described in, e.g., A. P. Dempster, N. M. Laird and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society 39(B), pp. 1-38, 1977.)




We begin by recalling that one of the most widely used nonparametric density estimators has the form

$$f_n(x) = \frac{1}{n h^d} \sum_{i \in Z_n} k\!\left(\frac{x - x_i}{h}\right), \qquad x \in R^d, \tag{1}$$
where Z_n={1, . . . , n}, k is some specified function, and {x_i : i∈Z_n} is a set of observations in R^d of some unknown random variable, cf. T. Cacoullos, "Estimates of a multivariate density," Annals of the Institute of Statistical Mathematics 18, pp. 178-89, 1966; E. Parzen, "On the estimation of a probability density function and the mode," Annals of the Institute of Statistical Mathematics 33, pp. 1065-76, 1962; M. Rosenblatt, "Remarks on some nonparametric estimates of a density function," Annals of Mathematical Statistics 27, pp. 832-37, 1956. It is well known that this estimator converges almost surely to the underlying probability density function (PDF) provided that the kernel k is strictly positive on R^d, ∫_{R^d} k(x)dx=1, h→0, nh→∞, and n→∞. The problem of how best to choose n and h for a fixed kernel k for the estimator (1) has been thoroughly discussed in the literature, cf. L. Devroye and L. Györfi, Nonparametric Density Estimation: The L_1 View, John Wiley & Sons, New York, 1985.
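As a concrete illustration of the estimator (1), the following Python sketch (our own illustration, not part of the patent; the sample, the bandwidth h and the evaluation grid are arbitrary placeholders) evaluates the Gaussian-kernel estimator for scalar data (d=1):

```python
import numpy as np

def parzen_estimate(x, data, h):
    """Kernel density estimate f_n(x) of equation (1) with a Gaussian kernel,
    written here for scalar data (d = 1)."""
    n = data.size
    u = (x - data) / h                           # (x - x_i) / h for every sample point
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return kernel.sum() / (n * h)                # (1 / (n h^d)) * sum_i k((x - x_i)/h)

# toy usage: 500 samples from a standard normal, evaluated on a small grid
rng = np.random.default_rng(0)
data = rng.standard_normal(500)
grid = np.linspace(-3.0, 3.0, 7)
estimates = [parzen_estimate(t, data, h=0.3) for t in grid]
print(np.round(estimates, 4))
```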




In this invention, we are led, by the notion of penalized maximum likelihood estimation (PMLE), to density estimators of the form

$$f(x) = \sum_{i \in Z_n} c_i\, k(x, x_i), \qquad x \in R^d, \tag{2}$$
where k(x,y), x,y∈R^d, is the reproducing kernel in some Hilbert space H, cf. S. Saitoh, Theory of Reproducing Kernels and its Applications, Pitman Research Notes in Mathematical Analysis 189, Longman Scientific and Technical, Essex, UK, 1988.




Among the methods we consider, the coefficients in this sum are chosen to maximize the homogeneous polynomial

$$\Pi(Kc) := \prod_{i \in Z_n} \Big( \sum_{j \in Z_n} K_{ij} c_j \Big), \qquad c = (c_1, \ldots, c_n)^T, \tag{3}$$
over the simplex

$$S_n = \{ c : c \in R_+^n,\ e^T c = 1 \}, \tag{4}$$






where e=(1, . . . ,1)^T ∈ R^n,

$$R_+^n = \{ c : c = (c_1, \ldots, c_n)^T,\ c_i \ge 0,\ i \in Z_n \}, \tag{5}$$




the positive orthant, and K is the matrix

$$K = \{K_{ij}\}_{i,j \in Z_n} = \{ k(x_i, x_j) \}_{i,j \in Z_n}, \tag{6}$$






and accomplish this numerically by the use of the Baum Welch algorithm, cf. L. E. Baum, T. Petrie, G. Soules and N. Weiss, cited above; L. E. Baum and J. A. Eagon, cited above. A polynomial in the factored form (3) appears in the method of deleted interpolation which occurs in language modeling, see L. R. Bahl, P. F. Brown, P. V. de Souza, R. L. Mercer, and D. Nahamoo, "A fast algorithm for deleted interpolation," Proceedings Eurospeech 3, pp. 1209-12, 1991. In the context of geometric modeling such polynomials have been called lineal polynomials, see A. S. Cavaretta and C. A. Micchelli, "Design of curves and surfaces by subdivision algorithms," in Mathematical Methods in Computer Aided Geometric Design, T. Lyche and L. Schumaker (eds.), Academic Press, Boston, 1989, 115-53, and C. A. Micchelli, Mathematical Aspects of Geometric Modeling, CBMS-NSF Regional Conference Series in Applied Mathematics 65, SIAM, Philadelphia, 1995. A comparison will be made of the Baum Welch algorithm to the degree raising algorithm, see C. A. Micchelli and A. Pinkus, "Some remarks on nonnegative polynomials on polyhedra," in Probability, Statistics and Mathematics: Papers in Honor of Samuel Karlin, T. W. Anderson, K. B. Athreya and D. L. Iglehart (eds.), Academic Press, San Diego, pp. 163-86, 1989, which can also be used to find the maximum of a homogeneous polynomial over a simplex. We also elaborate upon the connection of these ideas to the problem of the diagonal similarity of a symmetric nonsingular matrix with nonnegative elements to a doubly stochastic matrix, see M. Marcus and M. Newman, "The permanent of a symmetric matrix," Abstract 587-85, Amer. Math. Soc. Notices 8, 595; R. Sinkhorn, "A relationship between arbitrary positive matrices and doubly stochastic matrices," Ann. Math. Statist. 38, pp. 439-55, 1964. This problem has attracted active interest in the literature, see M. Bacharach, "Biproportional Matrices and Input-Output Change," Monograph 16, Cambridge University Press, 1970; L. M. Bergman, "Proof of the convergence of Sheleikhovskii's method for a problem with transportation constraints," USSR Computational Mathematics and Mathematical Physics 1(1), pp. 191-204, 1967; S. Brualdi, S. Parter and H. Schneider, "The diagonal equivalence of a non-negative matrix to a stochastic matrix," J. Math. Anal. and Appl. 16, pp. 31-50, 1966; J. Csima and B. N. Datta, "The DAD theorem for symmetric non-negative matrices," Journal of Combinatorial Theory 12(A), pp. 147-52, 1972; G. M. Engel and H. Schneider, "Algorithms for testing the diagonal similarity of matrices and related problems," SIAM Journal of Algorithms in Discrete Mathematics 3(4), pp. 429-38, 1982; T. E. S. Raghavan, "On pairs of multidimensional matrices," Linear Algebra and Applications 62, pp. 263-68, 1984; G. M. Engel and H. Schneider, "Matrices diagonally similar to a symmetric matrix," Linear Algebra and Applications 29, pp. 131-38, 1980; J. Franklin and J. Lorenz, "On the scaling of multidimensional matrices," Linear Algebra and Applications 114/115, pp. 717-35, 1989; D. Hershkowitz, U. G. Rothblum and H. Schneider, "Classification of nonnegative matrices using diagonal equivalence," SIAM Journal on Matrix Analysis and Applications 9(4), pp. 455-60, 1988; S. Karlin and L. Nirenberg, "On a theorem of P. Nowosad," Mathematical Analysis and Applications 17, pp. 61-67, 1967; A. W. Marshall and I. Olkin, "Scaling of matrices to achieve specified row and column sums," Numerische Mathematik 12, pp. 83-90, 1968; M. V. Menon and H. Schneider, "The spectrum of a nonlinear operator associated with a matrix," Linear Algebra and its Applications 2, pp. 321-34, 1969; P. Novosad, "On the integral equation Kf=1/f arising in a problem in communication," Journal of Mathematical Analysis and Applications 14, pp. 484-92, 1966; U. G. Rothblum, "Generalized scaling satisfying linear equations," Linear Algebra and Applications 114/115, pp. 765-83, 1989; U. G. Rothblum and H. Schneider, "Scalings of matrices which have prescribed row sums and column sums via optimization," Linear Algebra and Applications 114/115, pp. 737-64, 1989; U. G. Rothblum and H. Schneider, "Characterization of optimal scaling of matrices," Mathematical Programming 19, pp. 121-36, 1980; U. G. Rothblum, H. Schneider and M. H. Schneider, "Scaling matrices to prescribed row and column maxima," SIAM J. Matrix Anal. Appl. 15, pp. 1-14, 1994; B. D. Saunders and H. Schneider, "Flows on graphs applied to diagonal similarity and diagonal equivalence for matrices," Discrete Mathematics 24, pp. 205-20, 1978; B. D. Saunders and H. Schneider, "Cones, graphs and optimal scaling of matrices," Linear and Multilinear Algebra 8, pp. 121-35, 1979; M. H. Schneider, "Matrix scaling, entropy minimization and conjugate duality. I. Existence conditions," Linear Algebra and Applications 114/115, pp. 785-813, 1989; R. Sinkhorn, cited above; R. Sinkhorn and P. Knopp, "Concerning nonnegative matrices and doubly stochastic matrices," Pacific J. Math. 21(2), pp. 343-48, 1967, and has diverse applications in economics, operations research, and statistics.




Several of the algorithms described here were tested numerically. We describe their performance both on actual speech data and data generated from various standard probability density functions. However, we restrict our numerical experiments to scalar data and will describe elsewhere statistics on word error rate on the Wall Street Journal speech data base, as used in S. Basu and C. A. Micchelli, cited above.











BRIEF DESCRIPTION OF THE DRAWINGS




The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings, in which:





FIG. 1

is a convergence plot of the iteration methods given by Penalized MLE and Baum-Welch;





FIG. 2A

is a plot of the Baum Welch estimator for the multiquadric and the cosine matrix;





FIG. 2B

is a plot of the degree raising estimator for the multiquadric and the cosine matrix;





FIG. 3

is a plot of the PMLE, the Parzen estimator and the actual probability density for n=500, h=0.3 and the gaussian kernel;





FIG. 4

is a plot of the Baum Welch density estimator, the Parzen estimator and the actual probability density for n=500, h=0.3 and the gaussian kernel;





FIG. 5A

is a plot of the Baum Welch estimator for n=500, h=0.3 and the gaussian kernel;





FIG. 5B

is a plot of the Parzen estimator for n=500, h=0.3 and the gaussian kernel;





FIG. 6A

is a plot of the Baum Welch estimator and the actual probability density for n=500, h=0.6 and the gaussian kernel;





FIG. 6B

is a plot of the Baum Welch estimator and the actual probability density for n=2000, h=0.3 and the gaussian kernel;





FIG. 7A

is a plot of the Baum Welch estimator for n=2000, h=0.3 and the spline kernel f_i for i=1;

FIG. 7B

is a plot of the Baum Welch estimator for n=2000, h=0.3 and the spline kernel f_i for i=2;

FIG. 8A

are plots of the histogram, the Baum Welch estimator and the Parzen estimator for leaf 1 of ARC AA_1, dimension 0;

FIG. 8B

are plots of the histogram, the Baum Welch estimator and the Parzen estimator for leaf 2 of ARC AA_1, dimension 0;

FIG. 8C

are plots of the histogram, the Baum Welch estimator and the Parzen estimator for leaf 7 of ARC AA_1, dimension 0;

FIG. 8D

are plots of the histogram, the Baum Welch estimator and the Parzen estimator for leaf 11 of ARC AA_1, dimension 0;

FIG. 8E

are plots of the histogram, the Baum Welch estimator and the Parzen estimator for leaf 5 of ARC AA_1, dimension 25; and,

FIG. 8F

are plots of the histogram, the Baum Welch estimator and the Parzen estimator for leaf 2 of ARC AA_1, dimension 25.











DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION




Penalized Maximum Likelihood Estimation




Let x_1, . . . ,x_n be independent observations in R^d from some unknown random variable with probability density function (PDF) ƒ. The likelihood function of the data

$$L(f) := \prod_{i \in Z_n} f(x_i) \tag{7}$$

is used to assess the value of a specific choice for ƒ. In a parametric statistical context a functional form of ƒ is assumed. Thus, we suppose that ƒ=ƒ(•, a) where a is an unknown parameter vector that determines ƒ. For example, ƒ may be a mixture model of gaussian PDFs with unknown means, covariances and mixture weights. The parametric Maximum Likelihood Estimator (MLE) point of view tries to maximize the likelihood function over a. Such a data dependent choice of a is then used to specify the density ƒ. If no information on ƒ is known (or assumed) then it is clear that the likelihood function can be made arbitrarily large with a PDF that is concentrated solely at the data points x_i, i∈Z_n. When such density functions are used for classification algorithms the phenomenon of overtraining results: only actual training data can be classified and no others. To remedy this problem it has been suggested that one penalize the likelihood function for oscillations in its derivatives, see I. J. Good and R. A. Gaskin, cited above. It is this point of view which we study here for the purpose of classification of acoustic vectors in speech recognition.




Let us recall the setup for penalized likelihood estimation. We let H be a Hilbert space of real-valued functions on R^d for which point evaluation is a continuous linear functional. In other words, H is a reproducing kernel Hilbert space, cf. S. Saitoh, cited above. Therefore, there exists a real-valued kernel k(x,y), x,y∈R^d, such that for every x∈R^d the function k(x,•) is in H and for every ƒ in H we have

$$f(x) = \langle k(x, \cdot),\ f \rangle \tag{8}$$

where ⟨•,•⟩ represents the inner product on H. Recall that for any x_1, . . . ,x_n in R^d the matrix

$$K = \{ k(x_i, x_j) \}_{i,j \in Z_n} \tag{9}$$

is symmetric and positive semi-definite. Moreover, when the point evaluation functionals at the data x_1, . . . ,x_n are linearly independent this matrix is positive definite.




Corresponding to this Hilbert space H and data x_1, . . . ,x_n the penalized likelihood function is defined to be

$$P(f) = \Big( \prod_{i \in Z_n} f(x_i) \Big)\, e^{-\frac{1}{2}\|f\|^2}, \tag{10}$$

where ∥·∥ is the norm on H. Maximizing this function over H for specific Hilbert spaces has been considered in I. J. Good and R. A. Gaskin, cited above; J. R. Thompson and R. Tapia, Nonparametric Probability Density Estimation, The Johns Hopkins University Press, Baltimore, 1978; J. R. Thompson and R. Tapia, Nonparametric Function Estimation, Modeling and Simulation, SIAM, Philadelphia, 1970. For example, the PMLE for scalar data relative to a Sobolev norm has been obtained in J. R. Thompson and R. Tapia, cited above.

Since our motivation in this paper comes from speech recognition, the value of n is typically 5,000 and the dimension d is 39. Moreover, density estimators are needed for as many as 4,000 groups of vectors, cf. S. Basu and C. A. Micchelli, cited above; F. Jelinek, cited above. Although the ideas from C. A. Micchelli and F. I. Utreras, "Smoothing and interpolation in a convex subset of a Hilbert Space," SIAM Journal of Scientific and Statistical Computing 9, pp. 728-46, 1988; C. A. Micchelli and F. I. Utreras, "Smoothing and interpolation in a convex subset of a semi-Hilbert Space," Mathematical Modeling and Numerical Methods 4, pp. 425-40, 1991, should be helpful to solve the large scale optimization problem of maximizing P over all PDFs in a suitably chosen Hilbert space, this seems to be computationally expensive. Thus, our strategy is to remove some of the constraints on ƒ and maximize the absolute value of P(ƒ) over all ƒ∈H. As we shall see, this is an easier task. After determining the parametric form of such an ƒ, we will then impose the requirement that it be a PDF.




Let us begin by first recalling that the penalized maximum likelihood does indeed have a maximum over all of H. To this end, we first point out that there exists a positive constant C such that for all ƒ in H

$$|P(f)| \le C\, \|f\|^n\, e^{-\frac{1}{2}\|f\|^2}. \tag{11}$$

Consequently, we conclude that the function P is bounded above on H by some positive constant B. Let {ƒ_l : l∈N}, N={1,2, . . . }, be any sequence of functions in H such that lim_{l→∞} P(ƒ_l)=B. Then, for all but a finite number of elements of this sequence, we have that

$$\|f_l\| \le \left( \frac{C\, n!}{B} \right)^{1/n}. \tag{12}$$

Therefore some subsequence of {ƒ_l : l∈N} converges weakly in H to a maximum of P.




Theorem 1: Suppose H is a reproducing kernel Hilbert space with reproducing kernel k(•, •). If |P(ƒ)|=max{|P(g)| : g∈H} for some ƒ in H then there exist constants c_i, i∈Z_n, such that for all x∈R^d

$$f(x) = \sum_{i \in Z_n} c_i\, k(x, x_i). \tag{13}$$

Proof: Let y_i=ƒ(x_i), i∈Z_n. Since ƒ maximizes |P(h)|, h∈H, we conclude that y_i≠0 for i∈Z_n. Let g be any other function in H such that g(x_i)=y_i, i∈Z_n. By the definition of ƒ we have that

$$|g(x_1) \cdots g(x_n)|\, e^{-\frac{1}{2}\|g\|^2} \le |f(x_1) \cdots f(x_n)|\, e^{-\frac{1}{2}\|f\|^2},$$

from which we conclude that

$$\|f\| = \min\{ \|g\| : g(x_i)=y_i,\ i \in Z_n,\ g \in H \}.$$

The fact that ƒ has the desired form now follows from a well-known analysis of this problem. We recall these details here. For any constants a_1, . . . ,a_n we have that

$$\Big| \sum_{i \in Z_n} a_i y_i \Big| = \Big| \Big\langle \sum_{i \in Z_n} a_i k(x_i, \cdot),\ g \Big\rangle \Big| \le \Big\| \sum_{i \in Z_n} a_i k(x_i, \cdot) \Big\|\, \|g\|,$$

which implies that

$$\|g\| \ge \max\left\{ \frac{|a^T y|}{\big\| \sum_{i \in Z_n} a_i k(x_i, \cdot) \big\|} : a=(a_1, \ldots, a_n)^T \in R^n \right\}. \tag{14}$$

To achieve equality above we choose constants c_1, . . . ,c_n so that the function f̃ defined by

$$\tilde f(x) = \sum_{i \in Z_n} c_i\, k(x, x_i), \qquad x \in R^d,$$

satisfies the equations y_i=f̃(x_i), i∈Z_n. Therefore, we have that

$$\|\tilde f\|^2 = c^T K c,$$

where K={k(x_i,x_j)}_{i,j∈Z_n} and c=(c_1, . . . ,c_n)^T. Since Kc=y, where y=(y_1, . . . ,y_n)^T, we also have that

$$\frac{c^T y}{\big\| \sum_{i \in Z_n} c_i k(x_i, \cdot) \big\|} = \|\tilde f\|.$$












Although the PMLE method allows for a nonparametric view of density estimation it is interesting to see its effect on a standard parametric model, for instance a univariate normal with unknown mean and variance relative to the Sobolev norm on R. To this end, recall that the m-th Sobolev norm is defined to be

$$\|f\|_m^2 = \int_R |f^{(m)}(t)|^2\, dt.$$

For the normal density with mean μ and variance σ², given by

$$f(t) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(t-\mu)^2}{2\sigma^2}}, \qquad t \in R,$$

we get that

$$\|f\|_m^2 = 2\rho_m\, \sigma^{-2m-1},$$

where

$$\rho_m = 2^{-2m-3}\, \pi\, \Gamma\!\left( m + \tfrac{1}{2} \right).$$

Then the PMLE estimates for the mean and variance are given by

$$\hat\mu = \frac{1}{n} \sum_{i \in Z_n} x_i$$

and

$$\hat\sigma = v,$$

where v is the unique positive root of the equation

$$v^{2m-1}\, (v^2 - S^2) = \frac{2m+1}{n}\, \rho_m$$

and

$$S^2 = \frac{1}{n} \sum_{i \in Z_n} (x_i - \hat\mu)^2.$$













Note that v is necessarily greater than S, but as n→∞ it converges to S.
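A minimal numerical sketch of these estimates (our own illustration, assuming a value of ρ_m is supplied; only the stationarity equation above is used): μ̂ is the sample mean, and σ̂ is found by bisection on the increasing function v^{2m−1}(v²−S²)−(2m+1)ρ_m/n.

```python
import numpy as np

def pmle_normal(x, m, rho_m):
    """Penalized MLE for a univariate normal under the m-th Sobolev penalty.
    Returns (mu_hat, sigma_hat) where sigma_hat = v solves
        v**(2m-1) * (v**2 - S**2) = (2m + 1) * rho_m / n."""
    n = x.size
    mu_hat = x.mean()
    s2 = np.mean((x - mu_hat) ** 2)
    rhs = (2 * m + 1) * rho_m / n

    def g(v):
        return v ** (2 * m - 1) * (v * v - s2) - rhs

    lo = np.sqrt(s2)                 # g(lo) = -rhs < 0, since v must exceed S
    hi = lo + 1.0
    while g(hi) < 0.0:               # expand until the root is bracketed
        hi *= 2.0
    for _ in range(100):             # plain bisection
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return mu_hat, 0.5 * (lo + hi)

rng = np.random.default_rng(1)
sample = rng.normal(loc=2.0, scale=1.5, size=400)
print(pmle_normal(sample, m=1, rho_m=0.1))
```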




Penalized Maximum Likelihood Estimators




In the previous section we provided a justification for our density estimator to have the form

$$f(x) = \sum_{i \in Z_n} c_i\, k(x_i, x), \qquad x \in R^d, \tag{15}$$

where k(x,y), x,y∈R^d, is a reproducing kernel for a Hilbert space of functions on R^d. In general, this function is neither nonnegative on R^d nor does it have integral one. We take the point of view that in applications the kernel will be chosen rather than the Hilbert space H. Thus, we will only consider kernels which are nonnegative, that is, k(x,y)≥0, x,y∈R^d.

We note in passing that there are noteworthy examples of Hilbert spaces which have nonnegative reproducing kernels. For example, given any polynomial

$$q(t) = q_0 + q_1 t + \cdots + q_m t^m, \qquad t \in R, \tag{16}$$

which has a positive leading coefficient and only negative zeros, it can be confirmed that the Hilbert space of functions ƒ on R with finite norm

$$\|f\|^2 = \sum_{j=0}^{m} q_j \int_R |f^{(j)}(t)|^2\, dt \tag{17}$$

is a reproducing kernel Hilbert space with a nonnegative reproducing kernel.
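For instance, taking q(t)=1+t in (16) (positive leading coefficient, single zero at t=−1) gives the norm ∫f²+∫(f′)²; it is a standard fact — recalled here as our own illustration rather than a statement from the patent — that the corresponding reproducing kernel on R is k(x,y)=½e^{−|x−y|}, which is nonnegative. The sketch below builds the matrix K of (6) for a few scalar points and checks that it is symmetric, entrywise nonnegative and positive definite.

```python
import numpy as np

def exp_kernel(x, y):
    """Reproducing kernel k(x, y) = 0.5 * exp(-|x - y|) associated with the
    norm ||f||^2 = int f^2 + int (f')^2, i.e. the case q(t) = 1 + t of (16)-(17)."""
    return 0.5 * np.exp(-np.abs(x - y))

points = np.array([-1.0, 0.0, 0.3, 2.0, 2.5])
K = exp_kernel(points[:, None], points[None, :])   # Gram matrix of equation (6)

print(np.allclose(K, K.T))          # symmetric
print((K >= 0).all())               # nonnegative entries
print(np.linalg.eigvalsh(K).min())  # smallest eigenvalue is positive
```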




With a given nonnegative kernel we now view the density ƒ in equation (15) as being parametrically defined and now discuss the problem of determining its coefficients c_1, . . . ,c_n. Our first choice is to determine these parameters by substituting the functional form of (15) into the penalized maximum likelihood function (10) and maximizing the resulting expression. To this end, we let

$$b_i = \int_{R^d} k(x, x_i)\, dx, \qquad i \in Z_n,$$

and introduce the set

$$S_n(b) = \{ c : c \in R_+^n,\ b^T c = 1 \}.$$

We recall that the n×n matrix

$$K = \{ k(x_i, x_j) \}_{i,j \in Z_n}$$

is symmetric, positive definite, and has nonnegative elements. Under these conditions our concern is to maximize the function

$$P_K(c) = \Pi(Kc)\, e^{-\frac{1}{2} c^T K c} \tag{18}$$

over the set S_n(b), where for any x=(x_1, . . . ,x_n)^T∈R^n we set

$$\Pi(x) = \prod_{i \in Z_n} x_i.$$

We also use

$$\|x\|^2 = \sum_{i \in Z_n} x_i^2$$

for the euclidean norm of x. We adopt the convention that multiplication/division of vectors is done component-wise. In particular, when all the components of a vector y=(y_1, . . . ,y_n)^T are nonzero we set

$$\frac{x}{y} := \left( \frac{x_1}{y_1}, \ldots, \frac{x_n}{y_n} \right)^T.$$

For the vector e/y we use the shorthand notation y^{-1}, set

$$x \cdot y := (x_1 y_1, \ldots, x_n y_n)^T$$

for multiplication of vectors, and use the notation S_n for the set S_n(e), which is the standard simplex in R^n. If ƒ is a scalar valued function and x a vector all of whose coordinates lie in its domain, we set ƒ(x):=(ƒ(x_1), . . . ,ƒ(x_n))^T. When we write x≥y we mean this in a coordinate-wise sense.




Note that P_K(c)=P(ƒ) where ƒ is the function in equation (15) and P is the penalized likelihood function (10). Also, we use the notation S_n for the set S_n(e), which is the standard simplex in R^n.

Theorem 2: Let K be a positive definite matrix with nonnegative elements. Then the function P_K has a unique maximum on any closed convex subset C of R_+^n.

Proof: The existence of a maximum is argued just as in the case of the existence of the maximum of the penalized likelihood function P in equation (10) over the Hilbert space H. To demonstrate that the maximum is unique we let P* be the maximum value of P_K on C. Our hypothesis ensures that P* is positive. Suppose the function P_K attains P* at two distinct vectors c^1 and c^2 in C. Consequently, the vectors Kc^1 and Kc^2 are in int R_+^n. Therefore, for any 0≤t≤1 the vector K(tc^1+(1−t)c^2) is likewise in int R_+^n. A direct computation of the Hessian of log P_K at x, denoted by ∇² log P_K(x), verifies, for any vectors x,y∈R^n with Kx∈(R\{0})^n, that

$$y^T \nabla^2 \log P_K(x)\, y = -\Big\| \frac{Ky}{Kx} \Big\|^2 - y^T K y < 0.$$

Choosing y=c^1−c^2 and x=tc^1+(1−t)c^2 we conclude from the above formula that the function log P_K(tc^1+(1−t)c^2) is strictly concave for 0≤t≤1. But this contradicts the fact that its values at the endpoints are the same.




Remark: Observe that without the condition that K is nonsingular, P_K will in general have more than one maximum. For example, when n=2, b=(1,1)^T and

$$K = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},$$

P_K is everywhere equal to e^{−½} on S_2(b). In general, the proof above reveals the fact that if P_K achieves its maximum at two vectors c^1 and c^2 in C then K(c^1−c^2)=0.

If the maximum of P_K on S_n(b) occurs in the interior of S_n(b) at the vector c then it is uniquely determined by the stationary equation

$$K\big( c - (Kc)^{-1} \big) = \lambda\, b,$$

where λ is a scalar given by the equation

$$\lambda = c^T K c - n.$$

In the case that K is a diagonal matrix, that is, K=diag(k_{11}, . . . ,k_{nn}) where k_{ii}>0, i∈Z_n, the maximum of P_K on S_n(b) occurs at the vector c whose coordinates are all positive and given by the equation

$$c_i = \frac{\lambda + \sqrt{\lambda^2 + 4 k_{ii}}}{2 k_{ii}}, \qquad i \in Z_n,$$

where λ is the unique real number that satisfies the equation

$$1 = \sum_{i \in Z_n} \frac{\lambda + \sqrt{\lambda^2 + 4 k_{ii}}}{2 k_{ii}}. \tag{19}$$













We remark that the right hand side of equation (19) is an increasing function of λ which is zero when λ=−∞ and infinity when λ=∞.
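Because of this monotonicity, λ in (19) can be located by simple bisection; the following sketch (our own illustration of the diagonal case, with arbitrary diagonal entries) does so and then forms the coefficients c_i.

```python
import numpy as np

def diagonal_pmle_coefficients(k_diag, tol=1e-12):
    """Maximize P_K over S_n(b) for K = diag(k_11, ..., k_nn), k_ii > 0,
    by solving equation (19) for lambda and then forming the c_i."""
    k = np.asarray(k_diag, dtype=float)

    def coeffs(lam):
        return (lam + np.sqrt(lam * lam + 4.0 * k)) / (2.0 * k)

    def rhs(lam):                      # right hand side of (19); increasing in lambda
        return coeffs(lam).sum()

    lo, hi = -1.0, 1.0                 # bracket the root of rhs(lambda) = 1
    while rhs(lo) > 1.0:
        lo *= 2.0
    while rhs(hi) < 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rhs(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return lam, coeffs(lam)

lam, c = diagonal_pmle_coefficients([4.0, 9.0, 25.0])
print(lam, c, c.sum())                 # the coefficients sum to one
```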




Theorem 3: Suppose K is an n×n positive definite matrix with nonnegative elements such that Ke=e. Then the vector (1/n)e is the unique maximum of P_K on S_n.

Proof: For any c∈S_n the arithmetic geometric inequality implies that

$$\{ P_K(c) \}^{1/n} \le \frac{1}{n}\, e^T K c\; e^{-\frac{1}{2n} c^T K c}.$$

Moreover, since e^T K c = e^T c = 1 the Cauchy Schwarz inequality implies that 1 ≤ n\,c^T K c. Therefore

$$\{ P_K(c) \}^{1/n} \le \frac{1}{n}\, e^{-\frac{1}{2n^2}} = \Big\{ P_K\Big( \frac{1}{n} e \Big) \Big\}^{1/n}.$$

.











We now consider the following problem. Starting with a vector c=(c_1, . . . ,c_n) in int S_n(b) which is not the maximum of P_K we seek a vector v=(v_1, . . . ,v_n)^T in int S_n(b) such that P_K(v)>P_K(c). That is, "updating" c with v will increase P_K and therefore eventually converge to the maximum of P_K on S_n(b). To this end, we consider the quantity log P_K(v)−log P_K(c). We shall bound it from below by using Jensen's inequality, which ensures for any vector a∈int R_+^n and z∈S_n that log z^T a ≥ z^T log a. To make use of this fact we define, for any fixed i∈Z_n, vectors z^i∈S_n and a∈int R_+^n by the equations

$$(z^i)_j = \frac{K_{ij}\, c_j}{(Kc)_i}, \qquad j \in Z_n,$$

and

$$a = \frac{v}{c}.$$

Also, we set

$$y = \sum_{i \in Z_n} z^i$$

and note that the vector (1/n)y is in int S_n since K is a nonsingular matrix with nonnegative elements and the vector c is in R_+^n. Therefore, we get that

$$\begin{aligned}
\log P_K(v) - \log P_K(c) &= \sum_{i \in Z_n} \log \frac{(Kv)_i}{(Kc)_i} - \frac{1}{2} v^T K v + \frac{1}{2} c^T K c \\
&= \sum_{i \in Z_n} \log \big( (z^i)^T a \big) - \frac{1}{2} v^T K v + \frac{1}{2} c^T K c \\
&\ge y^T \log a - \frac{1}{2} v^T K v + \frac{1}{2} c^T K c.
\end{aligned}$$

This inequality suggests that we introduce the auxiliary function

$$W(v) = y^T \log v - \tfrac{1}{2} v^T K v, \qquad v \in \mathrm{int}\, R_+^n,$$

and rewrite the above inequality in the form

$$\log P_K(v) - \log P_K(c) \ge W(v) - W(c).$$

The function W is strictly concave on int R_+^n and tends to minus infinity on its boundary. Therefore it has a unique maximum in int S_n(b). Using the principle of Lagrange multipliers there exists a constant γ such that the vector v at which W takes its maximum in int S_n(b) is characterized by the equation

$$\frac{y}{v} - Kv - \gamma\, b = 0.$$

Taking the inner product of both sides of this equation with v and using the fact that v∈int S_n(b) yields the equation

$$\gamma = n - v^T K v.$$








We formalize these remarks in the following theorem.




Theorem 4: Let K be a symmetric positive definite matrix with nonnegative elements and b∈int R_+^n. Given any vector c∈int S_n(b) there exists a unique vector v∈int S_n(b) satisfying the equations

$$v \cdot Kv = K\big( (Kc)^{-1} \big) \cdot c - \gamma\, v \cdot b, \tag{20}$$

where

$$\gamma = n - v^T K v. \tag{21}$$

This vector has the property that P_K(v)>P_K(c) as long as c is not the unique maximum of P_K on S_n(b).

The function H defined by the equation H(c)=v, v∈int S_n(b), maps S_n(b) into itself. By the uniqueness of v satisfying (20) we see that H is continuous. The mapping H has a continuous extension to S_n(b) and by the Brouwer fixed point theorem has a fixed point in S_n(b) at some vector u. This vector satisfies the equation

$$u \cdot Ku = K\big( (Ku)^{-1} \big) \cdot u - \gamma\, u \cdot b.$$

These equations by themselves do not define the unique maximum of P_K on S_n(b). If we call this vector c then it is the unique solution of the equations

$$K\big( (Kc)^{-1} - c \big) = \gamma\, b - z,$$

where γ is a scalar given by the equation

$$\gamma = n - c^T K c$$

and z is a vector in R_+^n satisfying the equation z·c=0. To derive this fact we recall that a concave function ƒ has its maximum on S_n(b) at x if and only if there is a γ∈R and a z∈R_+^n such that z^T x=0 and ∇ƒ(x)=γb−z. Specializing this general fact to the case at hand verifies the above fact. In general the maximum of P_K may occur on the boundary of S_n(b).

The iteration embodied in the above result is quite appealing as it guarantees an increase in the penalized likelihood function P_K at each step unless we have reached its maximum on S_n(b). However, to compute the updated vector seems computationally expensive and so we consider other methods for maximizing P_K over S_n(b).




To this end, we recall that whenever a function F is a homogeneous polynomial with positive coefficients the Baum Welch algorithm says that the update formula

$$v = \frac{c \cdot \nabla F(c)}{(b \cdot c)^T\, \nabla F(c)}$$

increases F on S_n(b), i.e. F(v)>F(c) whenever c∈S_n(b) is not the maximum of F on S_n(b), see L. E. Baum and J. A. Eagon, cited above. Since P_K is not a polynomial, applying the Baum Welch iteration to it will generally not increase it. One modification of the Baum Welch iteration with which we have some computational experience starts with a positive parameter σ and defines

$$v = \frac{c \cdot K\big( (Kc)^{-1} - \sigma e \big)}{n - \sigma\, c^T K c}.$$

This update formula is inexpensive to use and our numerical experiments indicate it gives good results provided that σ is chosen with care.




Using the strong duality theorem for convex programs, cf. S. Whittle, "Optimization under Constraints," Wiley Interscience, 1971, the dual minimization problem for the primal convex program of minimizing −log P_K over the domain

$$W := S_n(b) \cap A^{-1}(\mathrm{int}\, R_+^n)$$

is

$$\min\{ -\log P_K(y) : y \in W \} = \max\{ \theta(u,t) : u \in R_+^n,\ t \in R \},$$

where

$$\theta(u,t) := \min\{ \tfrac{1}{2} x^T A x - u^T x + t\,(b^T x - 1) : x \in A^{-1}(\mathrm{int}\, R_+^n) \}.$$






However, the dual problem does not seem to offer any advantage over the primal. Interior point methods seem to have a strong potential to handle the primal problem when K is a large sparse matrix. It is interesting to note that our primal problem falls into the problem class studied in K. Tok, “Primal-dual path-following algorithms for determinant maximization problems with linear matrix inequalities,” Computational Optimization and Applications, Kluwer Academic Publishers, Boston.




We end this section with a number of comments on the problem of maximizing P_K over the positive orthant R_+^n. This problem has some unexpected connections with the problem of the diagonal similarity of a symmetric matrix with nonnegative elements to a doubly stochastic matrix. We record below the information we need about this problem.

The following lemma is well known, cf. M. Marcus and M. Newman, cited above; R. Sinkhorn, cited above, and is important to us. Recall that an n×n matrix A is said to be strictly copositive provided that x^T A x>0 for all x∈R_+^n\{0}, cf. C. A. Micchelli and A. Pinkus, cited above, and references therein.

Lemma 1: Let A be an n×n symmetric strictly copositive matrix. For any vector y∈int R_+^n there exists a unique vector x∈int R_+^n such that

$$x \cdot Ax = y. \tag{22}$$






Proof: First we prove the existence of a vector x which satisfies (22). To this end, we follow A. W. Marshall and I. Olkin, cited above, and consider the set

$$H_n(y) := \{ u : u \in \mathrm{int}\, R_+^n,\ \Pi(u^y) = 1 \},$$

where u^y=(u_1^{y_1}, . . . ,u_n^{y_n})^T, y=(y_1, . . . ,y_n)^T, and u=(u_1, . . . ,u_n)^T. By hypothesis, there exists some ρ>0 such that for all u∈R_+^n the inequality u^T Au ≥ ρ\,u^T u holds. Thus, the function ½u^T Au takes on its minimum value on H_n(y) at some z. Therefore, by Lagrange multipliers there is a constant μ such that

$$Az = \mu\, y/z.$$

Since 0<z^T Az=(e^T y)μ we see that μ>0 and so the vector we want is given by x=z/√μ. This establishes the existence of x. Note that the vector

$$u = \frac{x}{(\Pi(x^y))^{1/(e^T y)}}$$

is the unique vector in H_n(y) which minimizes ½u^T Au on H_n(y).

The uniqueness of x in (22) is established next. Suppose there are two vectors z^1 and z^2 which satisfy equation (22). Let B be an n×n matrix whose elements are defined to be

$$B_{ij} = \frac{z_i^2\, A_{ij}\, z_j^2}{y_i}, \qquad i,j \in Z_n.$$

Then (22) implies that

$$Be = e \tag{23}$$

and

$$Bv = v^{-1}, \tag{24}$$

when v:=z^1/z^2. Let M:=max{v_j : j∈Z_n}=v_r and m:=min{v_j : j∈Z_n}=v_s for some r, s∈Z_n. From (23) and (24) we obtain that

$$\frac{1}{m} = \frac{1}{v_s} = (Bv)_s \le M \tag{25}$$

and also

$$\frac{1}{M} = \frac{1}{v_r} = (Bv)_r \ge m. \tag{26}$$

Thus we conclude that Mm=1 and from (25) that (B(v−Me))_s=0. Since B_{ss}>0 we get m=v_s=M. Therefore, m=M=1 which means v=e; in other words, z^1=z^2.




We shall apply this lemma to the Baum Welch update for the problem of maximizing the function P_K given in (18) over the positive orthant R_+^n, but first we observe the following fact.

Theorem 5: Let K be a symmetric positive definite matrix with nonnegative entries. Then P_K achieves its (unique) maximum on R_+^n at the (unique) vector x=(x_1, . . . ,x_n)^T in int R_+^n which satisfies the equation

$$x \cdot Kx = e. \tag{27}$$

Proof: Let c be any vector in R^n such that Kc∈(R\{0})^n. Since

$$\nabla P_K(c) = P_K(c)\, K\big( (Kc)^{-1} - c \big),$$

we see that the vector which satisfies (27) is a stationary point of P_K. We already proved that P_K is log concave and so the result follows.

Note that when K satisfies the hypothesis of Theorem 3 the vector x in equation (27) is e and moreover it furnishes the maximum of P_K on S_n. In the next result we clarify the relationship between the minimum problem in Lemma 1 and the problem of maximizing P_K over R_+^n. For this purpose, we use the notation H_n for the set H_n(e).




Theorem 6: Suppose that K is an n×n symmetric positive definite matrix with nonnegative elements. Let

$$p_K := \max\{ P_K(c) : c \in R_+^n \}$$

and

$$q_K := \min\{ \tfrac{1}{2} u^T K u : u \in H_n \}.$$

Then

$$p_K = e^{-n/2} \left( \frac{2 q_K}{n} \right)^{n/2}.$$

Proof: Let x be the unique vector in the interior of R_+^n such that

$$x \cdot Kx = e.$$

Since x is a stationary point of P_K we have that

$$p_K = \Pi(Kx)\, e^{-\frac{1}{2} x^T K x}.$$

Moreover, the vector

$$u = \frac{x}{(\Pi(x))^{1/n}}$$

in H_n is the unique solution of the minimum problem which defines the constant q_K. Thus,

$$2 q_K = u^T K u = n\, (\Pi(x))^{-2/n}$$

and

$$p_K = \frac{e^{-n/2}}{\Pi(x)}.$$

Eliminating Π(x) from these equations proves the result.




This result justifies the exchange of the max and min in the following result. To this end, we introduce the semi-elliptical region

$$E_K^n = \{ c : c \in R_+^n,\ c^T K c = 1 \}.$$

Theorem 7: Suppose K is an n×n symmetric positive definite matrix with nonnegative elements. Then

$$\max\{ \min\{ u^T K c : u \in H_n \} : c \in E_K^n \} = \min\{ \max\{ u^T K c : c \in E_K^n \} : u \in H_n \}.$$






To prove this fact we use the following consequence of the arithmetic geometric inequality.




Lemma 2: For any x∈R_+^n we have that

$$(\Pi(x))^{1/n} = \min\Big\{ \frac{1}{n}\, u^T x : u \in H_n \Big\}.$$

Proof: By the arithmetic geometric inequality we have

$$\frac{1}{n}\, u^T x \ge \Big( \prod_{i \in Z_n} u_i x_i \Big)^{1/n} = \Pi(x)^{1/n} \tag{28}$$

provided that Π(u)=1. This inequality is sharp for the specific choice u=Π(x)^{1/n} x^{-1}.




Proof of Theorem 7: First, we observe that

$$\begin{aligned}
p_K^{1/n} &= \max\Big\{ \max\Big\{ \Pi(Kc)^{1/n}\, e^{-\frac{1}{2n} c^T K c} : c \in t\,E_K^n \Big\} : t \in R_+ \Big\} \\
&= \max\Big\{ t\, e^{-\frac{t^2}{2n}} : t \in R_+ \Big\} \cdot \max\big\{ \Pi(Kc)^{1/n} : c \in E_K^n \big\} \\
&= \frac{1}{\sqrt{n e}}\; \max\big\{ \min\{ u^T K c : u \in H_n \} : c \in E_K^n \big\}.
\end{aligned}$$

Next, we observe by the Cauchy Schwarz inequality that

$$\begin{aligned}
\sqrt{\frac{2 q_K}{n}} &= \frac{1}{\sqrt{n}}\, \min\big\{ \sqrt{u^T K u} : u \in H_n \big\} \\
&= \frac{1}{\sqrt{n}}\, \min\big\{ \max\{ u^T K c : c \in E_K^n \} : u \in H_n \big\}.
\end{aligned}$$















We remark that (27) implies that x∈√n E_K^n when K satisfies the conditions of Theorem 5. Let us show directly that this is the case for any maximum of P_K on R_+^n even when K is only symmetric semi-definite. This discussion shall lead us to a Baum Welch update formula for maximizing P_K on R_+^n even when K is only symmetric semi-definite.

Suppose v is any maximum of P_K in R_+^n. Then for any t≥0

$$P_K(tv) = t^n\, \Pi(Kv)\, e^{-t^2 \gamma} \le P_K(v) = \Pi(Kv)\, e^{-\gamma},$$

where γ:=½v^T Kv. Thus the function h(t):=P_K(tv), t∈R_+, achieves its maximum on R_+ at t=1. However, a direct computation confirms that its only maximum occurs at

$$t = \sqrt{\frac{n}{2\gamma}}.$$

Thus, we have confirmed that v∈√n E_K^n. Let

$$\hat c = \frac{v}{\sqrt{n}} \tag{29}$$

and observe that for every vector c∈E_K^n

$$\Pi(Kc) \le \Pi(K\hat c).$$

Thus, the vector ĉ maximizes the function M_K(c):=Π(Kc) over E_K^n. Conversely, if ĉ maximizes M_K over E_K^n, then the vector v=√n ĉ given by equation (29) maximizes P_K over R_+^n. Thus it suffices to consider the problem of maximizing the homogeneous polynomial M_K over the semi-elliptical set E_K^n.




Before turning our attention to this problem, let us remark that the function M_K has a unique maximum on E_K^n whenever K is a positive definite matrix with nonnegative elements. To see this, we observe that the Hessian of log M_K at any vector c∈E_K^n with Kc∈int R_+^n is negative definite. This fact is verified by direct computation as in the proof of Theorem 2, see also Theorem 10. Thus, M_K has a unique maximum on the convex set

$$\hat E_K^n = \{ c : c \in R_+^n,\ c^T K c \le 1 \}.$$

Suppose that the maximum of M_K in Ê_K^n occurs at u while its maximum on E_K^n is taken at v. Then, by the homogeneity of M_K we have that

$$M_K(v) \le M_K(u) \le (u^T K u)^{n/2}\, M_K(v) \le M_K(v).$$

Hence u^T Ku=1 and v must be the maximum of M_K in Ê_K^n as well. That is, u=v and v is unique. We now study the problem of maximizing M_K over E_K^n in the following general form.




To this end, let G be any function of the form

$$G(x) = \sum_{i \in Z_+^n} g_i\, x^i, \qquad x \in E_K^n,$$

where the g_i, i∈Z_+^n, are nonnegative scalars and x^i denotes the monomial x_1^{i_1}···x_n^{i_n}. We require that the sum be convergent in a neighborhood of E_K^n. Also, there should be at least one j∈int Z_+^n such that g_j>0. Certainly the function M_K has all of these properties when K is a nonsingular matrix with nonnegative elements, which covers the case that interests us.

For x∈E_K^n we have that

$$x \cdot \nabla G(x) \ge j\, g_j\, x^j,$$

and so x·∇G(x)∈int R_+^n. Consequently, the vector z defined by the equation

$$z = \frac{x \cdot \nabla G(x)}{x^T \nabla G(x)}$$

is in int S_n. Using Lemma 1 there is a unique vector y in the interior of E_K^n such that y·Ky=z. We claim that

$$G(x) \le G(y). \tag{30}$$

To confirm this we consider the function

$$Q(v) := \frac{\sum_{i \in Z_+^n} (i^T \log v)\, g_i\, x^i}{x^T \nabla G(x)} = z^T \log v, \qquad v \in \mathrm{int}\, E_K^n.$$

Since z∈int R_+^n, Q has a maximum in int E_K^n. Suppose that the maximum of Q occurs at w∈int E_K^n. Thus, by Lagrange multipliers there is a constant ρ such that

$$\frac{z}{w} = \rho\, Kw.$$

Since w∈E_K^n and z∈S_n we have ρ=1. Thus, we see that w=y is the unique maximum of Q. Hence we have shown that

$$Q(y) \ge Q(x), \tag{31}$$

where the inequality in (31) is strict unless y=x.

Next, we use Jensen's inequality to conclude that

$$\log \frac{G(y)}{G(x)} = \log\Big\{ \sum_{i \in Z_+^n} \Big( \frac{y}{x} \Big)^i \frac{g_i\, x^i}{G(x)} \Big\} \ge \sum_{i \in Z_+^n} \Big( \log \Big( \frac{y}{x} \Big)^i \Big) \frac{g_i\, x^i}{G(x)} = \big( Q(y) - Q(x) \big)\, \frac{x^T \nabla G(x)}{G(x)}.$$

Hence, we have established (30) and moreover have demonstrated that equality holds in (30) if and only if y=x, that is, if and only if x∈E_K^n satisfies the equation

$$Kx = \frac{\nabla G(x)}{x^T \nabla G(x)}.$$

This is the equation for the stationary values of G over the set E_K^n. In the case that interests us, namely, for G=M_K, the stationary equations have a unique solution in E_K^n at the maximum of M_K. For this case we summarize our observations in the next result.




Theorem 8: Suppose K is a symmetric positive definite matrix with nonnegative elements. Given an x∈√n int E_K^n, choose y∈√n int E_K^n such that

$$y \cdot (Ky) = x \cdot K\big( (Kx)^{-1} \big).$$

Then P_K(y)>P_K(x) unless x=y.

Using this result we generate a sequence of vectors x^k∈√n int E_K^n, k∈N, by the equation

$$x^{k+1} \cdot Kx^{k+1} = x^k \cdot K\big( (Kx^k)^{-1} \big), \tag{32}$$

so that the sequence {x^k : k∈N} either terminates in a finite number of steps at the vector satisfying (27) or converges to it as k→∞. Moreover, in the latter case P_K(x^k) monotonically increases to the maximum of P_K in R_+^n. This theorem gives a "hill climbing" iterative method of the type used to train Hidden Markov Models, cf. L. E. Baum, T. Petrie, G. Soules, and N. Weiss, cited above; F. Jelinek, cited above. However, equation (32) must be solved numerically. Therefore, it is much simpler to use equation (27) directly to find the maximum of P_K on the set R_+^n.




A naive approach to find the unique solution to (27) is to define vectors c^r, r∈N, by the iteration

$$c^{r+1} = (Kc^r)^{-1}, \qquad r \in N. \tag{33}$$

In general this iteration will diverge. For example, when K=4I and the initial vector is chosen to be c^1=e, then for all r∈N we have that c^r = 2^{-(1+(-1)^r)} e, which obviously does not converge to the maximum of P_K, which occurs at the vector whose coordinates are all equal to one half.

To avoid this difficulty we consider the iteration

$$c^{r+1} = \frac{c^r}{\sqrt{c^r \cdot Kc^r}}, \qquad r \in N. \tag{34}$$

Note that iteration (34) maps int R_+^n into itself. Therefore, we always initialize the iteration with a vector in int R_+^n. The thought which led us to equation (34) was motivated by the apparent deficiencies of the iteration (33). Later we realized the connection of equation (34) to the Sinkhorn iteration, see R. Sinkhorn, cited above; R. Sinkhorn and P. Knopp, cited above, which we now explain.




We can rewrite this vector iteration as a matrix equation. To this end, we set

$$K_{ij}^r := c_i^r\, K_{ij}\, c_j^r, \qquad i,j \in Z_n,\ r \in N.$$

Then equation (34) implies that

$$K_{ij}^{r+1} = \frac{K_{ij}^r}{\sqrt{s_i^r\, s_j^r}}, \qquad i,j \in Z_n,\ r \in N,$$

where

$$s_i^r := \sum_{j \in Z_n} K_{ij}^r, \qquad i \in Z_n.$$

Thus, the components of the vectors c^r, r∈N, generate a sequence of positive definite matrices K^r, r∈N, initialized with the matrix K, by a balanced column and row scaling. This iteration preserves the symmetry of all the matrices, in contrast to the method of R. Sinkhorn, cited above; R. Sinkhorn and P. Knopp, cited above.
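The following sketch (our own minimal implementation with an arbitrary symmetric positive definite test matrix) runs the scaling iteration (34) starting from e until the stationary condition (27), x·Kx=e, holds to machine precision:

```python
import numpy as np

def balance(K, max_iter=500, tol=1e-12):
    """Iterate c^{r+1} = c^r / sqrt(c^r * Kc^r), equation (34), starting from e,
    so that the limit x satisfies x * Kx = e (equation (27))."""
    n = K.shape[0]
    c = np.ones(n)
    for _ in range(max_iter):
        w = c * (K @ c)                 # w^r = c^r . (K c^r), component-wise
        if np.max(np.abs(w - 1.0)) < tol:
            break
        c = c / np.sqrt(w)
    return c

K = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.5]])          # symmetric positive definite, positive entries
x = balance(K)
print(np.allclose(x * (K @ x), np.ones(3)))   # stationary equation (27) holds
```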




Next, we define for r∈N, w^r = c^r·(Kc^r), m_r = min{w_i^r : i∈Z_n}, M_r = max{w_i^r : i∈Z_n}, and observe the following fact.

Lemma 3: Let K be an n×n matrix with nonnegative entries. Then the sequence M_r/m_r, r∈N, is nonincreasing. Moreover, if k_{ij}>0 for i,j∈Z_n then it is strictly decreasing.

Proof: For any r∈N we have the equations

$$w^{r+1} = c^{r+1} \cdot Kc^{r+1} = \frac{c^r}{\sqrt{c^r \cdot Kc^r}} \cdot K\Big( \frac{c^r}{\sqrt{c^r \cdot Kc^r}} \Big) = \frac{c^r}{\sqrt{w^r}} \cdot K\Big( \frac{c^r}{\sqrt{w^r}} \Big), \tag{35}$$

from which it follows that

$$w^{r+1} \le \frac{1}{\sqrt{m_r}}\; \frac{c^r}{\sqrt{w^r}} \cdot Kc^r = \frac{1}{\sqrt{m_r}}\, \sqrt{w^r} \tag{36}$$

and also

$$w^{r+1} \ge \frac{1}{\sqrt{M_r}}\, \sqrt{w^r}. \tag{37}$$

In particular, we get

$$M_{r+1} \le \sqrt{\frac{M_r}{m_r}}, \tag{38}$$

$$m_{r+1} \ge \sqrt{\frac{m_r}{M_r}}, \tag{39}$$

and we conclude that the ratio M_r/m_r is nonincreasing in r. Furthermore, if all the components of the vector c^r and elements of the matrix K are positive we see that the sequence is decreasing.




Note that the sequence of vectors c^r, r∈N, given by (34) is bounded independent of r if k_{ii}>0, i∈Z_n. Specifically, we set κ=min{k_{ii} : i∈Z_n} and observe for r∈N that c^r ≤ κ^{-1/2} e. In addition, the maximum norm satisfies the bound ∥c^r∥_∞ ≥ ρ^{-1/2} where ρ is the largest eigenvalue of K. To see this we recall that

$$\rho = \max\Big\{ \min\Big\{ \frac{(Kx)_i}{x_i} : i \in Z_n,\ x_i \neq 0 \Big\} : x = (x_1, \ldots, x_n)^T \in R_+^n \setminus \{0\} \Big\},$$

cf. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, U.K., 1985, page 504, from which the claim follows by induction.




The iteration (34) can be accelerated by computing intermediate vectors v^r, r∈N, and setting

$$c_i^{r+1} = \frac{c_i^r}{\sqrt{v_i^r}}, \qquad i \in Z_n,$$

where

$$v_i^r = \sum_{j=1}^{i-1} k_{ij}\, c_j^{r+1} + \sum_{j=i}^{n} k_{ij}\, c_j^r, \qquad i \in Z_n.$$












Maximum Likelihood Estimation




In this section we consider another method for determining the constants c_1, …, c_n for the density estimator given in equation (15). Here we take a parametric density estimation perspective. Thus, we study the problem of maximizing the likelihood function

    Π_{i∈Z_n} ƒ(x_i)

over the simplex S_n, where ƒ has the functional form (15). In other words, we desire to maximize L_K(c) = Π(Kc) over all c∈S_n. A problem of this type arises in the method of deleted interpolation which is widely used in language modeling for speech recognition, see L. R. Bahl, P. F. Brown, P. V. de Souza, R. L. Mercer and D. Nahamoo, cited above. In fact, in this application the matrix K has more rows than columns. With this in mind we study the problem of maximizing L_K in greater generality than dealt with so far. To distinguish this case from the one considered in the previous section we adopt a slightly different notation in this section. We begin with an n×k matrix A with nonnegative entries and consider the problem of maximizing the homogeneous polynomial of total degree at most n given by

    M_A(x) = Π(Ax),  x∈R^k,   (40)

over the simplex

    S_k = {x : x∈R_+^k, e^T x = 1}.
Note that even when A is a nonsingular square matrix the likelihood function M_A may take on its maximum on the boundary of S_n. For example, corresponding to the matrix

    A = ( 1  3
          1  1 )

the maximum of M_A occurs at c = (0,1)^T∈S_2.
The fact that M_A is a homogeneous polynomial suggests that we use the Baum Welch algorithm to maximize it over S_k. To this end, we specialize the Baum Welch algorithm to our context.
Theorem 9: Suppose A is an n×k matrix with nonzero rows and nonnegative elements. For every c∈int S_k the vector ĉ whose coordinates are defined by the equation

    ĉ = (c/n)·A^T((Ac)^{−1})

is likewise in int S_k. Moreover, M_A(ĉ) ≧ M_A(c), where the inequality is strict unless M_A takes on its maximum on S_k at the vector c.
Proof: Since M_A is a homogeneous polynomial of degree n with nonnegative coefficients we can appeal to a result by L. E. Baum and J. A. Eagon, cited above, which states that the vector ĉ whose components are defined by the equation

    ĉ = (c·∇M_A(c)) / (n M_A(c))

has all the required properties. We leave it to the reader to verify that ĉ has the desired form.
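The update of Theorem 9 is simple to compute; the following sketch (Python with NumPy) iterates it on the 2×2 example matrix discussed above, for which the maximum of M_A on S_2 sits at the boundary point (0,1)^T. The function names and the iteration count are illustrative choices and are not taken from the text.

    import numpy as np

    def baum_welch_step(A, c):
        # Update of Theorem 9: c_hat = (c/n) * A^T((Ac)^{-1}), componentwise product and reciprocal.
        n = A.shape[0]
        return (c / n) * (A.T @ (1.0 / (A @ c)))

    def maximize_MA(A, num_iters=300):
        # Iterate from the barycenter e/k; M_A(c) = prod((Ac)_i) never decreases along the way
        # and, when A has rank k, the iterates converge to the unique maximizer (Theorem 10).
        k = A.shape[1]
        c = np.full(k, 1.0 / k)
        for _ in range(num_iters):
            c = baum_welch_step(A, c)
        return c

    # The 2-by-2 example from the text: the maximum of M_A on S_2 is attained at (0, 1)^T.
    A = np.array([[1.0, 3.0], [1.0, 1.0]])
    c = maximize_MA(A)
    print(c)               # approaches (0, 1)
    print(np.prod(A @ c))  # approaches M_A((0, 1)^T) = 3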




Under the conditions of the above theorem we point out that the Baum Welch algorithm globally converges.




Theorem 10: Let A be an n×k matrix of rank k satisfying the hypothesis of Theorem 9. Then M_A has a unique maximum on S_k and the Baum Welch algorithm with any initial vector in int S_k converges to this maximum.
Proof: The main point of the proof is the observation that M_A is log concave, since for any vectors c, y in R^k with Ac∈(R\{0})^n we have that

    y^T ∇² log M_A(c) y = −∥Ay/Ac∥².   (41)

Since A has rank k, the right hand side is strictly negative whenever y ≠ 0, so log M_A is strictly concave where it is finite; hence M_A has a unique maximum on S_k, and convergence of the Baum Welch iteration from any interior starting vector now follows from Theorem 9.
As in the case of our study of the penalized likelihood function P_K, we give a sufficient condition on an n×n matrix A such that M_A achieves its maximum on S_n at the vector (1/n)e.
Theorem 11: Suppose A is an n×n symmetric matrix with nonnegative elements such that Ae = μe for some positive number μ. Then M_A has its unique maximum on S_n at the vector (1/n)e.
Proof: By the arithmetic geometric inequality we have for any c∈S_n that

    {M_A(c)}^{1/n} ≦ (1/n) e^T Ac = μ/n = {M_A((1/n)e)}^{1/n}.
In the spirit of the previous section, we describe a problem which is dual to the problem of maximizing M_A over S_k. To present the result we introduce for every ρ∈R_+ the set

    G_ρ^n = {u : u∈R_+^n, Π(u) ≧ 1, e^T u ≦ ρ},
and for ρ=∞ we denote this set by G_n. We also need the set

    S_ρ^k = {c : c∈S_k, Ac ≧ ρe}.
Note that both of the sets G_ρ^n and S_ρ^k are nonempty, convex and compact sets when ρ is not infinite.
Theorem 12: Let A be an n×k matrix with nonnegative elements such that each column has at least one nonzero element. Then

    (max{ M_A(c) : c∈S_k })^{1/n} = (1/n) min{ ∥A^T u∥_∞ : u∈G_n }.   (42)
For this proof we use the minmax theorem of von Neumann, cf. C. Berge and A. Ghouila-Houri, Programming, Games and Transportation Networks, John Wiley, New York, pp. 65-66, 1965, which we repeat here for the convenience of the reader.




Theorem 13: Let U and V be compact nonempty convex sets in R^n and R^k, respectively. If ƒ(x,y) is a function on R^n×R^k that is upper semi-continuous and concave with respect to x and lower semi-continuous and convex with respect to y, then

    max_{x∈U} min_{y∈V} ƒ(x,y) = min_{y∈V} max_{x∈U} ƒ(x,y).
Proof of Theorem 12: We first prove the theorem under the hypothesis that all the elements of A are positive. Therefore, for all ε sufficiently small

    max{ M_A(c) : c∈S_k } = max{ M_A(c) : c∈S_ε^k }.
Hence, by Lemma 2 we have that

    (max{ M_A(c) : c∈S_k })^{1/n} = (1/n) max{ min{ u^T Ac : u∈G_n } : c∈S_ε^k }.
Using the fact that A(S_ε^k) is a bounded subset of int R_+^n, we get that the right hand side of the above equation equals

    (1/n) max{ min{ u^T Ac : u∈G_ρ^n } : c∈S_ε^k }

for all ρ sufficiently large. By the von Neumann minmax theorem this quantity equals

    (1/n) min{ max{ (A^T u)^T c : c∈S_ε^k } : u∈G_ρ^n }.
Observe that for a fixed u the maximum over c in S_k occurs at a vector all of whose coordinates are zero except for one coordinate. Since A has all positive elements this vector will be in S_ε^k for all ε sufficiently small. Hence the right hand side of the above equation equals

    (1/n) min{ ∥A^T u∥_∞ : u∈G_ρ^n }.
But for ρ sufficiently large this equals

    (1/n) min{ ∥A^T u∥_∞ : u∈G_n }.
This establishes the result for the case that A has all positive elements. Obviously, any A that satisfies the hypothesis of the Theorem can be approximated arbitrarily closely by matrices with all positive elements. Since both sides of equation (42) are continuous functions of A the result follows.
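As a numerical illustration of the duality (42), the sketch below first approximates a maximizer of M_A with the Baum Welch update of Theorem 9 and then forms the feasible dual vector u = M_A(c)^{1/n}/(Ac); this particular construction of u, and the use here of the multiquadric test matrix that appears in the numerical comparison below, are illustrative choices rather than steps taken from the text.

    import numpy as np

    def baum_welch_step(A, c):
        # Update of Theorem 9: c_hat = (c/n) * A^T((Ac)^{-1}), componentwise.
        return (c / A.shape[0]) * (A.T @ (1.0 / (A @ c)))

    # The multiquadric test matrix used in the numerical comparison below: n = 20, k = 3.
    n, k = 20, 3
    i = np.arange(1, n + 1)[:, None]
    j = np.arange(1, k + 1)[None, :]
    A = np.sqrt(1.0 + (i - j) ** 2)

    c = np.full(k, 1.0 / k)
    for _ in range(5000):
        c = baum_welch_step(A, c)

    primal = np.prod(A @ c) ** (1.0 / n)   # approximately (max M_A over S_k)^{1/n}
    u = primal / (A @ c)                    # prod(u) = 1 and u >= 0, so u lies in G_n
    dual = np.max(A.T @ u) / n              # (1/n) ||A^T u||_inf, never below the primal value
    print(primal, dual)                     # the two sides of (42); they nearly coincide at the optimum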




Degree Raising Instead of Baum Welch




In this section we compute the maximum of M_A over the simplex S_k using the notion of degree raising, cf. C. A. Micchelli, cited above. This method provides an interesting alternative to the Baum Welch algorithm described in the previous section. The method proceeds in two steps. The first step of this alternate procedure requires writing M_A as a linear combination of monomials. Thus, there are constants {a_i : i∈Z_n^k} such that

    M_A(x) = Σ_{i∈Z_n^k} C(n, i) a_i x^i,  x∈R^k,

where C(n, i) = n!/(i_1!⋯i_k!) denotes the multinomial coefficient and

    Z_n^k := {i : i = (i_1, …, i_k)^T, |i| = Σ_{j=1}^k i_j = n}.
The sequence {a_i : i∈Z_n^k} can be computed iteratively in the following way. For m∈Z_n we define sequences {b_i^m : i∈Z_m^k} by the formula

    Π_{j∈Z_m} (Ax)_j = Σ_{i∈Z_m^k} b_i^m x^i,  x∈R^k.
Thus, for m∈Z_n,

    b_i^{m+1} = Σ_{j=1}^{k} A_{m+1,j} b_{i−e_j}^m,  i∈Z_{m+1}^k,

where e_1, …, e_k are the coordinate vectors in R^k defined by (e_r)_j = δ_{jr}, j,r∈Z_k, and the recursion is started with b_i^1 = A_{1i}, i∈Z_k.
In particular, we have that

    b_i^n = C(n, i) a_i,  i∈Z_n^k.
Note that by the binomial theorem the quantity ∥M_A∥ = max{|M_A(x)| : x∈S_k} does not exceed max{|a_i| : i∈Z_n^k}, so our “first guess” for ∥M_A∥ is max{|a_i| : i∈Z_n^k}. Moreover, by the arithmetic geometric inequality the function x^i, x∈R^k, has its maximum on S_k at i/n, and so our “first guess” for the maximum of M_A will be x = j/n where

    j = argmax{|a_i| : i∈Z_n^k}.
To improve this initial guess, we use degree raising.




For any nonnegative integer r, M_A is also a homogeneous polynomial of degree n+r. Hence, there are constants {a_i^r : i∈Z_{n+r}^k} such that

    M_A(x) = Σ_{i∈Z_{n+r}^k} C(n+r, i) a_i^r x^i,  x∈R^k.
It is proved in C. A. Micchelli and A. Pinkus, cited above, that the sequence max{|a_i^r| : i∈Z_{n+r}^k} is nonincreasing in r∈N and converges to ∥M_A∥ at the rate O(1/r). Moreover, if j_r∈Z_{n+r}^k is defined by

    j_r = argmax{|a_i^r| : i∈Z_{n+r}^k},

then j_r/(n+r) is an approximation to

    argmax{|M_A(x)| : x∈S_k}.
The sequence {a_i^r : i∈Z_{n+r}^k} can be recursively generated by the formula

    a_i^{r+1} = (1/(n+r+1)) Σ_{j=1}^{k} i_j a_{i−e_j}^r,  i = (i_1, …, i_k)^T∈Z_{n+r+1}^k,

where a_i^0 = a_i, i∈Z_n^k, cf. C. A. Micchelli and A. Pinkus, cited above. From this formula we see that the sequence max{|a_i^r| : i∈Z_{n+r}^k} is nonincreasing, as stated above, and as r→∞ it converges to ∥M_A∥ from above, while the Baum Welch algorithm generates a nondecreasing sequence which converges to ∥M_A∥ from below.
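The two steps just described, expanding M_A in the form Σ C(n,i)a_i x^i by the b-recursion and then degree raising, can be sketched as follows (Python); the small matrix A is a hypothetical example, and the dictionary representation of the multi-indexed coefficients is an implementation choice.

    import numpy as np
    from math import factorial

    def multinomial(n, idx):
        # n! / (i_1! ... i_k!) for a multi-index idx with |idx| = n.
        out = factorial(n)
        for i in idx:
            out //= factorial(i)
        return out

    def bernstein_coefficients(A):
        # First step: expand M_A(x) = prod_j (Ax)_j as sum_{|i|=n} C(n,i) a_i x^i,
        # using the recursion b^{m+1}_i = sum_j A[m, j] b^m_{i-e_j}, started from b^1_{e_j} = A[0, j].
        n, k = A.shape
        b = {tuple(int(j == l) for l in range(k)): A[0, j] for j in range(k)}
        for m in range(1, n):
            nb = {}
            for idx, val in b.items():
                for j in range(k):
                    new = list(idx)
                    new[j] += 1
                    new = tuple(new)
                    nb[new] = nb.get(new, 0.0) + A[m, j] * val
            b = nb
        return {idx: val / multinomial(n, idx) for idx, val in b.items()}  # a_i = b_i^n / C(n, i)

    def degree_raise(a, k):
        # Second step: a^{r+1}_i = (1/(n+r+1)) sum_j i_j a^r_{i-e_j}.
        deg = sum(next(iter(a))) + 1
        out = {}
        for idx, val in a.items():
            for j in range(k):
                new = list(idx)
                new[j] += 1
                new = tuple(new)
                out[new] = out.get(new, 0.0) + new[j] * val / deg
        return out

    # Hypothetical example with n = 2, k = 2; here max M_A on S_2 is 2.25, attained at (1/2, 1/2).
    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    a = bernstein_coefficients(A)
    for r in range(20):
        print(r, max(abs(v) for v in a.values()))  # nonincreasing, converges to 2.25 from above
        a = degree_raise(a, A.shape[1])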




We remark in passing that the sequence {a_i : i∈Z_n^k} can also be expressed in terms of the permanent of certain matrices formed from A by repeating its rows and columns, cf. I. P. Goulden and D. M. Jackson, Combinatorial Enumeration, John Wiley, New York, 1983; C. A. Micchelli, cited above.
Density Estimation with Baum Welch, Degree Raising, and Diagonal Scaling—A Numerical Comparison




We begin with the kernel

    k(x,y) = 1/(1 + ∥x − y∥²)²,  x,y∈R^d,   (43)
which is positive definite for all d. We restrict ourselves to R² and choose data arranged equally spaced on the unit circle, x_k = (cos 2πk/n, sin 2πk/n)^T, k∈Z_n. Note that the row sums of the matrix

    K = ( k(x_1,x_1) ⋯ k(x_1,x_n) )
        (     ⋮       ⋱      ⋮     )
        ( k(x_n,x_1) ⋯ k(x_n,x_n) )
are all the same. In our computation, we use the matrix A=cK where c is chosen so that Ae=e. Therefore, by Theorem 11 the Baum Welch algorithm given by the iteration

    c^{(k+1)} = (c^{(k)}/n)·A^T((Ac^{(k)})^{−1}),  k∈N,   (44)
converges to ĉ = e/n. Similarly, the penalized likelihood iteration

    c^{(k+1)} = c^{(k)}/(Ac^{(k)}),  k∈N,   (45)
will converge to ĉ.




Our numerical experience indicates that the performance of these iterations for moderate values of n up to about 100 is insensitive to the initial starting vector. Therefore, as a means of illustration we chose n=4 and randomly picked as our starting vector c^{(1)} = (0.3353, 0.1146, 0.2014, 0.3478)^T. We plot in FIG. 1, for both iterations above, the error ε_k = ∥c^{(k)} − e/n∥_2 on a logarithmic scale. One can see that the convergence is exponential, as the curves plotted are straight lines.
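The experiment of FIG. 1 can be sketched as follows (Python with NumPy): the kernel (43) is evaluated on the points of the unit circle, A = cK is normalized so that Ae = e, and iteration (44) is run from the starting vector quoted above; printing ε_k shows the exponential decay described in the text. The iteration count and the row-sum normalization used to obtain the scalar c are implementation choices.

    import numpy as np

    n = 4
    theta = 2.0 * np.pi * np.arange(1, n + 1) / n
    X = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # equally spaced points on the unit circle

    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)   # squared distances |x_i - x_j|^2
    K = 1.0 / (1.0 + d2) ** 2                                  # kernel (43)
    A = K / K.sum(axis=1, keepdims=True)                       # A = cK with Ae = e (all row sums of K agree)

    c = np.array([0.3353, 0.1146, 0.2014, 0.3478])             # starting vector used in the text
    for k in range(1, 21):
        c = (c / n) * (A.T @ (1.0 / (A @ c)))                  # Baum Welch iteration (44)
        print(k, np.linalg.norm(c - np.ones(n) / n))           # error eps_k; decays like a geometric sequence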




In our next example, we compute the maximum of the homogeneous polynomial M_A on S_3 where A is an n×3 matrix. Such a problem arises in language modeling where n typically falls in the range from one million to eight hundred million, see L. R. Bahl, P. F. Brown, P. V. de Souza, R. L. Mercer and D. Nahamoo, cited above. In our numerical example we restrict ourselves to n=20, and consider the multiquadric matrix

    A = {(1 + |i − j|²)^{½}}_{i∈Z_n, j∈Z_3}

and the cosine matrix

    A = {cos²(2π(i + j)/n)}_{i∈Z_n, j∈Z_3}.
In each case, we plot in FIGS. 2A and 2B the values log|M_A(c^{(k)})|, k∈N, for c^{(k)} generated by the Baum Welch algorithm and also by degree raising. Note the monotone behavior of the computed values. In the Baum Welch case, denoted by reference numeral 21, we choose c=e/3 as our initial vector. The degree raising iteration, denoted by reference numeral 22, does not require an initial vector. Note that in the first graph in FIG. 2A degree raising converges in one step, while in the second case shown in FIG. 2B Baum Welch does much better than degree raising.
We now turn our attention to univariate density estimation. We compare the Parzen estimator to those generated by the Baum Welch algorithm applied to maximizing M_K on S_n and the PML estimator obtained by maximizing P_K on R_+^n, rescaled to S_n. Although we have examined several typical density functions including the chi-squared, uniform and logarithmic densities, we restrict our discussion to two bimodal densities generated from a mixture of two gaussians.
Recall that the Parzen estimator is given by

    (1/(nh)) Σ_{i=1}^{n} k((x − x_i)/h),  x∈R.

In our numerical examples we choose n=500, h=0.3, and k(x) = (1/√(2π))e^{−x²/2}, x∈R. In FIG. 3 the actual bimodal density is denoted by reference numeral 31, while the Parzen estimator is denoted by reference numeral 32 and the PMLE estimator is denoted by reference numeral 33.
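For reference, a Parzen estimator with the stated choices n = 500, h = 0.3 and a gaussian kernel can be sketched as follows (Python with NumPy); the particular bimodal mixture used to draw the sample is a hypothetical stand-in for the density plotted in FIG. 3.

    import numpy as np

    rng = np.random.default_rng(1)
    n, h = 500, 0.3

    # Hypothetical bimodal target: an equal mixture of two gaussians.
    comp = rng.integers(0, 2, size=n)
    data = np.where(comp == 0, rng.normal(-1.5, 0.6, n), rng.normal(1.5, 0.6, n))

    def parzen(x, data, h):
        # (1/(n h)) sum_i k((x - x_i)/h) with the gaussian kernel k(t) = exp(-t^2/2)/sqrt(2 pi).
        t = (x[:, None] - data[None, :]) / h
        return np.exp(-0.5 * t ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

    grid = np.linspace(-4.0, 4.0, 401)
    estimate = parzen(grid, data, h)
    print((estimate * (grid[1] - grid[0])).sum())   # approximately 1, as a density should be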




In FIG. 4, we display the Baum Welch estimator. It is interesting to note that the Baum Welch estimator, denoted by reference numeral 41, fits the peak of the true density, denoted by reference numeral 42, much better than the Parzen estimator, denoted by reference numeral 43, but it also displays more of an oscillatory behavior at 41a.
In FIGS. 5A and 5B, we choose a bimodal density with more discernible humps and compare graphically the behavior of the Baum Welch estimates, denoted by reference numeral 51, to the Parzen estimator, denoted by the reference numeral 52.
All the methods above are sensitive to the kernel k, the bin size h and the sample size n. For example, choosing h to be 0.6, but keeping the same value n=500 and a gaussian kernel, the new Baum Welch estimator, denoted by reference numeral 61, together with the actual probability density, denoted by the reference numeral 62, appears in FIGS. 6A and 6B.
Increasing the sample size to n=2000 while maintaining the value h=0.3 and a gaussian kernel, the Baum Welch estimator takes the form shown in FIGS. 6A and 6B.
Changing the kernel k produces more dramatic alterations in the density estimator. For example, we consider two spline kernels k_i(x,y) = ƒ_i((x−y)/h), x,y∈R, where

    ƒ_i(x) = b(a² − x²)^i for |x| < a,  and  ƒ_i(x) = 0 otherwise,
for i=1,2, where the constants a and b are chosen so that

    ∫_{−∞}^{∞} ƒ_i(x) dx = 1.
The corresponding Baum Welch estimators, denoted by reference numeral 71, for h=0.3 and n=500 are displayed in FIGS. 7A and 7B. These graphs seem to indicate that the gaussian kernel works best.
We now apply the Baum Welch and the Parzen estimator to speech data and graphically compare their performance. The speech data is taken from the Wall Street Journal database for the sound AA (as in the word absolve—pronounced AE B Z AA L V). To explain what we have in mind we recall how such data is generated.




Digitized speech sampled at a rate of 16 KHz is considered. A frame consists of a segment of speech of duration 25 msec, and produces a 39 dimensional acoustic cepstral vector via the following process, which is standard in speech recognition literature. Frames are advanced every 10 msec to obtain succeeding acoustic vectors.




First, magnitudes of discrete Fourier transform of samples of speech data in a frame are considered in a logarithmically warped frequency scale. Next, these amplitude values themselves are transformed to a logarithmic scale, and subsequently, a rotation in the form of discrete cosine transform is applied. The first 13 components of the resulting vector are retained. First and the second order differences of the sequence of vectors so obtained are then appended to the original vector to obtain the 39 dimensional cepstral acoustic vector.
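The front end described in this paragraph can be sketched schematically as follows (Python with NumPy). The frame length of 400 samples (25 msec at 16 KHz), the hop of 160 samples (10 msec), the 24 logarithmically spaced bands, the simple band-averaging used in place of a full filter bank, and the use of finite differences for the first and second order terms are all assumptions made for illustration; only the overall sequence of operations follows the text.

    import numpy as np

    def cepstra(signal, sr=16000, frame=400, hop=160, n_bands=24, n_ceps=13):
        # Schematic front end: |DFT| magnitudes pooled on a logarithmically warped frequency
        # scale, log amplitudes, a DCT rotation, first 13 components kept, differences appended.
        frames = [signal[s:s + frame] for s in range(0, len(signal) - frame + 1, hop)]
        spectra = np.abs(np.fft.rfft(np.array(frames), axis=1))
        edges = np.logspace(np.log10(2), np.log10(spectra.shape[1] - 1), n_bands + 1).astype(int)
        bands = np.stack([spectra[:, edges[b]:edges[b + 1] + 1].mean(axis=1)
                          for b in range(n_bands)], axis=1)
        logbands = np.log(bands + 1e-10)
        # DCT-II rotation, keeping the first n_ceps components.
        k = np.arange(n_bands)
        dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (k + 0.5)) / n_bands)
        ceps = logbands @ dct.T
        # First and second order differences appended to give a 39 dimensional vector per frame.
        delta = np.gradient(ceps, axis=0)
        delta2 = np.gradient(delta, axis=0)
        return np.concatenate([ceps, delta, delta2], axis=1)

    vectors = cepstra(np.random.default_rng(2).standard_normal(16000))  # one second of noise
    print(vectors.shape)   # (number of frames, 39)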




As in supervised learning tasks, we assume that these vectors are labeled according to their corresponding basic sounds. In fact, the set of 46 phonemes is subdivided into a set of 126 different variants, each corresponding to a “state” in the hidden Markov model used for recognition purposes. They are further subdivided into more elemental sounds called allophones or leaves by using the method of decision trees depending on the context in which they occur; see, e.g., F. Jelinek, cited above; L. R. Bahl, P. V. de Souza, P. S. Gopalkrishnan and M. A. Picheny, “Context dependent vector quantization for continuous speech recognition,” Proceedings of IEEE Int. Conf. on Acoustics Speech and Signal Processing, pp. 632-35, 1993; and Leo Breiman, Classification and Regression Trees, Wadsworth International, Belmont, Calif., 1983, for more details.




Among these most elemental of sounds known as leaves or allophones we picked five distinct leaves and two distinct dimensions, all chosen from AA_1, the first hidden Markov model state of the vowel sound AA. The result of generating a histogram with 200 bins, the Baum Welch estimator and the Parzen estimator for six distinct choices of leaf, dimension pairs are displayed in FIGS. 8A to 8F. The Parzen estimator and the Baum Welch estimator both used the choice h=2.5n^{−⅓} and a gaussian kernel. The values of n were prescribed by the individual data sets and were 3,957, 4,526, 2,151, 4,898 and 1,183 for leaves 1, 2, 5, 7 and 11, respectively. The columns in FIG. 8 are respectively the density estimators for the histogram, the Baum Welch and the Parzen estimator. It can be seen from these examples that the Baum Welch estimator captures finer details of the data than does the Parzen estimator for the same value of h.
While the invention has been described in terms of preferred embodiments, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the appended claims.



Claims
  • 1. A computer implemented method for machine recognition of speech, comprising the steps of: inputting acoustic data; forming a nonparametric density estimator f_n(x)=∑_{i∈Z_n} c_i k(x,x_i), x∈R^d, where Z_n={1,2,…,n}, k(x,y) is some specified positive kernel function, c_i≥0, i∈Z_n, ∑_{i=1}^n c_i=1 are parameters to be chosen, and {x_i}_{i∈Z_n} is a given set of training data; setting a kernel for the estimator; selecting a statistical criterion to be optimized to find values for parameters defining the nonparametric density estimator; and iteratively computing the density estimator for finding a maximum likelihood estimation of acoustic data.
  • 2. The computer implemented method for machine recognition of speech as recited in claim 1, wherein the statistical criterion to be optimized is selected from a penalized maximum likelihood criteria or a maximum likelihood criteria.
  • 3. The computer implemented method for machine recognition of speech as recited in claim 2, wherein the maximum likelihood criteria is selected.
  • 4. The computer implemented method for machine recognition of speech as recited in claim 3, wherein a process of degree raising of polynomials is used for global maximization.
  • 5. The computer implemented method for machine recognition of speech as recited in claim 3, wherein the step of iteratively computing employs a process of diagonal balancing of matrices to maximize the likelihood.
  • 6. The computer implemented method for machine recognition of speech as recited in claim 5, wherein a form of the iteratively computing is ĉ=(c/n)·A^T((Ac)^{−1}), where c=(c_1, c_2, . . . , c_n), c_i≧0, ∑_{i=1}^n c_i=1 is an initial choice for the parameters, ĉ is the updated parameter choice, and K={k(x_i,x_j)}_{i,j∈Z_n}, A=cK, where c is chosen such that Ae=e for e=(1, 1, . . . ,1).
  • 7. The computer implemented method for machine recognition of speech as recited in claim 2, wherein the penalized maximum likelihood criteria is selected.
  • 8. The computer implemented method for machine recognition of speech as recited in claim 7, wherein the step of iteratively computing employs a process of diagonal balancing of matrices to maximize the penalized likelihood.
  • 9. The computer implemented method for machine recognition of speech as recited in claim 8, wherein the step of iteratively computing the density estimator uses an update of parameters given as a unique vector v∈int S_n(b) satisfying v·Kv=K((Kc)^{−1})·c−γv·b, where b_i=∫_{R^d}k(x,x_i)dx, b=(b_1, b_2, . . . ,b_n), S_n(b)={c:c∈R^d, b^Tc=1} and γ=n−v^TKv.
  • 10. The computer implemented method for machine recognition of speech as recited in claim 9, wherein the update parameter is given as v=c·K((Kc)^{−1}−σe)/(n−σc^TKc), where σ>0 is a parameter chosen to yield a best possible performance.
  • 11. The computer implemented method for machine recognition of speech as recited in claim 1, wherein the kernel is a Gaussian kernel.
  • 12. The computer implemented method for machine recognition of speech as recited in claim 1, wherein the kernel is given by the formula k(x,y)=1/((1+∥x−y∥²)²), x,y∈R^d, where k(x,y), x,y∈R^d is a reproducing kernel for a Hilbert space of functions on R^d.
  • 13. The computer implemented method for machine recognition of speech as recited in claim 1, wherein c=e/n and k(x,y)=(1/h)k((x−y)/h).
  • 14. The computer implemented method for machine recognition of speech as recited in claim 1, further comprising the step of assigning the maximum likelihood estimation to a phoneme label.
  • 15. The computer implemented method for machine recognition of speech as recited in claim 1, wherein the non-parametric density estimator has the form f_n(x)=(1/(nh^d))∑_{i∈Z_n}k((x−x_i)/h), x∈R^d, where Z_n={1, . . . ,n}, k is some specified function, and {x_i:i∈Z_n} is a set of observations in R^d of some unknown random variable.
US Referenced Citations (2)
Number Name Date Kind
5893058 Kosaka Apr 1999 A
6148284 Saul Nov 2000 A
Non-Patent Literature Citations (12)
Entry
Feng et al (“Application of Structured Composite Source Models to Problems in Speech Processing,” Proceedings of the 32nd Midwest Symposium on Circuits and Systems, pp. 89-92, 14-16 Aug. 1989.).*
Fonollosa et al (“Application of Hidden Markov Models to Blind Channel Characterization and Data Detection,” 1994 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. IV/185-IV188, Apr. 19-22, 1994).*
Kogan (“Hidden Markov Models Estimation via the Most Informative Stopping Times for Viterbi Algorithm,” 1995 IEEE International Symposium on Information Theory Proceedings, p. 178, Sep. 17-22, 1995).*
L. Liporace, “Maximum Likelihood Estimation for Multivariate Observations of Markov Sources”, IEEE Transactions of Information Theory, vol. IT-28, No. 5, Sep. 1982.
L. Baum et al., “A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains”, The Annals of Mathematical Statistics, 1970, vol. 41, No. 1, 164-171.
B. W. Silverman, “On the Estimation of a Probability Density Function by the Maximum Penalized Likelihood Method”, The Annals of Mathematical Statistics, 1982, vol. 10, No. 4, 795-810.
A.P. Dempster et al., “Maximum Likelihood from Incomplete Data via the EM Algorithm”, Journal of Royal Statistical Society, 39(B), pp. 1-38, 1977.
A. Marshall et al., “Scaling of Matrices to Achieve Specified Row and Column Sums”, Numerische Mathematik 12, 83-90 (1968).
R. Brualdi, et al., “The Diagonal Equivalence of a Nonnegative Matrix to a Stochastic Matrix”, Journal of Mathematical Analysis and Applications 16, 31-50 (1966).
L.E. Baum et al., “An Inequality with Applications to Statistical Estimation for Probabilistic Functions of Markov Processes and to a Model of Ecology”, Bull. Amer. Math. Soc. 73, pp. 360-363, 1967.
S. Basu et al., “Maximum Likelihood Estimation for Acoustic Vectors in Speech Recognition”, Advanced Black-Box Techniques for Nonlinear Modeling: Theory and Applications, Kluwer Publishers (1998).
R. Sinkhorn, “A Relationship Between Arbitrary Positive Matrices and Doubly Stochastic Matrices”, Ann. Math. Statist., 38, pp. 439-455, 1964.