Embedded Bayesian network for pattern recognition

Information

  • Patent Application
  • Publication Number
    20040131259
  • Date Filed
    January 06, 2003
  • Date Published
    July 08, 2004
Abstract
A pattern recognition procedure forms a hierarchical statistical model using a hidden Markov model and a coupled hidden Markov model. The hierarchical statistical model supports a parent layer having multiple supernodes and a child layer having multiple nodes associated with each supernode of the parent layer. After training, the hierarchical statistical model uses observation vectors extracted from a data set to find a substantially optimal state sequence segmentation.
Description


FIELD OF THE INVENTION

[0001] The present invention relates to computer mediated pattern detection. More particularly, the present invention relates to improved Bayesian networks for classifying data.



BACKGROUND

[0002] Bayesian networks such as those represented by the hidden Markov model (HMM) and the coupled hidden Markov model (CHMM) have long been used to model data for the purposes of pattern recognition. Any discrete time and space dynamical system governed by such a Bayesian network emits a sequence of observable outputs, with one output (observation) for each state in a trajectory of such states. From the observable sequence of outputs, the most likely dynamical system can be calculated. The result is a model for the underlying process. Alternatively, given a sequence of outputs, the most likely sequence of states can be determined.


[0003] For example, one dimensional HMMs have been widely used in speech recognition to model phonemes, words, or even phrases, and two dimensional HMMs have been used for image processing tasks. One of the important characteristics of a HMM is its ability to cope with variations in feature space, allowing data modeling with variations along different dimensions. Coupled hidden Markov models can be similarly employed, since they correspond to a generalization of a HMM. A CHMM may comprise a collection of HMMs, each of which corresponds to a data channel.







BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The inventions will be understood more fully from the detailed description given below and from the accompanying drawings of embodiments of the inventions which, however, should not be taken to limit the inventions to the specific embodiments described, but are for explanation and understanding only.


[0005]
FIG. 1 schematically illustrates a data classification system;


[0006]
FIG. 2 generically illustrates an embedded hidden Markov model-coupled hidden Markov model (HMM-CHMM) structure;


[0007]
FIG. 3 is a flow diagram illustrating training of an embedded HMM-CHMM; and


[0008]
FIG. 4 generically illustrates an embedded coupled hidden Markov model-hidden Markov model (CHMM-HMM) structure.







DETAILED DESCRIPTION

[0009]
FIG. 1 generally illustrates a system 10 for data analysis of a data set 12 using an embedded Bayesian network that includes a hidden Markov model (HMM) and a coupled hidden Markov model (CHMM). An embedded Bayesian network is used because it has good generalization performance even for high dimensional input data and small training sets.


[0010] The data set 12 can include static or video imagery 14 containing objects to be identified or classified, including but not limited to textual characters, ideographs, symbols, fingerprints, or even facial imagery 15. In addition, non-image data sets such as bioinformatic databases 16 containing, for example, gene or protein sequences, DNA microarray data, sequence data, phylogenetic information, promoter region information; or textual, linguistic, or speech analysis data suitable for machine learning/identification 18 can be used. The same data set can be optionally used both to train and classify data with the appropriate training module 20 and classification module 22.


[0011] The processing procedure for system 10 may be performed by a properly programmed general-purpose computer alone or in connection with a special purpose computer. Such processing may be performed by a single platform or by a distributed processing platform. In addition, such processing and functionality can be implemented in the form of special purpose hardware, custom application specific integrated circuits (ASICs), configurable FPGA circuits, or in the form of software or firmware being run by a general-purpose or network processor. Data handled in such processing or created as a result of such processing can be stored in any memory as is conventional in the art. By way of example, such data may be stored in a temporary memory, such as in the RAM of a given computer system or subsystem. In addition, or in the alternative, such data may be stored in longer-term storage devices, for example, magnetic disks, rewritable optical disks, and so on. For purposes of the disclosure herein, a computer-readable media may comprise any form of data storage mechanism, including such existing memory technologies as well as hardware or circuit representations of such structures and of such data.


[0012]
FIG. 2 generically illustrates a logical structure 30 of an embedded hidden Markov model-coupled hidden Markov model (HMM-CHMM). As seen in FIG. 2, HMM-CHMM is a hierarchical statistical model that includes a HMM parent layer 32 (collectively formed from nodes 33) and a CHMM child layer 34 (collectively formed from nodes 35). The child layer 34 associates one CHMM node 35 to each node 33 in the parent layer 32, and the parameters of the individual CHMMs remain independent from each other. Instead, the parameters of each child layer CHMM depend upon the state of the connected parent node 33. Typically, for multidimensional data sets, the HMM in the parent layer 32 is associated with at least one dimension, and the CHMM child layers are associated with data in an orthogonal dimension with respect to the parent layer.
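For illustration, the two-layer parameter set described above can be sketched as a pair of containers, one child CHMM per super state; the class and field names below are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class ChildCHMM:
    """Child-layer CHMM attached to a single parent (super) state."""
    n_channels: int                               # C1 coupled channels
    n_states: int                                 # states per channel
    pi: np.ndarray = None                         # initial state probabilities per channel
    A: np.ndarray = None                          # per-channel state transition probabilities
    mixtures: list = field(default_factory=list)  # Gaussian mixture parameters per (channel, state)

@dataclass
class EmbeddedHMMCHMM:
    """Parent HMM in which every super state owns an independent child CHMM."""
    n_super_states: int
    pi0: np.ndarray = None                        # initial super state probabilities
    A0: np.ndarray = None                         # super state transition probabilities a_{0,i|j}
    children: List[ChildCHMM] = field(default_factory=list)  # one child CHMM per super state
```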


[0013] Formally defined, the elements of an embedded HMM-CHMM have: an initial super state probability $\pi_{0,0}$ and a super state transition probability from super state $j$ to super state $i$, $a_{0,i|j}$, where super state refers to the state of the parent layer 32 HMM node 33.


[0014] For each super state $k$, the parameters of the corresponding CHMM are defined to have an initial state probability $\pi_{1,0}^{k,c}$ in channel $c = 1, \ldots, C_1$;



[0015] a state transition probability from state sequence $\mathbf{j}$ to state $i_c$, $a_{1,i_c|\mathbf{j}}^{k,c}$;



[0016] and an observation probability $b_{t_0,t_1}^{k,c}(j_c)$.



[0017] In a continuous mixture with Gaussian components, the probability of the observation vector O is given by:
$$b^{k,c}(j_c) = \sum_{m=1}^{M_j^{k,c}} \omega_{j,m}^{k,c}\, N\!\left(O, \mu_{j,m}^{k,c}, U_{j,m}^{k,c}\right) \tag{1}$$
where $\mu_{j,m}^{k,c}$ and $U_{j,m}^{k,c}$


[0018] are the mean and covariance matrix of the $m$th mixture of the Gaussian mixture corresponding to the $j$th state in the $c$th channel, $M_j^{k,c}$

[0019] is the number of mixtures corresponding to the $j$th state of the $c$th channel, and $\omega_{j,m}^{k,c}$

[0020] is a weight associated with the corresponding mixture.


[0021] Observation sequences are used to form observation vectors later used in training and classifying. For example, the observation sequence for a two-dimensional image may be formed from image blocks of size Lx×Ly that are extracted by scanning the image from left-to-right and top-to-bottom. Adjacent image blocks may be designed to overlap by Py rows in the vertical direction and Px columns in the horizontal direction. In one possible embodiment, with a block size of Ly=8 rows and Lx=8 columns, six DCT coefficients (a 3×2 low-frequency array) may be employed to create the observation vectors.
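The block scan just described can be illustrated with a short sketch, assuming a grayscale image stored as a NumPy array and SciPy's 2-D DCT; the function name, the default overlap values, and the `keep` parameter are illustrative choices rather than values fixed by the patent.

```python
import numpy as np
from scipy.fft import dctn  # 2-D type-II DCT of each block

def extract_observations(image, Lx=8, Ly=8, Px=6, Py=6, keep=(3, 2)):
    """Scan the image left-to-right, top-to-bottom with overlapping Ly x Lx
    blocks and keep a small low-frequency patch of each block's 2-D DCT as
    the observation vector.  The overlap is Py rows / Px columns, so block
    origins advance by (Ly - Py, Lx - Px)."""
    H, W = image.shape
    step_y, step_x = Ly - Py, Lx - Px
    rows = []
    for y in range(0, H - Ly + 1, step_y):
        row = []
        for x in range(0, W - Lx + 1, step_x):
            block = image[y:y + Ly, x:x + Lx]
            coeffs = dctn(block, norm='ortho')
            row.append(coeffs[:keep[0], :keep[1]].ravel())  # 3x2 = six coefficients
        rows.append(row)
    return np.asarray(rows)  # shape (T0, T1, 6); T0 and T1 as defined in the next paragraph
```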


[0022] The resulting array of observation vectors may correspond to an array of size $T_0 \times T_1$, where $T_0$ and $T_1$ are the number of observation vectors extracted along the height (H) and the width (W) of the image, respectively. $T_0$ and $T_1$ may be computed accordingly as:
$$T_0 = \frac{H - L_y}{L_y - P_y} + 1, \qquad T_1 = \frac{W - L_x}{L_x - P_x} + 1 \tag{2}$$


[0023] Consecutive horizontal and vertical observation vectors may also be grouped together to form observation blocks. This may be used as a way to consolidate local observations and at the same time to reduce the total number of observations. In practice, this data grouping serves application needs and improves recognition efficiency.


[0024] To compute the number of observation blocks, denote the number of observation blocks in the vertical and horizontal direction by $T_0^0$ and $T_1^0$, respectively. Then,
$$T_0^0 = 1, \qquad T_1^0 = \frac{T_1}{C_1} \tag{3}$$


[0025] In addition, denote the number of observation vectors in the vertical and horizontal direction within each observation block by $T_0^1$ and $T_1^1$, respectively, where
$$T_0^1 = T_0, \qquad T_1^1 = C_1 \tag{4}$$


[0026] Furthermore, denote $O_{t_0,t_1,c}$ as the $t_1$th observation vector corresponding to the $c$th channel within the observation block $t_0$.
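As a concrete illustration of this indexing, the sketch below groups the $T_0 \times T_1$ grid of observation vectors into observation blocks, under the reading that each block gathers $C_1$ horizontally adjacent columns (one per CHMM channel) and $t_1$ runs over the rows inside the block; both the layout and the function name are assumptions made for illustration.

```python
import numpy as np

def group_into_blocks(obs, C1):
    """Rearrange a (T0, T1, d) grid of observation vectors into observation
    blocks indexed as blocks[t0][t1][c]: t0 selects the block, c the CHMM
    channel (a column inside the block), and t1 the row."""
    T0, T1, d = obs.shape
    assert T1 % C1 == 0, "T1 must be divisible by the number of channels C1"
    n_blocks = T1 // C1
    # split the column axis into (block, channel) and move the block axis first
    blocks = obs.reshape(T0, n_blocks, C1, d).transpose(1, 0, 2, 3)
    return blocks  # shape (T1/C1, T0, C1, d)
```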


[0027] Although any suitable state sequence segmentation can be used, a modified Viterbi algorithm for the HMM-CHMM is preferred. Application of this modified Viterbi algorithm determines the optimal state and super state segmentation of the observation sequence. The best super state probability for the observation block $t_0$ given super state $i$ is denoted as $P_{t_0}(i)$. The corresponding optimal state and optimal state sequence $\beta_{t_0,t_1,c}(i)$ may then be computed for each super observation. The following states are first initialized:


$$\delta_0(i) = \pi_{0,0}(i)\, P_0(i)$$

$$\psi_0(i) = 0$$


[0028] The following states are then recursively determined:
$$\delta_{t_0}(i) = \max_j \left\{ \delta_{t_0-1}(j)\, a_{0,i|j}\, P_{t_0}(i) \right\}, \qquad \psi_{t_0}(i) = \arg\max_j \left\{ \delta_{t_0-1}(j)\, a_{0,i|j}\, P_{t_0}(i) \right\} \tag{5}$$


[0029] The termination condition is then computed:




$$P = \max_i \left\{ \delta_{T_0}(i) \right\}, \qquad \alpha_{T_0} = \arg\max_i \left\{ \delta_{T_0}(i) \right\}$$


[0030] Based on the computed termination condition, a backtracking operation is performed:


$$\alpha_{t_0} = \psi_{t_0+1}(\alpha_{t_0+1})$$




$$q_{t_0,t_1,c}^{0} = \alpha_{t_0}$$

$$q_{t_0,t_1,c}^{1} = \beta_{t_0,t_1,c}(\alpha_{t_0})$$
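Once the per-block scores $P_{t_0}(i)$ have been computed by the child CHMMs, the parent-layer recursion and backtracking above amount to a standard Viterbi pass over super states. The following is a minimal sketch in the log domain (used here only for numerical stability; the equations above are stated in the probability domain), and it assumes the child scores are precomputed.

```python
import numpy as np

def super_state_viterbi(log_pi0, log_A0, log_P):
    """Viterbi over the parent HMM's super states.
    log_pi0 : (N,)   log initial super state probabilities
    log_A0  : (N, N) log a_{0,i|j}; row i is the destination, column j the source
    log_P   : (T, N) log P_{t0}(i), the best child-CHMM score of block t0
                     under super state i (assumed precomputed)
    Returns the optimal super state sequence alpha[0..T-1]."""
    T, N = log_P.shape
    delta = np.empty((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi0 + log_P[0]
    for t in range(1, T):
        scores = delta[t - 1][None, :] + log_A0       # scores[i, j] = delta_{t-1}(j) + log a_{0,i|j}
        psi[t] = scores.argmax(axis=1)
        delta[t] = scores.max(axis=1) + log_P[t]
    alpha = np.empty(T, dtype=int)                    # termination and backtracking
    alpha[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        alpha[t] = psi[t + 1][alpha[t + 1]]
    return alpha
```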



[0031]
FIG. 3 is a flow diagram 40 illustrating training of an embedded HMM-CHMM based on the Viterbi algorithm, according to embodiments of the present invention. To train an HMM-CHMM based on given training data, observation vectors are first extracted from the training data set and organized in observation blocks (module 42). These observation blocks are uniformly segmented (module 44), the uniform segmentation is replaced by an optimal state segmentation (module 46), the model parameters are estimated (module 48), and the observation likelihood is determined (module 50). As will be appreciated, the training may be iterative, with each training data set used individually and iteratively to update model parameters until the change in the computed observation likelihood is smaller than a specified threshold.
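A skeleton of this iterative loop is sketched below, with the steps of modules 44-50 passed in as callables; the function name and the callable signatures are placeholders for illustration, not interfaces defined by the patent.

```python
def train_embedded_model(model, blocks, segment_uniform, segment_optimal,
                         reestimate, log_likelihood, tol=1e-4, max_iter=100):
    """Viterbi-style training loop mirroring FIG. 3: start from a uniform
    segmentation, then alternate parameter re-estimation, likelihood
    evaluation, and optimal state segmentation until the likelihood change
    falls below a threshold."""
    segmentation = segment_uniform(model, blocks)        # module 44
    prev_ll = float("-inf")
    for _ in range(max_iter):
        reestimate(model, blocks, segmentation)          # module 48
        ll = log_likelihood(model, blocks)               # module 50
        if abs(ll - prev_ll) < tol:                      # convergence test on the likelihood change
            break
        prev_ll = ll
        segmentation = segment_optimal(model, blocks)    # module 46, e.g. the modified Viterbi
    return model
```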


[0032] More specifically, the training data set may be segmented along a first dimension, according to the number of super states, into a plurality of uniform segments, each of which corresponds to a super state. Based on the uniform segmentation at the super layer, the observation vectors within each uniform segment may then be uniformly segmented according to the number of channels and the number of states of each child CHMM.


[0033] The density function of each state (including both super states and child states) may be initialized before the training takes place. For example, if a Gaussian mixture model is adopted for each state, the Gaussian parameters for each of the mixture components may need to be initialized. Different approaches may be employed to achieve the initialization of model parameters. For example, one embodiment may be implemented where the observation sequences assigned to channel $c$ and state $j$ of super state $k$ are assigned to $M_j^{k,c}$ clusters using, for example, the K-means algorithm.
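One way such an initialization might look is sketched below, clustering a state's assigned observation vectors with K-means and taking per-cluster statistics as initial Gaussian parameters; the helper name and the use of scikit-learn are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def init_mixture_from_segment(vectors, n_mixtures):
    """Initialize one state's Gaussian mixture from the observation vectors
    assigned to it by the uniform segmentation: K-means clusters supply the
    initial means, covariances, and mixture weights (assumes every cluster
    receives at least two vectors)."""
    labels = KMeans(n_clusters=n_mixtures, n_init=10).fit_predict(vectors)
    means, covs, weights = [], [], []
    for m in range(n_mixtures):
        members = vectors[labels == m]
        means.append(members.mean(axis=0))
        covs.append(np.cov(members, rowvar=False) + 1e-6 * np.eye(vectors.shape[1]))
        weights.append(len(members) / len(vectors))
    return np.array(means), np.array(covs), np.array(weights)
```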


[0034] During the process of training, the original uniform segmentation is updated based on the optimal state segmentation using the Viterbi algorithm or other suitable algorithms. To update the density function of a state, particular relevant parameters to be updated may be determined prior to the update operation.


[0035] The selection of a Gaussian mixture component for each state $j$, channel $c$, and super state $k$ is also required. One exemplary criterion for making the selection may correspond to assigning the observation $O_{t_0,t_1,c}^{(r)}$ from the $r$th training sample in the training set to the Gaussian component for which the Gaussian density function
$$N\!\left(O_{t_0,t_1,c}^{(r)};\ \mu_{j,m}^{k,c},\ U_{j,m}^{k,c}\right) \tag{6}$$


[0036] is the highest.
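This criterion simply evaluates each component's Gaussian density at the observation and keeps the largest; a minimal sketch (the function name is illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def assign_to_mixture_component(obs_vector, means, covs):
    """Return the index of the mixture component whose Gaussian density is
    highest at the observation.  means: (M, d); covs: (M, d, d)."""
    densities = [multivariate_normal.pdf(obs_vector, mean=mu, cov=U)
                 for mu, U in zip(means, covs)]
    return int(np.argmax(densities))
```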


[0037] The parameters are then estimated using, for example, an extension of the segmental K-means algorithm. In particular, the estimated transition probability $a_{0,i|j}$

[0038] between super states $i$ and $j$ may be obtained as follows:
$$a_{0,i|j} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \epsilon_{t_0}^{(r)}(i,j)}{\sum_r \sum_{t_0} \sum_{t_1} \sum_{l} \epsilon_{t_0}^{(r)}(i,l)} \tag{8}$$


[0039] where $\epsilon_{t_0}^{(r)}(i,l)$ is equal to one if a transition from super state $l$ to super state $i$ occurs for the observation block $t_0$ and zero otherwise. The estimated transition probabilities $a_{1,i_c|\mathbf{j}}^{k,c}$


[0040] from embedded state sequence $\mathbf{j}$ to the embedded state $i_c$ in channel $c$ of super state $k$ may then be obtained as follows,
$$a_{1,i_c|\mathbf{j}}^{k,c} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \theta_{t_0,t_1}^{(r)}(k,c,i_c,\mathbf{j})}{\sum_r \sum_{t_0} \sum_{t_1} \sum_{\mathbf{l}} \theta_{t_0,t_1}^{(r)}(k,c,i_c,\mathbf{l})} \tag{10}$$


[0041] where $\theta_{t_0,t_1}^{(r)}(k,c,i_c,\mathbf{j})$ is one if, in the observation block $t_0$ from the $r$th training sample, a transition from state sequence $\mathbf{j}$ to state $i_c$ in channel $c$ occurs for the observation $O_{t_0,t_1,c}^{(r)}$, and zero otherwise.
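Both estimates above are normalized counts of transitions observed in the current optimal segmentation. For the super state layer, for example, the counting might be sketched as follows; the function name and the representation of each sample's segmentation as an integer sequence are assumptions.

```python
import numpy as np

def estimate_super_transitions(super_state_paths, n_super_states):
    """Estimate a_{0,i|j} by counting super state transitions over all
    training samples' optimal segmentations and normalizing per source
    state, which is what the ratio of indicator sums amounts to."""
    counts = np.zeros((n_super_states, n_super_states))   # counts[i, j]: transitions j -> i
    for path in super_state_paths:
        for j, i in zip(path[:-1], path[1:]):
            counts[i, j] += 1
    totals = counts.sum(axis=0, keepdims=True)            # transitions leaving each source j
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
```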


[0042] The parameters of the selected Gaussian mixture component may also be accordingly updated. The involved Gaussian parameters may include a mean vector $\mu_{j,m}^{k,c}$,

[0043] a covariance matrix $U_{j,m}^{k,c}$

[0044] of the Gaussian mixture, and the mixture coefficients $\omega_{j,m}^{k,c}$

[0045] for mixture $m$ of state $j$, channel $c$, and super state $k$. The updated Gaussian parameters may be obtained according to the following formulations:
$$\mu_{j,m}^{k,c} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(k,c,j,m)\, O_{t_0,t_1,c}^{(r)}}{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(k,c,j,m)} \tag{14}$$

$$U_{j,m}^{k,c} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(k,c,j,m)\, \left(O_{t_0,t_1,c}^{(r)} - \mu_{j,m}^{k,c}\right)\left(O_{t_0,t_1,c}^{(r)} - \mu_{j,m}^{k,c}\right)^T}{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(k,c,j,m)}$$

$$\omega_{j,m}^{k,c} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(k,c,j,m)}{\sum_r \sum_{t_0} \sum_{t_1} \sum_{m=1}^{M} \psi_{t_0,t_1}^{(r)}(k,c,j,m)}$$


[0046] where $\psi_{t_0,t_1}^{(r)}(k,c,j,m)$ is equal to one if the observation $O_{t_0,t_1,c}^{(r)}$ is assigned to super state $k$, state $j$ in channel $c$, and mixture component $m$, and zero otherwise.
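Because $\psi_{t_0,t_1}^{(r)}(k,c,j,m)$ is a 0/1 indicator, the update formulas reduce to a sample mean, a sample covariance about that mean, and the fraction of the state's observations assigned to the component. A sketch under that reading, with an illustrative function name:

```python
import numpy as np

def update_gaussian_component(assigned_obs, n_assigned_to_state):
    """Re-estimate one mixture component (super state k, channel c, state j,
    mixture m) from the observations currently assigned to it."""
    mu = assigned_obs.mean(axis=0)                    # weighted mean with 0/1 weights
    centered = assigned_obs - mu
    U = centered.T @ centered / len(assigned_obs)     # covariance about the new mean
    omega = len(assigned_obs) / n_assigned_to_state   # share of the state's observations
    return mu, U, omega
```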


[0047] The update of parameters based on a training sample may be carried out iteratively. This may be necessary because the Viterbi algorithm may yield a different optimal segmentation during each iteration before convergence. Between two consecutive iterations, if the difference in observation likelihood computed with the Viterbi algorithm is smaller than a specified threshold, the iteration may be terminated. The HMM-CHMM has a computational complexity that is quadratic in the number of states in the model. In addition, the HMM-CHMM may be efficiently implemented in a parallel fashion.


[0048] An alternative logical structure that includes an embedded CHMM-HMM (in contrast to an HMM-CHMM) is generically illustrated by FIG. 4. As seen in FIG. 4, the logical structure 60 of an embedded coupled hidden Markov model-hidden Markov model (CHMM-HMM) is a hierarchical statistical model that includes a CHMM parent layer 62 (collectively formed from nodes 63) and a HMM child layer 64 (collectively formed from nodes 65). The child layer 64 associates one HMM node 65 to each node 63 in the parent layer 62, and the parameters of the individual HMMs remain independent from each other. Instead, the parameters of each child layer HMM depend upon the state of the connected parent node 63. Typically, for multidimensional data sets, the CHMM in the parent layer 62 is associated with at least one dimension, and the HMM child layers are associated with data in an orthogonal dimension with respect to the parent layer.


[0049] Formally defined, the elements of an embedded CHMM-HMM have: an initial super state probability $\pi_{0,0}^s$ in super channel $s$ and a super state transition probability from super state sequence $\mathbf{j}$ to super state $i$ in super channel $s$, $a_{0,i|\mathbf{j}}^s$, where super state refers to the state of the parent layer 62 CHMM node 63.


[0050] For each super state $k$ of the super channel $s$, the parameters of the corresponding HMM are defined so that the initial state probability is $\pi_{1,0}^{s,k}$,


[0051] the state transition probability from state $j$ to state $i$ is $a_{1,i|j}^{s,k}$,




[0052] and the observation probability is $b_{t_0,t_1}^{s,k}(j)$.



[0053] In a continuous mixture with Gaussian components, the probability of the observation vector O is given by:
$$b^{s,k}(j) = \sum_{m=1}^{M_j^{s,k}} \omega_{j,m}^{s,k}\, N\!\left(O, \mu_{j,m}^{s,k}, U_{j,m}^{s,k}\right) \tag{15}$$
where $\mu_{j,m}^{s,k}$ and $U_{j,m}^{s,k}$


[0054] are the mean and covariance matrix of the $m$th mixture of the Gaussian mixture corresponding to the $j$th state, $M_j^{s,k}$

[0055] is the number of mixtures corresponding to the $j$th state, and $\omega_{j,m}^{s,k}$

[0056] is a weight associated with the corresponding mixture.


[0057] Observation sequences are used to form observation vectors later used in training and classifying. For example, the observation sequence for a two-dimensional image may be formed from image blocks of size Lx×Ly that are extracted by scanning the image from left-to-right and top-to-bottom. Adjacent image blocks may be designed to overlap by Py rows in the vertical direction and Px columns in the horizontal direction. In one possible embodiment, with a block size of Ly=8 rows and Lx=8 columns, six DCT coefficients (a 3×2 low-frequency array) may be employed to create the observation vectors.


[0058] The resulting array of observation vectors may correspond to an array of size $T_0 \times T_1$, where $T_0$ and $T_1$ are the number of observation vectors extracted along the height (H) and the width (W) of the image, respectively.


[0059] Consecutive horizontal and vertical observation vectors may also be grouped together to form observation blocks. This may be used as a way to consolidate local observations and at the same time to reduce the total number of observations. In practice, this data grouping serves application needs and improves recognition efficiency.


[0060] To compute the number of observation blocks, denote the number of observation blocks in the vertical and horizontal direction by $T_0^0$ and $T_1^0$, respectively. Then,
$$T_0^0 = C_0, \qquad T_1^0 = T_1$$




[0061] In addition, denote the number of observation vectors in the vertical and horizontal direction within each observation block by $T_0^1$ and $T_1^1$, respectively, where
$$T_0^1 = \frac{T_0}{C_0}, \qquad T_1^1 = 1 \tag{16}$$


[0062] Furthermore, denote $O_{t_0,s,t_1}$ as the $t_1$th observation vector corresponding to the observation block $(t_0, s)$.


[0063] Although any suitable state sequence segmentation can be used, a modified Viterbi algorithm for the CHMM-HMM is preferred. Application of this modified Viterbi algorithm determines the optimal state and super state segmentation of the observation sequence. The best super state probability for the observation block $(t_0,s)$ given super state $i_s$ of super channel $s$ is denoted as $P_{t_0,s}(i_s)$. The corresponding optimal state and optimal state sequence $\beta_{t_0,s,t_1}(i_s)$ may then be computed for each super observation. The following states are first initialized:
$$\delta_{0,0}(\mathbf{i}) = \prod_s \pi_{0,0}^s(i_s)\, P_{0,s}(i_s), \qquad \psi_{0,0}(\mathbf{i}) = 0 \tag{17}$$


[0064] The following states are then recursively determined:
$$\delta_{0,t_0}(\mathbf{i}) = \max_{\mathbf{j}} \left\{ \delta_{0,t_0-1}(\mathbf{j}) \prod_s a_{0,i_s|j_{s-1},j_s,j_{s+1}}^s\, P_{t_0,s}(i_s) \right\} \tag{18}$$

$$\psi_{0,t_0}(\mathbf{i}) = \arg\max_{\mathbf{j}} \left\{ \delta_{0,t_0-1}(\mathbf{j}) \prod_s a_{0,i_s|j_{s-1},j_s,j_{s+1}}^s\, P_{t_0,s}(i_s) \right\}$$


[0065] The termination condition is then computed:




$$P = \max_{\mathbf{i}} \left\{ \delta_{T_0}(\mathbf{i}) \right\}$$

$$\{\alpha_{T_0,1}, \ldots, \alpha_{T_0,S}\} = \arg\max_{\mathbf{i}} \left\{ \delta_{T_0}(\mathbf{i}) \right\}$$


[0066] Based on the computed termination condition, a backtracking operation is performed:


T0,1, . . . , αT0,S}=ψ0,t+1T0+1,1, . . . , αT0+1,S)




q


t




0




,s,t




1




0
t0,s





q


t




0




,s,t




1




1
t0,s,t1t0,s)
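The recursion above differs from the HMM-CHMM case in that the parent layer tracks a joint super state with one component per super channel, and the transition score factors over the channels with a dependence on neighboring channels. The brute-force sketch below enumerates joint states for clarity rather than efficiency; representing the coupled transition as a callable `log_a0(s, i_s, j)` is an assumption of the sketch, not notation from the patent.

```python
import itertools

def coupled_super_viterbi(log_pi0, log_P, log_a0):
    """Brute-force Viterbi over joint super states of the parent CHMM.
    log_pi0[s][i_s]   : log initial probability of state i_s in super channel s
    log_P[t][s][i_s]  : log P_{t0,s}(i_s) for block column t and super channel s
    log_a0(s, i_s, j) : log transition score of channel s entering i_s given the
                        previous joint state j (captures the neighbour coupling)
    Returns the optimal sequence of joint super state tuples."""
    T, S = len(log_P), len(log_pi0)
    joint = list(itertools.product(*[range(len(p)) for p in log_pi0]))

    def emit(t, i):  # sum over channels of the per-channel block scores (log domain)
        return sum(log_P[t][s][i[s]] for s in range(S))

    delta = {i: sum(log_pi0[s][i[s]] for s in range(S)) + emit(0, i) for i in joint}
    backpointers = []
    for t in range(1, T):
        new_delta, back = {}, {}
        for i in joint:
            def score(j):
                return delta[j] + sum(log_a0(s, i[s], j) for s in range(S))
            best_j = max(joint, key=score)
            new_delta[i], back[i] = score(best_j) + emit(t, i), best_j
        delta, backpointers = new_delta, backpointers + [back]
    path = [max(delta, key=delta.get)]                 # termination
    for back in reversed(backpointers):                # backtracking
        path.append(back[path[-1]])
    return list(reversed(path))
```

Enumerating joint states is exponential in the number of super channels; it is used here only to keep the recursion readable, whereas the text notes the model itself admits a quadratic-complexity, parallelizable implementation.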



[0067] Training of an embedded CHMM-HMM based on the Viterbi algorithm is substantially similar to that illustrated with respect to training of a HMM-CHMM as seen in FIG. 3. To train a CHMM-HMM based on given training data, observation vectors are first extracted from the training data set and organized in observation blocks. These observation blocks are segmented uniformly and, at consecutive iterations, through an optimal state segmentation algorithm; their model parameters are estimated and the observation likelihood determined. As will be appreciated, the training may be iterative, with each training data set used individually and iteratively to update model parameters until the change in the computed observation likelihood is smaller than a specified threshold.


[0068] More specifically, the training data set may be segmented along a first dimension into S super channels. Then, within each of such super channels, the training data may further be uniformly segmented, according to the number of super states in each super channel, into a plurality of uniform segments, each of which corresponds to a super state. Based on the uniform segmentation at the super layer, the observation vectors within each uniform segment may then be uniformly segmented according to the number of states of each child HMM. The density function of each state (including both super states and child states) may be initialized before the training takes place. For example, if a Gaussian mixture model is adopted for each state, the Gaussian parameters for each of the mixture components may need to be initialized. Different approaches may be employed to achieve the initialization of model parameters. For example, one embodiment may be implemented where the observation sequences assigned to state $j$, super state $k$, and super channel $s$ are assigned to $M_j^{s,k}$ clusters using, for example, the K-means algorithm.


[0069] During the process of training, the original uniform segmentation is updated based on the optimal state segmentation using the Viterbi algorithm or other suitable algorithms. To update the density function of a state, particular relevant parameters to be updated may be determined prior to the update operation. Depending on the density function used for each state, the selection may be carried out accordingly.


[0070] The selection of a Gaussian mixture component for each state $j$, super state $k$, and super channel $s$ is also required. One exemplary criterion for making the selection may correspond to assigning the observation $O_{t_0,s,t_1}^{(r)}$ from the $r$th training sample in the training set to the Gaussian component for which the Gaussian density function
$$N\!\left(O_{t_0,s,t_1}^{(r)};\ \mu_{j,m}^{s,k},\ U_{j,m}^{s,k}\right) \tag{19}$$


[0071] is the highest.


[0072] The parameters are then estimated using, for example, an extension of the segmental K-means algorithm. In particular, the estimated transition probability $a_{0,i_s|\mathbf{j}}^{\prime\,s}$ between super state $i_s$ and the super state sequence $\mathbf{j}$ may be obtained as follows:
$$a_{0,i_s|\mathbf{j}}^{\prime\,s} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \epsilon_{t_0}^{(r)}(s, i_s, \mathbf{j})}{\sum_r \sum_{t_0} \sum_{t_1} \sum_{\mathbf{l}} \epsilon_{t_0}^{(r)}(s, i_s, \mathbf{l})} \tag{20}$$


[0073] where $\epsilon_{t_0}^{(r)}(s, i_s, \mathbf{l})$ is equal to one if a transition from state sequence $\mathbf{l}$ to the super state $i_s$ in super channel $s$ occurs for the observation block $(t_0, s)$ and zero otherwise. The estimated transition probabilities between embedded states, $a_{1,i|j}^{\prime\,s,k}$, may then be obtained as follows,
$$a_{1,i|j}^{\prime\,s,k} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \theta_{t_0,t_1}^{(r)}(s,k,i,j)}{\sum_r \sum_{t_0} \sum_{t_1} \sum_{l} \theta_{t_0,t_1}^{(r)}(s,k,i,l)} \tag{21}$$


[0074] where $\theta_{t_0,t_1}^{(r)}(s, k, i, j)$ is one if, in the observation block $(t_0, s)$, a transition from state $j$ to state $i$ occurs for the observation $O_{t_0,s,t_1}^{(r)}$, and zero otherwise. The parameters of the selected Gaussian mixture component may also be accordingly updated. The involved Gaussian parameters may include a mean vector $\mu_{j,m}^{\prime\,s,k}$, a covariance matrix $U_{j,m}^{\prime\,s,k}$ of the Gaussian mixture, and the mixture coefficients $\omega_{j,m}^{\prime\,s,k}$ for mixture $m$ of state $j$ in super state $k$ and super channel $s$. The updated Gaussian parameters may be obtained according to the following formulations:
$$\mu_{j,m}^{\prime\,s,k} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(s,k,j,m)\, O_{t_0,s,t_1}^{(r)}}{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(s,k,j,m)} \tag{22}$$

$$U_{j,m}^{\prime\,s,k} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(s,k,j,m)\, \left(O_{t_0,s,t_1}^{(r)} - \mu_{j,m}^{\prime\,s,k}\right)\left(O_{t_0,s,t_1}^{(r)} - \mu_{j,m}^{\prime\,s,k}\right)^T}{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(s,k,j,m)}$$

$$\omega_{j,m}^{\prime\,s,k} = \frac{\sum_r \sum_{t_0} \sum_{t_1} \psi_{t_0,t_1}^{(r)}(s,k,j,m)}{\sum_r \sum_{t_0} \sum_{t_1} \sum_{m=1}^{M} \psi_{t_0,t_1}^{(r)}(s,k,j,m)}$$


[0075] where $\psi_{t_0,t_1}^{(r)}(s, k, j, m)$ is equal to one if the observation $O_{t_0,s,t_1}^{(r)}$ is assigned to super state $k$ in super channel $s$, state $j$, and mixture component $m$, and zero otherwise.


[0076] The update of parameters based on a training sample may be carried out iteratively. This may be necessary because the Viterbi algorithm may yield a different optimal segmentation during each iteration before convergence. Between two consecutive iterations, if the difference in observation likelihood computed with the Viterbi algorithm is smaller than a specified threshold, the iteration may be terminated. The CHMM-HMM has a computational complexity that is quadratic in the number of states in the model. In addition, the CHMM-HMM may be efficiently implemented in a parallel fashion.


[0077] As will be understood, reference in this specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the invention. The various appearances “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments.


[0078] If the specification states a component, feature, structure, or characteristic “may”, “might”, or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.


[0079] Those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present invention. Accordingly, it is the following claims, including any amendments thereto, that define the scope of the invention.


Claims
  • 1. A pattern recognition method, comprising: forming a hierarchical statistical model using a hidden Markov model and a coupled hidden Markov model, the hierarchical statistical model supporting a parent layer having multiple supernodes and a child layer having multiple nodes associated with each supernode of the parent layer, training the hierarchical statistical model using observation vectors extracted from a data set, and finding a substantially optimal state sequence segmentation for the hierarchical statistical model.
  • 2. The method according to claim 1, wherein the parent layer is formed from the hidden Markov model and the child layer is formed from the coupled hidden Markov model.
  • 3. The method according to claim 1, wherein the parent layer is formed from the coupled hidden Markov models and the child layer is formed from the hidden Markov models.
  • 4. The method according to claim 1, wherein the hierarchical statistical model is applied to two dimensional data, with the parent layer describing data in a first direction and the child layer describing data in a second direction orthogonal to the first direction.
  • 5. The method according to claim 1, wherein the hierarchical statistical model defines an initial super state probability in a super channel.
  • 6. The method according to claim 1, wherein the hierarchical statistical model defines a super state transition probability from a sequence of states in the super channel.
  • 7. The method according to claim 1, wherein the hierarchical statistical model defines observation likelihood given a state of a channel.
  • 8. The method according to claim 1, wherein training further comprises collecting the training samples; and training the hierarchical statistical model using the training samples.
  • 9. The method according to claim 1, wherein the optimal state sequence segmentation is the Viterbi algorithm for the embedded coupled hidden Markov model/hidden Markov model.
  • 10. The method according to claim 1, wherein the optimal state sequence segmentation is the Viterbi algorithm for the embedded hidden Markov model/coupled hidden Markov model.
  • 11. The method according to claim 1, wherein the data set includes two-dimensional data.
  • 12. A pattern recognition method, comprising: forming a hierarchical statistical model using a hidden Markov model and a coupled hidden Markov model, the hierarchical statistical model supporting a parent layer having multiple supernodes and a child layer having multiple nodes associated with each supernode of the parent layer, and finding a substantially optimal state sequence segmentation for the hierarchical statistical model using a Viterbi based algorithm.
  • 13. The method according to claim 12, wherein the parent layer is formed from the hidden Markov model and the child layer is formed from the coupled hidden Markov model.
  • 14. The method according to claim 12, wherein the parent layer is formed from the coupled hidden Markov models and the child layer is formed from the hidden Markov models.
  • 15. An article comprising a storage medium having stored thereon instructions that when executed by a machine result in: forming a hierarchical statistical model using a hidden Markov model and a coupled hidden Markov model, the hierarchical statistical model supporting a parent layer having multiple supernodes and a child layer having multiple nodes associated with each supernode of the parent layer, training the hierarchical statistical model using observation vectors extracted from a data set, and finding a substantially optimal state sequence segmentation for the hierarchical statistical model.
  • 16. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the parent layer is formed from the hidden Markov model and the child layer is formed from the coupled hidden Markov model.
  • 17. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the parent layer is formed from the coupled hidden Markov models and the child layer is formed from the hidden Markov models.
  • 18. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the hierarchical statistical model is applied to two dimensional data, with the parent layer describing data in a first direction and the child layer describing data in a second direction orthogonal to the first direction.
  • 19. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the hierarchical statistical model defines an initial super state probability in a super channel.
  • 20. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the hierarchical statistical model defines a super state transition probability from a sequence of states in the super channel.
  • 21. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the hierarchical statistical model defines observation likelihood given a state of a channel.
  • 22. The article comprising a storage medium having stored thereon instructions of claim 15, wherein training further comprises collecting the training samples; and training the hierarchical statistical model using the training samples.
  • 23. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the optimal state sequence segmentation is the Viterbi algorithm for the embedded coupled hidden Markov model/hidden Markov model.
  • 24. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the optimal state sequence segmentation is the Viterbi algorithm for the embedded hidden Markov model/coupled hidden Markov model.
  • 25. The article comprising a storage medium having stored thereon instructions of claim 15, wherein the data set includes two-dimensional data.
  • 26. An article comprising a storage medium having stored thereon instructions that when executed by a machine result in: forming a hierarchical statistical model using a hidden Markov model and a coupled hidden Markov model, the hierarchical statistical model supporting a parent layer having multiple supernodes and a child layer having multiple nodes associated with each supernode of the parent layer, and finding a substantially optimal state sequence segmentation for the hierarchical statistical model using a Viterbi based algorithm.
  • 27. The article comprising a storage medium having stored thereon instructions of claim 26, wherein the parent layer is formed from the hidden Markov model and the child layer is formed from the coupled hidden Markov model.
  • 28. The article comprising a storage medium having stored thereon instructions of claim 26, wherein the parent layer is formed from the coupled hidden Markov models and the child layer is formed from the hidden Markov models.
  • 29. A system comprising: a hierarchical statistical model using both hidden Markov models and coupled hidden Markov models, the hierarchical statistical model supporting a parent layer having multiple supernodes and a child layer having multiple nodes associated with each supernode of the parent layer, a training module for the hierarchical statistical model that uses observation vectors extracted from a data set, and an identification module for the hierarchical statistical model that finds a substantially optimal state sequence segmentation.
  • 30. The system according to claim 29, wherein the parent layer is formed from the hidden Markov model and the child layer is formed from the coupled hidden Markov model.
  • 31. The system according to claim 29, wherein the parent layer is formed from the coupled hidden Markov models and the child layer is formed from the hidden Markov models.
  • 32. The system according to claim 29, wherein the hierarchical statistical model is applied to two dimensional data, with the parent layer describing data in a first direction and the child layer describing data in a second direction orthogonal to the first direction.
  • 33. The system according to claim 29, wherein the hierarchical statistical model defines an initial super state probability in a super channel.
  • 34. The system according to claim 29, wherein the hierarchical statistical model defines a super state transition probability from a sequence of states in the super channel.
  • 35. The system according to claim 29, wherein the hierarchical statistical model defines observation likelihood given a state of a channel.
  • 36. The system according to claim 29, wherein training further comprises collecting the training samples; and training the hierarchical statistical model using the training samples.
  • 37. The method according to claim 1, wherein the optimal state sequence segmentation is the Viterbi algorithm for the embedded coupled hidden Markov model/hidden Markov model.
  • 38. The system according to claim 29, wherein the optimal state sequence segmentation is the Viterbi algorithm for the embedded hidden Markov model/coupled hidden Markov model.
  • 39. The system according to claim 29, wherein the data set includes two-dimensional data.
  • 40. A pattern recognition system, comprising: a hierarchical statistical model using a hidden Markov model and a coupled hidden Markov model, the hierarchical statistical model supporting a parent layer having multiple supernodes and a child layer having multiple nodes associated with each supernode of the parent layer, and an identification module for the hierarchical statistical model that finds a substantially optimal state sequence segmentation, using a Viterbi based algorithm.
  • 41. The system according to claim 40, wherein the parent layer is formed from the hidden Markov model and the child layer is formed from the coupled hidden Markov model.
  • 42. The system according to claim 40, wherein the parent layer is formed from the coupled hidden Markov models and the child layer is formed from the hidden Markov models.