Methods for recognition of multidimensional patterns

Cross-Reference to Related Applications

Information

  • Patent Application
  • 20130060788
  • Publication Number
    20130060788
  • Date Filed
    September 01, 2012
  • Date Published
    March 07, 2013
Abstract
A method implemented by a computer for recognition of multidimensional patterns, represented by multidimensional arrays of multidimensional vectors which are derived from data collected from speech, images, video, signals, static physical entities or moving physical entities. The recognition is based on classification into pattern classes. For the classification, the invention provides efficient methods for the computation of similarity measures between input patterns and stored patterns. Usually, input patterns are acquired by sensors and their class is unknown. They are classified by finding the stored pattern class with the highest similarity measure to the input pattern. For speech and image recognition, the methods provide additional innovations which improve reliability. Speech is represented by one-dimensional arrays of multidimensional vectors. For arrays that represent continuous speech, our methods have novel means for separating the signal into words and phonemes. New penalty functions reduce false positive rates and improve correct recognition rates. A similar approach is used for images and video.
Description
FEDERALLY SPONSORED RESEARCH

Not Applicable


SEQUENCE LISTING OR PROGRAM

Not Applicable


BACKGROUND OF THE INVENTION
Prior Art

This invention relates to a method implemented by a computer for the computation of similarity measures between input patterns and stored patterns, wherein both the input patterns and the stored patterns are derived from data collected from speech, images, video, signals, static physical entities or moving physical entities. The similarity measures obtained can then be used to classify the input patterns as similar to one of the classes of the stored patterns. For example, if the input patterns are derived from speech, the method can classify segments of the speech into words by detecting high similarity measures of the input speech to stored exemplar words. In other applications, the method can identify faces in images, classify human actions in video, or even classify patterns of weather, the human genome, etc. In all these applications, both input and stored patterns are converted into arrays of vectors, which are then classified by an algorithm that computes their mutual similarity measures. Our invention is an algorithm that we call VARIS, which stands for "Vector Array Recognition by Indexing and Sequencing". VARIS has many advantages over currently widely used classification methods such as Hidden Markov Models (HMM) or Dynamic Time Warping (DTW). Unlike HMM and DTW, which can be used only for classification of patterns such as speech that are represented by one-dimensional arrays of vectors, VARIS can classify arrays of vectors of any dimension with polynomial computational complexity, whereas HMM and DTW have exponential complexity even in just two dimensions. VARIS has many other advantages over HMM, such as ease of training, which makes it easy to adapt a speech recognizer to any speaker with any accent in any language. Recognition rates are much higher and recognition is much faster.


U.S. PATENTS



  • [6] U.S. Pat. No. 7,366,645, B2, April 2008, J. Ben-Arie, “Method of Recognition of Human Motion, Vector Sequences and Speech”.


    My patent search on the keywords "multidimensional"+"recognition" or "multidimensional"+"pattern" showed zero results, so I could not find any invention that deals with multidimensional pattern recognition, which is the main topic of my invention. I also conducted a search on "adaptive speech recognition" and found the following patents:



Adaptive Speech Recognition—In Title:


1. U.S. Pat. No. 7,996,218 Aug. 9, 2011 User adaptive recognition method and apparatus


2. U.S. Pat. No. 7,003,460 Feb. 21, 2006 Method and apparatus for an adaptive speech recognition system utilizing HMM models


3. U.S. Pat. No. 6,662,160 Dec. 9, 2003 Adaptive speech recognition method with noise compensation


4. U.S. Pat. No. 6,418,411 Jul. 9, 2002 Method and system for adaptive speech recognition in noisy environment


5. U.S. Pat. No. 6,278,968 Aug. 21, 2001 Method and apparatus for adaptive speech recognition hypothesis construction and selection in a spoken language translation system


6. U.S. Pat. No. 6,044,343 Mar. 28, 2000 Adaptive speech recognition with selective input data to a speech classifier


7. U.S. Pat. No. 5,774,841 Jun. 30, 1998 Real-time reconfigurable adaptive speech recognition command and control apparatus and method


8. U.S. Pat. No. 5,170,432 Dec. 8, 1992 Method of speaker adaptive speech recognition


Additional Adaptive Speech Recognition—In Abstract


1. U.S. Pat. No. 6,208,964 Mar. 27, 2001 Method and apparatus for providing unsupervised adaptation of transcription


2. U.S. Pat. No. 4,720,802 Jan. 19, 1988 Noise compensation arrangement


In all these inventions the authors use either HMM (Hidden Markov Models) or DTW (Dynamic Time Warping) as the basic recognition approach. As will be elaborated below, these methods are entirely different from my invented method. Similarly, the most relevant patent applications were also on HMM principles:


20110066426 Real time speaker adaptive speech recognition apparatus and method


20060200347 User adaptive speech recognition method and apparatus


20060184360 Adaptive multi-pass speech recognition system


20030187645 Automatic detection of change in speaker in speaker adaptive speech recognition system.


NONPATENT LITERATURE DOCUMENTS



  • [1] J. Ben-Arie, Z. Wang, P. Pandit, and S. Rajaram. Human Activity Recognition using Multidimensional Indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 8, pp. 1091-1105, August 2002.

  • [2] S. Franzini and J. Ben-Arie. Speech Recognition by Indexing and Sequencing. In 2010 International Conference on Soft Computing and Pattern Recognition (SoCPaR), pages 93-98, December 2010.

  • [3] S. Franzini and J. Ben-Arie, “Speech Recognition by Indexing and Sequencing,” International Journal of Computer Information Systems and Industrial Management Applications. ISSN 2150-7988 Volume 3, December (2011).

  • [4] Kai Ma and J. Ben-Arie, “Vector Array based Multi-view Face Detection with Compounded Exemplars,” in IEEE CVPR 2012, Providence, R.I., June 2012.

  • [5] Kai Ma and J. Ben-Arie, “Multi-view Multi-class Object Detection via Exemplar Compounding,” to appear in IEEE ICPR 2012, Tsukuba, Japan, November 2012.








Hence, I find that my invention is entirely novel with respect to other inventions. The only invention that bears some similarity to a part of my current invention is my own U.S. Pat. No. 7,366,645 B2, which describes the old version of RISq (Recognition by Indexing and Sequencing). However, RISq can recognize only one-dimensional arrays of multidimensional vectors and was developed for human activity recognition [1]. The new invention is called VARIS (Vector Array Recognition by Indexing and Sequencing) and deals with multidimensional arrays of multidimensional vectors. VARIS employs an improved version of RISq with several non-obvious innovations, called the 1D algorithm, in a recursive application; this recursive application is entirely non-obvious and took me years to invent and develop. The recursive application reduces the array's dimension by one in each recursive iteration, finally yielding similarity measures between multidimensional arrays. The extension to multidimensional arrays opens a whole new space for multidimensional vector array pattern recognition. As far as I have investigated, my method is the only one that solves this enormously difficult problem with polynomial computational complexity. Measuring similarity of multidimensional arrays enables, for the first time, recognition of physical phenomena such as videos invariant to their speed and distortion.


Description of VARIS (Vector Array Recognition by Indexing and Sequencing):


VARIS (U.S. Provisional Patent 61/573,208) is a methodology for exemplar-based detection and recognition/classification of signals that are represented by multidimensional arrays of vectors, which can also be typified as tensors or as multidimensional sequences of vectors. If the arrays are of dimension n, then they are equivalent to tensors of order n+1.


Such arrays can represent many kinds of physical signals. For example, speech can be represented by a 1D array of vectors, which is equivalent to a 1D temporal sequence of vectors or to a tensor of order 2. Every vector in the sequence represents the spectrum of a very short segment of the speech sound waveform. Among many other applications, 1D arrays are also useful in describing human actions, gestures and other activities. Images are represented by 2D arrays (tensors of order 3). Every vector in the 2D array represents properties such as color and brightness of one pixel of the image. Videos can be described by 3D arrays (tensors of order 4). 3D arrays are also useful in describing complex phenomena such as weather patterns, earthquakes, etc. Higher array dimensions can describe even more complex physical phenomena.
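The mapping between array dimension and tensor order described above can be sketched with plain shape tuples. The concrete sizes below (frame counts, image resolution, vector dimensions) are illustrative assumptions, not values from this application:

```python
# Each shape lists the array dimensions followed by the vector dimension.
speech_shape = (100, 13)         # 1D array of 100 spectral vectors (13-dim): order-2 tensor
image_shape = (480, 640, 3)      # 2D array of per-pixel RGB vectors: order-3 tensor
video_shape = (75, 480, 640, 3)  # 3D array (frames x rows x cols) of vectors: order-4 tensor

def array_dims(shape):
    """Number of array dimensions n (all axes except the vector axis)."""
    return len(shape) - 1

def tensor_order(shape):
    """An n-dimensional array of vectors is a tensor of order n + 1."""
    return array_dims(shape) + 1
```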


In this patent application, I describe a computer methodology that can detect and classify such signals with robustness to interference, geometric distortions and incomplete data. VARIS is actually a multidimensional extension of a 1D algorithm, which is an improvement of a method called RISq (Recognition by Indexing and Sequencing). A preliminary version of RISq was invented by me in 2000 and patented in 2008 [6]. RISq is designed to recognize 1D arrays of vectors and is described in detail in the following section.


The recognition of multidimensional arrays by VARIS is achieved by a recursive application of a 1D algorithm on the input array, each time on another dimension of the array. VARIS achieves better generic detection by incorporating in each class several exemplars that represent different instances of the same class. For example, in face detection, one can store many types of faces as stored exemplars. This makes it possible to detect large variations in face appearance. The detection/recognition is further improved by introducing new similarity measures that penalize incompatible input-exemplar vector pairs in the matched arrays. An additional significant improvement is achieved by our new compounding approach. In this approach, each exemplar is divided into components in a way that makes it possible to create new exemplars by compounding parts from several exemplars of the same class. Experimental results comparing the performance of VARIS with three of the best face detection methods are presented in [4][5]. The comparison shows that the performance of VARIS is better both in recognition rates and in false detection rates.


Description of RISq:


RISq is a method for detection or recognition of 1D arrays of multidimensional vectors [6]. A more advanced version of RISq, called the 1D algorithm, is used in VARIS. Three innovative additions that are included in the 1D algorithm are described in the section on speech recognition. The problem of detection and classification of patterns that are expressed by arrays of vectors is different from, and more difficult than, classification of single vectors by classical Pattern Recognition (PR). The classical approach for detection and classification of patterns composed of vector arrays was to concatenate all the vectors in the array into one long vector that could be recognized by classical PR methods. This approach is not practical because physical patterns such as speech, images or video usually vary in time and therefore cannot be effectively represented by rigid vectors as required by classical PR. 1D signals such as speech are usually represented by a 1D array of multidimensional vectors (tensors of order 2). Each vector in the array represents a sample of the speech. Methods such as Hidden Markov Models (HMM) and Dynamic Time Warping (DTW) were developed for detection and classification of 1D arrays. DTW is rarely used today because HMM produces much better results. In the following paragraphs we elucidate the differences between RISq and DTW. As demonstrated in [2][3], RISq achieves even slightly better results than HMM in recognition of speech.


HMM is a parametric method and therefore needs rigorous training by a complex algorithm called Expectation Maximization (EM) in order to quantify the parameters of each model. In contrast, the 1D algorithm is a non-parametric method and needs only one exemplar per class for training. This is the reason that the 1D algorithm can be very easily adapted to different speakers, languages or accents. The 1D algorithm is based on a k-Nearest Neighbors (kNN) approach in which classification is performed by estimating the posterior probability, which corresponds to the similarity of each vector in the array with respect to the exemplar vectors in its neighborhood. In our opinion, non-parametric methods have a significant advantage over parametric methods because one does not need to assume any functional description for the underlying probability distributions. In practice, distributions of signals such as speech or imagery are quite complex and a priori unknown. Assuming a functional description that does not fit the actual data can result in low recognition rates and high false positive rates (false alarm rates). In addition, the non-parametric structure of the 1D algorithm is very easy to train because it does not make any attempt at building statistical models of pattern classes. Instead, training is performed by simply storing one or more exemplar arrays per class in the 1D algorithm's database.
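The kNN indexing idea can be illustrated with a minimal sketch. The Gaussian kernel and the helper names (`similarity`, `index_input_vector`) are assumptions for illustration; the method only requires some similarity function that decreases with vector distance:

```python
import math

def similarity(u, v, sigma=1.0):
    """Inverse-distance similarity: larger when vectors are closer.
    A Gaussian kernel is one plausible choice, not the patented formula."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2 * sigma ** 2))

def index_input_vector(x, exemplars, k=3):
    """For one input vector x, return the k nearest exemplar vectors as
    (class_label, position, similarity_weight) triples.  `exemplars`
    maps a class label to its stored sequence of vectors."""
    scored = []
    for label, seq in exemplars.items():
        for pos, m in enumerate(seq):
            scored.append((similarity(x, m), label, pos))
    scored.sort(reverse=True)                 # highest similarity first
    return [(label, pos, s) for s, label, pos in scored[:k]]
```

Training here is literally just populating the `exemplars` dictionary, which mirrors the "store one or more exemplar arrays per class" description.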


After training is performed, an unknown input array can be classified using a two-step algorithm. The first step is indexing, which consists of identifying a number of exemplar vectors that are closest to each input vector and assigning them weights proportional to their mutual similarity measure. The second step is sequencing, which uses dynamic programming to find the maximally weighted bipartite graph matching between the input array and each exemplar array, while respecting a sequencing constraint: if vectors i and j in the input sequence are matched with vectors k and l in the exemplar sequence, and i&lt;j, then k must be smaller than l. The aggregate scores of the bipartite graph matching to each exemplar array are compared and the input array is classified as a member of the class of the exemplar array with the highest score.
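The sequencing step admits a compact dynamic program. Assuming the indexing step has already produced a matrix of pairwise link weights, the non-crossing sequencing constraint gives the optimization a weighted longest-common-subsequence structure; this sketch omits the negative penalty weights introduced later in the application:

```python
def sequencing_score(weights):
    """Maximum-weight non-crossing matching between an input sequence
    (rows of `weights`) and an exemplar sequence (columns), given the
    pairwise similarity weights from indexing.  If input i is matched
    to exemplar k and input j > i to exemplar l, the sequencing
    constraint forces k < l, which is exactly what this recursion
    enforces."""
    M, N = len(weights), len(weights[0])
    dp = [[0.0] * (N + 1) for _ in range(M + 1)]
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            dp[i][j] = max(dp[i - 1][j],      # leave input vector i unmatched
                           dp[i][j - 1],      # leave exemplar vector j unmatched
                           dp[i - 1][j - 1] + weights[i - 1][j - 1])  # match pair
    return dp[M][N]
```

Because unmatched vectors simply contribute nothing, the matching may be partial and "disconnected", unlike a DTW path.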


Description of VARIS as a Recursive Application of 1D Algorithm:


The VARIS algorithm was developed for vector arrays with 2 or more dimensions (tensors of order 3 or more). The 1D algorithm, which was designed for optimal matching of 1D arrays of vectors, is applied recursively by VARIS, each time on another dimension. The result of each application is an array which is smaller by one dimension. For example, 2D arrays are arranged as a 2D matrix of vectors. There are two options for executing the VARIS algorithm in two dimensions, obtained by switching the processing order of the rows and the columns. One could start in the first phase with matching the columns (or the rows) of the 2D input array to all the columns (rows) of the exemplars, which are also 2D arrays. In the first phase, the 1D algorithm finds the optimal similarity scores of each column of the input 2D array with all the columns of the exemplars. This task is tractable because most of the exemplar columns do not have vectors which are close enough to the input vectors to be indexed. In the second phase, each column of the input and of the exemplars is collapsed into a node in a 1D array, so the input 2D array is reduced to a 1D array and, similarly, each 2D exemplar is also reduced to a 1D array. Because the 1D algorithm found similarity scores between each input column and all the columns of the exemplars in the first phase, each input node has similarity scores to each of the nodes of the exemplars. Next, the 1D algorithm is applied again with the goal of finding the collapsed 1D exemplar array that best matches the collapsed 1D input array. The 1D algorithm now finds the optimal aggregate similarity scores of the input array with each of the exemplars. The input array is then classified as a member of the class of the exemplar with the highest similarity score.
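The two-phase 2D recursion described above can be sketched as follows. `varis_2d_score` and its helpers are hypothetical names, and the inner dynamic program is the same 1D sequencing recursion, reused at both levels:

```python
def varis_2d_score(input_arr, exemplar, similarity):
    """Sketch of the 2D recursion: phase 1 matches every input column
    against every exemplar column with the 1D algorithm; phase 2 treats
    each column as one collapsed node and runs the 1D algorithm again
    over columns, using the phase-1 scores as pairwise weights.
    `input_arr` and `exemplar` are 2D grids (lists of rows) of feature
    vectors; `similarity` scores one vector pair."""

    def seq_score(w):  # the 1D sequencing dynamic program (weighted LCS)
        M, N = len(w), len(w[0])
        dp = [[0.0] * (N + 1) for _ in range(M + 1)]
        for i in range(1, M + 1):
            for j in range(1, N + 1):
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1],
                               dp[i - 1][j - 1] + w[i - 1][j - 1])
        return dp[M][N]

    def column(arr, c):
        return [row[c] for row in arr]

    n_in, n_ex = len(input_arr[0]), len(exemplar[0])
    # Phase 1: 1D similarity score of each input column vs. each exemplar column.
    col_scores = [[seq_score([[similarity(u, v) for v in column(exemplar, ce)]
                              for u in column(input_arr, ci)])
                   for ce in range(n_ex)] for ci in range(n_in)]
    # Phase 2: match the two collapsed 1D arrays of column-nodes.
    return seq_score(col_scores)
```

The reduction by one dimension per level is visible directly: a 3D version would call this routine once per slice before a final 1D pass.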


The application of the 1D algorithm finds the optimal mutual similarity score between two 1D arrays while allowing any warping that abides by the sequencing constraint. In this 2D example, the sequencing constraint in the 1D algorithm's optimization is applied twice. In the first phase, the row number of each vector in the columns of the input and the exemplars' arrays serves as its 1D "timing" within the column. In the second phase, the "timing" is the column number of each collapsed column of both the input and the exemplars' arrays. This two-tier sequencing allows a wide range of 2D warping which still preserves the topology of the matches of the input array with the exemplar arrays. This makes it possible to recognize humans and objects that are depicted from a wide range of viewpoints. The segmentation of the 2D input and exemplar images into arrays of independent vectors, where each vector represents a small image patch, enables VARIS to include in the aggregate similarity score only patches that belong to the subject and to reject patches of the background and of other objects. Therefore VARIS, which is based on recursive application of the 1D algorithm, is more robust in conditions of partial occlusion and missing image data.


To improve VARIS's generalization in detection and classification, I included in each class several exemplars, which represent as much as possible the variety of members within each class. Very recently I introduced a new approach which we call "exemplar compounding" [3][4][5]. In order to achieve even more flexibility and adaptability in each class, I developed algorithms that construct 2D optimal exemplars which are composed of patches of different members of the same class. Compounding provides more flexibility and higher similarity scores with relatively smaller exemplar sets. In person detection experiments in images, VARIS with compounded exemplars achieved better results than any state-of-the-art (SOA) detection method, including VARIS with uncompounded exemplars. I also developed compounding algorithms for speech [3], in which word exemplars are composed of partial utterances of several people.


Claims 1-8 describe the general multidimensional VARIS method. Claims 15-19 describe VARIS applied to 2D arrays, which is useful for tasks such as face detection or object recognition in imagery. Claims 9-14 describe my new approach of using VARIS for 1D arrays, a task especially suited to adaptive continuous speech recognition. These claims include three innovative additions. The first is the introduction of negative similarity scores for segments that do not match the exemplars. This is an important addition that noticeably reduces false positive rates, because there are many cases where different words with similar features unjustifiably get high similarity scores. The second innovation is the segmentation of the continuous speech by matching the speech stream against overlapping segments whose lengths correspond to different word groups. The problem of continuous speech segmentation is very difficult because it is impossible to segment the speech by detecting silent periods, which are absent in continuous speech. Here both HMM and DTW fail, because these methods require specifying the beginning and end of each word one intends to classify. The third innovation is a method for segmenting the phonemes in the stored and input words and pruning all the words which have even a single non-matching phoneme. This further reduces the false alarm rates. The first and second innovations are also included in the multidimensional VARIS claims.
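The second innovation, scanning a continuous stream with overlapping windows whose lengths track the stored word exemplars, can be sketched as below. The window step, the threshold and all function names are illustrative assumptions; in the actual method the score would come from the 1D algorithm with its penalty terms:

```python
def scan_continuous_stream(frames, word_exemplars, score_fn, threshold):
    """Slide overlapping windows over the frame stream, one window
    length per stored word exemplar, score each window against that
    exemplar, and keep only detections whose score clears a threshold.
    Sub-threshold windows are rejected, playing the role of the
    negative scores for non-matching segments."""
    detections = []
    for word, exemplar in word_exemplars.items():
        win = len(exemplar)                 # window length tied to this word
        step = max(1, win // 2)             # overlapping windows (50% step)
        for start in range(0, len(frames) - win + 1, step):
            s = score_fn(frames[start:start + win], exemplar)
            if s >= threshold:
                detections.append((word, start, s))
    return detections
```

Note that no silence detection is needed: segmentation falls out of where high-scoring windows land in the stream.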


Differences between DTW (Dynamic Time Warping) and 1D algorithm


DTW is a method for recognizing an input 1D sequence of vectors as most similar to one of the stored exemplar 1D sequences of vectors. The DTW algorithm optimizes the sum of vector distances between the two sequences using Dynamic Programming (DP). The algorithm allows time warping, in which the sequences are shrunk or stretched to improve matching with the other sequence. This provides more flexible matching in cases where the timing of the two sequences is not compatible. The warping of an input sequence with M vectors, matched to an exemplar sequence of N vectors, can be represented by a path in a rectangular lattice which has M rows and N columns. Each junction (k,l) has a cost which is proportional to the distance between the k-th input vector and the l-th exemplar vector. DTW tries to find a monotonically connected path from (1,1) to (M,N) that has minimum cumulative cost (distance). The requirement of monotonic paths means that every piece of the path, say from junction (k,l) to (h,j), must have k−h≤0 and l−j≤0. DTW has many uses, mostly in speech recognition. In speech, the input and the exemplars are sequences of vectors composed of Mel Frequency Cepstrum Coefficients (MFCCs), which are derived by sampling and processing signals of speech utterances. Although there are a few similarities between DTW and the 1D algorithm, there are significant differences between the two. The two methods are superficially similar because both match vector sequences while allowing warping, and both use DP to find the optimal matching score. But here the similarity ends. Firstly, DTW regards the absolute vector distances as an inverse similarity measure and tries to find the match between the sequences which has the minimal cumulative distance using DP, constrained by the monotonic sequencing requirement: the sequences must be matched according to a connected and monotonic path.
On the other hand, the 1D algorithm performs bipartite graph matching, which allows partial matching of both input and exemplar sequences in any sequential configuration, i.e. the path could be disconnected. This provides much more flexibility in warping and noticeably improves the 1D algorithm's and VARIS's recognition rates. Such a process is feasible because both RISq and VARIS accumulate vector similarities, not distances. RISq and VARIS also penalize dissimilarities, while DTW uses only distance penalties. The similarity-dissimilarity scoring in the 1D algorithm and in VARIS provides a substantial improvement in performance. Similarity and distance have an inverse relation; mathematically, there is a nonlinear, inverse relation between the two. However, minimizing accumulated distances is not the same as maximizing accumulated similarities. In addition, DTW must determine the endpoints of both sequences (as must HMM); the 1D algorithm and VARIS do not. This discrepancy alone can make a lot of difference in the recognition rates, because DTW must apply pre-segmentation to locate these endpoints, a process which is very error prone, especially in continuous speech. Another substantial difference: with DTW one has to match all the exemplars serially, one by one, in order to find the best match. This makes DTW much slower, especially if there are many exemplars (HMM has the same problem). In contrast, VARIS is a parallel method that employs indexing and matches all the exemplars at once. Furthermore, a most important difference is that VARIS can recognize multidimensional sequences with polynomial complexity at any dimension, whereas DTW can deal only with 1D sequences and has NP-complete complexity even with 2D sequences (HMM also has exponential complexity in 2D). To conclude, all these differences result in much better performance of the 1D algorithm and VARIS in recognition rates, in false alarm rates and in computation time requirements.
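For reference, the textbook DTW recurrence discussed above (minimum cumulative distance over a connected, monotonic lattice path from (1,1) to (M,N)) looks like this; `dist` is any per-vector distance function:

```python
import math

def dtw_distance(seq_a, seq_b, dist):
    """Classic DTW: dp[i][j] is the minimum cumulative distance of any
    monotonically connected path reaching lattice junction (i, j)."""
    M, N = len(seq_a), len(seq_b)
    dp = [[math.inf] * (N + 1) for _ in range(M + 1)]
    dp[0][0] = 0.0
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch seq_b
                                  dp[i][j - 1],      # stretch seq_a
                                  dp[i - 1][j - 1])  # diagonal step
    return dp[M][N]
```

The contrast with the 1D algorithm is visible in the structure: every lattice cell on the chosen path contributes a cost, so no input or exemplar vector can simply be left out of the match.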

Claims
  • 1. A method implemented by a computer for the computation of similarity measures between input patterns and stored patterns wherein both said input patterns and said stored patterns are derived from data collected from speech, images, video, signals, static physical entities or moving physical entities, wherein said input patterns are represented by input arrays of input vectors, wherein said input arrays have at least one dimension and said input vectors have at least one dimension, wherein said stored patterns are represented by stored arrays of stored vectors, wherein said stored arrays have at least one dimension and said stored vectors have at least one dimension, wherein said input array is denoted by [A]^u wherein: [A]^u = {a^u_{i1...iq} | i1 = 1, 2, ..., k1^u; i2 = 1, 2, ..., k2^u; ...; iq = 1, 2, ..., kq^u} wherein u denotes the serial number of said input array, wherein q ≥ 1 denotes the number of dimensions of said input arrays and also the number of dimensions of said stored arrays, which is equal to said number of dimensions of said input arrays, wherein a^u_{i1...iq} denotes said input vectors, wherein i1...iq denote the indices which denote the serial numbers of said input vectors within each of said q dimensions, wherein k1^u...kq^u denote the total number of said input vectors in each of said q dimensions, wherein each said stored array is denoted by [M]^{vcC} wherein the set of said stored arrays is denoted by: {[M]^{vcC}} = {{m^{vcC}_{j1...jq} | j1 = 1, 2, ..., l1^{vcC}; j2 = 1, 2, ..., l2^{vcC}; ...; jq = 1, 2, ..., lq^{vcC}}} wherein c denotes the class of said stored array [M]^{vcC}, wherein c = 1...C and C denotes the total number of said classes, wherein vc = 1, ..., Vc denotes the serial number of said stored array [M]^{vcC} within said class c and the total number of stored arrays within said class c is Vc, wherein m^{vcC}_{j1...jq} denotes said stored vectors of said stored array [M]^{vcC}, wherein j1...jq denote the indices of said stored vectors which denote the serial numbers of said stored vectors within each of said q dimensions, wherein l1^{vcC}...lq^{vcC} denote the total number of said stored vectors in each of said q dimensions, wherein said similarity measures {S{[A]^u, [M]^{vcC}}} between said input array [A]^u and said stored arrays {[M]^{vcC}} are computed by a recursive application of a 1D algorithm which computes said similarity measures between one-dimensional input sequences of said input vectors and one-dimensional stored sequences of said stored vectors, wherein said input sequences are one-dimensional parts of said input arrays and said stored sequences are one-dimensional parts of said stored arrays.
  • 2. Claim number 1 wherein said input sequence of said input vectors is defined as a one-dimensional input array denoted by 1DArray^u(iα) wherein 1DArray^u(iα) = {a^u_{i1...iα...iq} | iα = 1...kα^u} is said one-dimensional part of said input array [A]^u wherein 1 ≤ α ≤ q; wherein said stored sequence of said stored vectors is defined as a one-dimensional stored array denoted by 1DArray^{vcC}(jβ) wherein 1DArray^{vcC}(jβ) = {m^{vcC}_{j1...jβ...jq} | jβ = 1...lβ^{vcC}} is said one-dimensional part of said stored array [M]^{vcC} wherein 1 ≤ β ≤ q.
  • 3. Claim number 2 wherein said 1D algorithm which computes 1D similarity measure S{1DArrayu(iα), 1DArrayvcC(jβ)} between said input sequence 1DArrayu(iα) and stored sequence 1DArrayvcC(jβ) comprising: (a) computing vector pair similarities Svp(iα, jβ)=Fvp(ai l. . . iα. . . iqu, mjl. . . jβ. . . jq) between all said input vectors {auil. . . iα. . . iq|iα=1 . . . kαu} of said input sequence 1DArrayu(iα) and all said stored vectors {mvcCjl. . . jβ. . . jq. . . |jβ=1 . . . lβvcC} of said stored sequence 1DArrayvcC(jβ) wherein similarity function Fvp(αil. . . iα. . . iq|iα=1 . . . kαu, mjl. . . jβ. . . jqvcC) is an inverse function of the multidimensional distance between said input vector αil. . . iα. . . iqu and said stored vector mjl. . . jβ. . . jqvcC wherein said similarity function Fvp(αil. . . iα. . . iqu,mjl. . . jβ. . . jqvcC) increases when said multidimensional distance decreases, wherein said similarity function Fvp(αil. . . iα. . . iqu, mjl. . . jβ. . . jqvcC) decreases when said multidimensional distance increases;(b) defining a bipartite graph which represents said input sequence 1DArrayu(iα) and said stored sequence 1DArrayvcc(jβ) wherein said bipartite graph has two parts wherein the first part consists of input nodes wherein the second part consists of stored nodes wherein each of said input nodes {iα} of said first part of said bipartite graph is attached to one of said input vectors {αil. . . iα. . . iqu|iα=1 . . . kαu} and wherein each of said stored nodes of said second part {jβ} of said bipartite graph is attached to one of said stored vectors {mjl. . . jβ. . . jqvcC|jβ=1 . . . lβvcC} wherein each said input node of the first part has links to all said stored nodes {jβ} of the second part wherein each said link L(iα, jβ) connects said input node iα attached to said input vector αil. . . iα. . . iqu with said stored node jβ attached to said stored vector mjl. . . jβ. . . 
jqvcC wherein each said link L(iα, jβ) has a link weight which is equal to said vector pair similarity Svp(iα, jβ)=Fvp(αil. . . iα. . . iqu,mjl. . . jβ. . . jqvcC) between said input vector αil. . . iα. . . iqu and said stored vector mjl. . . jβ. . . jqvcC it connects;(c) defining a sequential link set as a set of said links L(iα, jβ) in said bipartite graph wherein all said links in said sequential link set fulfill a sequencing requirement wherein said sequencing requirement allows to include in said sequential link set only said links which have a mutual sequential relation wherein any two said links . . . L(iα=η, jβ=ε) . . . L(ia=λ, jβ=δ) . . . with said mutual sequential relation must fulfill the following four conditions: (I) η≠λ (II) ε≠δ (III) if η<λ then ε<δ (IV) if η>λ then ε>δ;(d) using a method of dynamic programming to compute the optimal-sequential said bipartite graph matching wherein said optimal-sequential said bipartite graph is defined as said bipartite graph with said sequential links set {L(iα, jβ)} wherein said sequential links set {L(iα, jβ)} has a total sum of said link weights which is the largest among all said sequential link sets possible in said bipartite graph;(e) allocating new said link weights to all said links L(iα, jβ) which have said link weights Svp(iα, jβ) smaller than a predetermined threshold link weight, wherein said new said link weight: SNvp(iα jβ)=FNvp(αil. . . iα. . . iqu,mjl. . . jβ. . . jqvcC≦0 is a predetermined penalty function FNvp(αil. . . iα. . . iqu,mjl. . . jβ. . . jqvcC)≦0 of said input vectors αil. . . iα. . . iqu and said stored vectors mjl. . . jβ. . . 
jqvcC (f) said method of dynamic programming computes said optimal-sequential said bipartite graph matching by gradually increasing the size of said optimal-sequential said bipartite graph starting with defining an initial said bipartite graph by initiating an input nodes list If={i1} wherein said list size: f=1, wherein said input nodes list denotes all said input nodes of said input part, wherein said second part of said bipartite graph has the full set of said stored nodes {jβ}, listing in each said stored node jβ said link L(i1, jβ) and said link weight Svp(i1, jβ);(g) increasing said input nodes list by one If={i1, i2} wherein said list size: f=2 and constructing said optimal-sequential said bipartite graph Gf which have as said input nodes said input nodes list If={i1, i2} and as said stored nodes said full set of said stored nodes: {jβ}, finding said optimal-sequential bipartite graph with maximum two said links L(i1, jφ), L(i2, jθ) wherein said links are said sequential link set wherein said sequencing requirement is: φ<θ≦β wherein the two said links listed have maximal said total sum Sf of maximum two said link weights of said optimal-sequential said bipartite graph Gf, recording said maximal said total sum of said link weights Sf;(h) increasing said list size by one f=f+1; increasing said input nodes list by one If={i1, i2, . . . , if} wherein previous said node list was If−1={i1, i2, . . . , if−1} and constructing said optimal-sequential said bipartite graph Gf which has as said input nodes said input nodes list If={i1, i2, . . . 
, if} and as said stored nodes said full set of said stored nodes: {jβ} finding said sequential links set with g links wherein g≦f is a maximal number possible in said optimal-sequential bipartite graph Gf wherein said sequential link set includes maximal number of links possible in said optimal-sequential said bipartite graph Gf including said links with negative weights whenever positive ones are not available, wherein said links listed have the maximal said total sum of g said link weights Sg possible in said optimal-sequential said bipartite graph Gf wherein if a number nα of said input nodes and a number nβ of said stored nodes do not have said links which can be included in said optimal-sequential said bipartite graph Gf then the total said sum of said links weights Sf for Gf is computed by: Sf=Sg−Nw(nα+nβ) wherein Nw is a predetermined penalty weight constant;(i) repeating step (h) if f<kαu otherwise if f=kαu ending the process wherein all said input nodes {iα} have been included in said list of said input nodes;(j) listing g≦kαu said links with the highest said total sum of said link weights Sg and recording said total sum of said link weights Sf, wherein Sf is equal to said optimal-sequential total similarity measure: Sf=S{1DArrayu(iα), 1DArrayvcC(ββ)} between said input sequence 1DArrayu(iα) and stored sequence 1DArrayvcC(jβ).
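The optimal-sequential bipartite graph matching of claim 3 reduces to a maximum-weight non-crossing matching, computable by a dynamic program. The sketch below is illustrative only: the inverse-distance similarity 1/(1+d), the threshold and penalty values, and all function names are assumptions, and the empty-node penalty Nw of step (h) is omitted for brevity.

```python
import math

def vector_similarity(a, m):
    """Inverse-distance similarity Fvp: grows as the Euclidean distance
    shrinks. The 1/(1+d) form is an illustrative choice, not mandated."""
    return 1.0 / (1.0 + math.dist(a, m))

def optimal_sequential_similarity(inputs, stored, threshold=0.2, penalty=-0.1):
    """DP over the bipartite graph: choose links (i, j) with strictly
    increasing i and j maximizing total weight. Links whose weight falls
    below `threshold` are replaced by the non-positive `penalty`, per (e)."""
    k, l = len(inputs), len(stored)
    w = [[vector_similarity(a, m) for m in stored] for a in inputs]
    for i in range(k):
        for j in range(l):
            if w[i][j] < threshold:
                w[i][j] = penalty              # penalised link
    # best[i][j] = max total weight using inputs[0..i), stored[0..j)
    best = [[0.0] * (l + 1) for _ in range(k + 1)]
    for i in range(1, k + 1):
        for j in range(1, l + 1):
            best[i][j] = max(best[i - 1][j],               # skip input node
                             best[i][j - 1],               # skip stored node
                             best[i - 1][j - 1] + w[i - 1][j - 1])  # link
    return best[k][l]
```

In this simplified form a negative link is never chosen, since nodes may be skipped at no cost; the claimed method instead charges Nw per unlinked node, which can make accepting a negative link preferable.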
  • 4. The method of claim 3 wherein said method of dynamic programming for computing said optimal-sequential said bipartite graph matching is modified so that said process initiated in step (f) starts by constructing said input node list with the last said input node ikαu and proceeds in reverse order, ending by including in said list said input node i1 when f=kαu.
  • 5. Claim number 2 wherein said 1D algorithm computes said 1D similarity measure: S{1DArrayu(iα|a1, . . . , aα−1, aα+1, . . . , aq), 1DArrayvcC(jβ|b1, . . . , bβ−1, bβ+1, . . . , bq)} between said input sequence: 1DArrayu(iα)=1DArrayu(iα|a1, . . . , aα−1, aα+1, . . . , aq)={auil . . . iq|ia=1, 2, . . . , kαu; i1=α1, . . . , iα−1=aα−1, iα+1=aα+1, . . . , iq=αq} where auil . . . iq is a p-dimensional said input vector, and said stored sequence: 1DArrayvcC(jβ|b1, . . . , ββ−1, bβ+1, . . . , bq)=1DArrayvcC(jβ)={mjl. . . jqvcC|jβ=1, 2, . . . , lβvcC; j2=b2; . . . ; jβ−1=bβ−1; jβ+1=bβ+1; . . . ; jq=bq;} wherein mjl. . . jqvcC is a p-dimensional said stored vector,wherein each said input sequence is said one dimensional part of said input array: [A]u={αuil. . . iq|il=1, 2, . . . , ku1; i2=1, 2, . . . , k2u; . . . ; iq=1, 2, . . . , kqu}each said stored sequence is said one dimensional part of said stored array: [M]vcC={mjl. . . jqvcC|j1=1, 2, . . . , l1vcC; j2=1, 2, . . . , l2vcC; jq=1, 2, . . . , lqvcC}wherein said recursive application of said 1D algorithm yields said similarity measure: S{[A]u, [M]vcC} between said input array [A]u and said stored array [M]vcC wherein such said recursive application comprising:(a) selecting said input sequence: 1DArrayu(i1)=1DArrayu(il|a2, . . . , aq)={auil . . . iq|=1, 2, . . . , k1u; i2=a2, . . . , iq=aq} wherein {|i1=1, 2, . . . , k1u; 1≦a2≦k2u; . . . ; 1≦aq≦kqu} and selecting said stored sequence: 1DArrayvcC(j1)=1DArrayvcC(j1|b2, . . . , bq)={mjl . . . jqvcC|j1=1, 2, . . . , l1vcC; j2=b2; . . . ; jq=bq}wherein {|j1=1, 2, . . . , l1vcC; 1≦b2≦l2vcC; . . . ; 1≦bq≦lqvcC}predetermining a range constants: 0≦rangeyvcC≦lyvcC;y=1 . . . q;vc=1 . . . Vc;c=1 . . . C;(b) applying said 1D algorithm to compute said 1D similarity measure between selected said: input sequence: 1 DArrayu(i1|a2, . . . , aq) and said stored sequence 1DArrayvcC (j1|b2, . . . , bq) wherein said 1D algorithm is applied for all values of a2, . . . 
, aq wherein 1≦a2≦k2u; . . . ; 1≦aq≦kqu; wherein each application of said 1D algorithm for one said value of a2, . . . , aq said 1D algorithm is repeated for all values of b2, . . . , bq wherein {|j1=1, 2, . . . llvcC; a2−range2vcC≦b2≦a2+range2vcC; . . . ;aq−rangeqvcC≦bq≦aq+rangeqvcC}wherein applying said 1D algorithm computes a product of) {|(k2u)(k3u) . . . (kqu)(1+2range2vcC) . . . (1+2rangeqvcC)}said 1D similarity measures(c) arranging said product of (k2u)·(k3u) . . . (kqu)·(l2vcC)·(l3vcC) . . . (lqvcC) said 1D similarity measures in an input similarity array [I]q−1uvcC of q−1 dimensions with a total of (k2u)·(k3u) . . . (kqu) input similarity vectors wuvcCi2. . . iq: [I]q−1uvcC={wuvcCi2. . . iq|i2=1, 2, . . . , k2u; i3=1, 2, . . . . , k3u; . . . ; iq1, 2, . . . , kqu} wherein each row along dimension x of said input similarity array has kxu said input similarity vectors wherein each said input similarity vector has (l2vcC)·(l3vcC) . . . (lqvcC) input similarity components:
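Step (b) of claim 5 applies the 1D algorithm only to stored rows within a band of half-width rangey around each input row. A minimal sketch of that banded row-by-row application, with `row_sim` standing in for the 1D similarity of claim 3 (all names are illustrative assumptions):

```python
def banded_row_similarities(input_rows, stored_rows, row_sim, band=1):
    """For each input row a, apply the 1D similarity `row_sim` only to stored
    rows b with |a - b| <= band, mirroring the range constraint of the claim.
    Entries outside the band remain None and are never computed."""
    k, l = len(input_rows), len(stored_rows)
    table = [[None] * l for _ in range(k)]
    for a in range(k):
        lo, hi = max(0, a - band), min(l, a + band + 1)
        for b in range(lo, hi):
            table[a][b] = row_sim(input_rows[a], stored_rows[b])
    return table
```

With k input rows this computes at most k·(1+2·band) similarities rather than k·l, which is the efficiency point of the range constants.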
  • 6. Claim number 5 wherein constructing an indexing method for efficient detection of the largest said similarity measures {S{[A]u, [MvcC]}} between said input array [A]u and said stored arrays {[M]vcc} wherein: [A]u={auil. . . iq|i1=1, 2, . . . , k1u; 1, 2, . . . , k2u; . . . ; iq=1, 2, . . . , kqu} and {[M]vcc}={{mil. . . iq|i1=1, 2, . . . , l1vcC; i2=1, 2, . . . , l2vcC; . . . ; iq=1, 2, . . . , lqvcC}}(a) for a predetermined said input pattern number u all said input vectors ail. . . iqu are indexed into a p dimensional data structure Δp which stores all said stored vectors mjl. . . jqvcC of said stored arrays {[M]vcC} of wherein both said input vectors ail. . . jqu and said stored vectors mjl. . . jqvcC are p dimensional;(b) for each one of said input vectors ail. . . iqu which is indexed into said data structure Δp all said stored vectors mjl. . . jqvcC which are within a predetermined Euclidean distance dmax: d{ail. . . iqu, mjl. . . jqvcC}≦dmax are retrieved(c) all said retrieved stored vectors are separated into their original said stored arrays: {[M]vcC}(d) a predetermined selection function yields selection scores to each of said stored arrays said selection function is dependent on the number of retrieved said stored vectors mjl. . . jqvcC which belong to each said stored array, their Euclidean distances d{ajl. . . iqu, mjl. . . jqvcC} from their indexing said input vectors and the sum of indices differences between their said indexing input vectors and said retrieved said stored vectors: Σ [(il−j1)2+ . . . 
+(iq−jq)2]1/2 (e) said a predetermined number of stored arrays [MvcC] with the highest said selection scores are selected for further computations of said similarity measures: {S{[A]u,[MvcC]}};(f) computing said similarity measures {S{[A]u,[MvcC]}} of [A]u with each said selected stored arrays [MvcC];(g) selecting said stored array [MvcC] with the highest said similarity measure as the most similar to said input array [A]u and said input array can be classified into the same class c of the selected said stored array [MvcC] if said similarity measure is larger than a predetermined similarity threshold value.
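The indexing scheme of claim 6 can be sketched as follows. A linear scan stands in for the p-dimensional data structure Δp (in practice a KD-tree or similar spatial index would serve), and the selection function here simply counts retrieved vectors per stored array; the claim permits richer scores that also use the distances and index differences. All names are illustrative assumptions.

```python
import math
from collections import defaultdict

def candidate_stored_arrays(input_vectors, stored_arrays, d_max, top_n=2):
    """stored_arrays: mapping name -> list of stored vectors. For every input
    vector, retrieve all stored vectors within Euclidean distance d_max,
    separate the hits by originating stored array, and rank arrays by hit
    count (a simple instance of the 'predetermined selection function')."""
    hits = defaultdict(int)
    for a in input_vectors:
        for name, vectors in stored_arrays.items():
            for m in vectors:
                if math.dist(a, m) <= d_max:
                    hits[name] += 1
    ranked = sorted(hits, key=hits.get, reverse=True)
    return ranked[:top_n]
```

Only the top-ranked stored arrays then undergo the full similarity computation of step (f), which is the point of the index: it prunes the expensive matching.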
  • 7. Claim number 3 wherein said 1D algorithm computes 1D similarity measure: S{1DArrayu(iα|a1, . . . , aα−1, aα+1, . . . , aq), 1DArrayvcC(jβ|b1, . . . , bβ−1, bβ+1, . . . , bq)} between said input sequence:1DArrayu(iα|a1, . . . , aα−1, aa+1, . . . , aw)={auil. . . iq|iα=1, 2, . . . , kαu; il=a1, . . . , iα−1=aα−1, iα+1=aα+1, . . . , iq=aq} and said stored sequence:1DArrayvcC(jβ|b1, . . . , bβ−1, bβ+1, . . . bq)={mjl. . . jqvcC|jβ=1, 2, . . . lβvcC; j2=b2; . . . ; jβ−1=bβ−1; jβ+1=bβ+1; . . . ; jq=bq;}each said input sequence is an one dimensional part of said input array: [A]u={ail. . . iqu|il=1, 2, . . . , k1u; i2=1, 2, . . . , k2u; . . . ; iq=1, 2, . . . , kqu}each said stored sequence is an one dimensional part of said stored array: [M]vcc={mjl. . . jqvcC|j1=1, 2, . . . , l2vcC; . . . ; jq=1, 2, . . . lvcC}wherein said recursive application of said 1D algorithm yields said similarity measure: S{[A]u, [M]vcC} between said input array [A]u and said stored array [M]vcC wherein such said recursive application comprising:(a) selecting said input sequence: 1DArrayu(i1|a2, . . . , aq)={ail. . . iqu|i1=1, 2, . . . , k1u; i2=a2, . . . , iq=aq} wherein {|i1=1, 2, . . . , k1u; 1≦a2≦k2u; . . . ; 1≦aq≦kqu} and selecting said stored sequence:1DArrayvcC(j1|b2, . . . , bq)={mjl. . . jqvcC|j1=1, 2, . . . , lqvcC; j2=b2; . . . ; jq=bq}wherein {|j1=1, 2, . . . , llvcC; 1≦b2≦l2vcC; . . . ; 1≦bq≦lqvcC}(b) applying said 1D algorithm to compute said 1D similarity measure between selected said input sequence: 1DArrayu(i1|a2, . . . , aq) and said stored sequence 1DArrayvcC(j1|b2, . . . , bq)wherein said 1D algorithm is applied for all values of a2, . . . , aq wherein1≦a2≦k2u; . . . ; 1≦aq≦kqu;and for all values of b2, . . . , bq Wherein 1≦b2≦l2vcC; . . . ; 1≦bq≦lqvcC;wherein applying said 1D algorithm computes a k2u·k3u . . . kqu·l2vcC·l3vcC . . . lqvcC
  • 8. Claim number 7 wherein constructing an indexing method for efficient detection of the largest said similarity measures {S{[A]u, [MvC]}} between said input array [A]u and said stored arrays {[M]vcC} wherein: [A]u={auil. . . iq|i1=1, 2, . . . , k1u; i21, 2, . . . , k2u; . . . ; iq=1, 2, . . . , kqu} and {[M]vcC}={{mil. . . iqu|i1=1, 2, . . . , l1vcC; i2=1, 2, . . . , l2vcC; . . . ; iq=1, 2, . . . , lqvcC}}(a) for a predetermined said input pattern number u all said input vectors ail. . . iqu are indexed into a p dimensional data structure Δp which stores all said stored vectors mjl. . . jqvcC of said stored arrays {[M]vcC} wherein both said input vectors ail. . . iqu and said stored vectors mjl. . . jqvcC are p dimensional;(b) for each one of said input vectors ail. . . iqu which is indexed into said data structure Δp all said stored vectors mjl. . . jqvcC which are within a predetermined Euclidean distance dmax: d{ail. . . iqu, mjl. . . jqvcC}≦dmax are retrieved(c) all said retrieved stored vectors are separated into their original said stored arrays: {[M]vcC}(d) a predetermined selection function yields selection scores to each of said stored arrays said selection function is dependent on the number of retrieved said stored vectors mjl. . . jqvcC which belong to each said stored array, their Euclidean distances d{ail. . . iqu, mjl. . . jqvcC} from their indexing said input vectors and the sum of indices differences between their said indexing input vectors and said retrieved said stored vectors: Σ[(il−j1)2+ . . . 
+(iq−jq)2]1/2;(e) said a predetermined number of stored arrays [MvcC] with the highest said selection scores are selected for further computations of said similarity measures: {S{[A]u,[MvcC]}}(f) computing said similarity measures {S{[A]u,[MvcC]}} of [A]u with each said selected stored arrays [MvcC];(g) selecting said stored array [M]cC with the highest said similarity measure as the most similar to said input array [A]u and said input array can be classified into the same class c of the selected said stored array [M]vcC if said similarity measure is larger than a predetermined similarity threshold value.
  • 9. A method implemented by a computer for the computation of similarity measures between segments of input patterns and stored patterns wherein both said input patterns and said stored patterns are derived from data collected from speech or one dimensional signals, wherein said input patterns are represented by input arrays of input vectors wherein said input arrays have one dimension wherein input vectors have at least one dimension, wherein said stored patterns are represented by stored arrays of stored vectors wherein said stored arrays have one dimension wherein said stored vectors have at least one dimension, wherein said input array is denoted by [A]u wherein [A]u={auil|i1=1, 2, . . . , k1u} wherein u denotes the serial number of said input array, wherein ailu denotes said input vectors wherein il denote the vectors indices which denote the serial numbers of said input vectors within said input array, wherein k1u denotes the total number of said input vectors within said input array wherein each said stored array is denoted by [M]vcC wherein the set of said stored arrays is denoted by {[M]vcC}={{mj1vcC|j1=1, 2, . . . , llvcC}} wherein c denotes the class of said stored array [M]vcC wherein c=1 . . . C; wherein C denotes the total number of said classes wherein vc=1, . . . 
, Vc; denotes the serial number of said stored array [M]vcC within said class c wherein total number of stored arrays within said class c is Vc wherein mjlvcC denotes said stored vectors of said stored array [M]vcC wherein j1 denotes the indices of said stored vectors which denote the serial numbers of said stored vectors within said stored array, wherein llvcC denotes the total number of said stored vectors within said stored array; for each said class c, dividing said input array [A]u into input array partially overlapping segments: [A(r)]cu wherein r is said segment's serial number, wherein all said segment lengths are equal to the average length of said stored arrays [M]vcC of said class c, wherein said 1D algorithm is used to compute said similarity measures {S{[A(r)]cu,[MvcC]}} of all said segments [A(r)]cu with said stored arrays [M]vcC wherein said segments [A(r)]cu which have said similarity measure which is greater than a predetermined similarity threshold, are classified as members of class c.
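The per-class scan of claim 9 — windows whose length equals the class's average stored-array length, scored against that class's stored arrays — can be sketched as below. The overlap fraction default, the tuple output, and the generic `similarity` callable (standing in for the claimed 1D measure) are illustrative assumptions.

```python
import math

def classify_segments(input_array, stored_by_class, similarity, threshold,
                      overlap=0.5):
    """For each class c, cut the input into overlapping windows of length Lc
    (the average stored-array length of c) and keep windows whose best
    similarity to a stored array of c exceeds `threshold`."""
    labels = []
    for c, arrays in stored_by_class.items():
        Lc = round(sum(len(m) for m in arrays) / len(arrays))
        step = max(1, math.ceil(overlap * Lc))   # window hop
        for start in range(0, max(1, len(input_array) - Lc + 1), step):
            seg = input_array[start:start + Lc]
            score = max(similarity(seg, m) for m in arrays)
            if score > threshold:
                labels.append((c, start, score))
    return labels
```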
  • 10. Claim number 9 wherein executing 1D algorithm for the computations of similarity measures: {S{[A(r)]cu, [M]vcC}} between said input array segment [A(r)]cu and said stored array [M]vcC, comprising: (a) dividing the total length k1u of said input array [A]u into partially overlapping length segments of lengths: Lc wherein said length segments: Lcu(r)=[(r−1)δ+1, (r−1)δ+2, . . . , (r−1)δ+1+Lc] wherein the overlap: δ=┌φLc┐ is the lowest integer which is greater or equal to a predetermined fraction φ of said segment length Lc wherein total number of partially overlapping said length segments is: rcu=└(k1u−Lc)/δ+1┘; wherein rcu is the largest integer smaller or equal to (k1u−Lc)/δ+1;(b) setting r=0;(c) incrementing r: rr+1, dividing said input array [A]u into said input array partially overlapping array segments: [A(r)]cu wherein [A(r)]cu={ailu|i1=(r−1)δ+1, (r−1)δ+2, . . . , (r−1)δ+1+Lc} wherein r=1, . . . rcu; wherein rcu is said total number of said length segments;(d) computing vector pair similarities Svp(i1, j1)=Fvp(ailu, mjlvcC) between all said input vectors {ailu|i1=(r−1)δ+1, (r−1)δ+2, . . . , (r+1)δ+1+Lc} of said input array segment [A(r)]cu and all said stored vectors {mvcCj1|j1=1 . . . llvcC} of said stored array [M]vcC wherein the similarity function Fvp(ailu, mjlvcC) is inverse function of the multidimensional distance an between said input vector ailu and said stored vector mjlvcC wherein said inverse function Fvp(ailu, mjlvcC) increases when said multidimensional distance decreases and said inverse function Fvp(ailu, mjlvcC) decreases when said multidimensional distance increases;(e) defining a bipartite graph which represents said input array segment [A(r)]cu and said stored array [M]vcC wherein each of the input nodes {i1} of the first part of said bipartite graph is attached to one vector of said input vectors {ai1u|i1=(r−1)δ+1, (r−1)δ+2, . . . 
, (r−1)δ+1+Lc} and wherein each of the stored nodes of the second part {j1} of said bipartite graph is attached to one of said stored vectors {mvcCj1|j1=1 . . . llvcC} wherein each said input node of the first part has links to all said stored nodes {j1} of the second part wherein each said link L(i1, j1) connects said input node i1 attached to said input vector ailu with said stored node j1 attached to said stored vector mjlvcC wherein each said link L(i1, j1) has a link weight which is equal to said vector pair similarity Svp(i1, j1)=Fvp(ailu, mjlvcC) between said input vector ailu and said stored vector mj1vcC it connects;(f) defining a sequential link set as a set of said links L(i1, j1) in said bipartite graph wherein all links in said sequential link set fulfill a sequencing requirement wherein said sequencing requirement allows to include in said sequential link set only said links which have a mutual sequential relation wherein any two said links . . . L(i1α, j1β) . . . Li1λ, j1δ) . . . 
with said mutual sequential relation must fulfill the following four conditions: (I) α≠λ (II) β≠δ (III) if α<λ then β<δ (IV) if α>λ then β>δ; wherein the notations (i1α, j1β),(i1λ, j1δ) represent various concrete values of (i1, j1) respectively of actual said links L(i1, j1);(g) using the method of dynamic programming to compute the optimal-sequential said bipartite graph matching wherein said optimal-sequential bipartite graph is defined as said bipartite graph with a sequential links set {L(i1, j1)} wherein said sequential links set {Li1, j1)} has a total sum of said link weights which is the largest among all said sequential link sets possible in said bipartite graph;(h) allocating new said link weights to all said links L(i1, j1) which have said link weights Svp(i1, j1) smaller than a predetermined threshold link weight, wherein said new said link weight: SNvp(i1, j1)=FNvp(ailu, mjlvcC)≦0 is a predetermined penalty function FNvp(ailu, mjlvcC)≦0 of said input vectors ailu and said stored vectors mjlvcC (i) said method of dynamic programming computes said optimal-sequential bipartite graph matching by gradually increasing the size of said optimal-sequential bipartite graph starting with defining an initial said bipartite graph by initiating an input nodes list If={i11} wherein list size: f=1, wherein second part of said bipartite graph has the full set of said stored nodes {j1}, listing in each said stored node j1 said links L(i1, jβ) and said link weights Svp(i1, j1) and recording said stored node j1 with the highest said link weight Svp(i1, j1);(j) increasing said input nodes list by one If={i11,i12} wherein said list size: f=2 and constructing said optimal-sequential bipartite graph Gf which have as said input nodes said input nodes list If={i11,i12} and as said stored nodes said full set of said stored nodes: {j1}, finding said optimal-sequential bipartite graph with two said links L(i12, j1φ), L(i11, j1θ) wherein said links are said sequential link set 
wherein said sequencing requirement is: φ<θ≦β wherein the two said links listed have maximal said total sum Sf of two said link weights of optimal-sequential bipartite graph Gf wherein f=2; recording said maximal said total sum of said link weights Sf;(k) increasing said list size by one f=f+1; increasing said input nodes list by one If={i11, i12, . . . , i1f} wherein previous said node list was If−1={i11, i12, . . . , i1f−1}, and constructing said optimal-sequential bipartite graph Gf which has as said input nodes said input nodes list If={i11, i12, . . . , i1f} and as said stored nodes said full set of said stored nodes: {j1} finding said sequential links set with g links wherein g g≦f is a maximal number possible in said optimal-sequential bipartite graph Gf wherein said sequential link set includes maximal number of links possible in said optimal-sequential bipartite graph Gf including said links with negative weights whenever positive ones are not available, wherein said links listed have the maximal said total sum: Sg of g said link weights possible in said optimal-sequential bipartite graph Gf wherein if a number nα of said input nodes and a number nβ of said stored nodes do not have said links which can be included in said optimal-sequential bipartite graph Gf then the overall sum: Sf of said weights for Gf which includes said total sum Sg of g said link weights and the nodes without said links is computed by: Sf=Sg−Nw(nα+nβ) wherein Nw is a predetermined empty weight constant;(l) repeating step (k) if f<k1u otherwise if f=k1u all said input nodes {i1} have been included in said list of said input nodes, advancing to step (m);(m) listing g≦k1u said links with the highest said total sum of said link weights Sg and recording said overall sum Sf for Gf wherein f=k1u, also recording that Sf is equal to said optimal-sequential total similarity measure: Sf=S{[A(r)]cu, [M]vcC} between said input array segment [A(r)]cu and said stored array [M]vcC;(n) returning to 
step (c) if r<rcu, otherwise ending the 1D algorithm.
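The length-segment bookkeeping of step (a) of claim 10 can be checked numerically. Note that the text's segment [(r−1)δ+1, . . . , (r−1)δ+1+Lc] spans Lc+1 samples; the sketch below uses Lc samples per segment, consistent with the stated segment length Lc.

```python
import math

def segment_bounds(k, Lc, phi):
    """Overlapping length segments of claim 10: delta = ceil(phi * Lc) and
    r_count = floor((k - Lc) / delta + 1); segment r (1-based) starts at
    (r - 1) * delta + 1. Returns 1-based inclusive (start, end) pairs."""
    delta = math.ceil(phi * Lc)
    r_count = math.floor((k - Lc) / delta + 1)
    return [((r - 1) * delta + 1, (r - 1) * delta + Lc)
            for r in range(1, r_count + 1)]
```

For example, a 10-frame input with Lc=4 and φ=0.5 yields hop δ=2 and four half-overlapping segments.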
  • 11. Claim number 9 wherein said algorithm which computes 1D similarity measure S{1DArrayu(i1), 1DArrayvcc (j1)} between said input sequence 1DArrayu(i1) and said stored sequence 1DArrayvcC(j1) comprising: (a) computing vector pair similarities Svp(i1, j1)=Fvp(ailu,mjlvcC) between all said input vectors {auil|i1=1 . . . k1u} of said input sequence 1DArrayu(ii) and all said stored vectors {mvcCjl|=1 . . . llvcC} of said stored sequence 1DArrayvcC(j1) wherein the similarity function Fvp(ailu,mjlvcC) is an inverse function of the multidimensional distance between said input vector ailu and said stored vector mjlvcC wherein said inverse function Fvp(ailu,mjlvcC) increases when said multidimensional distance decreases wherein said inverse function Fvp(ailu,mjlvcC) decreases when said multidimensional distance increases;(b) defining a bipartite graph which represents said input sequence 1DArrayu(i1) and said stored sequence 1DArrayvcC(j1) wherein each of the input nodes {i1} of the first part of said bipartite graph is attached to one of said input vector {auil|i1=1 . . . klu} and wherein each of the stored nodes of the second part {j1} of said bipartite graph is attached to one of said stored vectors {mvcCj1|j1=1 . . . 
llvcC} wherein each said input node of the first part has links to all said stored nodes {j1} of the second part wherein each said link L(i1, j1) connects said input node i1 attached to said input vector aui1 with said stored node j1 attached to said stored vector mvcCj1 wherein each said link L(i1, j1) has a link weight which is equal to said vector pair similarity Svp(i1, j1)=Fvp(ailu,mjlvcC) between said input vector ailu and said stored vector mj1vcC it connects;(c) defining a sequential link set as a set of said links L(i1, j1) in said bipartite graph wherein all links in said sequential link set fulfill a sequencing requirement wherein said sequencing requirement allows to include in said sequential link set only said links which have a mutual sequential relation wherein any two said links . . . , L(i1α, j1β) . . . L(i1λ, j1δ) . . . with said mutual sequential relation must fulfill the following four conditions: (I) α≠λ (II) β≠δ (III) if α<λ then β<δ (IV) if α>λ then β>δ; wherein the notations i1α, j1β, i1λ, j1δ represent various concrete values of (i1, j1) respectively;(d) using the method of dynamic programming to compute the optimal-sequential said bipartite graph matching wherein said optimal-sequential bipartite graph is defined as said bipartite graph with a sequential links set {L(i1, j1)} wherein said sequential links set {L(i1, j1)} has a total sum of said link weights which is the largest among all said sequential link sets possible in said bipartite graph;(e) allocating new said link weights to all said links L(i1, j1) which have said link weights Svp(i1, j1) smaller than a predetermined threshold link weight, wherein said new said link weight: SNvp(i1, j1)=FNvp(ailu,mjlvcC)≦0 is a predetermined penalty function FNvp(ailu,mjlvcC)≦0 of said input vectors ailu and said stored vectors mjlvcC;(f) said method of dynamic programming computes said optimal-sequential bipartite graph matching by gradually increasing the size of said optimal-sequential bipartite 
graph starting with defining an initial said bipartite graph by initiating an input nodes list If={i11} wherein list size: f=1, wherein second part of said bipartite graph has the full set of said stored nodes {j1}, recording in each said stored node j1 said links L(i1, jβ) and said link weights Svp(i1, j1) and recording said stored node j1 with the highest said link weight Svp(i1, j1);(g) increasing said input nodes list by one If={i11,i12} wherein said list size: f=2 and constructing said optimal-sequential bipartite graph Gf which have as said input nodes said input nodes list If={i11,i12} and as said stored nodes said full set of said stored nodes: {j1}, finding said optimal-sequential bipartite graph with two said links L(i12, j1φ), L(i11, j1θ) wherein said links are said sequential link set wherein said sequencing requirement is: φ<θ≦β wherein the two said links listed have maximal said total sum Sf of two said link weights of optimal-sequential bipartite graph Gf wherein f=2; recording said maximal said total sum of said link weights Sf;(h) increasing said list size by one f=f+1; increasing said input nodes list by one If={i11, i12, . . . , i1f} wherein previous said node list was If−1={i11, i12, . . . , i1f−1}, and constructing said optimal-sequential bipartite graph Gf which has as said input nodes said input nodes list If={i1l, i12, . . . 
, i1f} and as said stored nodes said full set of said stored nodes: {j1} finding said sequential links set with g links wherein g≦f is a maximal number possible in said optimal-sequential bipartite graph Gf wherein said sequential link set includes maximal number of links possible in said optimal-sequential bipartite graph Gf including said links with negative weights whenever positive ones are not available, wherein said links listed have the maximal said total sum: Sg of g said link weights possible in said optimal-sequential bipartite graph Gf wherein if a number nα of said input nodes and a number nβ of said stored nodes do not have said links which can be included in said optimal-sequential bipartite graph Gf then the overall sum: Sf of said weights for Gf which includes said total sum Sg of g said link weights and the nodes without said links is computed by: Sf=Sg−Nw(nα+nβ) wherein Nw is a predetermined empty weight constant;(i) repeating step (h) if f<k1u otherwise if f=klu ending the process when all said input nodes {i1} have been included in said list of said input nodes;(j) listing g≦k1u said links with the highest said total sum of said link weights Sg and recording said overall sum Sf for Gf wherein f=k1u, also noting that Sf is equal to said optimal-sequential total similarity measure: Sf=S{1DArrayu(i1), 1DArrayvcC(j1)} between said input sequence 1DArrayu(i1) and stored sequence 1DArrayvcC(j1).
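Steps (h)–(j) charge the empty weight Nw for every node left without a link, giving the overall sum Sf = Sg − Nw(nα + nβ). A direct transcription of that accounting, with the chosen sequential link set given as an index-pair-to-weight mapping (an illustrative representation, not from the claims):

```python
def total_with_empty_penalty(link_weights, k, l, Nw):
    """link_weights: {(i, j): weight} for the chosen sequential link set over
    k input nodes and l stored nodes. Returns Sf = Sg - Nw * (n_alpha +
    n_beta), where n_alpha/n_beta count nodes without any link."""
    Sg = sum(link_weights.values())
    matched_inputs = {i for i, _ in link_weights}
    matched_stored = {j for _, j in link_weights}
    n_alpha = k - len(matched_inputs)   # unlinked input nodes
    n_beta = l - len(matched_stored)    # unlinked stored nodes
    return Sg - Nw * (n_alpha + n_beta)
```

This penalty is what discourages the matching from simply discarding poorly matching frames, improving rejection of dissimilar stored arrays.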
  • 12. The method of claim 11 wherein said method of dynamic programming to compute said optimal-sequential said bipartite graph matching is modified so that said process initiated in step (f) starts by constructing said input node list with the last said input node i1k1u and proceeds in reverse order, ending by including in said list said input node i11.
  • 13. Claim number 11 wherein constructing an indexing method for efficient detection of the largest said similarity measures {S{[A]u,[MvcC]}} between said input array [A]u and said stored arrays {[M]vcC} wherein: [A]u={auil|i1=1 . . . k1u} and{[M]vcC}={{milvcC|i1=1, 2, . . . , llvcC}}(a) for a predetermined said input pattern number u all said input vectors ailu are indexed into a p dimensional data structure Δp which stores all said stored vectors mjivcC of said stored arrays {[M]vcC} wherein both said input vectors ai1u and said stored vectors mj1vcC are p dimensional;(b) for each one of said input vectors ai1u which is indexed into said data structure Δp all said stored vectors mj1vcC which are within a predetermined Euclidean distance dmax: d{ai1u,mj1vcC}≦dmax are retrieved;(c) all said retrieved stored vectors are separated into their original said stored arrays: {[M]vcC};(d) a predetermined selection function yields selection scores to each of said stored arrays said selection function is dependent on the number of retrieved said stored vectors mj1vcC which belong to each said stored array, their Euclidean distances d{ai1u,mj1vcC} from their indexing said input vectors ai1u and the sum of indices differences between their said indexing input vectors and said retrieved said stored vectors: Σ[(i11−j11)2+ . . . +(i1k1u−j1klu)2]1/2;(e) said a predetermined number of stored arrays [MvcC] with the highest said selection scores are selected for further computations of said similarity measures: {S{[A]u,[MvcC]}};(f) computing said similarity measures {S{[A]u,[MvcC]}} of [A]u with each of said selected stored arrays [MvcC] as {S{[A]u,[MvcC]}}=S{1DArrayu(i1), 1DArrayvcC(j1)} wherein 1DArrayu(i1)={aui1|i1=1 . . . k1u}=[A]u and wherein 1DArrayccC(j1)={mvcCj1|=1 . . . 
l1vcC}=[M]vcC;(g) selecting said stored array [MvcC] with the highest said similarity measure as the most similar to said input array [A]u and said input array can be classified into the same class c of the selected said stored array [M]vcC if said similarity measure is larger than a predetermined similarity threshold value.
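Step (d) of claim 13 leaves the selection function open beyond its three inputs: the hit count, the Euclidean distances, and the index differences between indexing input vectors and retrieved stored vectors. One illustrative linear combination (the weights are assumptions, not from the claims):

```python
def selection_score(retrieved, w_count=1.0, w_dist=1.0, w_shift=0.1):
    """retrieved: list of (input_index, stored_index, distance) triples for
    one stored array. Rewards many close hits; penalises large distances and
    large index shifts (for 1D arrays the claimed root-sum-of-squares index
    term reduces to |i - j| per hit)."""
    if not retrieved:
        return 0.0
    count = len(retrieved)
    dist_sum = sum(d for _, _, d in retrieved)
    shift_sum = sum(abs(i - j) for i, j, _ in retrieved)
    return w_count * count - w_dist * dist_sum - w_shift * shift_sum
```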
  • 14. Claim number 10 wherein one dimensional said input array [A]u represents a continuous speech signal that contains spoken words and wherein sets of one dimensional said stored arrays {[M]vcC} which represent a collection of exemplar words wherein each class c includes a set of Vc said stored arrays {[M]vcC} which represent the same spoken said exemplar word wherein classification of said spoken words in said input array [A]u is performed by the following steps: (a) separating said exemplar words represented by {[M]vcC} into phonemes by dividing each said stored array [MvcC] into stored segments [HhvcC] by clustering similar said stored vectors mj1vcC which are also temporal neighbors wherein said stored array [M]vcC={mi1vcC|1, 2, . . . , l1vcC} is divided into nh(vcc) said stored segments: [M]vcC=[H1vcC]+ . . . +[Hnh(vcC)vcC] wherein each said stored segment [Hω]vcC has the following said stored vectors [Hω]vcC|j1=jω, . . . , jω+μω; wherein ω=1, . . . , nh(vcc); is the serial number of said stored segment, wherein 1≦jω; jω+μω≦l1vcC and wherein said clustering minimizes a predetermined clustering criterion which is a function of mutual distances of said stored vectors: {mj1vcC|j1=jω, . . . , jω+μω} within each said stored segment: [Hω]vcC;(b) computing the average length of each said class c:
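The phoneme separation of step (a) of claim 14 clusters stored vectors that are both similar and temporal neighbors; the claim leaves the clustering criterion to a predetermined function. A crude greedy stand-in (an assumption, not the claimed criterion) that opens a new segment wherever consecutive frames jump apart:

```python
import math

def split_into_phoneme_segments(vectors, jump):
    """Greedy temporal clustering: start a new segment whenever the Euclidean
    distance from one stored vector to the next exceeds `jump`, so each
    segment holds mutually similar, temporally adjacent vectors."""
    segments, current = [], [vectors[0]]
    for prev, cur in zip(vectors, vectors[1:]):
        if math.dist(prev, cur) > jump:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments
```

Stable stretches of the exemplar word thus become candidate phoneme segments [HωvcC], which the later steps match individually.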
  • 15. The method of claim 1 wherein said number of dimensions of said input array is q=2 and said number of dimensions of said stored array is q=2, wherein said input sequence of said input vectors is defined as a one dimensional input array denoted by 1DArrayu(i1|a2) wherein 1DArrayu(i1|a2)={eui1,i2|i1=1 . . . k1u; i2=a2} is said one dimensional part of said input array [A]u, wherein a2=1, . . . , k2u; wherein said stored sequence of said stored vectors is defined as a one dimensional stored array denoted by 1DArrayvcC(j1|b2) wherein 1DArrayvcC(j1|b2)={mvcCj1,j2|j1=1 . . . l1vcC|j2=b2} is said one dimensional part of said stored array [M]vcC wherein b2=1, . . . , l2vcC.
  • 16. Claim number 15 wherein said 1D algorithm which computes 1D similarity measure S{1DArrayu(j1|a2), 1DArrayvcC(j1|b2)} between said input sequence 1DArrayu(i1|a2) and stored sequence 1DArrayvcC(j1|b2) comprising: (a) computing vector pair similarities Svp(i1, j1|a2,b2)=Fvp(ei1,a2u,mj1,b2vcC) between all said input vectors {eui1,i2|i1=1 . . . klu; i2=a2} of said input sequence 1DArrayu(i1|a2) and all said stored vectors {mvcCj1,j2|j1=1 . . . l1vcC|j2=b2} of said stored sequence 1DArrayvcC(j1|b2) wherein similarity function Fvp(ei1,a2u,mji,b2vcC) is an inverse function of the multidimensional distance between said input vector ei1,a2u and said stored vector mj1,b2vcC wherein said similarity function Fvp(ei1,a2u,mj1,b2vcC) increases when said multidimensional distance decreases, wherein said similarity function Fvp(ei1,a2u,mj1,b2vcC) decreases when said multidimensional distance increases;(b) defining a bipartite graph which represents said input sequence 1DArrayu(i1|a2) and said stored sequence 1DArrayvcC(j1|b2) wherein said bipartite graph has two parts wherein the first part consists of input nodes wherein the second part consists of stored nodes wherein each of said input nodes {i1|a2}={i1|i1=1, . . . , k1u;a2} of said first part of said bipartite graph is attached to one of said input vectors {eui1,i2|i1=1 . . . k1u;i2=a2} and wherein each of said stored nodes of said second part {j1|b2}={j1|j1=1, . . . , l1vcC;b2} of said bipartite graph is attached to one of said stored vectors {mvcCj1,j2|j1=1 . . . 
, l1vcC|j2=b2} wherein each said input node of the first part has links to all said stored nodes {j1|b2} of the second part wherein each said link L(i1, j1|a2,b2) connects said input node i1 attached to said input vector ei1,a2u with said stored node j1 attached to said stored vector mj1,b2vcC wherein each said link L(i1, j1|a2, b2) has a link weight which is equal to said vector pair similarity Svpuvcc(i1, j1|a2,b2)=Fvp(ei1,a2u,mj1,b2vcC) between said input vector ei1,a2u and said stored vector mj1,b2vcC it connects;(c) defining a sequential link set as a set of said links L(i1, j1|a2,b2) in said bipartite graph wherein all said links in said sequential link set fulfill a sequencing requirement wherein said sequencing requirement allows to include in said sequential link set only said links which have a mutual sequential relation wherein any two said links . . . L(i1=η,j1=ε|a2,b2) . . . L(i1=λ, j1=δ|a2,b2) . . . with said mutual sequential relation must fulfill the following four conditions: (I) η≠λ (II) ε≠δ (III) if η<λ then ε<δ (IV) if η>λ then ε>δ;(d) allocating new said link weights: SNvpuvcc(i1, j1|a2,b2) to all said links L(i1, j1|a2,b2) which have said link weights Svpuvcc(i1, j1|a2, b2)<ThLW smaller than a predetermined threshold link weight: ThLW, wherein Svpuvcc(i1, j1|a2,b2)=SNvpuvcc(i1, j1|a2,b2), wherein said new said link weight: SNvpuvcc(i1, j1|a2,b2)=FNvp(ei1,a2u,mj1,b2vcC)≦0 is a predetermined penalty function: FNvp(ei1,a2u,mj1,b2vcC)≦0 of said input vectors: ei1,a2u and said stored vectors: mj1,b2vcC;(e) using said method of dynamic programming to compute the optimal-sequential said bipartite graph matching wherein said optimal-sequential said bipartite graph is defined as said bipartite graph with said sequential links set {L(i1, j1|a2,b2)} wherein said sequential links set {L(i1, j1|a2,b2)} has a total sum of said link weights: TsLWuvcc(a2,b2) wherein
  • 17. Claim number 16 wherein said 1D algorithm which computes said optimal-sequential total similarity measures: {S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)}} between said input sequence 1DArrayu(i1|a2) and stored sequence 1DArrayvcC(j1|b2) wherein a2=1, . . . , k2u;b2=1, . . . , l2vcC, wherein defining a set of similarity links: {SLuvcc(a2,b2)} which have similarity link weights: {SLWuvcc(a2,b2)} which are equal to said optimal-sequential total similarity measures: {S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)}} wherein {SLWuvcc(a2,b2)}={S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)}}, wherein said algorithm which computes said similarity measures {S{[A]u,[M]vcC}} between said input array [A]u and said stored arrays {[M]vcC} comprising:(a) building a similarity bipartite graph which represents said input sequence 1DArrayu(a2) and said stored sequence 1DArrayvcC(b2) wherein said similarity bipartite graph has two parts, wherein the first part consists of input nodes {a2}={a2|a2=1, . . . , k2u} wherein the second part consists of stored nodes {b2}={b2|b2=1, . . . , l2vcC}, wherein each said input node a2 of the first part has similarity links to all said stored nodes {b2} of the second part wherein each said similarity link SLuvcc(a2,b2) connects said input node a2 with said stored node b2 wherein each said similarity link SLuvcc(a2,b2) has said similarity link weight SLWuvcc(a2,b2);(b) defining a sequential similarity link set as a set of said similarity links SLuvcc(a2,b2) in said bipartite graph wherein all said similarity links in said sequential similarity link set fulfill said sequencing requirement wherein said sequencing requirement allows to include in said sequential similarity link set only said similarity links which have a mutual sequential relation wherein any two said similarity links . . . SLuvcc(a2=η, b2=ε) . . . SLuvcc(a2=λ, b2=δ) . . . 
with said mutual sequential relation must fulfill the following four conditions: (I) η≠λ (II) ε≠δ (III) if η<λ then ε<δ (IV) if η>λ then ε>δ;(c) allocating new said similarity link weights to all said similarity links {SLuvcc(a2,b2)} which have said similarity link weights SLWuvcc(a2,b2) smaller than a predetermined threshold similarity link weight: TSLWuvcc wherein said new said similarity link weight: SLWuvcc(a2,b2)=NSLWuvcc(a2,b2)=Puvcc≦0 wherein Puvcc≦0 is a predetermined penalty function of said serial number of said input array u, said serial number of stored array: vc within said class c and said class c;(d) using said method of dynamic programming to compute said optimal-sequential said similarity bipartite graph wherein said optimal-sequential said similarity bipartite graph is defined as said similarity bipartite graph with said sequential similarity links set: {SLuvcc(a2,b2)} wherein said sequential similarity links set {SLuvcc(a2, b2)} has a total sum of said similarity link weights TSLWuvcc(a2,b2) wherein
  • 18. Claim number 15 wherein said 1D algorithm which computes 1D similarity measure S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)} between said input sequence 1DArrayu(i1|a2) and stored sequence 1DArrayvcC(j1|b2) comprising: (a) computing vector pair similarities Svp(i1, j1|a2, b2)=Fvp(ei1,a2u,mj1,b2vcC) between all said input vectors {eui1,i2|i1=1 . . . k1u;i2=a2} of said input sequence 1DArrayu(i1|a2) and all said stored vectors {mvcCj1,j2|j1=1 . . . l1vcC|j2=b2} of said stored sequence 1DArrayvcC(j1|b2) wherein similarity function Fvp(ei1,a2u,mj1,b2vcC) is an inverse function of the multidimensional distance between said input vector ei1,a2u and said stored vector mj1,b2vcC wherein said similarity function Fvp(ei1,a2u,mj1,b2vcC) increases when said multidimensional distance decreases, wherein said similarity function Fvp(ei1,a2u,mj1,b2vcC) decreases when said multidimensional distance increases;(b) defining a bipartite graph which represents said input sequence 1DArrayu(i1|a2) and said stored sequence 1DArrayvcC(j1|b2) wherein said bipartite graph has two parts wherein the first part consists of input nodes wherein the second part consists of stored nodes wherein each of said input nodes {i1} of said first part of said bipartite graph is attached to one of said input vectors {eui1,i2|i1=1 . . . k1u;i2=a2} and wherein each of said stored nodes of said second part {j1} of said bipartite graph is attached to one of said stored vectors {mvcCj1,j2|j1=1 . . . 
l1vcC|j2=b2} wherein each said input node of the first part has links to all said stored nodes {j1} of the second part wherein each said link L(i1, j1|a2, b2) connects said input node i1 attached to said input vector ei1,a2u with said stored node j1 attached to said stored vector mj1,b2vcC wherein each said link L(i1, j1|a2,b2) has a link weight which is equal to said vector pair similarity Svp(i1, j1|a2, b2)=Fvp(ei1,a2u,mj1,b2vcC) between said input vector ei1,a2u and said stored vector mj1,b2vcC it connects;(c) defining a sequential link set as a set of said links L(i1, j1|a2,b2) in said bipartite graph wherein all said links in said sequential link set fulfill a sequencing requirement wherein said sequencing requirement allows to include in said sequential link set only said links which have a mutual sequential relation wherein any two said links . . . L(i1=η, j1=ε|a2,b2) . . . L(i1=λ, j1=δ|a2,b2) . . . with said mutual sequential relation must fulfill the following four conditions: (I) η≠λ (II) ε≠δ (III) if η<λ then ε<δ (IV) if η>λ then ε>δ;(d) allocating new said link weights: SNvp(i1, j1|a2,b2) to all said links L(i1, j1|a2,b2) which have said link weights Svp(i1, j1|a2,b2)<TLW smaller than a predetermined threshold link weight: TLW, wherein Svp(i1, j1|a2,b2)=SNvp(i1, j1|a2,b2), wherein said new said link weight: SNvp(i1, j1|a2,b2)=FNvp(ei1,a2u,mj1,b2vcC)≦0 is a predetermined penalty function: FNvp(ei1,a2u,mj1,b2vcC)≦0 of said input vectors: ei1,a2u and said stored vectors: mj1,b2vcC;(e) using the method of dynamic programming to compute the optimal-sequential said bipartite graph matching wherein said optimal-sequential said bipartite graph is defined as said bipartite graph with said sequential links set {L(i1, j1|a2,b2)} wherein said sequential links set {L(i1, j1|a2,b2)} has a total sum of said link weights which is the largest among all said sequential link sets possible in said bipartite graph;(f) said method of dynamic programming computes said 
optimal-sequential said bipartite graph matching by gradually increasing the size of said optimal-sequential said bipartite graph starting with defining an initial said bipartite graph by initiating an input nodes list If={i1=1} wherein said list size: f=1, wherein said input nodes list denotes all said input nodes of said input part, wherein said second part of said bipartite graph has the full set of said stored nodes {j1}, listing in each said stored node j1 said link L(i1=1, j1|a2,b2) and said link weight Svp(i1=1, j1|a2,b2);(g) increasing said input nodes list by one If={i1=1, i1=2} wherein said list size: f=2 and constructing said optimal-sequential said bipartite graph Gf which has as said input nodes said input nodes list If={i1=1, i1=2} and as said stored nodes said full set of said stored nodes: {j1}, finding said optimal-sequential bipartite graph with maximum two said links L(i1=1, j1=φ|a2,b2), L(i1=2, j1=θ|a2,b2) wherein said links are said sequential link set wherein said sequencing requirement is: φ<θ≦β wherein the two said links listed have maximal said total sum Sf of maximum two said link weights of said optimal-sequential said bipartite graph Gf, recording said maximal said total sum of said link weights Sf;(h) increasing said list size by one f=f+1; increasing said input nodes list by one If={i1=1,i1=2, . . . , i1=f} wherein previous said node list was If−1={i1=1, i1=2, . . . , i1=f−1} and constructing said optimal-sequential said bipartite graph Gf which has as said input nodes said input nodes list If={i1=1,i1=2, . . . 
, i1=f} and as said stored nodes said full set of said stored nodes: {j1} finding said sequential links set with g links wherein g≦f is a maximal number possible in said optimal-sequential bipartite graph Gf wherein said sequential link set includes maximal number of links possible in said optimal-sequential said bipartite graph Gf including said links with negative weights whenever positive ones are not available, wherein said links listed have the maximal said total sum of g said link weights Sguvcc(a2,b2) possible in said optimal-sequential said bipartite graph Gf wherein if a number nα of said input nodes and a number nβ of said stored nodes do not have said links which can be included in said optimal-sequential said bipartite graph Gf then the total said sum of said link weights Suvcc(a2,b2) for Gf is computed by: Sfuvcc(a2,b2)=Sguvcc(a2,b2)−Nw(nα+nβ) wherein Nw is a predetermined penalty weight constant;(i) repeating step (h) if f<k1u otherwise if f=k1u ending the process wherein all said input nodes {i1} have been included in said list of said input nodes;(j) listing g≦k1u said links with the highest said total sum of said link weights Sguvcc(a2,b2) and recording said total sum of said link weights Sfuvcc(a2,b2), wherein Sfuvcc(a2,b2) is equal to said optimal-sequential total similarity measure: Sfuvcc(a2, b2)=S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)} between said input sequence 1DArrayu(i1|a2) and stored sequence 1DArrayvcC(j1|b2).
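The level-1 procedure of claims 15-18 can be sketched in code. The claims only require Fvp to be an inverse function of the multidimensional distance and leave the penalty function, the threshold TLW and the penalty weight constant Nw unspecified, so the sketch below assumes illustrative choices: Fvp(e,m)=1/(1+d), a constant penalty for sub-threshold links, and a single DP recurrence in which leaving a node unmatched costs Nw, which reproduces the Sf=Sg−Nw(nα+nβ) accounting of step (h). The function names and default constants are assumptions, not part of the claims.

```python
def vector_pair_similarity(e, m, th_lw=0.5, penalty=-1.0):
    """Sketch of Fvp: 1/(1+d) is one admissible inverse function of the
    Euclidean distance d; link weights below the threshold TLW are
    replaced by a non-positive penalty, as in step (d) of claim 18."""
    d = sum((a - b) ** 2 for a, b in zip(e, m)) ** 0.5
    s = 1.0 / (1.0 + d)
    return s if s >= th_lw else penalty

def optimal_sequential_matching(w, nw=0.1):
    """Dynamic programming over the bipartite graph of claim 18.

    w[i1][j1] is the (possibly penalised) link weight.  The recurrence
    admits only non-crossing link sets (conditions I-IV) and charges the
    penalty weight constant Nw for every unmatched node on either side,
    mirroring Sf = Sg - Nw*(n_alpha + n_beta).  Runs in O(k1*l1) time,
    i.e. polynomial, as the background section emphasises.
    """
    k1, l1 = len(w), len(w[0])
    dp = [[float("-inf")] * (l1 + 1) for _ in range(k1 + 1)]
    dp[0][0] = 0.0
    for i in range(k1 + 1):
        for j in range(l1 + 1):
            if i > 0:            # leave input node i unmatched
                dp[i][j] = max(dp[i][j], dp[i - 1][j] - nw)
            if j > 0:            # leave stored node j unmatched
                dp[i][j] = max(dp[i][j], dp[i][j - 1] - nw)
            if i > 0 and j > 0:  # include link L(i, j) in the set
                dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + w[i - 1][j - 1])
    return dp[k1][l1]            # optimal-sequential total similarity Sf
```

For two identical sequences the weight matrix has 1.0 on its diagonal, the DP matches every node in order, and Sf equals the sequence length; dissimilar or misaligned vectors lower Sf through smaller weights, penalised links and unmatched-node charges.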
  • 19. Claim number 18 wherein said 1D algorithm which computes said optimal-sequential total similarity measures: {Sfuvcc(a2,b2)}={S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)}} between said input sequence 1DArrayu(i1|a2) and stored sequence 1DArrayvcC(j1|b2) wherein a2=1, . . . , k2u;b2=1, . . . , l2vcC, wherein similarity links: {SLuvcc(a2,b2)} which have similarity link weights: {SLWuvcc(a2,b2)} are equal to said optimal-sequential total similarity measures: {Sfuvcc(a2,b2)} wherein {SLWuvcc(a2,b2)}={Sfuvcc(a2,b2)}={S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)}}, wherein said algorithm which computes said similarity measures {S{[A]u,[M]vcC}} between said input array [A]u and said stored arrays {[M]vcC} comprising: (a) building a similarity bipartite graph which represents said input sequence 1DArrayu(a2) and said stored sequence 1DArrayvcC(b2) wherein said similarity bipartite graph has two parts, wherein the first part consists of input nodes {a2}={a2|a2=1, . . . , k2u} wherein the second part consists of stored nodes {b2}={b2|b2=1, . . . , l2vcC}, wherein each said input node a2 of the first part has similarity links to all said stored nodes {b2} of the second part wherein each said similarity link SLuvcc(a2,b2) connects said input node a2 with said stored node b2 wherein each said similarity link SLuvcc(a2,b2) has said similarity link weight SLWuvcc(a2,b2);(b) defining a sequential similarity link set as a set of said similarity links SLuvcc(a2,b2) in said bipartite graph wherein all said similarity links in said sequential similarity link set fulfill said sequencing requirement wherein said sequencing requirement allows to include in said sequential similarity link set only said similarity links which have a mutual sequential relation wherein any two said similarity links . . . SLuvcc(a2=η,b2=ε) . . . SLuvcc(a2=λ,b2=δ) . . . 
with said mutual sequential relation must fulfill the following four conditions: (I) η≠λ (II) ε≠δ (III) if η<λ then ε<δ (IV) if η>λ then ε>δ;(c) allocating new said similarity link weights to all said similarity links {SLuvcc(a2,b2)} which have said similarity link weights SLWuvcc(a2,b2) smaller than a predetermined threshold similarity link weight: TSLWuvcc wherein said new said similarity link weight: SLWuvcc(a2,b2)=NSLWuvcc(a2,b2)=Puvcc≦0 wherein Puvcc≦0 is a predetermined penalty function of said serial number of said input array u, said serial number of stored array: vc within said class c and said class c;(d) using a method of dynamic programming to compute said optimal-sequential said similarity bipartite graph wherein said optimal-sequential said similarity bipartite graph is defined as said similarity bipartite graph with said sequential similarity links set: {SLuvcc(a2, b2)} wherein said sequential similarity links set {SLuvcc(a2,b2)} has a total sum of said similarity link weights TSLWuvcc(a2,b2) wherein
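The two-level construction of claims 17 and 19 can be sketched end to end for q=2: level 1 turns each pair of one-dimensional parts (a2, b2) into a similarity S{1DArrayu(i1|a2), 1DArrayvcC(j1|b2)}; level 2 treats those values as the similarity link weights SLW(a2, b2) and applies the same optimal-sequential matching over the second dimension. As before, the concrete choices below (Fvp=1/(1+d), an unmatched-node charge nw, the array layout, and all function names) are illustrative assumptions; the claims leave the penalty functions and thresholds open.

```python
import numpy as np

def _seq_match(w, nw=0.1):
    """Non-crossing max-weight matching DP; the same recurrence serves
    both levels of the claims, with nw charged per unmatched node."""
    k, l = w.shape
    dp = np.full((k + 1, l + 1), -np.inf)
    dp[0, 0] = 0.0
    for i in range(k + 1):
        for j in range(l + 1):
            if i > 0:
                dp[i, j] = max(dp[i, j], dp[i - 1, j] - nw)
            if j > 0:
                dp[i, j] = max(dp[i, j], dp[i, j - 1] - nw)
            if i > 0 and j > 0:
                dp[i, j] = max(dp[i, j], dp[i - 1, j - 1] + w[i - 1, j - 1])
    return dp[k, l]

def array_similarity(A, M, nw=0.1):
    """Sketch of S{[A]u, [M]vcC} for q=2 as a two-level matching.

    A (shape (k1, k2, dim)) holds the input vectors e_{i1,a2}; M (shape
    (l1, l2, dim)) holds the stored vectors m_{j1,b2}.  Level 1: each
    column pair (a2, b2) gets a 1D similarity from a sequential matching
    of its vectors.  Level 2: those similarities become the link weights
    SLW(a2, b2) of the similarity bipartite graph, and one more
    sequential matching yields the array similarity.
    """
    k2, l2 = A.shape[1], M.shape[1]
    slw = np.empty((k2, l2))
    for a2 in range(k2):
        for b2 in range(l2):
            # level-1 link weights: 1/(1+d) vector-pair similarities
            d = np.linalg.norm(A[:, a2, None, :] - M[None, :, b2, :], axis=-1)
            slw[a2, b2] = _seq_match(1.0 / (1.0 + d), nw)
    return _seq_match(slw, nw)
```

Classification then proceeds as the abstract describes: an input array [A]u is assigned to the class c whose stored array [M]vcC attains the highest `array_similarity` value. The total cost is polynomial, O(k2·l2·k1·l1), in contrast with the exponential two-dimensional cost attributed to HMM and DTW in the background section.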
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of provisional patent application Ser. No. 61/573,208.

Provisional Applications (1)
Number Date Country
61573208 Sep 2011 US