Embodiments disclosed herein relate in general to classification of multidimensional (having a dimension equal to or greater than 2) data points (MDPs), in dynamic datasets and in particular to the computation of the location of a newly arrived MDP by using a multi-scale extension (MSE) method that maps (provides coordinates of) each such newly arrived MDP efficiently into a lower dimension “embedded” space.
Large dynamic multidimensional datasets (“big data”) are common in a variety of fields. Exemplarily, such fields include finance, communication networking (e.g. protocols such as TCP/IP, UDP, HTTP, HTTPS, SCADA and cellular) and streaming, social networking, imaging, databases, e-mails, governmental databases and critical infrastructures. In these, MDPs are accumulated constantly. A main goal in processing big data is to understand it and to extract intelligence from it. Big data can be described by hundreds or thousands of parameters (features). Consequently, in its original form in a source metric space, big data is difficult to comprehend, to process, to analyze and to draw conclusions from.
Dimensionality reduction methods that embed data from a metric space (where only the mutual distances or “affinities” between MDPs are given) into a lower-dimension (vector) space are known. One such method involves diffusion maps (“DM”), see R. R. Coifman and S. Lafon, “Diffusion Maps”, Applied and Computational Harmonic Analysis, 21:5-30, 2006. A kernel method such as DM assigns distances between MDPs. These distances quantify the affinities between the MDPs. In the DM method, a diffusion operator is first formed on the MDPs. Spectral decomposition of the operator then produces from the data a family of maps in a Euclidean space. This is an “embedded” MDP matrix. The Euclidean distances between the embedded MDPs approximate the diffusion distances between the MDPs in the source metric space, i.e. distances that reflect the transition probabilities in t time steps from one MDP to another. In the MDP matrix, each row contains one MDP. A spectral decomposition of the MDP matrix, whose dimensions are proportional to the size of the data, has high computational costs. One problem is to determine how a new, ‘unseen’ sample (newly arrived MDP) can be mapped into a previously learnt or established embedded lower-dimension space. The DM procedure in particular cannot be repeated constantly for each newly arrived MDP.
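By way of illustration only, the following Python sketch outlines the DM construction just described: a Gaussian affinity kernel is row-normalized into a diffusion (Markov) operator whose spectral decomposition yields the embedding coordinates. The function name and the parameters eps (kernel width), k (number of coordinates) and t (diffusion time) are illustrative choices and are not prescribed by the cited reference.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map(X, eps=1.0, k=2, t=1):
    """Illustrative DM sketch: embed the rows of X into k diffusion coordinates."""
    # Gaussian affinities between all pairs of MDPs.
    W = np.exp(-cdist(X, X, 'sqeuclidean') / eps)
    # Row-normalize the affinities into a Markov (diffusion) operator.
    P = W / W.sum(axis=1, keepdims=True)
    # Spectral decomposition of the operator.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-np.abs(vals))
    vals, vecs = np.real(vals[order]), np.real(vecs[:, order])
    # Drop the trivial constant eigenvector; scale by the eigenvalues raised to t.
    return vecs[:, 1:k + 1] * (vals[1:k + 1] ** t)

# Euclidean distances between rows of Y approximate diffusion distances in the source space.
X = np.random.rand(100, 10)
Y = diffusion_map(X, eps=0.5, k=2)
```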
Consider as an example a simple classification problem involving a set of training samples and a separate set of test samples, the latter used to check the validity of the classification. If one wishes to reduce the dimensionality of these datasets so that one can perform the classification in a lower-dimension space, one option is to combine the training and test sets into one “combined” dataset and to perform the coordinate computation on this combined dataset before splitting it into two sets again in the low-dimension space. Another option is to run the algorithm on the training set only, then apply what has been learnt from this process to map the test set into the lower-dimension space. The advantage of the latter approach is that it is not only potentially less computationally expensive, but also that new samples can be continually added to the lower-dimension embedding without the need to re-compute the lower-dimension space. This approach is commonly referred to as the “out-of-sample extension” or “OOSE”.
In an OOSE problem, a new MDP needs to be mapped into a space that can be low-dimensional without affecting this space and without requiring a re-learning or change in the space parameterization for future learning. When the mapping is into a lower-dimension space it is also called “sampling” or “sub-sampling”. One way to perform OOSE is by using a geometric harmonics methodology (see, e.g., R. R. Coifman and S. Lafon, “Geometric Harmonics: A novel tool for multi-scale out-of-sample extension of empirical functions”, Applied and Computational Harmonic Analysis, 21 (1), 31-52, 2006, referred to hereinafter as “GH”). Another way to perform OOSE is by using the Nystrom method (see C. T. H. Baker, “The numerical treatment of integral equations”, Oxford: Clarendon Press, 1977, and W. H. Press et al., “Numerical Recipes in C”, Cambridge University Press, 2nd Edition, 1992, pages 791-802, hereinafter “Press”).
The OOSE is performed on data in which the only known entities are the affinities between MDPs, as well as on empirical functions. The goal is to sub-sample big data and then to find the coordinates of a newly arrived MDP where only affinities between the source MDPs are given. The empirical functions (which may be, for example, functions or mappings from one space to another, such as an embedding) are defined on MDPs and are employed for embedding newly arrived MDPs. The embedding occurs in a Euclidean space, determined once by a finite set of MDPs in a training process. The affinities between MDPs in an original source space (which form a training dataset) are converted into coordinates of locations in the embedded (Euclidean) space. The conversion of affinities of newly arrived MDPs into coordinates of locations in the embedded space is then done reliably and quickly without the need to repeat the entire computation done in the training phase. To clarify, the training process is not repeated for each newly arrived MDP.
A numerical rank of a matrix is the number of numerically independent columns of the matrix. Suppose that l(s) is the numerical rank of an n×n Gaussian kernel matrix G(s) (EQ. 5) for a fixed scale s. To sub-sample the data points correctly, one needs to identify the l(s) columns in G(s) that constitute a well-conditioned basis for its numerical range. In other words, one needs to look for an n×l(s) matrix B(s) whose columns constitute a subset of the columns of G(s) and for an l(s)×n matrix P(s), such that l(s) of its columns make up an identity matrix and B(s)P(s)≈G(s). Such a matrix factorization is called an interpolative decomposition (“ID”). The MDPs Ds={xs1, . . . , xsl(s)} associated with the selected columns of B(s) constitute the sampled dataset. In general, a deterministic ID (“DID”) of an m×n matrix A with target rank k can be computed as follows (a code sketch is given after the steps below):
1. Apply exemplarily the pivoted QR routine (described in G. H. Golub and C. F. Van Loan, “Matrix Computations”, The Johns Hopkins University Press, 3rd Edition, 1996, Algorithm 5.4.1) to A to obtain APR=QR, where PR is an n×n permutation matrix, Q is an m×m orthogonal matrix and R is an m×n upper triangular matrix whose diagonal entries have decreasing absolute values.
2. Split R and Q such that
R=[R11 R12; 0 R22] and Q=[Q1 Q2], where R11 is k×k, R12 is k×(n−k), R22 is (m−k)×(n−k), Q1 is m×k and Q2 is m×(m−k).
3. Define m×k matrix B=Q1R11.
4. Define the k×n matrix P=[Ik R11−1R12] where Ik is the k×k identity matrix.
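The following is a minimal Python sketch of steps 1-4 above, using SciPy's pivoted QR routine as a stand-in for the cited Golub-Van Loan Algorithm 5.4.1; the way the column permutation is undone at the end is one possible convention and is not taken from the text.

```python
import numpy as np
from scipy.linalg import qr

def deterministic_id(A, k):
    """Rank-k interpolative decomposition of A via pivoted QR (sketch)."""
    # Step 1: pivoted QR, A[:, perm] = Q @ R with |diag(R)| in decreasing order.
    Q, R, perm = qr(A, pivoting=True)
    # Step 2: split R and Q at the target rank k.
    R11, R12 = R[:k, :k], R[:k, k:]
    Q1 = Q[:, :k]
    # Step 3: B = Q1 @ R11 reproduces k (permuted) columns of A.
    B = Q1 @ R11
    # Step 4: P = [I_k, R11^{-1} R12] in the permuted column order.
    P = np.hstack([np.eye(k), np.linalg.solve(R11, R12)])
    # Undo the permutation so that B @ P_full approximates A in the original order.
    P_full = np.empty_like(P)
    P_full[:, perm] = P
    return B, P_full, perm[:k]   # perm[:k] indexes the selected columns of A

# Usage: A ≈ B @ P_full, where B consists (up to rounding) of k columns of A.
A = np.random.rand(50, 40)
B, P_full, cols = deterministic_id(A, k=10)
```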
The randomized interpolative decomposition (“RID”) proceeds as follows:
1. Use a random number generator to form a real k×m matrix whose entries are independent and identically distributed (iid) Gaussian random variables of zero mean and unit variance, and compute the k×n product matrix W obtained by applying this random matrix to A. In the present setting, A is the affinity matrix Gε defined by EQs. 1 and 2,
whose entries are given by
(Gε)i,j=gε(xi, xj)=exp(−∥xi−xj∥2/ε), i, j=1, 2, . . . , n, (2)
and where ∥·∥ is a metric on the source space.
2. Using the DID algorithm, form a k×l matrix S whose columns constitute a subset of the columns of W, and a real l×n matrix P such that ∥SP−W∥2≦√(4l(n−l)+1)·σl+1(W).
3. There exists a finite sequence i1, i2, . . . , il of integers such that for any j=1, . . . , l, the jth column of S is the ijth column of W. Collect the corresponding columns of A into a real m×l matrix B so that for any j=1, . . . , l, the jth column of B is the ijth column of A. Then, the sampled dataset is Ds={xi1, . . . , xil} (a code sketch of this randomized procedure is given below).
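A possible Python sketch of steps 1-3 follows. To avoid confusion with the affinity kernel Gε, the random matrix is named Omega here; the sketch size and the least-squares construction of the interpolation matrix P are illustrative simplifications rather than the exact DID-based construction and error bound quoted above.

```python
import numpy as np
from scipy.linalg import qr

def randomized_id(A, l, k=None):
    """Randomized interpolative decomposition (illustrative sketch)."""
    m, n = A.shape
    k = k if k is not None else min(2 * l, m)   # sketch size (illustrative choice)
    # Step 1: compress the rows of A with a random Gaussian matrix.
    Omega = np.random.standard_normal((k, m))
    W = Omega @ A                               # k x n
    # Step 2: choose l columns of the small matrix W via pivoted QR (a DID stand-in).
    _, _, perm = qr(W, pivoting=True)
    idx = perm[:l]                              # indices i_1, ..., i_l
    # Step 3: collect the corresponding columns of A into the basis matrix B.
    B = A[:, idx]                               # m x l
    # Interpolation matrix with B @ P ≈ A (least-squares fit, for illustration).
    P = np.linalg.lstsq(B, A, rcond=None)[0]
    return B, P, idx

# The MDPs indexed by idx form the sampled dataset Ds.
A = np.random.rand(200, 150)
B, P, idx = randomized_id(A, l=20)
```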
The GH and Nystrom OOSE schemes have three significant disadvantages: (1) the diagonalization of the kernel costs O(n3) operations, where the kernel matrix is n×n; (2) the kernel may be ill-conditioned due to a fast decay of its spectrum, i.e. it becomes numerically non-invertible; and (3) it is unclear how to choose the length parameter in the kernel, since the output is sensitive to this choice.
There is therefore a need for, and it would be advantageous to have, a method for big data sampling and extension and for classification of multidimensional data points in big data that does not suffer from the disadvantages listed above.
Method embodiments disclosed herein teach classification of multidimensional data points in big data (MDP matrices) through computation of locations of newly arrived MDPs using a multi-scale extension (MSE) method that maps such MDPs efficiently into a lower-dimension space. Such classification is exemplarily needed for many useful purposes, for example for finding anomalies in big data, cyber security, process control, performance monitoring, fraud detection, trend identification, etc. The MSE method includes a single-scale OOSE step and a multi-scale OOSE step. To clarify, as used herein, “single-scale OOSE” refers to OOSE performed with a scale parameter s=0, and “multi-scale OOSE” refers to OOSE performed with s≧1.
The MSE disclosed herein replaces the known GH and Nystrom extensions. To obtain the coordinates of newly arrived MDPs, given mutual distances between MDPs in a training set are first converted by the MSE into coordinates of locations of these MDPs in an embedded space. The MSE then uses a coarse-to-fine hierarchy of the multi-scale decomposition of the Gaussian kernel that establishes the distances between MDPs in the training set to find the coordinates of newly arrived MDPs in the embedded space. The first step in the MSE operation is the determination of a well-conditioned basis of an input MDP matrix, exemplarily using RID. The MSE generates a sequence of approximations to a given empirical function or to a given mapping on a set of MDPs, as well as their OOSE to any newly arrived MDP. The result is a set of coordinates for each newly arrived MDP.
In an embodiment there is provided a method for classification of a newly arrived MDP in a dynamic data set, comprising the steps of: generating a well-conditioned basis in a source matrix of multidimensional data points; applying a single-scale (s=0) OOSE to the newly arrived MDP on the well-conditioned basis to provide coordinates of an approximate location of the newly arrived MDP in an embedded space; and applying a multi-scale (s≧1) OOSE to the newly arrived MDP to provide improved coordinates of the newly arrived MDP location in the embedded space, thereby classifying the newly arrived MDP in relation to the source matrix multidimensional data points of the dynamic data set.
In an embodiment, the step of applying a multi-scale OOSE to the newly arrived MDP includes applying the multi-scale OOSE to the newly arrived MDP on the well-conditioned basis.
Aspects, embodiments and features disclosed herein will become apparent from the following detailed description when considered in conjunction with the accompanying drawings, in which:
Gε defined in EQ. 1 is now computed using a multi-scale approach as follows: define a sequence of Gaussian kernel matrices Gs, s=0, 1, . . . , whose entries are
(Gs)i,j=gεs(xi, xj)=exp(−∥xi−xj∥2/εs), i, j=1, . . . , n, (3)
where εs is a positive monotonic decreasing function of the scale parameter s that tends to zero as s tends to infinity, and gεs is the Gaussian of EQ. 2 with ε replaced by εs.
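For illustration, a short Python sketch of this kernel sequence is given below, assuming the dyadic choice εs=2−sT used later in the text; the function and parameter names are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kernel_at_scale(X, T, s):
    """Gaussian kernel matrix G_s with length parameter eps_s = (2 ** -s) * T (sketch)."""
    eps_s = T * 2.0 ** (-s)   # positive and monotonically decreasing in s
    return np.exp(-cdist(X, X, 'sqeuclidean') / eps_s)

# As s grows, eps_s shrinks and G_s becomes increasingly localized (closer to the identity).
X = np.random.rand(100, 5)
G0, G3 = kernel_at_scale(X, T=1.0, s=0), kernel_at_scale(X, T=1.0, s=3)
```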
Assume we have an empirical function ƒ=[ƒ1, . . . , ƒn]T on a dataset D={x1, . . . , xn} in ℝd (ƒi=ƒ(xi), xi∈ℝd, i=1, . . . , n). This means that each xi∈ℝd, i=1, . . . , n, is a vector of d features in a Euclidean space of dimension d. The goal is to extend ƒ via OOSE to any MDP in ℝd by a superposition of Gaussians centered at D. This can be done for example by using the Nystrom extension in the following way (a code sketch follows these steps):
1. Calculate the coordinates vector c=(c1, . . . , cn)T of ƒ in the basis of Gε (EQ. 2) columns such that c=Gε−1 ƒ.
2. Extend ƒ to x*∈ℝd by an extension of the Gaussians to x* such that
ƒ(x*)=Σi=1n cigε(x*, xi). (4)
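A minimal Python sketch of these two Nystrom steps, assuming a single fixed length parameter eps and the Gaussian kernel of EQ. 2; the variable names are illustrative. Note that solving Gεc=ƒ directly can be ill-conditioned when the spectrum of Gε decays quickly, which is one of the drawbacks noted earlier.

```python
import numpy as np

def nystrom_extend(X, f, x_new, eps):
    """Nystrom-style OOSE of an empirical function f to a new point (sketch)."""
    # Gaussian affinity kernel on the training MDPs (EQ. 2).
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    G = np.exp(-sq / eps)
    # Step 1: coordinates of f in the basis of the kernel columns, c = G^{-1} f.
    c = np.linalg.solve(G, f)
    # Step 2: extend by evaluating the same Gaussians, centered at D, at x*.
    g_star = np.exp(-np.sum((X - x_new) ** 2, axis=1) / eps)
    return g_star @ c

# Usage:
X = np.random.rand(50, 4)
f = np.sin(X[:, 0])
f_star = nystrom_extend(X, f, x_new=np.random.rand(4), eps=0.5)
```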
Thus, using the notation of EQ. (3), we define
G(s)=Gεs, s=0, 1, 2, . . . , (5)
where {εs=2−sT}∞s=0 is a decreasing positive sequence that tends to zero as s tends to infinity. We use the following multi-scale two-phase scheme:
1. Sampling: a well-conditioned basis for the columns of G(s) (EQ. 5) is identified. Accordingly, the sampled dataset is the set of MDPs associated with these columns. This overcomes the problem arising from the numerical singularity of G(s).
2. OOSE: an empirical function ƒ is projected on this basis. Then, ƒ(s), which is the projection of ƒ on this basis, is extended by a continuous extension of the involved Gaussians to x* in a way similar to EQ. (4).
ƒ does not have to be equal to its projection ƒ(s). When it is not, we apply the same procedure to the residual ƒ−ƒ(s) with G(s+1). In this way we obtain a multi-scale scheme for data sampling and OOSE: the OOSE is achieved by sampling combined with the two-phase multi-scale scheme.
Once RID is applied to G(s), the columns of B(s) constitute a well-conditioned basis for the columns of G(s). A single-scale extension (below) is used to extend the orthogonal projection of ƒ=[ƒ1, . . . , ƒn]T on B(s) to a newly arrived MDP x*∈ℝd\D, i.e. x* is a vector of d features that does not belong to D. For this we need the following notation:
G*(s)=[gεs(x*, xs1), . . . , gεs(x*, xsl(s))], (6)
where gεs is the Gaussian of EQ. 3 with length parameter εs, evaluated at x* and at the sampled MDPs xs1, . . . , xsl(s) in Ds.
The single-scale OOSE step (steps 200-210) is described in more detail below.
Input: An n×l(s) matrix B(s), the associated sampled dataset Ds={xs1, . . . , xsl(s)}, a newly arrived MDP x*∈ℝd\D and the empirical function ƒ=[ƒ1, . . . , ƒn]T.
Output: The projection ƒ(s)=[ƒ1(s), . . . , ƒn(s)]T of f on B(s) and its extension ƒ*(s) to x*—step 210.
1. Step 200—apply a singular value decomposition (“SVD”) to B(s) (see e.g. Press, pages 59-70) such that B(s)=UΣV*.
2. Step 202—calculate the pseudo-inverse (B(s))†=VΣ−1U* of B(s).
3. Step 204—calculate the coordinates vector c=(B(s))†ƒ of the orthogonal projection of ƒ on the range of B(s), expressed in the basis of the columns of B(s).
4. Step 206—calculate the orthogonal projection of ƒ on the columns of B(s), ƒ(s)=B(s)c.
5. Step 208—form the matrix G*(s) from EQ. 6.
6. Step 210—calculate the extension ƒ*(s) of ƒ(s) to x*:
ƒ*(s)=G*(s)c. (7)
Due to EQ. (7), ƒ(s) is a linear combination of l(s) Gaussians with a fixed length parameter εs.
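The steps above can be summarized in the following Python sketch, assuming B(s), the sampled MDPs Ds and the length parameter εs are available as NumPy arrays and a scalar; the names are illustrative.

```python
import numpy as np

def single_scale_oose(B_s, D_s, f, x_new, eps_s):
    """Single-scale OOSE (steps 200-210, sketch).

    B_s   : n x l(s) well-conditioned basis matrix (selected kernel columns)
    D_s   : l(s) x d sampled MDPs associated with those columns
    f     : length-n empirical function on the full dataset
    x_new : newly arrived MDP x* in R^d
    """
    # Steps 200-202: SVD of B(s) and its pseudo-inverse.
    U, S, Vt = np.linalg.svd(B_s, full_matrices=False)
    B_pinv = Vt.T @ np.diag(1.0 / S) @ U.T
    # Step 204: coordinates of the orthogonal projection of f on range(B(s)).
    c = B_pinv @ f
    # Step 206: the projection itself, f^(s) = B(s) c.
    f_s = B_s @ c
    # Step 208: row vector G_*^(s) of Gaussians centered at the sampled MDPs (EQ. 6).
    g_star = np.exp(-np.sum((D_s - x_new) ** 2, axis=1) / eps_s)
    # Step 210: extension of f^(s) to x* (EQ. 7).
    f_star = g_star @ c
    return f_s, f_star
```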
The multi-scale data OOSE (steps 300-314) is described in detail below.
Input: A dataset D={x1, . . . , xn} in ℝd, a positive number T>0, a newly arrived MDP x*∈ℝd\D that does not belong to D, the numerical rank l(s) of G(s), an empirical function ƒ=[ƒ1, . . . , ƒn]T for the OOSE of x* and an error parameter err≧0.
Output: An approximation F=[F1, . . . , Fn]T of ƒ on D and its OOSE F* to x*—step 312.
1. Step 300—set the scale parameter s=0, F(−1)=0∈ℝn and F*(−1)=0. ℝn is the Euclidean vector space in which each element is a vector of size n.
2. Step 302—WHILE ∥ƒ−F(s−1)∥>err DO
where P is the minimal cover of the cubes of volume (√ε)d that are associated with G(s).
3. ENDWHILE
4. Step 314—F=F(s−1) and F*=F*(s−1).
The input parameters to the multi-scale OOSE are chosen in the following way: T is the length parameter of the Gaussian kernel matrix at the first scale of the algorithm. Therefore, in order to capture x*, we set T=max{dist(x*, D), diameter(D)}, where dist(x*, D) is the distance from x* to the nearest MDP in D and diameter(D) is the distance between the MDPs of the most distant pair in D. This choice of T ensures that in the first scale the influence of D on x* is significant, and that D is covered by a single Gaussian. err is a user-defined accuracy parameter. If we take err=0, then F=ƒ, i.e. we have a multi-scale interpolation scheme. An err that is too large may result in an inaccurate approximation of ƒ. Typically, we take δ=0.1, which guarantees that B(s) is well-conditioned and, as a consequence, guarantees the robustness of the multi-scale data OOSE.
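A Python sketch of the multi-scale loop (steps 300-314) is given below. It assumes the dyadic scales εs=2−sT, a residual-norm stopping test as in step 302, and the helper routines randomized_id and single_scale_oose sketched earlier; the cover by cubes mentioned above and the exact choice of the numerical rank l(s) are simplified (a fixed illustrative rank is used when l(s) is not supplied).

```python
import numpy as np

def multi_scale_oose(D, f, x_new, T, err=1e-6, max_scale=20, l_of_s=None):
    """Multi-scale OOSE loop (steps 300-314, illustrative sketch)."""
    n = len(f)
    F, F_star = np.zeros(n), 0.0                 # step 300: F^(-1) = 0, F*^(-1) = 0
    s, residual = 0, np.asarray(f, dtype=float).copy()
    while np.linalg.norm(residual) > err and s < max_scale:     # step 302
        eps_s = T * 2.0 ** (-s)
        sq = np.sum((D[:, None, :] - D[None, :, :]) ** 2, axis=-1)
        G_s = np.exp(-sq / eps_s)                               # G^(s), EQ. 5
        l_s = l_of_s(s) if l_of_s is not None else min(20, n)   # numerical rank of G^(s)
        B_s, _, idx = randomized_id(G_s, l_s)                   # well-conditioned basis
        f_s, f_s_star = single_scale_oose(B_s, D[idx], residual, x_new, eps_s)
        F, F_star = F + f_s, F_star + f_s_star                  # accumulate the approximations
        residual = residual - f_s                               # pass f - F^(s) to scale s + 1
        s += 1
    return F, F_star                                            # step 314

# Usage (T chosen as described above, e.g. the larger of dist(x*, D) and diameter(D)):
# F, F_star = multi_scale_oose(D, f, x_new, T=T)
```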
During the testing stage, new MDPs, which did not participate in the training procedure (training phase), arrived constantly. To find the locations of the newly arrived MDPs in the embedded space, the multi-scale OOSE described herein was applied to each newly arrived MDP to find its coordinates. The computed coordinates of the newly arrived MDPs are shown by numerals 410, 510 and 610 in the accompanying drawings.
The various features and steps discussed above, as well as other known equivalents for each such feature or step, can be mixed and matched by one of ordinary skill in this art to perform methods in accordance with principles described herein. Although the disclosure has been provided in the context of certain embodiments and examples, it will be understood by those skilled in the art that the disclosure extends beyond the specifically described embodiments to other alternative embodiments and/or uses and obvious modifications and equivalents thereof. Accordingly, the disclosure is not intended to be limited by the specific disclosures of embodiments herein. For example, any digital computer system can be configured or otherwise programmed to implement the methods disclosed herein, and to the extent that a particular digital computer system is configured to implement the methods of this invention, it is within the scope and spirit of the disclosed embodiments. Once a digital computer system is programmed to perform particular functions pursuant to computer-executable instructions from program software that implements the method embodiments disclosed herein, it in effect becomes a special purpose computer particular to the invention embodiments disclosed herein. The techniques necessary to achieve this are well known to those skilled in the art and thus are not further described herein.
Computer executable instructions implementing the methods and techniques of the present invention can be distributed to users on a computer-readable medium and are often copied onto a hard disk or other storage medium. When such a program of instructions is to be executed, it is usually loaded into the random access memory of the computer, thereby configuring the computer to act in accordance with the techniques disclosed herein. All these operations are well known to those skilled in the art and thus are not further described herein. The term “computer-readable medium” encompasses distribution media, intermediate storage media, execution memory of a computer, and any other medium or device capable of storing for later reading by a computer a computer program implementing the present invention.
All patents and publications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual patent and publication was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present application.
This application claims the benefit of U.S. provisional patent application 61/611,282, filed Mar. 15, 2012, which is incorporated herein by reference in its entirety.