LEARNING SPARSITY-CONSTRAINED GAUSSIAN GRAPHICAL MODELS IN ANOMALY DETECTION

Abstract
A first dependency graph is constructed based on a first data set by solving an objective function constrained with a maximum number of non-zeros and formulated with a regularization term comprising a quadratic penalty to control sparsity. The quadratic penalty in constructing the first dependency graph is determined as a function of the first data set. A second dependency graph is constructed based on a second data set by solving the objective function constrained with the maximum number of non-zeros and formulated with the regularization term comprising a quadratic penalty. The quadratic penalty in constructing the second dependency graph is determined as a function of the first data set and the second data set. An anomaly score is determined for each of a plurality of sensors based on comparing the first dependency graph and the second dependency graph, nodes of which represent sensors.
Description
STATEMENT REGARDING PRIOR DISCLOSURES BY THE INVENTOR OR A JOINT INVENTOR

The following disclosure is submitted under 35 U.S.C. 102(b)(1)(A): D. T. Phan, T. Ide, J. Kalagnanam, M. Menickelly and K. Scheinberg, “A Novel l0-Constrained Gaussian Graphical Model for Anomaly Localization,” 2017 IEEE International Conference on Data Mining Workshops (ICDMW), New Orleans, La., USA, November 2017, pp. 830-833.


BACKGROUND

The present disclosure relates to computer-implemented anomaly detection and learning of dependency graphs via which anomalies are detected, for example, in industries such as manufacturing and heavy industries.


In industrial or manufacturing processes, or other like processes, a network of sensors is deployed to enable automatic monitoring of various operating conditions. Data mining techniques, for example, can convert low-level sensor data collected from Internet-of-Things (IoT) devices into actionable insights, for example, for condition-based maintenance. For instance, identifying an unusual occurrence or change in data can signal anomaly.


In traditional problem settings, a goal of anomaly detection is to compute or determine the degree of anomalousness for a multivariate measurement, giving an overall anomaly score. In sensor networks, for example, wireless sensor networks, it may be inadequate simply to indicate whether or not a network is behaving anomalously, for example, to perform anomaly detection. For example, if a sensor network takes measurements from different car parts of a prototype automobile on a test track, then it may be also valuable to the field engineers to know which car parts are contributing to an anomalous behavior, for example, in addition to knowing that the car as a whole is behaving anomalously. Similar problem settings can be found across different application domains, e.g., monitoring mechanical stresses in buildings and bridges, identifying the sources of flooding, and finding sources of threats in computer networks.


BRIEF SUMMARY

A method and system, for example, which detects anomaly in sensor networks, may be provided. The method, in one aspect, may include receiving a first data set comprising sensor data detected by a plurality of sensors coupled to equipments in operation during a first period of time. The method may also include constructing a first dependency graph based on the first data set by solving a first objective function constrained with a maximum number of non-zeros and formulated with a regularization term comprising a quadratic penalty to control sparsity, wherein the quadratic penalty is determined as a function of the first data set in constructing the first dependency graph. The method may further include receiving a second data set comprising sensor data detected by the plurality of sensors coupled to the equipments in operation during a second period of time. The method may also include constructing a second dependency graph based on the second data set by solving the first objective function constrained with the maximum number of non-zeros and formulated with the regularization term comprising a quadratic penalty, wherein the quadratic penalty is determined as a function of the first data set and the second data set in constructing the second dependency graph. The method may also include determining an anomaly score for each of the plurality of sensors based on comparing the first dependency graph and the second dependency graph. In a further aspect, the method may also include, responsive to determining that an anomaly score associated with a sensor in the plurality of sensors meets a threshold value indicative of abnormal functioning, automatically performing a corrective action to correct equipment coupled to the sensor.


A system, in one aspect, may include a hardware processor coupled with a communication interface and a memory device coupled to the hardware processor. The hardware processor may be operable to receive via the communication interface, a first data set comprising sensor data detected by a plurality of sensors coupled to equipments in operation during a first period of time. The hardware processor may be further operable to construct a first dependency graph based on the first data set by solving a first objective function constrained with a maximum number of non-zeros and formulated with a regularization term comprising a quadratic penalty to control sparsity, wherein the quadratic penalty is determined as a function of the first data set in constructing the first dependency graph. The hardware processor may be further operable to receive a second data set comprising sensor data detected by the plurality of sensors coupled to the equipments in operation during a second period of time. The hardware processor may be further operable to construct a second dependency graph based on the second data set by solving the first objective function constrained with the maximum number of non-zeros and formulated with the regularization term comprising a quadratic penalty, wherein the quadratic penalty is determined as a function of the first data set and the second data set in constructing the second dependency graph. The hardware processor may be further operable to determine an anomaly score for each of the plurality of sensors based on comparing the first dependency graph and the second dependency graph. In a further aspect, the hardware processor may be further operable to, responsive to determining that an anomaly score associated with a sensor in the plurality of sensors meets a threshold value indicative of abnormal functioning, automatically perform a corrective action to correct equipment coupled to the sensor.


A computer readable storage medium storing a program of instructions executable by a machine to perform one or more methods described herein also may be provided.


Further features as well as the structure and operation of various embodiments are described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram illustrating anomaly detection for multivariate noisy sensor data in one embodiment.



FIG. 2 is a diagram illustrating computing of a change score of a sensor in a dependency graph in one embodiment.



FIG. 3 is a diagram showing components of a system in one embodiment, which performs anomaly localization for multivariate time series datasets and detects anomalies in sensor data.



FIG. 4 is a diagram illustrating a screen display of an analytics tool in an embodiment, which integrates an anomaly detection technique in one embodiment.



FIG. 5 illustrates a schematic of an example computer or processing system that may implement a system in one embodiment of the present disclosure.





DETAILED DESCRIPTION

In embodiments, methods, systems and techniques are provided, which detect anomalies in sensor networks. In embodiments, methods, systems and techniques may perform anomaly localization, where a variable-wise anomaly score is desired. In embodiments, methods, systems and techniques include learning a dependency graph of multivariate data, which may be noisy. The methods, systems and techniques automatically identify the number of dependencies and perform transfer learning from a training phase to a testing phase. Learned dependency graphs are used to detect anomalies in sensor networks. In one aspect, a learning performed according to an embodiment of the present disclosure may be useful in tracking inter-variable dependency, reducing the number of runs needed to obtain a graph that is sparse enough for interpretability (e.g., in detecting anomalies), while capturing real dependencies in the multivariate data.


In an embodiment, methods, systems and techniques may be implemented as a computer-implemented solution library, for instance, which can be integrated or plugged into a computer-implemented platform or server which may function as analytics tool, for example, in performing operations managements in an industry.


In an embodiment, an optimization is generated, which allows automatic identification of the number of conditional dependencies for a pair of variables and transfer of a learned sparsity pattern and magnitude of correlations from a normal state to a testing phase. In an embodiment, a two-step algorithm based on projections and active set identification is presented. In an embodiment, a full gradient may be used for solution quality and a reduced subspace Newton direction to accelerate it; feasibility with respect to membership in the sparsity constraint is handled via projection; symmetric positive-definiteness is ensured through a line-search; and a continuation step improves solution quality and running time.


Anomaly detection, for example, via a learned sparse dependency graph, in one embodiment identifies asset behavior that is outside of a defined norm that constitutes normal behavior. In an embodiment, an anomaly detection algorithm is based on a probabilistic graphical model, and may detect anomalies with higher accuracy and fewer false alarms. In an embodiment, early detection of anomalies or faults in a system can lead to considerable savings of time and cost. A system and/or method of the present disclosure, for example, reduces maintenance cost, for example, incurred in vessel monitoring, increases asset utilization, prevents unexpected equipment breakdowns by identifying anomalous conditions that can lead to accidents or equipment damage, and reduces false positive and false negative alerts. In industrial processing, for example, productivity of assets may be enhanced, and downtime and maintenance costs may be reduced.


A system and/or method, in an embodiment, solves a sparsity-constrained model and allows transfer learning via an algorithm. The algorithm, in one aspect, learns and transfers the connectivity pattern and dependency magnitude of the dependency matrix from the normal training phase to the testing phase. In an embodiment, an additional regularization term may be included, which keeps the magnitudes of precision matrix entries to a manageable amount. In an embodiment, a system and/or method selects the sparsity level of the dependency graphs a priori.



FIG. 1 is a diagram illustrating anomaly detection for multivariate noisy sensor data in one embodiment. A model building phase 102 in an embodiment includes using past dataset associated with sensors, for example, operating in a domain (e.g., industry domain), for example, in normal condition (e.g., without failure). Such dataset is considered to be normal data. For instance, dataset 104, for example, of past period (e.g., selected historical time period, e.g., a number of time series sample data points (e.g., 100 sample data points) associated with each sensor), may be obtained and a dependency graph model 106 is built based on the normal data. The dependency graph model includes nodes which represent sensors and links between the sensors during operating conditions.


During a runtime phase 108, a current dependency graph 112 from real-time sensor data 110 (data points associated with current period of time, or window of time) is built or computed. The dependency graph 106 associated with the normal state is compared with the current dependency graph 112. A change in the dependency among or between the nodes indicates anomaly in the current processing in the domain. In this way, anomalies among sensors are detected in real-world situations to predict when and where maintenance is required.


For example, at 114, a dependency graph 106 may be discovered or built based on the past sensor data 104 taken during normal operations of a process (e.g., manufacturing or industrial process). At 116, a current dependency graph 112 is computed from real-time sensor data 110. The dependency graphs 106 and 112 may be compared to identify or determine any anomaly in the current operations of the process.


As an example application, nodes in a dependency graph represent sensors, for example, monitoring different components of a process such as a manufacturing or industrial process. Dependency arcs or edges between nodes represent that the data of the sensors move together, or have a pattern of moving together. If, for example, two nodes are conditionally independent, then the value of dependency edge is zero. In one embodiment, detecting anomaly is carried out by comparing the change of dependency graphs between the training phase and testing phase.



FIG. 2 is a diagram illustrating computing of a change score of a sensor in a dependency graph in one embodiment. A preparation phase 202, in an embodiment, includes obtaining or receiving data matrix X, which includes data such as sensor data output during an operation of a manufacturing or industrial process, for example, by a system coupled with sensors for operating the manufacturing or industrial process. For instance, for each sensor, a time series of data points (e.g., of a selected time window) associated with a system operating under normal operating conditions (e.g., no errors, faults or failures of equipment or process), may be obtained.


At 206, the data of data matrix X is input to an optimization model, Equation (a), and Equation (a) is solved subject to a specified constraint.


In an embodiment, the constraint specifies a maximum number of non-zero entries. Non-zero entries indicate a relationship between sensors. The partial correlation between two sensors, for example, is non-zero when the corresponding element of dependency matrix is non-zero. At 208, a solution to Equation (a) creates a dependency graph G (e.g., a Gaussian graphical model).


An operation phase 210, for example, at time t, in an embodiment, includes obtaining or receiving data matrix X(t) 212, for example, sensor data output during a manufacturing or industrial process at time window (t), which may be the current time or near current time. At 214, the data of data matrix X(t) 212 is input to the optimization model, Equation (a), and Equation (a) is solved subject to a specified constraint. At 216, the solution of the optimization model (Equation (a)) creates a dependency graph Gt, (e.g., a Gaussian graphical model) associated with the current data (operations data of time t window).


In another aspect, at 214, Equation (c) may be solved using both data matrix X 210 and data matrix X(t) 212 to create the dependency graph Gt, for example, in lieu of Equation (a).


The dependency graph G 208 and the dependency graph Gt 216 are input to another model, Equation (b), and Equation (b) is solved at 218. Solving Equation (b) produces a change score ai (anomaly score for sensor i) at time t 220, for each sensor considered. For example, an anomaly score for each sensor is determined, for example, by solving Equation (b).
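As an illustration of this two-phase flow, the following Python sketch builds a dependency graph from a normal-operation window and from a current window and then compares them. It is only a schematic: solve_l0_ggm is a hypothetical placeholder (a regularized inverse covariance with hard thresholding), not the constrained solver of Equation (a), and the per-sensor score shown is a simple graph difference rather than Equation (b).

import numpy as np

def solve_l0_ggm(X_data, kappa, rho=0.1):
    """Placeholder dependency-graph estimate: regularized inverse covariance,
    keeping only the kappa largest-magnitude entries, then symmetrized."""
    n = X_data.shape[1]
    S = np.cov(X_data, rowvar=False) + rho * np.eye(n)
    P = np.linalg.inv(S)                          # dense precision estimate
    keep = np.argsort(np.abs(P).ravel())[::-1][:kappa]
    G = np.zeros_like(P)
    G.flat[keep] = P.flat[keep]
    return 0.5 * (G + G.T)                        # keep the graph symmetric

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))               # normal-operation window (preparation phase)
X_test = rng.normal(size=(100, 5))                # current window at time t (operation phase)
G = solve_l0_ggm(X_train, kappa=15)               # dependency graph G
G_t = solve_l0_ggm(X_test, kappa=15)              # dependency graph G_t
scores = np.abs(G - G_t).sum(axis=1)              # illustrative per-sensor change score
print(scores)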

















$$\min_{X \succ 0}\;\; \langle S, X\rangle - \log\det(X) + \frac{\rho}{2}\|X\|_F^2$$

$$\text{s.t.}\;\; \|X\|_0 \le \kappa,$$

$$\text{s.t.}\;\; X_{i,j} = 0, \;\text{ if } x_i \text{ and } x_j \text{ are conditionally independent} \qquad\qquad \text{Equation (a)}$$

$$a_i = \int dx_{-i}\; p(x_{-i} \mid D) \int dx_i\; p(x_i \mid x_{-i}, D)\, \ln\!\left(\frac{p(x_i \mid x_{-i}, D)}{p(x_i \mid x_{-i}, D')}\right) \qquad\qquad \text{Equation (b)}$$
where D is the training data set and D′ is the testing data set. A detailed formula is given in Equation (8) below.

















$$\min_{X \succ 0}\;\; \langle S, X\rangle - \log\det(X) + \frac{\rho}{2}\|X - X_{\mathrm{train}}\|_F^2$$

$$\text{s.t.}\;\; \|X\|_0 \le \kappa,$$

$$\text{s.t.}\;\; X_{i,j} = 0, \;\text{ if } x_i \text{ and } x_j \text{ are conditionally independent} \qquad\qquad \text{Equation (c)}$$
In the above formulations, S represents the sample covariance of the user-inputted data, X represents the dependency graph to be learned at time t (also referred to as X(t) above), Xtrain represents the dependency graph from the normal operation phase (i.e., the training phase), ρ represents a regularization parameter, and ∥⋅∥F represents the Frobenius norm. Details of the formulations are described further below.


In an embodiment, the L0 (l0)-constraint (∥X∥0≤κ) guarantees that the solution admits a level of sparsity. In an embodiment, the quadratic penalty






$\left(\tfrac{\rho}{2}\,\|X - X_{\mathrm{train}}\|_F^2\right)$
encourages the capacity of selecting groups in the presence of highly correlated variables and transfer learning. The models of Equations (a) and (c) are non-convex models, which may be challenging to solve, and in embodiments, optimization algorithms are provided to solve such non-convex models.


In an embodiment, the problem of anomaly localization in a sensor network for multivariate time-series data is considered by computing anomaly scores for each variable separately. A system and/or method in an embodiment evaluate a difference measure using the conditional expected Kullback-Leibler divergence between two sparse Gaussian graphical models (GGMs) learned from different sliding windows of the dataset. To estimate the sparse GGMs, an optimization model is provided and the system and/or method in an embodiment constrain the number of nonzeros (l0 norm) in the learned inverse covariance matrix and apply an additional l2-regularization in the objective. In an embodiment, a projected gradient algorithm efficiently solves this nonconvex problem. Numerical evidence supports the benefits of using the model and method over the usual convex relaxations (i.e., l1-regularized models) for learning sparse GGMs in this anomaly localization setting.


Anomaly localization of the present disclosure in an embodiment determines or computes a variable-wise anomaly score. In wireless sensor networks, e.g., in addition to determining whether or not a network is behaving anomalously, it is useful to determine or know what aspect of the network is anomalous. For example, if a sensor network takes measurements from different car parts of a prototype automobile on a test track, then it is valuable to the field engineers to know which car parts are contributing to an anomalous behavior, e.g., more than simply knowing that the car as a whole is behaving anomalously. Similar problem settings can be found across different application domains: monitoring mechanical stresses in buildings and bridges, identifying the sources of flooding or forest fires, pinpointing the change points in stock price time series, finding sources of serious threats in computer networks, and identifying car loss.


In an embodiment, sparse dependency graphs (e.g., GGMs) are learned in the context of anomaly localization. In an embodiment, a model is presented to learn a sparse dependency graph where both the sparsity pattern and the numeric values are taken into account. A GGM-learning model employing both an l0 constraint and an l2-regularization, along with an optimization algorithm to solve it, outperforms other models, including typical l1-regularized models. In one aspect, the system and/or method leverage the conditional expected Kullback-Leibler (KL) divergence method in computing anomaly scores for each variable. A comparative study conducted on various GGM-learning models and scoring methods for assigning anomaly scores demonstrates efficiency of a system and method in an embodiment. Using l0 norm-based methods can present computationally challenging issues. The system and/or method in an embodiment provide l0-constrained GGM learning algorithms in the anomaly detection setting. In an embodiment, an l0-constrained l2-regularized model learns a sparse dependency graph for anomaly localization, and a proximal gradient algorithm handles both an l0 constraint and a positive-definite constraint.


In an embodiment, the following notations are used. For any X∈Rn×n, ∇f(X) denotes the gradient of f, which is a matrix in Rn×n. Notation tr(X) denotes the trace operator, i.e., Σi=1nXi,i. The vectorization operator vec(X) transforms X into a vector in Rn2 by stacking the columns from left to right. σ1(X) and σn(X) denote the minimum and maximum eigenvalues of X, respectively. Given an index set F⊂{1, . . . , n}×{1, . . . , n}, [X]F denotes the matrix obtained from the identity matrix In by replacing the (i, j)-th entry by Xi,j for all (i, j)∈F. Let ⊗ denote the Kronecker product of matrices and let ei denote a unit vector with a 1 in the i-th coordinate. For any finite set S, |S| denotes the number of elements of S.


Learning Sparse Graphical Models


The following describes optimization models that the system and/or method use in an embodiment to learn sparse precision matrices. Given m data samples {y1, . . . ym}⊆Rn, the system in an embodiment considers fitting an n-variate Gaussian distribution N(μ, X−1) to the data, where μ denotes the mean and X is the precision matrix. In an embodiment, it is assumed the data is centered so that μ=0. Zeros in X indicate conditional independence between two variables. To estimate the unknown parameter X, a maximum likelihood problem as follows may be solved,












$$\min_{X \succ 0}\;\; \operatorname{tr}(SX) - \log\det(X), \qquad (1)$$
where X≻0 denotes a positive definite constraint. In an embodiment, the empirical covariance matrix is defined as






$$S = \frac{1}{m}\sum_{i=1}^{m} y_i\, y_i^T.$$
The solution X∈Rn×n is the estimated inverse covariance, which describes a Gaussian graphical model (GGM). However, in general, due to noise in the data, X obtained from Equation (1) is a dense matrix and thus the conditional dependencies suggested by X may reveal very little. For reasons of interpretability, sparsity in the matrix X is desired.
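For concreteness, the empirical covariance S above can be formed directly from the samples; the following numpy sketch (an illustration, not part of the claimed method) assumes the m samples are the rows of a data matrix Y and centers them so that the mean is zero.

import numpy as np

def empirical_covariance(Y):
    """S = (1/m) * sum_i y_i y_i^T for centered rows y_i of Y (m x n)."""
    m = Y.shape[0]
    Yc = Y - Y.mean(axis=0)          # center the data so that mu = 0
    return (Yc.T @ Yc) / m           # n x n sample covariance

rng = np.random.default_rng(1)
Y = rng.normal(size=(200, 4))
S = empirical_covariance(Y)
print(S.shape)                        # (4, 4)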


The sparse graphs for anomaly localization have been computed by solving the well-studied l1-regularized maximum likelihood problem, i.e.,












$$\min_{X \succ 0}\;\; \operatorname{tr}(SX) - \log\det(X) + \lambda\|X\|_1, \qquad (2)$$
where ∥X∥1i,j=1n|X| is the element-wise l1 norm of the matrix X, and λ>0 is a regularization parameter to control sparsity. Equation (2) is a convex problem, and generally admits a unique global minimizer. Thus, given S, selecting a sparse graph is a matter of computing a regularization path by varying λ and cross-validating the solutions along the path. Since sensor data of industrial applications are very noisy in general, some useful interaction information among the variables in the graph can be omitted.


Enforcing sparsity in the constraint set instead of in the objective function may avoid bias. In an embodiment, the system and/or method of the present disclosure present and solve a model that estimates sparse graphs. In an embodiment, the system and/or method directly constrain sparsity by specifying a maximally allowable number of nonzeros κ in the optimal solution and add a quadratic penalty












$$\min_{X \succ 0}\;\; \operatorname{tr}(SX) - \log\det(X) + \frac{\lambda}{2}\|X\|_F^2$$

$$\text{s.t.}\;\; \|X\|_0 \le \kappa, \qquad (3)$$
where λ>0 is a regularization parameter and the Frobenius norm term ∥X∥F2i,j=1n|Xij|2 is also referred to as l2-regularization. The present disclosure denotes by ∥⋅∥0 the number of nonzeros of the argument, the so-called l0-norm. The l0-constraint guarantees that the solution admits a level of sparsity, while the quadratic penalty encourages the capacity of selecting groups in the presence of highly correlated variables. Without the regularization, some entries of the purely l0-constrained model can have relatively large magnitudes. The associated anomaly scores significantly dominate the others, and thus some faulty variables can be overlooked. Hence, in an embodiment, the l2-regularization term keeps the magnitude of all entries uniformly similar. From a computational point of view, the l2-term makes the problem of Equation (3) better-behaved. The objective function is strongly convex and the optimal solution set is bounded. Thus, it is expected to be solved more efficiently. In an embodiment, the l0-constraint and the l2-regularization are combined. Because of the presence of the l0 constraint, it may be challenging to solve problem of Equation (3); in an embodiment, an efficient proximal gradient algorithm is presented to handle the problem.
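As a concrete reading of Equation (3), the sketch below evaluates the objective tr(SX) − log det(X) + (λ/2)‖X‖F2 for a candidate positive definite X, using a Cholesky factorization for the log-determinant. It is a sketch of the objective only, under the stated positive-definiteness assumption, and not of the constrained solver developed below.

import numpy as np

def objective_eq3(X, S, lam):
    """tr(SX) - log det(X) + (lam/2)*||X||_F^2 for symmetric positive definite X."""
    L = np.linalg.cholesky(X)                     # fails if X is not positive definite
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return np.trace(S @ X) - logdet + 0.5 * lam * np.sum(X ** 2)

n = 4
S = np.eye(n)
X = 0.5 * np.eye(n)
print(objective_eq3(X, S, lam=0.1))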


In an embodiment, the model of Equation (3) can be used to learn dependency graphs in both the train phase and the testing phase, for example, learning a sparse GGM for anomaly detection. In an embodiment, the system and/or method may also use the following for the testing phase












$$\min_{X \succ 0}\;\; \operatorname{tr}(SX) - \log\det(X) + \frac{\lambda}{2}\|X - X_{\mathrm{normal}}\|_F^2$$

$$\text{s.t.}\;\; \|X\|_0 \le \kappa, \qquad (4)$$
where Xnormal is a precision matrix from a normal state. The system and/or method in one embodiment perform transfer learning for sparsity pattern and magnitude of correlations from the normal state to the testing phase.
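The only change relative to Equation (3) is that the quadratic penalty is centered at Xnormal; a sketch of the modified objective, reusing the same Cholesky-based log-determinant and again showing the objective only (constraints omitted), might look as follows.

import numpy as np

def objective_eq4(X, S, lam, X_normal):
    """tr(SX) - log det(X) + (lam/2)*||X - X_normal||_F^2 (Equation (4) objective)."""
    L = np.linalg.cholesky(X)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return np.trace(S @ X) - logdet + 0.5 * lam * np.sum((X - X_normal) ** 2)

n = 3
print(objective_eq4(np.eye(n), np.eye(n), 0.1, np.eye(n)))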












$$\min_{X \succ 0}\;\; \operatorname{tr}(SX) - \log\det(X) \qquad \text{s.t.}\;\; \|X\|_0 \le \kappa. \qquad (5)$$

$$\min_{X \succ 0}\;\; \operatorname{tr}(SX) - \log\det(X) + \lambda_1\|X\|_1 + \lambda_2\|X\|_F^2. \qquad (6)$$
Equations (5) and (6) represent additional models, which may be applied in learning a sparse GGM for anomaly detection algorithms, for instance, in the context of anomaly localization. Equation (6) is an l1-based "elastic net" version of the GGM model.


Methods for Computing Anomaly Score of Each Variable


The following describes embodiments of techniques that perform a change analysis, e.g., assigning anomaly scores to individual variables to measure the magnitude of their contributions to the observed differences between two GGMs from training and testing datasets. The techniques, for example, in embodiments, implement the processing shown at 120 in FIG. 1.


Conditional Expected KL-Divergence


The system and/or method in an embodiment may consider the conditional expected Kullback-Leibler divergence between two learned GGMs to compute anomaly scores. Given two probabilistic models A and B with respective distributions pA and pB, the system and/or method consider the expected KL divergence diAB between the conditional distributions pA(xi|zi) and pB(xi|zi)











$$d_i^{AB} = \int dz_i\; p_A(z_i) \int dx_i\; p_A(x_i \mid z_i)\, \ln\frac{p_A(x_i \mid z_i)}{p_B(x_i \mid z_i)}, \qquad (7)$$
for each variable i. By swapping A and B in Equation (7), the system and/or method obtain a symmetric formula for diBA. Because the system and/or method are learning GGMs, the formula of Equation (7) takes a special form that reduces to matrix-vector multiplications. For the i-th variable, and for a learned GGM X, let










$$X = \begin{bmatrix} L & l \\ l^T & \alpha \end{bmatrix}, \qquad X^{-1} = \begin{bmatrix} W & w \\ w^T & \beta \end{bmatrix} \qquad (7b)$$
be permuted such that the last row and column of X, X−1 correspond to the i-th variable. Then, letting XA and XB denote the GGMs learned from datasets A and B respectively, it can be shown










$$d_i^{AB} = w_A^T\,(l_B - l_A) + \frac{1}{2}\left[\frac{l_B^T W_A\, l_B}{\alpha_B} - \frac{l_A^T W_A\, l_A}{\alpha_A}\right] + \frac{1}{2}\left[\ln\frac{\alpha_A}{\alpha_B} + \beta_A(\alpha_B - \alpha_A)\right]. \qquad (8)$$
The system and/or method then define the anomaly score of the i th variable as






$$d_i = \max\{d_i^{AB},\, d_i^{BA}\}. \qquad (9)$$


When a threshold t>0 is selected, di>t is an indication that the i-th variable is anomalous, while di≤t is an indication that the i-th variable is healthy. The α, β, L, W, l, and w notations are the blocks of the matrices X and X−1 when they are partitioned as in Equation (7b).
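The following sketch shows how the blocks of Equation (7b) can be pulled out of two learned precision matrices and combined per Equation (8); the permutation simply moves variable i to the last position. It mirrors Equations (7b)-(9) as reconstructed above and is an illustrative sketch rather than a verified implementation of the disclosed method.

import numpy as np

def blocks_for_variable(X, i):
    """Permute variable i to the last position and return (l, alpha, W, w, beta)
    from X = [[L, l], [l^T, alpha]] and X^{-1} = [[W, w], [w^T, beta]] (Eq. (7b))."""
    n = X.shape[0]
    perm = [j for j in range(n) if j != i] + [i]
    Xp = X[np.ix_(perm, perm)]
    Xinv = np.linalg.inv(Xp)
    l, alpha = Xp[:-1, -1], Xp[-1, -1]
    W, w, beta = Xinv[:-1, :-1], Xinv[:-1, -1], Xinv[-1, -1]
    return l, alpha, W, w, beta

def kl_score(XA, XB, i):
    """d_i^{AB} of Equation (8), as reconstructed above."""
    lA, aA, WA, wA, bA = blocks_for_variable(XA, i)
    lB, aB, _, _, _ = blocks_for_variable(XB, i)
    term1 = wA @ (lB - lA)
    term2 = 0.5 * (lB @ WA @ lB / aB - lA @ WA @ lA / aA)
    term3 = 0.5 * (np.log(aA / aB) + bA * (aB - aA))
    return term1 + term2 + term3

XA = np.array([[2.0, 0.3], [0.3, 1.5]])
XB = np.array([[2.0, 0.0], [0.0, 1.8]])
scores = [max(kl_score(XA, XB, i), kl_score(XB, XA, i)) for i in range(2)]   # Eq. (9)
print(scores)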


Stochastic Nearest Neighbors


The system and/or method in an embodiment may consider using a stochastic nearest neighbors approach to anomaly scoring. Let SA and SB denote the sample covariance matrices of datasets A and B respectively, and let SiA, SiB denote the i th columns of SA and SB respectively. For a sparse GGM XA learned for dataset A, define an indicator vector associated with the i th variable lA,i coordinate-wise by











$$[l_{A,i}]_j = \begin{cases} 1 & \text{if } i \text{ and } j \text{ are adjacent in } X_A \\ 0 & \text{otherwise.} \end{cases} \qquad (10)$$
Then, a measure of the dissimilarity between the neighborhoods of the i th variable between the GGMs for A and B, weighted by the sample covariances of the two datasets, is given by










$$d_i^{AB} = \frac{l_{A,i}^T\left(S_i^A - S_i^B\right)}{\left(1 + l_{A,i}^T S_i^A\right)\left(1 + l_{A,i}^T S_i^B\right)} \qquad (11)$$
Symmetrically, the system and/or method define diBA and then compute an anomaly score di as in Equation (9). “T” in the equations stands for the transpose of a vector or matrix.
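A sketch of Equations (10)-(11): the indicator vector marks the neighborhood of variable i in the sparse graph learned from dataset A, and the score compares the i-th covariance columns of the two datasets. The fraction follows Equation (11) as reconstructed above; excluding the diagonal from the indicator and using an inverse precision matrix as a stand-in covariance are assumptions made here for illustration.

import numpy as np

def snn_score(XA, SA, SB, i):
    """d_i^{AB} of Equation (11) using the neighborhood indicator of Equation (10).
    The diagonal entry is excluded from the indicator (an assumption made here)."""
    n = XA.shape[0]
    lAi = ((XA[i] != 0) & (np.arange(n) != i)).astype(float)
    num = lAi @ (SA[:, i] - SB[:, i])
    den = (1.0 + lAi @ SA[:, i]) * (1.0 + lAi @ SB[:, i])
    return num / den

XA = np.array([[2.0, 0.4, 0.0], [0.4, 1.5, 0.0], [0.0, 0.0, 1.0]])  # sparse GGM for A
SA = np.linalg.inv(XA)            # stand-in sample covariance for dataset A
SB = SA + 0.05 * np.eye(3)        # perturbed covariance for dataset B
print([snn_score(XA, SA, SB, i) for i in range(3)])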


Sparsest Subgraph Approximation


The system and/or method in an embodiment may consider using a sparsest subgraph approximation technique to score anomaly. Given two (sparse) dependency graphs, e.g., XA and XB, consider a graph given by an (also sparse) adjacency matrix Λ defined entrywise by Λij=|XijA−XijB|. The problem of finding an independent set (a set of mutually disconnected nodes) of size k in Λ is generalized to finding a "sparsest k-subgraph" in Λ:












$$\min_{d \in \{0,1\}^n}\;\; d^T \Lambda\, d \qquad \text{subject to}\;\; 1_n^T d = k, \qquad (12)$$
where 1n denotes a vector of ones. Intuitively, the solution to Equation (12) identifies a set of k nodes that change the least between XA and XB, suggesting a healthy (non-anomalous) set of nodes. However, (12) is a mixed-integer quadratic programming (MIQP) formulation, the solution of which is generally NP-hard, and moreover requires foreknowledge of the magnitude of k, the size of the supposedly healthy set of nodes. Therefore, the system and/or method may use a convex QP relaxation (solvable in polynomial time)












$$\min_{d \in R^n}\;\; d^T \Lambda_\mu\, d \qquad \text{s.t.}\;\; 1_n^T d = 1,\;\; d \ge 0_n, \qquad (13)$$
where 0n is a vector of zeros and Λμ=Λ+μIn is the original matrix Λ with an added scaled identity with μ≥0 sufficiently bounded away from zero to enforce the positive definiteness of Λμ. With this relaxation, the solution d* to Equation (13) can be interpreted as anomaly scores.
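A sketch of the relaxation in Equation (13) using scipy's SLSQP solver: Λ is built entrywise from the two learned graphs, and the solution over the simplex is read as a score vector. The value of μ used here is a small illustrative choice; the disclosure requires μ large enough to make Λμ positive definite, so this is a sketch under that caveat rather than a tuned implementation.

import numpy as np
from scipy.optimize import minimize

def sparsest_subgraph_scores(XA, XB, mu=1e-3):
    """Convex QP relaxation (13): minimize d^T (Lambda + mu*I) d over the simplex.
    The returned d* is interpreted as a vector of anomaly scores."""
    Lam = np.abs(XA - XB)                        # Lambda_ij = |X^A_ij - X^B_ij|
    Lam_mu = Lam + mu * np.eye(Lam.shape[0])     # shifted matrix Lambda_mu
    n = Lam.shape[0]
    fun = lambda d: d @ Lam_mu @ d
    jac = lambda d: 2.0 * Lam_mu @ d
    cons = [{"type": "eq", "fun": lambda d: d.sum() - 1.0}]   # 1_n^T d = 1
    res = minimize(fun, np.full(n, 1.0 / n), jac=jac, method="SLSQP",
                   bounds=[(0.0, None)] * n, constraints=cons)  # d >= 0_n
    return res.x

XA = np.diag([1.0, 1.0, 1.0])
XB = np.array([[1.0, 0.3, 0.0], [0.3, 1.2, 0.0], [0.0, 0.0, 1.0]])
print(sparsest_subgraph_scores(XA, XB))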


In an embodiment, the system and/or method uses the l0-l2 based model of Equations (3) and (4) to learn a sparse GGM in the training phase and the testing phase respectively, and computes anomaly scores for each variable by applying the conditional expected KL divergence of Equation (9).


Optimization Algorithm for l0 Sparse Models


In embodiments, the system and/or method implements a projected gradient algorithm for solving the learning problems or models of Equation (3) and Equation (4).


Projected Gradient Newton Algorithm


In an embodiment, a projected gradient Newton algorithm with warm starts may solve the l0-constrained inverse covariance learning problems of Equations (3) and (4). The following describes a method for solving Equation (3), and the method also can be applied to Equation (4). It is noted that when λ is small, Equation (3) is difficult to solve. In an embodiment, instead of directly solving (3), the system and/or method solve a sequence of subproblems parameterized by a larger λ>0,












$$\min_{X \succ 0}\;\; f(X) = \operatorname{tr}(SX) - \log\det(X) + \frac{\lambda}{2}\|X\|_F^2$$

$$\text{s.t.}\;\; \|X\|_0 \le \kappa$$

$$\qquad\; X_{i,j} = 0 \;\; \forall\, (i,j) \in \Im, \qquad (14)$$
where the system and/or method denote the Frobenius norm by ∥X∥F2i,j=1n|Xij|2. The objective of Equation (14) is strongly convex and the optimal solution set of Equation (14) is bounded. The l2 regularizer makes the subproblems better-conditioned, possibly allowing for finding higher quality local minima of Equation (14) faster. By driving λ to zero in Equation (14), an optimal solution for Equation (3) can be obtained. Thus, to yield a practical algorithm, the system and/or method in an embodiment consider a sequence of strictly decreasing values {λk}; in the k-th iteration of an algorithm, the system and/or method seek an approximate local minimizer to Equation (14) with λ=λk. On the (k+1)-th iteration, the system and/or method warm start the solution of the (k+1)-th subproblem with the solution to the k-th subproblem as an initial point.
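The warm-start continuation just described can be organized as a simple loop. In this sketch, solve_subproblem is a hypothetical stand-in for an approximate solver of Equation (14) at a fixed λ (the actual solver, Algorithm 1, is developed below); only the control flow of the decreasing-λ sequence with warm starts is illustrated.

import numpy as np

def solve_subproblem(S, kappa, lam, X_init):
    """Hypothetical stand-in for approximately solving (14) at a fixed lambda.
    Here it only symmetrizes and returns the warm start unchanged."""
    return 0.5 * (X_init + X_init.T)

def homotopy(S, kappa, lambdas, X0):
    """Solve a sequence of subproblems (14) with strictly decreasing lambda,
    warm-starting each solve from the previous solution."""
    X = X0
    for lam in lambdas:                 # e.g. [1.0, 0.3, 0.1]
        X = solve_subproblem(S, kappa, lam, X)
    return X

n = 4
X = homotopy(np.eye(n), kappa=8, lambdas=[1.0, 0.3, 0.1], X0=np.eye(n))
print(X)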


Before analyzing optimality conditions of (3) or (14), a hardness result may be proved, for example, demonstrating that for any κ>0 in Equation (3), there exists no λ>0 such that an optimal solution to the l1 regularized model of Equation (2) with regularization parameter λ recovers the global optimum of Equation (3). This suggests solving Equation (3) without resorting to convex formulations.


Theorem 1. For any κ≥n, there exists no value of λ>0 such that an optimal solution of (2) globally minimizes (3).


Proof 1. Towards a contradiction, suppose there exists κ≥n and λ>0 such that X is a global minimizer of both (2) and (3). Define Y=X−1 and Ψ(X)=−log det(X)+tr(SX). Since S is positive semi-definite and nonzero, there exists at least one diagonal entry Sii>0. Without loss of generality, suppose S11>0. For any α such that |α|<min(X11, σ1(X)), both ∥X+αe1e1T∥0=∥X∥0≤κ holds and X+αe1e1T is positive definite. Additionally, since (X+αe1e1T)kl=0 for any (k,l)∈ℑ, it may be concluded that X+αe1e1T is feasible in Equation (3) for every α such that |α|<min(X11, σ1(X)).


Moreover, using a property of determinants,











$$\psi(X + \alpha e_1 e_1^T) - \psi(X) = -\log\!\left(\det(X)\left(1 + \alpha Y_{11}\right)\right) + \operatorname{tr}\!\left(S(X + \alpha e_1 e_1^T)\right) + \log\det(X) - \operatorname{tr}(SX)$$

$$= -\log(1 + \alpha Y_{11}) + \alpha S_{11} \;\triangleq\; g(\alpha).$$
Note that if X is a minimizer of Equation (3), then g(α)≥0 for every |α|<min(X11, σ1(X)). Since g is a convex one-dimensional function of α in the neighborhood of zero and g(0)=0, it may be concluded that g′(0)=0. This implies that S11=Y11. From a characterization of optimal solutions of Equation (2), if X is optimal for Equation (2), then S11=Y11−λ. Since λ>0, this is a contradiction.


First-Order Stationarity Conditions


The following characterizes properties of Equation (14), for example, concerning the boundedness of the set of optimal solutions and appropriate optimality conditions for Equation (3). In an embodiment, it may be assumed that the l2-regularization parameter λ>0. For ease of notation, the sparsity constraint set is defined as





$$\Omega \triangleq \{X \in R^{n\times n} : \|X\|_0 \le \kappa,\; X_{i,j} = 0\;\; \forall (i,j) \in \Im\},$$


and the projection operator








$$P_\Omega(X) \triangleq \arg\min_{Y \in \Omega}\; \|X - Y\|_F.$$
It may be noted that the gradient of the objective function in Equation (14) is





$$\nabla f(X) = S - X^{-1} + \lambda X,$$


and the Hessian is




$$\nabla^2 f(X) = X^{-1} \otimes X^{-1} + \lambda I_{n^2}.$$
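In code, the gradient above is a direct translation, and the Hessian need not be formed explicitly: its action on a direction D follows from the Kronecker identity (X−1⊗X−1)vec(D) = vec(X−1 D X−1). A small numpy sketch, assuming X is symmetric positive definite:

import numpy as np

def grad_f(X, S, lam):
    """Gradient of (14): S - X^{-1} + lam * X."""
    return S - np.linalg.inv(X) + lam * X

def hess_vec(X, D, lam):
    """Action of the Hessian of (14) on a direction D, via
    (X^{-1} kron X^{-1}) vec(D) = vec(X^{-1} D X^{-1})."""
    Xinv = np.linalg.inv(X)
    return Xinv @ D @ Xinv + lam * D

n = 3
X, S = 2.0 * np.eye(n), np.eye(n)
print(grad_f(X, S, lam=0.1))
print(hess_vec(X, np.eye(n), lam=0.1))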


It is shown in the following theorem that Equation (14) can be safely reformulated in such a way that the feasible set is compact.


Theorem 2. Assume that problem of Equation (14) is feasible and λ>0. Consider the problem












$$\min_{X \succ 0}\;\; \operatorname{tr}(SX) - \log\det(X) + \frac{\lambda}{2}\|X\|_F^2$$

$$\text{s.t.}\;\; \|X\|_0 \le \kappa$$

$$\qquad\; X_{i,j} = 0, \;\; \forall (i,j) \in \Im$$

$$\qquad\; \underline{\sigma}\, I_n \preceq X \preceq \bar{\sigma}\, I_n, \qquad (15)$$

where there are fixed constants








$$\bar{\sigma} = \frac{n^2 + \sqrt{n^4 + 2\lambda\, n f(tI_n) - 2\lambda n^2}}{\lambda}, \qquad \underline{\sigma} = \exp\!\left(-f(tI_n) + \frac{n\lambda}{2}\exp\!\left(-2 f(tI_n)\right)\bar{\sigma}^{\,2-2n}\right)\bar{\sigma}^{\,1-n},$$




and






$$t = \frac{-\operatorname{tr}(S) + \sqrt{\operatorname{tr}(S)^2 + 4 n^2 \lambda}}{2 n \lambda}.$$

Then, the sets of global minima of Equation (14) and Equation (15) are equivalent.


Proof 2. Suppose X*∈Rn×n is an optimal solution of Equation (14) and 0<σ1≤ . . . ≤σn are the eigenvalues of X*. The system and/or method in an embodiment seek lower and upper bounds for σ1 and σn, respectively.


Consider X̄=tIn for an arbitrary fixed scalar t>0. X̄ is feasible with respect to the constraints of Equation (14), and











$$f(\bar{X}) \ge f(X^*) = \operatorname{tr}(SX^*) - \log\det(X^*) + \frac{\lambda}{2}\|X^*\|_F^2 \ge -\log\det(X^*) + \frac{\lambda}{2}\sum_{i=1}^{n} (X^*_{ii})^2$$

$$\ge -n\log(\sigma_n) + \frac{\lambda}{2n}\left(\sum_{i=1}^{n} X^*_{ii}\right)^2 \ge -n\log(\sigma_n) + \frac{\lambda}{2n}\,\sigma_n^2. \qquad (16)$$

The second inequality is due to the fact that S⪰0 and X*≻0, and thus tr(SX*)≥0. The third inequality uses the Cauchy-Schwarz inequality, and the last inequality holds because Σi=1nXii*=Σi=1nσi and σi>0. Combining Equation (16) with the fact that log(σn)≤σn−1, the system and/or method get










$$f(\bar{X}) \ge -n(\sigma_n - 1) + \frac{\lambda}{2n}\,\sigma_n^2. \qquad (17)$$

The right-hand side of Equation (17) tends to infinity as σn→∞; since f(X̄) is finite for the fixed value of t, σn is bounded from above. Specifically, σn cannot exceed the larger of the two solutions to the quadratic equation









$$\frac{\lambda}{2n}\,\sigma_n^2 - n(\sigma_n - 1) - f(\bar{X}) = 0.$$

That is,









$$\sigma_n \le \bar{\sigma} \triangleq \frac{n^2 + \sqrt{n^4 + 2\lambda\, n f(\bar{X}) - 2\lambda n^2}}{\lambda} < \infty. \qquad (18)$$


Likewise, a lower bound for σ1 is estimated as






$$f(\bar{X}) \ge f(X^*) \ge -\log\det(X^*) \ge -\log(\sigma_1) - (n-1)\log(\bar{\sigma}),$$


from which it may be concluded





$$\sigma_1 \ge \exp(-f(\bar{X}))\,\bar{\sigma}^{\,1-n}. \qquad (19)$$


In an embodiment, this bound can be tightened further. Starting from an intermediate step in the derivation of (16), there is







$$f(\bar{X}) \ge -\log\det(X^*) + \frac{\lambda}{2n}\left(\sum_{i=1}^{n}\sigma_i\right)^2 \ge -\log(\sigma_1) - (n-1)\log(\bar{\sigma}) + \frac{n\lambda}{2}\,\sigma_1^2$$

$$\ge -\log(\sigma_1) - (n-1)\log(\bar{\sigma}) + \frac{n\lambda}{2}\exp\!\left(-2 f(\bar{X})\right)\bar{\sigma}^{\,2-2n}.$$

Thus,









$$\sigma_1 \ge \underline{\sigma} \triangleq \exp\!\left(-f(\bar{X}) + \frac{n\lambda}{2}\exp\!\left(-2 f(\bar{X})\right)\bar{\sigma}^{\,2-2n}\right)\bar{\sigma}^{\,1-n} > 0. \qquad (20)$$



From Equations (18) and (20), in order to tighten the bounds σ̄ and σ̲, the system and/or method may choose t so that f(X̄)=f(tIn) is minimized. Observe that







$$f(tI_n) = \operatorname{tr}(S)\,t - n\log(t) + \frac{n\lambda}{2}\,t^2$$

is a strongly convex one-dimensional function in t for t>0. Thus f (tIn) is minimized when the necessary optimality condition








$$\frac{d}{dt}\, f(tI_n) = 0$$
is satisfied; thus the following is chosen







$$t = \frac{-\operatorname{tr}(S) + \sqrt{\operatorname{tr}(S)^2 + 4 n^2 \lambda}}{2 n \lambda},$$

which completes the proof.


The following introduces in Theorem 3 optimality conditions for problem (14). For any positive definite matrix H and a scalar t>0, define












$$h_{t,H,X}(Z) = \operatorname{tr}\!\left(\nabla f(X)^T (Z - X)\right) + \frac{1}{2t}\,\|Z - X\|_H^2, \qquad (21)$$

$$S_{t,H}(X) = \arg\min_{Z \in \Omega}\; h_{t,H,X}(Z) = \arg\min_{Z \in \Omega}\; \left\| Z - \left(X - t\, H^{-1}\nabla f(X)\right) \right\|_H^2,$$

where

$$\|X\|_H = \sqrt{\operatorname{vec}(X)^T H \operatorname{vec}(X)}. \qquad (22)$$

Theorem 3. Suppose that X* is an optimal solution of (14) and H is positive definite. Then there exists T>0 such that






$$X^* = S_{t,H}(X^*) \;\text{ for all } t \in (0, T). \qquad (23)$$


Proof 3. From Theorem 2, any optimal solution X* of (14) satisfies σ̲In⪯X*, where σ̲>0 is independent of X*. Since ∇2f(X*)=X*−1⊗X*−1+λIn2, it follows that ∇2f(X*)⪯(σ̲−2+λ)In2. Hence, there is, from strong convexity, that











$$f(Y) \le f(X^*) + \operatorname{tr}\!\left(\nabla f(X^*)\,(Y - X^*)\right) + \frac{1}{2 L_f}\,\|Y - X^*\|_F^2, \qquad (24)$$
for all Y∈Ω, where Lf=(σ̲−2+λ)−1. It is shown that (23) holds for T=σ1(H)Lf. First, it is demonstrated that X*∈St,H(X*) for any t∈[0, T). This first claim is a statement concerning existence of X*. The following subsequently proves uniqueness of X*, which proves the theorem. From (24) and since








$$\frac{1}{t}\, H \succeq \frac{1}{L_f}\, I_{n^2}$$


for all t∈[0, T), there is, for all Y∈Ω







$$f(X^*) \le f(Y) \le f(X^*) + \operatorname{tr}\!\left(\nabla f(X^*)\,(Y - X^*)\right) + \frac{1}{2t}\,\|Y - X^*\|_H^2.$$
It follows that








$$h_{t,H,X^*}(Y) = \operatorname{tr}\!\left(\nabla f(X^*)\,(Y - X^*)\right) + \frac{1}{2t}\,\|Y - X^*\|_H^2 \ge 0$$

for all Y∈Ω. Thus X*∈St,H(X*) since ht,H,X*(X*)=0, proving the claim.


To prove uniqueness, suppose towards contradiction that there exists X̂∈St,H(X*) satisfying (23), but X̂≠X*. Noting that by definition ht,H,X*(X*)=0, we must have ht,H,X*(X̂)=0, and so










$$\operatorname{tr}\!\left(\nabla f(X^*)\,(\hat{X} - X^*)\right) = -\frac{1}{2t}\,\|\hat{X} - X^*\|_H^2. \qquad (25)$$
Combining (24) and (25), there is,












$$f(\hat{X}) \le f(X^*) + \operatorname{tr}\!\left(\nabla f(X^*)\,(\hat{X} - X^*)\right) + \frac{1}{2 L_f}\,\|\hat{X} - X^*\|_F^2$$

$$= f(X^*) - \frac{1}{2}\operatorname{vec}(\hat{X} - X^*)^T\!\left(\frac{H}{t} - \frac{I_{n^2}}{L_f}\right)\operatorname{vec}(\hat{X} - X^*) < f(X^*), \qquad (26)$$
which contradicts the global optimality of X* for (14).


For a general positive definite matrix H, solving the problem defining St,H (X) in (22) is generally intractable due to the cardinality constraint. As a result, verifying the optimality condition (23) is computationally expensive. The following lemma offers a simpler condition, which is an immediate consequence of Theorem 3 when H=In. A similar result was established, but for a slightly different constraint set Ω.


Lemma 1. Suppose that X* is an optimal solution of (14). Then there exists T>0 such that






$$X^* = P_\Omega\!\left(X^* - t\nabla f(X^*)\right) \;\text{ for all } t \in (0, T). \qquad (27)$$


In one aspect, a subsequent convergence analysis relies on Lemma 1. The following states and proves a technical proposition regarding the operator St,In, concerning how it acts on convergent sequences.


Proposition 1. Let t>0. Suppose {Xk} is a sequence in Rn×n converging to X*, and suppose there exists a corresponding sequence {Yk} such that Yk∈St,In (Xk) for each k. If there exists Y* such that Yk→Y*, then Y*∈St,In(X*).


Proof 4. Since Yk∈Ω for each k and Ω is closed, it is concluded that Y*∈Ω. Hence, it is obtained from the definition of St,In(X*) that














$$\left\| Y^* - \left(X^* - t\nabla f(X^*)\right) \right\|_F \ge \min_{Y \in \Omega}\, \left\| Y - \left(X^* - t\nabla f(X^*)\right) \right\|_F \qquad (28)$$
If inequality (28) holds as an equality, then the proposition is proven. Towards a contradiction, suppose (28) holds as a strict inequality. Then, for all k,

















$$\left\| Y^* - \left(X^* - t\nabla f(X^*)\right) \right\|_F > \min_{Y \in \Omega}\, \left\| Y - \left(X^* - t\nabla f(X^*)\right) \right\|_F$$

$$\ge \min_{Y \in \Omega}\, \left\| Y - \left(X^k - t\nabla f(X^k)\right) \right\|_F - \left\| \left(X^* - t\nabla f(X^*)\right) - \left(X^k - t\nabla f(X^k)\right) \right\|_F$$

$$= \left\| Y^k - \left(X^k - t\nabla f(X^k)\right) \right\|_F - \left\| \left(X^* - t\nabla f(X^*)\right) - \left(X^k - t\nabla f(X^k)\right) \right\|_F, \qquad (29)$$
where the second inequality follows from the triangle inequality, and the equality follows from the definition of Yk. As k→∞, (29) implies





$$\left\| Y^* - \left(X^* - t\nabla f(X^*)\right) \right\|_F > \left\| Y^* - \left(X^* - t\nabla f(X^*)\right) \right\|_F,$$


a contradiction.


Algorithm Description


An algorithm is presented to solve a general problem of the form





$$\min\left\{ f(X) : X \in R^{n\times n},\; \|X\|_0 \le \kappa,\; X_{i,j} = 0\;\forall (i,j) \in \Im,\; X \succ 0,\; X = X^T \right\}, \qquad (30)$$


where f is twice differentiable on its domain. In an embodiment, a strategy for solving the constrained optimization problem is to embed an approximate Newton method into a projected gradient method. In this algorithm, feasibility with respect to the sparsity constraint Ω is handled via projection, while the symmetric positive-definiteness of the iterates (i.e., feasibility with respect to {X≻0}) is ensured through a line-search procedure. Convergence analysis can be applied, for example, when the objective function has the special structure as in (14):







$$f(X) = \operatorname{tr}(SX) - \log\det(X) + \frac{\lambda}{2}\|X\|_F^2.$$
A particular issue in the case of applying a projected gradient method to (30) is that the constraint set of (30) is nonconvex and not closed. This is a different situation from problems in the conventional scope of projected gradient methods, which require the feasible set to be convex and closed. In one aspect, due to the openness of the positive-definite (PD) constraint, directly applying a projected gradient method to Equation (30) is difficult. In an embodiment, a methodology of the present disclosure avoids dealing with the PD constraint in the projection step.


In an algorithm of the present disclosure in an embodiment, the projected gradient direction and the approximate Newton step are alternately employed. On each gradient search step, the algorithm begins from a feasible iterate Xk and then backtracks along the projection arc defined by PΩ(Xk−αk∇f(Xk)), initializing the stepsize αk>0 with a Barzilai-Borwein (BB) stepsize [1]:







$$\alpha_k^{BB} = \frac{\operatorname{tr}\!\left(\left(\nabla f(X^k) - \nabla f(X^{k-1})\right)^T \left(X^k - X^{k-1}\right)\right)}{\left\|X^k - X^{k-1}\right\|_F^2}.$$
BB-stepsizes perform better when they are embedded in a nonmonotone line search. In an embodiment, a scheme based on the Grippo-Lampariello-Lucidi (GLL) stepsize rule is considered. Let fmaxk denote the largest of the M most recent function values:






$$f_{\max}^k = \max\left\{ f(X^{k-j}) : 0 \le j < \min(k, M) \right\}.$$


This quantity fmaxk is used to define a sufficient decrease condition. For instance, the algorithm terminates the backtracking procedure for computing Xk+1=PΩ(Xk−αk∇f(Xk)) by gradually reducing αk once the following conditions (C1) and (C2) are both satisfied:


(C1) A sufficient decrease condition for the objective function value is attained, i.e.








$$f(X^{k+1}) \le f_{\max}^k - \frac{\delta}{2}\left\| X^{k+1} - X^k \right\|_F^2,$$
for some algorithmic parameter δ>0.


(C2) The next iterate Xk+1 is feasible with respect to the positive definite constraint, i.e., Xk+1≻0.
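Putting (C1) and (C2) together, the backtracking can be sketched as follows: starting from an initial step (for example, the BB stepsize above), the step is shrunk geometrically until the trial projected point satisfies both the nonmonotone sufficient-decrease test (C1) and the positive-definiteness test (C2). The helpers f, grad_f and proj_omega are assumed to be supplied (the objective, its gradient, and the projection onto Ω sketched later in this disclosure); this is an illustrative sketch, not the exact routine of Algorithm 1.

import numpy as np

def is_positive_definite(X):
    """(C2): attempt a Cholesky factorization, as discussed later in this disclosure."""
    try:
        np.linalg.cholesky(X)
        return True
    except np.linalg.LinAlgError:
        return False

def backtrack(Xk, f, grad_f, proj_omega, f_max, alpha0, sigma=0.5, delta=1e-4, max_iter=50):
    """Shrink alpha until the trial iterate satisfies (C1) and (C2)."""
    g = grad_f(Xk)
    alpha = alpha0
    for _ in range(max_iter):
        X_trial = proj_omega(Xk - alpha * g)
        decrease_ok = f(X_trial) <= f_max - 0.5 * delta * np.sum((X_trial - Xk) ** 2)
        if decrease_ok and is_positive_definite(X_trial):    # (C1) and (C2)
            return X_trial, alpha
        alpha *= sigma
    return Xk, 0.0                      # give up: keep the current iterate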


In an embodiment, to accelerate convergence and make use of the fact that the optimal solution is often very sparse in many applications, i.e., κ≪n2, the algorithm in the present disclosure may additionally incorporate projected approximate Newton steps, by using second-order information in a reduced subspace. At each iterate Xk, a local quadratic model of the objective function can be minimized subject to the constraint that all matrix entries outside a working set Wk are fixed to zero in the minimization. In an embodiment of the algorithm, Wk may be chosen as






$$W^k = \{(i,j) : X^k_{i,j} \ne 0\} \cup O^k \qquad (31)$$





or






$$W^k = \{(i,j) : Y_{i,j} \ne 0 \text{ for some } Y \in P_\Omega(X^k - \alpha_k\nabla f(X^k))\} \cup O^k, \qquad (32)$$


where αk>0 is a step size and Ok is a set of indices corresponding to the ⌈ηκ⌉ largest absolute values of entries in Xk−Xk−1 for some (small) η∈(0,1). The inclusion of Ok helps to close the optimality gap faster when the condition (27) in Lemma 1 does not hold. The determination for updating the working set Wk from (31) or (32) is based on how Xk is computed, which is described in detail below. In one aspect, the particular selection of Wk is not critical to proving theoretical convergence results, and the prescriptions in (31), (32) are intended as practical guidance.
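A sketch of the working-set choice in Equation (31): the nonzero pattern of the current iterate is augmented with the indices of the ⌈ηκ⌉ largest-magnitude entries of Xk−Xk−1. This is an illustrative reading of (31); the variant (32) would use the nonzero pattern of the projected gradient trial point instead.

import numpy as np

def working_set(Xk, Xk_prev, kappa, eta=0.1):
    """W^k = {(i,j): Xk[i,j] != 0} union O^k, with O^k the ceil(eta*kappa)
    largest-magnitude entries of Xk - Xk_prev (Equation (31))."""
    n = Xk.shape[0]
    W = {(int(i), int(j)) for i, j in zip(*np.nonzero(Xk))}
    diff = np.abs(Xk - Xk_prev).ravel()
    top = np.argsort(diff)[::-1][: int(np.ceil(eta * kappa))]
    O = {(int(idx) // n, int(idx) % n) for idx in top}
    return W | O

Xk = np.diag([1.0, 2.0, 0.0])
print(sorted(working_set(Xk, np.zeros((3, 3)), kappa=5)))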


For any working set Wk⊆{1, . . . , n}×{1, . . . , n} and a matrix Y, define Ywk coordinate-wise via








$$\left(Y_{W^k}\right)_{(i,j)} = \begin{cases} Y_{(i,j)} & \text{if } (i,j) \in W^k \\ 0 & \text{otherwise.} \end{cases}$$
Then, given a current iterate Xk, gradient ∇f(Xk), and Hessian ∇2f(Xk), solve the following reduced quadratic problem to obtain a Newton direction Dk:











$$\min_{D \in R^{n\times n}}\;\; q(D) \triangleq \operatorname{tr}\!\left(\nabla f(X^k)_{W^k}\, D_{W^k}\right) + \frac{1}{2}\operatorname{vec}\!\left(D_{W^k}\right)^T \nabla^2 f(X^k)_{W^k}\, \operatorname{vec}\!\left(D_{W^k}\right). \qquad (33)$$
To efficiently obtain an approximate solution of (33), an embodiment of a methodology of the present disclosure makes use of a conjugate gradient method. This conjugate gradient method takes advantage of the particular Kronecker product structure of ∇2f(Xk), as well as the observation that |Wk|≪n2, for instance, since the methodology searches for a very sparse solution in applications.
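A sketch of the reduced Newton solve (33) via conjugate gradients: the Hessian is never formed, its action on a working-set-restricted direction is computed through the Kronecker identity vec(X−1 D X−1), and entries outside Wk are kept at zero. scipy's cg and LinearOperator are used here for illustration; this is a sketch of the idea rather than the exact routine used in Algorithm 1.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def reduced_newton_direction(Xk, S, lam, W_mask):
    """Approximately solve (33) restricted to the working set given by the
    boolean mask W_mask, returning a Newton direction D^k with zeros elsewhere."""
    n = Xk.shape[0]
    Xinv = np.linalg.inv(Xk)
    grad = S - Xinv + lam * Xk                        # gradient of (14)

    def hess_mv(v):
        D = np.zeros((n, n))
        D[W_mask] = v                                 # lift the reduced vector
        HD = Xinv @ D @ Xinv + lam * D                # Hessian action on D
        return HD[W_mask]                             # restrict back to W^k

    m = int(W_mask.sum())
    H_op = LinearOperator((m, m), matvec=hess_mv, dtype=float)
    d_reduced, _ = cg(H_op, -grad[W_mask])            # solve H d = -g on the subspace
    D = np.zeros((n, n))
    D[W_mask] = d_reduced
    return D

n = 3
Xk, S = np.eye(n), 0.8 * np.eye(n)
W_mask = np.eye(n, dtype=bool)                        # e.g. keep only diagonal entries
print(reduced_newton_direction(Xk, S, lam=0.1, W_mask=W_mask))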


A detailed description of our algorithm for solving (30) is given in Algorithm 1.












Algorithm 1: Reduced-space Projected Gradient Algorithm -


RPGA(X0, Ω, λ)
















 1
Choose parameters



σ ∈ (0,1), δ > 0, η ∈ (0,1), αminNewton > 0, [αmin, αmax] ⊂ (0,∞),







integers p, q such that 0 < p < q, integer M > 0, grad_step = true.








 2
Set k = 0.


 3
while some stopping criteria not satisfied do


 4
 Step 1: Active-set Newton step


 5
 if (mod(k,q) ≠ p) then


 6
  if (grad_step = true) then


 7
   Wk ←Eq. (31)


 8
  else


 9
   Wk ←Eq. (32)



  /* end of if-then-else of line 6 */


10
  Approximately solve for Dk from (33)


11
  Choose αk = αk,0 = 1


12
  j ← 0, ls ← true


13
 while (ls = true) & (αk ≥ αminNewton) do


14
   αk ← σjαk,0


15
   Xtrialk+1 ← PΩ (Xk + αkDk)





16
   
if (f(Xtrialk+1) ≤ fmaxk − (δ/2)||Xtrialk+1 − Xk||F2 and Xtrialk+1 ≻ 0)







   then


17
    Xk+1 ← Xtrialk+1


18
    ls ← false


19
   else


20
    j ← j + 1



   /* end of if-then-else of line 16 */



  /* end of while loop of line 13 */



  /* end of if-then of line 5 */


21
  Step 2: Gradient projection step


22
  grad_step ← false


23
  if (mod (k,q) = p) or (αk < αminNewton) then


24
   Choose αk,0 = min(αmax, max(αmin, αkBB))


25
   j ← 0, ls ← true


26
   while (ls = true) do


27
    αk ← σjαk,0


28
    Xtrialk+1 ← PΩ(Xk − αk∇f(Xk))





29
    
if (f(Xtrialk+1) ≤ fmaxk − (δ/2)||Xtrialk+1 − Xk||F2 and Xtrialk+1 ≻ 0)







    then


30
     Xk+1 ← Xtrialk+1


31
     ls ← false


32
    else


33
     j ← j + 1 /* end of if-then-else of line 29 */



    /* end of while loop of line 26 */


34
  grad_step ← true



  /* end of if statement of line 23 */


35
Step 3: Iterate


36
k ← k + 1



/* end of while loop of line 3 */









Remark 1. As mentioned, Ω is a non-convex set, and as such, the operator PΩ is generally set-valued, i.e., projections are non-unique. A point in PΩ(X) can be quickly obtained. If X∈Rn×n, set Xi,j=0 for every (i, j)∈ℑ. Then PΩ(X) is computed by sorting the n2 many entries of |X|, where |⋅| denotes an entry-wise absolute value, and then replacing all but the κ largest values of |X| with 0, where ties could be broken arbitrarily. In order to attain a unique projection, when we compute PΩ, ties are not broken arbitrarily, but by a dictionary order on the matrix entries. The dictionary order is chosen in such a way to ensure that the projected matrix is symmetric. As a consequence of sorting n2 matrix entries, the projection operation can be performed in O(n2 log(n)) time.
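A sketch of the projection described in Remark 1: entries in the forbidden set ℑ are zeroed, the magnitudes are sorted, and all but the κ largest are set to zero. Ties are broken here by numpy's stable ordering rather than the dictionary order mentioned above, so symmetry of the output is not guaranteed in degenerate tie cases; this is an illustrative sketch.

import numpy as np

def project_omega(X, kappa, forbidden=None):
    """Project X onto {Z: ||Z||_0 <= kappa, Z_ij = 0 for (i,j) in the forbidden set}."""
    Z = X.copy()
    if forbidden is not None:
        for (i, j) in forbidden:
            Z[i, j] = 0.0
    flat = np.abs(Z).ravel()
    if kappa < flat.size:
        order = np.argsort(flat, kind="stable")        # ascending magnitudes
        Z.flat[order[: flat.size - kappa]] = 0.0       # zero all but the kappa largest
    return Z

X = np.array([[3.0, 0.2, -1.0], [0.2, 2.0, 0.5], [-1.0, 0.5, 1.5]])
print(project_omega(X, kappa=5))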


On each iteration of the backtracking procedure, one would compute an eigenvalue decomposition of the trial step Xk+1 both in order to compute the value of det(Xk+1) in the objective function and to determine the satisfaction of (C2). Computing a determinant is generally an expensive operation. However, since a method in the present disclosure is explicitly searching for a positive definite matrix, it is enough to attempt a Cholesky factorization of Xk+1, which can be done more efficiently. If the factorization fails (a complex number appears on the diagonal of the lower triangular matrix), then the current Xk+1 is not positive definite and the method can immediately retract along the projection arc. If the factorization is successful, then the determinant of the matrix Xk+1 can be computed from the diagonal values of lower triangular matrix. This sequence of attempted Cholesky factorizations is repeated until the stopping criteria is satisfied. The finite termination in the backtracking step is proved below.
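The Cholesky-based check just described can be written compactly: a failed factorization signals that the trial iterate is not positive definite, and a successful one yields log det from the factor's diagonal without a separate determinant computation. A minimal numpy sketch:

import numpy as np

def chol_logdet(X):
    """Return (True, log det X) if X is positive definite, else (False, None)."""
    try:
        L = np.linalg.cholesky(X)
    except np.linalg.LinAlgError:
        return False, None                     # not positive definite: retract the step
    return True, 2.0 * np.sum(np.log(np.diag(L)))

print(chol_logdet(np.array([[2.0, 0.5], [0.5, 1.0]])))
print(chol_logdet(np.array([[1.0, 2.0], [2.0, 1.0]])))   # indefinite example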


In an embodiment, performing a line search in the projected gradient direction in the full space may help a projected approximate Newton algorithm identify a good working set subspace, and enables a proof of global convergence. Thus, in an embodiment, the general logic of Algorithm 1 is that, for fixed q∈N, after every q−1 iterations of Step 1 (the active-set Newton steps), the algorithm performs a single iteration of Step 2, the gradient projection step.


As is typically the case in nonconvex optimization, the iterates of Algorithm 1 need not converge to a global minimizer of (14), but only to a point satisfying necessary conditions for optimality. The difference between (14), which Algorithm 1 is to solve, and (3), the real problem of interest, lies in the l2 regularization term. In an embodiment, a method of the present disclosure uses an outer homotopy method for solving (3). Algorithm 2 solves a sequence of subproblems of the form (14) by calling Algorithm 1, attempting to find increasingly better-quality minimizers. Two strictly decreasing sets of integer values κ0>κ1> . . . >κK=κ and real values λ0>λ1> . . . >λK=λ are selected before running the algorithm. A method of the present disclosure may first (approximately) solve (14) using a sparsity parameter κ0, regularization parameter λ0, and some initial point X0. Then, the method may sequentially solve (14) to some tolerance for each pair (κj, λj): j=1, . . . , K, initializing each subproblem solve with the previous solution Xj-1. The final regularization parameter λK should be quite small in practice.












Algorithm 2: Projected Gradient Newton Algorithm with Warm


Starts - PPNA















1 Choose homotopy sequence {(κj, λj): j = 0,1,2,..., K}, initial point X0.


2 for j = 0,1,...,K do


3   Ωj ← {X:||X||0 ≤ κj, Xi,j = 0 ∀(i, j) ϵ ℑ}


4   Xj+1 ← RPGA(Xj, Ωj, λj)


  /* end of for loop of line 2 */









Convergence Analysis


This section provides a proof that the iterates of Algorithm 1 for solving problem (14) accumulate at a point satisfying the necessary optimality conditions (27). It may be assumed that the initial point X0 is feasible with respect to the constraints of (14).


The following states an assumption on κ:


Assumption 1. Assume that κ in (14) (and (15)) is an integer satisfying κ∈[n, n2). Observe that this assumption is reasonable, since as demonstrated earlier, κ≥n is a sufficient condition for (15) to have a non-empty feasible set, and if κ=n2, then the problem reduces to a convex one. It is demonstrated that the line search in either Step 1 (the active-set Newton step) or Step 2 (the gradient projection step) terminates in a finite number of iterations.


Lemma 2. Suppose that problem (14) satisfies Assumption 1. Let {Xk} denote the sequence of iterates generated by Algorithm 1 for solving problem (14). Suppose the line search starting in either Line 13 or Line 26 of Algorithm 1 is reached in the k-th iteration and suppose Xk is feasible with respect to the constraints of (14). Then, the line search in steps 1 and 2 terminates in a finite number of iterations.


Proof 5. The line search in Step 1 terminates in a finite number of iterations because the termination condition αk≥αminNewton is enforced. The following proves the finite termination when the line search occurs in Step 2 (the gradient projection step). Assume ∇f(Xk)≠0, since otherwise the algorithm stops at a local solution Xk.


There exists Tpk>0 such that every matrix in PΩ(Xk−t∇f(Xk)) is positive definite for all t∈(0, Tpk). For a nonzero symmetric matrix U, denote the spectral norm of U by ‖U‖2 ≜ max(|σmin(U)|, |σmax(U)|)>0, where σmin(U) and σmax(U) denote the smallest and largest eigenvalues of U, respectively. Thus, for all t∈(0, L1), there is Xk+tU≻0, where there is defined

L1 = σ1(Xk)/‖U‖2.





Given Xk, let I and A denote respectively the inactive and active matrix indices of Xk with respect to the cardinality constraint, i.e.:






I ≜ {(i,j): X^k_{i,j} ≠ 0},
A ≜ {(i,j): X^k_{i,j} = 0}.


Additionally define














(i_x, j_x) ≜ argmin_{(i,j)∈I} |X^k_{i,j}|,
(i_{g,n}, j_{g,n}) ≜ argmax_{(i,j)∈I} |(∇f(X^k))_{i,j}|,
(i_{g,a}, j_{g,a}) ≜ argmax_{(i,j)∈A} |(∇f(X^k))_{i,j}|.









The following demonstrates the existence of the necessary Tpk>0 by analyzing two separate cases concerning the cardinality of the set of inactive indices, |I|.


Case 1. (|I|=κ): For every (i, j)∈I there is





|Xi,jk−t(∇f(Xk))i,j|≥|Xi,jk|−t|(∇f(Xk))i,j|≥|Xix,jxk|−t|(∇f(Xk))ig,n,jg,n|.


Defining







T_1^k = |X^k_{i_x,j_x}| / ( |(∇f(X^k))_{i_{g,n},j_{g,n}}| + |(∇f(X^k))_{i_{g,a},j_{g,a}}| ),




it is concluded from (34) that for all (i, j)∈I,





|X^k_{i,j} − t(∇f(X^k))_{i,j}| > t|(∇f(X^k))_{i_{g,a},j_{g,a}}| ≥ t|(∇f(X^k))_{h,l}|,  (34)


for all (h,l)∈A and t∈(0, T1k). Note that T1k is well-defined since |Xix,jxk| and |(∇f(Xk))ig,n,jg,n|+|(∇f(Xk))ig,a,jg,a| are strictly positive. It is possible to decompose






X^k − t∇f(X^k) = [X^k − t∇f(X^k)]_I + [X^k − t∇f(X^k)]_A = [X^k − t∇f(X^k)]_I − t[∇f(X^k)]_A.


Combining this decomposition with Remark 1 and (34), it follows that





[X^k − t∇f(X^k)]_I = PΩ(X^k − t∇f(X^k)) for t∈(0, T_1^k).


Now consider U=[∇f(Xk)]I, and define







T_2^k ≜ σ_1(X^k)/‖U‖_2 if U ≠ 0, and T_2^k ≜ 1 otherwise.








Then, letting






T_p^k = min(T_1^k, T_2^k),


not only is Xk−t[∇f(Xk)]I=PΩ(Xk−t∇f(Xk)), but also Xk−t[∇f(Xk)]I is positive definite for any t∈(0,Tpk).


Case 2. (|I|<κ): Denote by Â⊆A the set of indices (i, j)∈A corresponding to the κ−|I| largest absolute magnitude entries of [∇f(Xk)]A.


Denote






(i_{g,a}, j_{g,a}) ≜ argmax_{(i,j)∈A\Â} |(∇f(X^k))_{i,j}|.






Define






T_3^k = |X^k_{i_x,j_x}| / ( |(∇f(X^k))_{i_{g,n},j_{g,n}}| + |(∇f(X^k))_{i_{g,a},j_{g,a}}| ).





From (34), it follows that





|X^k_{i,j} − t(∇f(X^k))_{i,j}| > t|(∇f(X^k))_{i_{g,a},j_{g,a}}| ≥ t|(∇f(X^k))_{h,l}|,  (35)


for all (i, j)∈I, (h, l)∈A\Â and t∈(0, T3k). Similarly to Case 1, it is possible to decompose











X^k − t∇f(X^k) = [X^k − t∇f(X^k)]_I + [X^k − t∇f(X^k)]_Â + [X^k − t∇f(X^k)]_{A\Â}
               = [X^k − t∇f(X^k)]_I − t[∇f(X^k)]_Â − t[∇f(X^k)]_{A\Â}.









From this decomposition, Remark 1, and (35), there is [Xk−t∇f(Xk)]I∪Â∈PΩ(Xk−t∇f(Xk)) for t∈(0, T3k). Define U=[∇f(Xk)]I∪Â, and analogous to the definition of T2k in Case 1, define







T_4^k ≜ σ_1(X^k)/‖U‖_2 if U ≠ 0, and T_4^k ≜ 1 otherwise.








Choose





T_p^k = min(T_3^k, T_4^k).


It is concluded that for any t∈(0,Tpk), there is both Xk−t[∇f(Xk)]I∪Â∈PΩ(Xk−t∇f(Xk)) and Xk−t[∇f(Xk)]I∪Â is positive definite. Since every matrix in PΩ(Xk−t∇f(Xk)) can be represented as Xk−t[∇f(Xk)]I∪Â, the existence of the needed Tpk is proven.


The following proves that there exists a positive number Tδ>0, independent of k, such that












f(X^{k+1}) ≤ f_max^k − (δ/2)‖X^{k+1} − X^k‖_F^2 provided t ∈ (0, T_δ),  (36)







where Xk+1=PΩ(Xk−t∇f(Xk)) and suppose that







f(X^k) ≤ f_max^{k−1} − (δ/2)‖X^k − X^{k−1}‖^2.







Here, the notation X−1=X0 and fmax−1=f(X0) is adopted. Define






L = {X ∈ R^{n×n} : f(X) ≤ f(X^0)}.


Note that f(Xk)≤fmaxk−1≤f(X0), thus Xk∈L. Replacing X* and X by X and X0 respectively in (18) and (20) yields L ⊂ {X∈R^{n×n}: σ̲I_n ≼ X ≼ σ̄I_n}, where σ̲ and σ̄ are constants satisfying 0<σ̲<σ̄<∞. Define






K = {X∈R^{n×n}: 0.5σ̲I_n ≼ X ≼ (σ̄+0.5σ̲)I_n}.  (37)


There is L⊂K. Since Xk+1 minimizes h_{t,I_n,X^k}(Z), it follows











h_{t,I_n,X^k}(X^{k+1}) = tr(∇f(X^k)^T(X^{k+1} − X^k)) + (1/(2t))‖X^{k+1} − X^k‖_F^2 ≤ h_{t,I_n,X^k}(X^k) = 0.








It is obtained that

‖X^{k+1} − X^k‖_F^2 ≤ 2t·tr(∇f(X^k)^T(X^k − X^{k+1})).


Taking norms yields





‖X^{k+1} − X^k‖_F ≤ 2t‖∇f(X^k)‖_F = 2t‖S − (X^k)^{−1} + λX^k‖_F.


Since Xk lies in the compact set K, there exists a constant c, independent of Xk∈K, such that





‖∇f(X^k)‖_F = ‖S − (X^k)^{−1} + λX^k‖_F ≤ c.


Choose β∈(0,∞) so that 2βc≤0.5σ̲. Then Xk+1∈K when t≤β and Xk∈L. Indeed, there is











σ_min(X^{k+1}) ≥ σ_min(X^k) − σ_max(X^{k+1} − X^k)
             = σ_min(X^k) − ‖X^{k+1} − X^k‖_2
             ≥ σ̲ − ‖X^{k+1} − X^k‖_F
             ≥ σ̲ − 0.5σ̲ = 0.5σ̲   (since ‖X^{k+1} − X^k‖_F ≤ 0.5σ̲).










Furthermore, there is











σ_max(X^{k+1}) ≤ σ_max(X^k) + σ_max(X^{k+1} − X^k)
             ≤ σ̄ + ‖X^{k+1} − X^k‖_F
             ≤ σ̄ + 0.5σ̲.










Hence it is concluded that Xk+1∈K. Because the eigenvalues of any matrix in K are bounded, f is Lipschitz continuously differentiable on K. Denote by Lf the corresponding Lipschitz constant on K. In view of the convexity of K, the line segment connecting Xk and Xk+1 is contained in K. There is










f(X^{k+1}) ≤ f(X^k) + tr(∇f(X^k)^T(X^{k+1} − X^k)) + (L_f/2)‖X^{k+1} − X^k‖_F^2
           ≤ f(X^k) − (1/(2t) − L_f/2)‖X^{k+1} − X^k‖_F^2
           ≤ f_max^k − (1/(2t) − L_f/2)‖X^{k+1} − X^k‖_F^2
           ≤ f_max^k − (δ/2)‖X^{k+1} − X^k‖_F^2   if δ ≤ 1/t − L_f.










Define







T_δ = min( σ̲/(4c), 1/(δ + L_f) ),




then there is












f(X^{k+1}) ≤ f_max^k − (δ/2)‖X^{k+1} − X^k‖_F^2 provided t ∈ [0, T_δ),  (38)







i.e., the sufficient decrease condition in Line 29 holds for all t∈(0, Tδ). Thus, letting Tpk be the appropriate value from either Case 1 or Case 2, and letting σ and αk,0 be taken from Algorithm 1, it is concluded that within









log_σ( min{T_p^k, T_δ} / α_{k,0} )







iterations, Xk+1 will satisfy both of the stopping criteria of the line search.


Lemma 3. Suppose that problem (14) satisfies Assumption 1. Let {Xk} denote the sequence of iterates generated by Algorithm 1 for (14). Then, ∥Xk+1−Xk∥→0. Moreover, the sequence {f(Xk)} admits a limit point f*.


Proof 6. If ∇f(Xk)=0 for any k, then the lemma is immediate. So, assume ∇f(Xk)≠0 for every k. Due to the stopping criteria for the line search in Lines 16 and 28 of Algorithm 1, there is f(Xk+1)≤fmaxk for all k. Thus, fmaxk+1≤fmaxk for all k. The sequence {fmaxk} is monotone decreasing and bounded from below, hence there exists a limit point {fmaxk}→f*. Since f is uniformly continuous on the level set L⊂{X: σ̲I_n ≼ X ≼ σ̄I_n}, an induction argument can be used to conclude that ‖Xk+1−Xk‖→0.


Theorem 4. Suppose that problem (14) satisfies Assumption 1. Let {Xk} denote the sequence of iterates generated by Algorithm 1 for (14). If X* is an accumulation point of {Xk}, then X* satisfies the necessary optimality conditions stated in Lemma 1, i.e. there exists T>0 such that (27) holds.


Proof 7. Again, assume ∇f(Xk)≠0 for every k, since otherwise the theorem is immediate. Since X* is an accumulation point of {Xk}, there exists a subsequence of indices {ki}i=0, 1, 2, . . . such that








lim_{i→∞} X^{k_i} = X*.





By the definition of limit, for sufficiently large k, the positions of the nonzero entries of Xki are the same for ki>k, and their magnitudes are uniformly bounded away from zero. In Lemma 2, it was proven that Xk∈L⊂{X: σ̲I_n ≼ X ≼ σ̄I_n} for all k; hence the largest eigenvalue of ∇f(Xk)=S−(Xk)−1+λXk is bounded from above for all k. Thus, reusing the definition of Tpk from the proof of Lemma 2 for either Case 1 or Case 2, there exists a constant Tp>0 such that Tpki≥Tp for all ki>k. So, the step-size αki taken at the end of the ki-th iteration is lower-bounded as







α_{k_i} ≥ α_{k_i,0} · σ^{log_σ( min{T_p, T_δ} / α_{k_i,0} )},










and so there exists α̲>0 such that









lim inf_{i→∞} α_{k_i} ≥ α̲,




since there are bounds αmin≤αki,0≤αmax as algorithmic parameters. Take any T such that 0<T<min(α̲, Tp). Consider two possible cases for the subsequence {Xki+1}:


Case 1: Suppose that infinitely many iterates in {Xki+1}i=0, 1, . . . are generated in Step 2 (the gradient projection step). Then, without loss of generality, it can be assumed that the entire subsequence {Xki+1}i=0, 1, . . . is generated in Step 2. By triangle inequality, there is









lim_{i→∞} ‖X^{k_i+1} − X*‖_F ≤ lim_{i→∞} ‖X^{k_i+1} − X^{k_i}‖_F + lim_{i→∞} ‖X^{k_i} − X*‖_F,




and so it is concluded from Lemma 3 and the choice of subsequence {ki} that








lim_{i→∞} X^{k_i+1} = X*.





For any t>0, let Yki∈St,In(Xki). It follows from the definition of St,In that











tr(∇f(X^{k_i})^T(X^{k_i+1} − X^{k_i})) + (1/(2t))‖X^{k_i+1} − X^{k_i}‖_F^2 ≥ tr(∇f(X^{k_i})^T(Y^{k_i} − X^{k_i})) + (1/(2t))‖Y^{k_i} − X^{k_i}‖_F^2.  (39)







Since X^{k_i+1} ∈ S_{α_{k_i},I_n}(X^{k_i}) for all ki, it also follows from the definition of S_{α_{k_i},I_n} that











tr(∇f(X^{k_i})^T(Y^{k_i} − X^{k_i})) + (1/(2α_{k_i}))‖Y^{k_i} − X^{k_i}‖_F^2 ≥ tr(∇f(X^{k_i})^T(X^{k_i+1} − X^{k_i})) + (1/(2α_{k_i}))‖X^{k_i+1} − X^{k_i}‖_F^2.  (40)







Combining (39) and (40), it is obtained that for any t>0











tr(∇f(X^{k_i})^T(X^{k_i+1} − X^{k_i})) + (1/(2t))‖X^{k_i+1} − X^{k_i}‖_F^2 ≥ tr(∇f(X^{k_i})^T(X^{k_i+1} − X^{k_i})) + (1/(2α_{k_i}))‖X^{k_i+1} − X^{k_i}‖_F^2 + (1/(2t) − 1/(2α_{k_i}))‖Y^{k_i} − X^{k_i}‖_F^2.  (41)







If t∈(0, T), then it may be concluded from (41) that









lim_{i→∞} ‖Y^{k_i} − X^{k_i}‖_F = 0,




since








lim_{i→∞} ‖X^{k_i+1} − X^{k_i}‖_F = 0




by Lemma 3 and since








1/(2t) − 1/(2α_{k_i}) > 0




by the choice of t. By triangle inequality,









lim_{i→∞} ‖Y^{k_i} − X*‖_F ≤ lim_{i→∞} ‖Y^{k_i} − X^{k_i}‖_F + lim_{i→∞} ‖X^{k_i} − X*‖_F,




and hence








lim_{i→∞} Y^{k_i} = X*.





So, using Proposition 1, it is concluded that X*∈PΩ(X*−t∇f(X*)) for all t∈(0, T).


Case 2: Suppose that only finitely many elements of the subsequence {Xki+1} are generated in Step 2. Then, without loss of generality, it can be assumed that every iterate of the subsequence {Xki+1}i=0, 1 . . . is generated in Step 1. By the pigeonhole principle, there exists an integer m∈[0, p) such that mod(ki+1,q)=m for infinitely many ki. Hence one has mod(ki+1+p−m,q)=p for infinitely many ki; that is, every iterate of the subsequence {Xki+1+p−m} is generated in Step 2. From the triangle inequality,









lim_{i→∞} ‖X^{k_i+1+p−m} − X*‖_F ≤ lim_{i→∞} Σ_{j=0}^{p−m} ‖X^{k_i+j+1} − X^{k_i+j}‖_F + lim_{i→∞} ‖X^{k_i} − X*‖_F,




and so by Lemma 3 and the choice of the subsequence {Xki}, Xki+1+p−m→X*. From here, the same argument as in Case 1 can be applied to the subsequence {Xki+1+p−m} to get X*∈PΩ(X*−t∇f(X*)) for all t∈(0, T).


It now only remains to prove the uniqueness of PΩ(X*−t∇f(X*)), i.e., to show that X*=PΩ(X*−t∇f(X*)) holds as a set equality. Two separate cases are analyzed:


Case A. (|I(X*)|=κ): Assume that Z*∈PΩ(X*−t∇f(X*)) for all t∈(0,T). Towards a contradiction, suppose that Z*≠X* for some t′∈(0, T). By the definition of PΩ, it is concluded that because Z*≠X*, |Xi′,j′*|≤|Zl′,m′*(t′)|, where there is defined





|Xi′,j′*| ≜ min{|Xu,v*|:(u,v)∈I(X*)}





and





|Zl′,m′*(t)| ≜ max{|(X*−t∇f(X*))u,v|:(u,v)∈A(X*)}.


This implies that (∇f(X*))l′,m′=0, since otherwise there exists t̃∈(0,T) such that |(X*−t̃∇f(X*))l′,m′|>|Xi′,j′*|, which would contradict X*∈PΩ(X*−t∇f(X*)), since (i′, j′)∈I(X*) and (l′, m′)∈A(X*). But, if (∇f(X*))l′,m′=0, then X*∉PΩ(X*−t∇f(X*)) for any t∈(0,T), since (X*−t∇f(X*))l′,m′=(X*)l′,m′, and (l′,m′)∈A(X*). This is a contradiction.


Case B. (|I(X*)|<κ): If (∇f(X*))i,j≠0 for some (i, j)∈A(X*), then there exists t̃∈(0,T) such that |(X*−t̃∇f(X*))i,j|≠0; hence X*∉PΩ(X*−t̃∇f(X*)), which would be a contradiction. Thus, it is the case that [∇f(X*)]A(X*)=0. Consider the reduced projection problem over the inactive indices in I(X*),










min_{Y} Σ_{(i,j)∈I(X*)} ( Y_{i,j} − (X* − t∇f(X*))_{i,j} )².  (42)







Since [X*]I(X*) is a solution to (42), it is derived that [∇f(X*)]I(X*)=0; thus it is concluded that ∇f(X*)=0, and so it follows that X*=PΩ(X*−t∇f(X*)) for any t∈(0,T).


The following result further shows that any accumulation point produced by Algorithm 1 is a strict local minimizer of (14).


Theorem 5. Any accumulation point X* of {Xk}k=0, 1, . . . generated by Algorithm 1 is a strict local minimizer of (14).


Proof 8. It may be proven that there exists ε>0 such that for all Y so that X*+Y is feasible with respect to the constraints of (14) and ∥Y∥F∈(0,ε), it holds that f(X*+Y)>f(X*). From Theorem 4, there exists T>0 such that X*=PΩ(X*−t∇f(X*)) for all t∈(0,T). Equivalently, for all such t, the matrix 0 is the unique global minimizer of







min_{Y: X*+Y∈Ω} g_t(Y) ≜ f(X*) + tr(∇f(X*)Y) + (1/(2t))‖Y‖_F^2.






Moreover, f is strongly convex on K={X: 0.5σ̲I_n ≼ X ≼ (σ̄+0.5σ̲)I_n} defined in (37). Denote by ρ the strong convexity constant. Note that Xk∈L⊂{X: σ̲I_n ≼ X ≼ σ̄I_n}, hence the limit X*∈{X: σ̲I_n ≼ X ≼ σ̄I_n}. In Lemma 2, it was proven that if ‖Y‖F≤0.5σ̲ then X*+Y∈K. For any ‖Y‖F≤0.5σ̲, there is













f(X*+Y) ≥ f(X*) + tr(∇f(X*)Y) + (ρ/2)‖Y‖_F^2
        = f(X*) + (ρ/2)‖Y + (1/ρ)∇f(X*)‖_F^2 − (1/(2ρ))‖∇f(X*)‖_F^2,  (43)







where the equality is by completing the square. As in the proof of Theorem 4, the analysis may be split into two cases.


Case A (|I(X*)|=κ): Observe that there is Yi,j=0 for every (i, j)∈A(X*), in order for X*+Y∈Ω to hold in an arbitrarily small neighborhood of X*. Also observe that (∇f(X*))i,j=0 for every (i, j)∈I(X*); otherwise, X*∉PΩ(X*−t∇f(X*)) for sufficiently small t, which would yield a contradiction. Taking





ε < min{0.5σ̲, min{|X*_{i,j}|:(i,j)∈I(X*)}},  (44)


there is X*+Y≻0 since ‖Y‖F<0.5σ̲≤σ1(X*). From (43), it is obtained










f(X*+Y) ≥ f(X*) + (ρ/2) Σ_{(i,j)∈I(X*)} ( Y_{i,j} + (1/ρ)(∇f(X*))_{i,j} )² + (ρ/2) Σ_{(i,j)∈A(X*)} ( Y_{i,j} + (1/ρ)(∇f(X*))_{i,j} )² − (1/(2ρ)) Σ_{(i,j)∈I(X*)} ((∇f(X*))_{i,j})² − (1/(2ρ)) Σ_{(i,j)∈A(X*)} ((∇f(X*))_{i,j})²
        = f(X*) + (ρ/2) Σ_{(i,j)∈I(X*)} Y_{i,j}² + (ρ/2) Σ_{(i,j)∈A(X*)} ( (1/ρ)(∇f(X*))_{i,j} )² − (1/(2ρ)) Σ_{(i,j)∈A(X*)} ((∇f(X*))_{i,j})²
        = f(X*) + (ρ/2) Σ_{(i,j)∈I(X*)} Y_{i,j}²
        > f(X*),













where the last strict inequality comes from the fact that not all the entries of Y are 0.


Case B. (|I(X*)|<κ): As in the proof of Case B in Theorem 4, there is ∇f(X*)=0. Hence from (43), there is








f(X*+Y) ≥ f(X*) + (ρ/2)‖Y‖_F^2 > f(X*),




for any ∥Y∥F≤ε (ε defined in (44)), which completes the proof.


In one aspect, the number of nonzeros κ and the coefficient λ that penalizes the Frobenius norm may be tuned. In one aspect, it is noted that the performance of the model of Equation (3), for example, evaluated based on real data, is stable for a wide range of the parameters κ and λ.


A system and/or method in embodiments provide for anomaly localization for multivariate time series datasets. An l0-constrained model with an l2-regularization in an objective function in one embodiment learns sparse dependency graphs, e.g., creates a dependency graph structure. In an embodiment, a projected gradient algorithm for the l0-constrained problem is developed and proven to converge to a stationary point. In embodiments, metrics are applied for scoring anomalies. The l2-regularized l0-constrained models of Equation (3) and Equation (4), for example, combined with a metric, e.g., the conditional expected Kullback-Leibler divergence anomaly scoring of Equation (8), improve methods for detecting anomalies.
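As an illustration of how two learned dependency graphs may be compared per sensor, the following sketch computes, for each variable i, a dissimilarity between the coefficients of its conditional (neighborhood) model under the two precision matrices. This is a simplified, assumed stand-in for the scoring metrics of the disclosure (for example, the conditional expected Kullback-Leibler score of Equation (8)); the function name and the particular combination of terms are illustrative only:

    import numpy as np

    def neighborhood_score(Lambda_a, Lambda_b):
        # For a Gaussian graphical model with precision matrix Lambda, the
        # conditional distribution of variable i given the rest is
        #   x_i | x_rest ~ N( -(1/Lambda_ii) * sum_{j != i} Lambda_ij x_j , 1/Lambda_ii ).
        # Compare these conditional models under the two dependency graphs.
        n = Lambda_a.shape[0]
        scores = np.zeros(n)
        for i in range(n):
            w_a = -Lambda_a[i] / Lambda_a[i, i]
            w_b = -Lambda_b[i] / Lambda_b[i, i]
            w_a[i] = w_b[i] = 0.0
            coef_change = np.linalg.norm(w_a - w_b)   # change in neighborhood weights
            var_change = abs(np.log(Lambda_a[i, i]) - np.log(Lambda_b[i, i]))  # change in conditional precision
            scores[i] = coef_change + var_change
        return scores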



FIG. 3 is a diagram showing components of a system in one embodiment, which performs anomaly localization for multivariate time series datasets and detects anomalies in sensor data. One or more hardware processors 302 such as a central processing unit (CPU), a graphics processing unit (GPU), and/or a Field Programmable Gate Array (FPGA), an application specific integrated circuit (ASIC), and/or another processor, may be coupled with a memory device 304, and constructs a first dependency graph based on a first data set by solving a first objective function constrained with a maximum number of non-zeros and formulated with a regularization term, which includes a quadratic penalty to control sparsity. In an embodiment, the quadratic penalty is determined as a function of the first data set in constructing the first dependency graph. One or more hardware processors 302 also constructs a second dependency graph based on the second data set by solving the first objective function constrained with the maximum number of non-zeros and formulated with the regularization term, which includes a quadratic penalty. In constructing the second dependency graph, the quadratic penalty is determined as a function of the first data set and the second data set.


In an embodiment, one or more hardware processors 302 receive via a communication or network interface 308, a first data set, which includes sensor data detected by a plurality of sensors coupled to equipments in operation during a first period of time, for example, shown at 312. During this first period of time, the equipments are considered to have functioned in normal manner (e.g., without particular errors or malfunctioning), and hence the first data set includes data associated with normal state of operations of the equipments. One or more hardware processors 302 also receive a second data set comprising sensor data detected by the plurality of sensors coupled to the equipments 312 in operation during a second period of time. The second period of time may be the current time, for example, a window of current time. One or more hardware processors 302 determines an anomaly score for each of the plurality of sensors based on comparing the first dependency graph and the second dependency graph.
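For example, the processing flow described above may be wired together as in the following sketch, in which fit_dependency_graph stands for solving the l0-constrained objective (for example, via Algorithms 1 and 2 above); the function names, arguments, and threshold handling are assumptions made for illustration, and the disclosure's data-dependent choice of the quadratic penalty for each period is abstracted here into the lam arguments:

    def detect_anomalies(reference_data, current_data, kappa, lam_ref, lam_cur,
                         threshold, fit_dependency_graph, score_fn=None):
        # reference_data / current_data: samples-by-sensors arrays collected during
        # the first (normal) and second (current) periods of operation.
        if score_fn is None:
            score_fn = neighborhood_score          # sketch given earlier
        Lambda_ref = fit_dependency_graph(reference_data, kappa, lam_ref)
        Lambda_cur = fit_dependency_graph(current_data, kappa, lam_cur)
        scores = score_fn(Lambda_ref, Lambda_cur)
        flagged = [i for i, s in enumerate(scores) if s >= threshold]
        return scores, flagged                     # flagged sensors may trigger a corrective action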


One or more hardware processors 302, in an embodiment, responsive to determining that an anomaly score associated with a sensor in the plurality of sensors meets a threshold value indicative of abnormal functioning, automatically triggers a corrective action to correct an equipment coupled to the sensor. For instance, one or more hardware processors 302 may send a signal to adjust one or more parameters of one or more equipments 312, via a network interface 308.


In an embodiment, one or more hardware processors 302 solve the first objective function implementing a projected gradient algorithm. The projected gradient algorithm embeds an approximate Newton method, wherein feasibility with respect to the sparsity constraint is handled via projection, and the symmetric positive-definiteness of iterates is ensured through a line-search procedure.


The memory device 304 may include random access memory (RAM), read-only memory (ROM) or another memory device, and may store data and/or processor instructions for implementing various functionalities associated with the methods and/or systems described herein. A processor 302 may execute computer instructions stored in the memory 304 or received from another computer device or medium. The memory device 304 may, for example, store instructions and/or data for functioning of the one or more hardware processors 302, and may include an operating system and other program of instructions and/or data. In one aspect, input data such as the first data set and the second data set, and program instructions which one or more of the hardware processors 302 may execute, can be stored on a storage device 306 or received via a network interface 308 from a remote device, and may be temporarily loaded into the memory device 304 to construct dependency graphs and detect anomaly. One or more hardware processors 302 may be coupled with interface devices such as a network interface 308 for communicating with remote systems, for example, via a network, and an input/output interface 310 for communicating with input and/or output devices such as a keyboard, mouse, display, and/or others.



FIG. 4 is a diagram illustrating a screen display of an analytics tool in an embodiment, which integrates an anomaly detection technique in one embodiment. Learning dependency models and detecting anomalies according to embodiments in the present disclosure can be integrated as an analytics function, for example, shown at 402, which can be selectable via a user interface 400 of an analytics tool. Selecting the function 402, for example, navigates a user through building of dependency graphs and detecting anomalies in a connected sensor network, for example, of equipments in operation, for example, in a manufacturing or another industrial process.



FIG. 5 illustrates a schematic of an example computer or processing system that may implement a system in one embodiment of the present disclosure. The computer system is only one example of a suitable processing system and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the methodology described herein. The processing system shown may be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the processing system shown in FIG. 5 may include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.


The computer system may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. The computer system may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.


The components of computer system may include, but are not limited to, one or more processors or processing units 12, a system memory 16, and a bus 14 that couples various system components including system memory 16 to processor 12. The processor 12 may include a module 30 that performs the methods described herein. The module 30 may be programmed into the integrated circuits of the processor 12, or loaded from memory 16, storage device 18, or network 24 or combinations thereof.


Bus 14 may represent one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.


Computer system may include a variety of computer system readable media. Such media may be any available media that is accessible by computer system, and it may include both volatile and non-volatile media, removable and non-removable media.


System memory 16 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) and/or cache memory or others. Computer system may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 18 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (e.g., a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 14 by one or more data media interfaces.


Computer system may also communicate with one or more external devices 26 such as a keyboard, a pointing device, a display 28, etc.; one or more devices that enable a user to interact with computer system; and/or any devices (e.g., network card, modem, etc.) that enable computer system to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 20.


Still yet, computer system can communicate with one or more networks 24 such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 22. As depicted, network adapter 22 communicates with the other components of computer system via bus 14. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.


The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.


The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.


Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.


Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.


Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.


These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.


The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements, if any, in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims
  • 1. A method, comprising: receiving a first data set comprising sensor data detected by a plurality of sensors coupled to equipments in operation during a first period of time;constructing a first dependency graph based on the first data set by solving a first objective function constrained with a maximum number of non-zeros and formulated with a regularization term comprising a quadratic penalty to control sparsity, wherein the quadratic penalty is determined as a function of the first data set in constructing the first dependency graph;receiving a second data set comprising sensor data detected by the plurality of sensors coupled to the equipments in operation during a second period of time;constructing a second dependency graph based on the second data set by solving the first objective function constrained with the maximum number of non-zeros and formulated with the regularization term comprising a quadratic penalty, wherein the quadratic penalty is determined as a function of the first data set and the second data set in constructing the second dependency graph; anddetermining an anomaly score for each of the plurality of sensors based on comparing the first dependency graph and the second dependency graph.
  • 2. The method of claim 1, further comprising: responsive to determining that an anomaly score associated with a sensor in the plurality of sensors meets a threshold value indicative of abnormal functioning, automatically performing a corrective action to correct equipment coupled to the sensor.
  • 3. The method of claim 1, wherein the first object function is solved by a projected gradient algorithm.
  • 4. The method of claim 3, wherein the projected gradient algorithm embeds an approximate Newton method, wherein feasibility with respect to sparsity constraint is handled via projection, and a symmetric positive-definiteness of iterates is ensured through a line-search procedure.
  • 5. The method of claim 1, wherein the determining of the anomaly score for each of the plurality of sensors comprises implementing a conditional expected Kullback-Liebler divergence between the first dependency graph and the second dependency graph.
  • 6. The method of claim 1, wherein the determining of the anomaly score for each of the plurality of sensors comprises implementing a stochastic nearest neighbors algorithm measuring dissimilarity between neighborhood of i-th variable representing a sensor between the first dependency graph and the second dependency graph.
  • 7. The method of claim 1, wherein the determining of the anomaly score for each of the plurality of sensors comprises implementing a sparsest subgraph approximation based on the first dependency graph and the second dependency graph.
  • 8. The method of claim 1, wherein the equipments comprise equipments of an industrial process.
  • 9. The method of claim 1, wherein the equipments comprise equipments of a manufacturing process manufacturing a product.
  • 10. A computer readable storage device storing a program of instructions executable by a machine to perform a method comprising: receiving a first data set comprising sensor data detected by a plurality of sensors coupled to equipments in operation during a first period of time;constructing a first dependency graph based on the first data set by solving a first objective function constrained with a maximum number of non-zeros and formulated with a regularization term comprising a quadratic penalty to control sparsity, wherein the quadratic penalty is determined as a function of the first data set in constructing the first dependency graph;receiving a second data set comprising sensor data detected by the plurality of sensors coupled to the equipments in operation during a second period of time;constructing a second dependency graph based on the second data set by solving the first objective function constrained with the maximum number of non-zeros and formulated with the regularization term comprising a quadratic penalty, wherein the quadratic penalty is determined as a function of the first data set and the second data set in constructing the second dependency graph; anddetermining an anomaly score for each of the plurality of sensors based on comparing the first dependency graph and the second dependency graph.
  • 11. The computer readable storage device of claim 10, further comprising: responsive to determining that an anomaly score associated with a sensor in the plurality of sensors meets a threshold value indicative of abnormal functioning, automatically performing a corrective action to correct equipment coupled to the sensor.
  • 12. The computer readable storage device of claim 10, wherein the first object function is solved by a projected gradient algorithm.
  • 13. The computer readable storage device of claim 12, wherein the projected gradient algorithm embeds an approximate Newton method, wherein feasibility with respect to sparsity constraint is handled via projection, and a symmetric positive-definiteness of iterates is ensured through a line-search procedure.
  • 14. The computer readable storage device of claim 10, wherein the determining of the anomaly score for each of the plurality of sensors comprises implementing a conditional expected Kullback-Liebler divergence between the first dependency graph and the second dependency graph.
  • 15. The computer readable storage device of claim 10, wherein the determining of the anomaly score for each of the plurality of sensors comprises implementing a stochastic nearest neighbors algorithm measuring dissimilarity between neighborhood of i-th variable representing a sensor between the first dependency graph and the second dependency graph.
  • 16. The computer readable storage device of claim 10, wherein the determining of the anomaly score for each of the plurality of sensors comprises implementing a sparsest subgraph approximation based on the first dependency graph and the second dependency graph.
  • 17. A system, comprising: a hardware processor coupled with a communication interface;a memory device coupled to the hardware processor;the hardware processor operable to receive via the communication interface, a first data set comprising sensor data detected by a plurality of sensors coupled to equipments in operation during a first period of time;the hardware processor operable to construct a first dependency graph based on the first data set by solving a first objective function constrained with a maximum number of non-zeros and formulated with a regularization term comprising a quadratic penalty to control sparsity, wherein the quadratic penalty is determined as a function of the first data set in constructing the first dependency graph;the hardware processor operable to receive a second data set comprising sensor data detected by the plurality of sensors coupled to the equipments in operation during a second period of time;the hardware processor operable to construct a second dependency graph based on the second data set by solving the first objective function constrained with the maximum number of non-zeros and formulated with the regularization term comprising a quadratic penalty, wherein the quadratic penalty is determined as a function of the first data set and the second data set in constructing the second dependency graph; andthe hardware processor operable to determine an anomaly score for each of the plurality of sensors based on comparing the first dependency graph and the second dependency graph.
  • 18. The system of claim 17, wherein the hardware processor is further operable to, responsive to determining that an anomaly score associated with a sensor in the plurality of sensors meets a threshold value indicative of abnormal functioning, automatically trigger a corrective action to correct an equipment coupled to the sensor.
  • 19. The system of claim 17, wherein the first object function is solved by a projected gradient algorithm.
  • 20. The system of claim 19, wherein the projected gradient algorithm embeds an approximate Newton method, wherein feasibility with respect to sparsity constraint is handled via projection, and a symmetric positive-definiteness of iterates is ensured through a line-search procedure.