The present invention relates to a regression apparatus and a regression method for learning a classifier and clustering the covariates (the features of each data sample), and to a computer-readable storage medium storing a program for realizing these.
Classification and the interpretability of the classification result are important for various applications. For example: Text classification: which groups of words are indicative of the sentiment? Microarray classification: which groups of genes are indicative of a certain disease?
In particular, we consider here the problem where the following information is available:
Data samples with class labels,
Prior knowledge about the interaction of the features (e.g. word similarity).
There are only a few prior works that address this problem. The first, called OSCAR (e.g., see NPL 1), performs joint linear regression and clustering using the following objective function (a sketch of this criterion is given after the two points below). The objective function is also a convex problem (like one of our proposed methods). However, it has mainly two problems/limitations:
Highly negatively correlated covariates are also put into the same cluster. This is not a problem for the predictive power (since the absolute values, and not the original values, are encouraged to be the same), however interpretability may suffer (see the remark to FIG. 2 in NPL 1).
Auxiliary information about the features (covariates) cannot be included.
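For reference, the OSCAR criterion of NPL 1 can be written, in its penalized form, roughly as follows (a sketch based on the cited paper, not part of the present formulation; the exact constants and constrained/penalized variant differ in NPL 1):

```latex
\min_{\beta \in \mathbb{R}^d} \; \| y - X\beta \|_2^2
  \;+\; \lambda_1 \sum_{j} |\beta_j|
  \;+\; \lambda_2 \sum_{j < k} \max\{ |\beta_j|, |\beta_k| \}
```

The pairwise max term encourages equal absolute weights, which is why strongly negatively correlated covariates can end up fused into one cluster.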
Another approach that allows auxiliary information about the covariates to be included is BOWL (e.g., see NPL 2). The basic components are illustrated in
It is a two-step approach that
1. Cluster covariates e.g. with k-means. Here they cluster words using word embeddings.
2. Train a classifier with the word clusters.
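As a rough illustration of this two-step scheme (a minimal sketch, not NPL 2's exact pipeline; `docs` as token lists, `labels`, and an `embeddings` word-to-vector mapping are assumed inputs):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def two_step_baseline(docs, labels, embeddings, n_clusters=100):
    words = sorted({w for doc in docs for w in doc if w in embeddings})
    E = np.array([embeddings[w] for w in words])
    # Step 1: cluster the covariates (words) using only their embeddings.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(E)
    word2cluster = dict(zip(words, km.labels_))
    # Step 2: represent each document by cluster counts and train a classifier.
    X = np.zeros((len(docs), n_clusters))
    for s, doc in enumerate(docs):
        for w in doc:
            if w in word2cluster:
                X[s, word2cluster[w]] += 1.0
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return km, clf
```

Note that the clustering in step 1 never sees the class labels, which is exactly the limitation discussed next.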
NPL 1: Howard D. Bondell and Brian J. Reich. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1): 115-123, 2008.
NPL 2: Weikang Rui, Kai Xing, and Yawei Jia. Bowl: Bag of word clusters text representation using word embeddings. In International Conference on Knowledge Science, Engineering and Management, pages 3-14. Springer, 2016.
However, the main problem is that the clustering (after the first step) is fixed and cannot adjust to the class labels. To see why this is a problem, consider the following example.
Let us assume that the word embeddings of “great” and “bad” are very similar (which indeed is often the case, since they can occur in very similar contexts). This would lead to the result that in the first step, “great” and “bad” are clustered together.
However, if the classification task is sentiment classification, then this will degrade performance (reason: the cluster (“great”, “bad”) will be a feature that cannot be used for distinguishing positive and negative comments). This example is also illustrated in
Previous methods either cannot include prior knowledge about the covariates, or they suffer from degraded solutions due to a sub-optimal two-step procedure (see the above example) and are prone to bad local minima due to a non-convex objective function.
One example of an object of the present invention is to provide a regression apparatus, a regression method, and a computer-readable storage medium according to which the above-described problems are eliminated and the quality of the resulting classification and clustering is improved.
Instead of separating the clustering and classification steps, we propose an apparatus, a method, and a computer-readable storage medium that jointly learn the parameters of a classifier and a clustering of the covariates. Furthermore, we propose a solution that is convex and is therefore guaranteed to find the global optimum independently of the initialization.
In order to achieve the foregoing object, a regression apparatus according to one aspect of the present invention is for optimizing a joint regression and clustering criteria, and includes:
a train classifier unit that trains a classifier with a weight vector or a weight matrix, using labeled training data, a similarity of features, a loss function characterizing regression quality, and a penalty encouraging the similarity of features, wherein the strength of the penalty is proportional to the similarity of features, and
an acquire clustering result unit that uses the trained classifier to identify feature clusters by grouping the features whose regression weights are equal.
In order to achieve the foregoing object, a regression method according to another aspect of the present invention is for optimizing a joint regression and clustering criteria, and includes:
(a) a step of training a classifier with a weight vector or a weight matrix, using labeled training data, a similarity of features, a loss function characterizing regression quality, and a penalty encouraging the similarity of features, wherein the strength of the penalty is proportional to the similarity of features,
(b) a step of, by using the trained classifier, identifying feature clusters by grouping the features whose regression weights are equal.
In order to achieve the foregoing object, a computer-readable recording medium according to still another aspect of the present invention has recorded therein a program for optimizing a joint regression and clustering criteria using a computer, and the program includes an instruction to cause the computer to execute:
(a) a step of training a classifier with a weight vector or a weight matrix, using labeled training data, a similarity of features, a loss function characterizing regression quality, and a penalty encouraging the similarity of features, wherein the strength of the penalty is proportional to the similarity of features,
(b) a step of, by using the trained classifier, identifying feature clusters by grouping the features whose regression weights are equal.
As described above, the present invention can improve the quality of the resulting classification and clustering.
The following describes a regression apparatus, a regression method, and a computer-readable recording medium according to an embodiment of the present invention with reference to
First, a configuration of a regression apparatus 10 according to the present embodiment will be described using
As shown in
As described above, the regression apparatus 10 learns the parameters of a classifier and a clustering of the covariates. As a result, the regression apparatus 10 can improve the quality of the resulting classification and clustering.
Here, the configuration and function of the regression apparatus 10 according to the present embodiment will also be described in more detail with reference to
Remark about our notation: We denote a matrix, e.g. B ∈ R^{d×d}, and a column vector, e.g. x ∈ R^d. Furthermore, the i-th row of B is denoted by B_{i,:} and is a row vector. The j-th column of B is denoted by B_{:,j} and is a column vector.
Our proposed procedure is outlined in the diagram shown in
As shown in
In the following, we propose two different formulations as an optimization problem. The general idea is to jointly cluster the features (covariates) and learn a classifier.
The first formulation provides explicit cluster assignment probabilities for each covariate. This can be advantageous, for example, when the meaning of covariates is ambiguous. However, the resulting problem is not convex. The second formulation is convex, and we can therefore find a global optimum.
In formulation 1, the loss function is the multi-logistic regression loss with regression weight vectors for each feature, and it includes a penalty. The penalty is set for each pair of features and consists of some distance measure between each pair of feature weight vectors times the similarity between the features.
Let x_s ∈ R^d denote the covariate vector of sample s, and let Z ∈ R^{d×d} be the covariate-cluster assignment matrix, where the i-th row corresponds to the i-th covariate and the j-th column corresponds to the j-th cluster.
For simplicity, we consider here logistic regression for classification. Let f be the logistic function with parameter vector β ∈ R^d and bias β_0. The class probability is defined as follows.
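A plausible form of this class probability, assuming the standard logistic model (this is a reconstruction from the surrounding definitions, not a quotation of the original equation):

```latex
P(y_s = 1 \mid x_s) = f(x_s) = \frac{1}{1 + \exp\big( -(\beta^\top x_s + \beta_0) \big)},
\qquad
P(y_s = -1 \mid x_s) = 1 - f(x_s).
```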
y_s ∈ {−1, 1} is the class label of sample s. Then our objective function to be optimized is given by the following equation.
The parameters are β, w ∈ R^d, β_0 ∈ R, and Z ∈ R^{d×d}, with fixed hyper-parameters λ > 0 and γ ≥ 0. λ is a hyper-parameter that controls the sparsity of the columns of Z, and therefore the number of clusters. To understand this, note that the term A (Math. 6) is a group lasso penalty on the columns of Z (for the group lasso see, e.g., reference [1]). The hyper-parameter γ controls the weight of the clustering objective.
Reference [1]: Trevor Hastie, Robert Tibshirani, and Martin Wainwright. Statistical learning with sparsity. CRC press, 2015.
The matrix Z denotes the clustering. To better understand the resulting clustering, note that in Equation (1) we can write as follows.
The vector c_s represents data sample s in terms of the clustering induced by Z. In particular, we have the following:
We say a cluster j exists if and only if the j-th column of Z is not the zero vector. Therefore, we see that the number of clusters is controlled by the hyper-parameter λ, since it controls the number of zero columns in Z. We also see that Z_{i,j} can be interpreted as the probability that covariate i is assigned to cluster j.
Furthermore, from Equation (7), we see that w(j) defines the logistic regression weight for cluster j. Also, note that due to the regularizer of w, we have that w(j) is zero, if cluster j does not exist.
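Since Equations (1) and (7) are not reproduced above, the following is only one plausible reading of the rewriting, under the assumption that the sample score is expressed through the cluster representation c_s:

```latex
c_s = Z^\top x_s, \qquad c_{s,j} = \sum_{i=1}^{d} Z_{i,j}\, x_{s,i},
\qquad
\beta^\top x_s \approx \sum_{j=1}^{d} w_j\, c_{s,j} + \; \text{(no bias term here)} \; = w^\top c_s ,
```

so that w_j (written w(j) above) plays the role of the logistic regression weight of cluster j, and c_{s,j} aggregates the covariates of sample s according to their assignment probabilities to cluster j.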
The effect of this proposed formulation is also illustrated in
In order to be able to determine λ using cross-validation, it is necessary that the forming of clusters helps to increase generalizability. One way to encourage the forming of clusters is to punish weights of smaller clusters more than the weights of larger clusters. One possibility is the following extension:
p_j corresponds to the expected number of covariates in cluster j plus one (the one is added to prevent division by zero in the objective function). The term B (Math. 15) penalizes high cluster weights in order to prevent over-fitting, whereas small clusters are penalized more. Note that C (Math. 16) is convex, since it is the sum of d functions of the form f(w_j, p_j) = w_j^2 / p_j, where f(w_j, p_j) is convex (see, e.g., reference [2], page 72).
Reference [2]: Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
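Based on the description above, the extension plausibly takes the following form (a hedged reconstruction, since the original expressions Math. 15 and Math. 16 are not reproduced here):

```latex
p_j = 1 + \sum_{i=1}^{d} Z_{i,j},
\qquad
\text{(B)} \;=\; \sum_{j=1}^{d} \frac{w_j^{2}}{p_j},
```

where each summand w_j^2 / p_j is a quadratic-over-linear function and hence jointly convex in (w_j, p_j) for p_j > 0.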
Let S be a similarity matrix between any two covariates i1 and i2. For example, for text classification, each covariate corresponds to a word. In that case, we can acquire a similarity matrix between words using word embeddings. Let e_i ∈ R^h denote the embedding of the i-th covariate. Then, we can define S as follows:
where u is a hyper-parameter.
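A minimal sketch of one such definition, assuming a Gaussian (RBF) kernel on the embeddings; the specific kernel is an assumption for illustration and not necessarily the exact definition referred to above:

```python
import numpy as np

def similarity_matrix(E, u=1.0):
    """E: (d, h) array whose i-th row is the embedding e_i of covariate i.

    Returns S with S[i1, i2] = exp(-||e_i1 - e_i2||^2 / u)  (assumed RBF form).
    """
    sq_dists = ((E[:, None, :] - E[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / u)
```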
To incorporate the prior knowledge given by S, we propose to add the following penalty:
where q ∈ {1, 2, ∞}. The penalty encourages similar covariates to share the same cluster assignment.
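A plausible concrete form of this penalty, with the row Z_{i,:} holding the cluster-assignment probabilities of covariate i (again a hedged reconstruction; the name Ω is introduced here only for reference):

```latex
\Omega(Z) \;=\; \sum_{i_1 < i_2} S_{i_1, i_2}\, \big\| Z_{i_1,:} - Z_{i_2,:} \big\|_q,
\qquad q \in \{1, 2, \infty\}.
```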
The final optimization problem is then
As pointed out before, the final optimization problem in Equation (19) is not convex. However, we can obtain a stationary point by alternating between the optimization of w (holding Z fixed) and Z (holding w fixed). Each step is a convex problem and can, for example, be solved with the Alternating Direction Method of Multipliers. The quality of the stationary point depends on the initialization. One possibility is to initialize Z with the clustering result from k-means.
In formulation 2, the loss function has a weight for each cluster and an additional penalty; the additional penalty penalizes large weights and is smaller for larger clusters.
Let B ∈ R^{k×d}, where k is the number of classes and d is the number of covariates. The l-th row B_{l,:} is the weight vector for class l. Furthermore, β_0 ∈ R^k contains the intercepts. We now assume the multi-class logistic regression classifier defined by the following equation.
We propose the following formulation for jointly classifying samples x, and clustering the covariates:
The last term is a group lasso penalty on the class weights for any pair of two features i1 and i2. The penalty is large for similar features, and therefore encourages that B_{:,i1} − B_{:,i2} is 0, which means that B_{:,i1} and B_{:,i2} are equal.
The final clustering of the features can be found by grouping two features i1 and i2 together if B_{:,i1} and B_{:,i2} are equal.
The advantage of this formulation is that the problem is convex, and we are therefore guaranteed to find a global minimum.
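A minimal sketch of how such a convex joint problem could be set up with an off-the-shelf solver; the loss scaling, the penalty weighting, and all names here are illustrative assumptions rather than the exact equations of the embodiment:

```python
import numpy as np
import cvxpy as cp

def fit_joint_classifier(X, y, S, lam=1.0):
    """X: (n, d) data, y: (n,) integer labels in {0,...,k-1}, S: (d, d) similarities."""
    n, d = X.shape
    k = int(y.max()) + 1
    Y = np.eye(k)[y]                  # one-hot encoding of the labels, shape (n, k)
    B = cp.Variable((k, d))           # one weight row per class (assumed layout)
    b0 = cp.Variable(k)               # intercepts

    scores = X @ B.T + np.ones((n, 1)) @ cp.reshape(b0, (1, k))   # (n, k) class scores
    # multi-class logistic (softmax) negative log-likelihood
    loss = cp.sum(cp.log_sum_exp(scores, axis=1)) - cp.sum(cp.multiply(Y, scores))

    # similarity-weighted fusion penalty on pairs of feature weight columns
    penalty = 0
    for i1 in range(d):
        for i2 in range(i1 + 1, d):
            if S[i1, i2] > 0:
                penalty = penalty + S[i1, i2] * cp.norm(B[:, i1] - B[:, i2], 2)

    problem = cp.Problem(cp.Minimize(loss / n + lam * penalty))
    problem.solve()
    return B.value, b0.value
```

Because the loss and the fusion penalty are both convex, any solver that handles the exponential cone returns the global minimum regardless of initialization, which mirrors the convexity argument above.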
Note that this penalty shares some similarity with convex clustering as in references [3] and [4]. However, one major difference is that we do not introduce latent vectors for each data point, and our method can jointly learn the classifier and the clustering.
Reference [3]: Eric C. Chi and Kenneth Lange. Splitting methods for convex clustering. Journal of Computational and Graphical Statistics, 24(4): 994-1013, 2015.
Reference [4]: Toby Dylan Hocking, Armand Joulin, Francis Bach, and Jean-Philippe Vert. Clusterpath: an algorithm for clustering using convex fusion penalties. In Proceedings of the 28th International Conference on Machine Learning, page 1, 2011.
In order to enable feature selection, we can combine our method with another appropriate penalty. In general, we can add an additional penalty term g(B) which is controlled by the hyper-parameter γ:
For example, by placing an ℓ2 group lasso penalty on the columns of B, we can achieve the selection of features. This means we set g as follows.
In more detail, this achieves that features that are irrelevant for the classification task are filtered out (i.e. the corresponding column in B is set to 0).
Another example is to place an additional ℓ1 or ℓ2 penalty on the entries of B, which can prevent over-fitting of the classifier. This means we set g as follows.
The exponent is q ∈ {1, 2}. For example, consider the situation where features i1 and i2 both occur only in training samples of class 1, and assume for simplicity that ∀j ≠ i1: S_{j,i1} = S_{i1,j} = 0, ∀j ≠ i2: S_{j,i2} = S_{i2,j} = 0, and S_{i1,i2} = 1. Then, without any additional penalty on the entries of B, the trained classifier will place an infinite weight on class 1 for these two features (i.e., B_{1,i1} = ∞ and B_{1,i2} = ∞).
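Plausible concrete forms of the two choices of g(B) described above (hedged reconstructions of the unreproduced expressions; the subscripted names are introduced here only to tell them apart):

```latex
g_{\text{select}}(B) = \sum_{i=1}^{d} \big\| B_{:,i} \big\|_2,
\qquad
g_{\text{shrink}}(B) = \sum_{l=1}^{k} \sum_{i=1}^{d} \big| B_{l,i} \big|^{q}, \quad q \in \{1, 2\}.
```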
Next, operations performed by the regression apparatus 10 according to an embodiment of the present invention will be described with reference to
First, as shown in
Next, the acquire clustering result unit 12, using the trained classifier, identifies feature clusters by grouping the features whose regression weights are equal (step S2). Next, the acquire clustering result unit 12 outputs the identified feature clusters (step S3).
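A minimal sketch of how the acquire clustering result unit 12 could group features whose learned weight columns coincide; the numerical tolerance `tol` is an implementation assumption, since solver output is only equal up to rounding:

```python
import numpy as np

def extract_feature_clusters(B, tol=1e-4):
    """Group features i1, i2 whose weight columns B[:, i1] and B[:, i2] are equal
    up to a small numerical tolerance; returns a list of index lists."""
    d = B.shape[1]
    clusters = []
    assigned = np.zeros(d, dtype=bool)
    for i in range(d):
        if assigned[i]:
            continue
        members = [i]
        assigned[i] = True
        for j in range(i + 1, d):
            if not assigned[j] and np.linalg.norm(B[:, i] - B[:, j]) <= tol:
                members.append(j)
                assigned[j] = True
        clusters.append(members)
    return clusters
```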
We note that it is straightforward to apply our idea to ordinary regression. Let y ∈ R denote the response variable. In order to jointly learn the regression parameter vector β ∈ R^d and the clustering, we can use the following convex optimization problem:
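The problem itself is not written out above; presumably it pairs a squared-error loss with the same similarity-weighted fusion penalty, e.g. (an assumed form, not a quotation of the original equation):

```latex
\min_{\beta \in \mathbb{R}^d, \; \beta_0 \in \mathbb{R}}
\;\; \sum_{s} \big( y_s - \beta^\top x_s - \beta_0 \big)^2
\;+\; \lambda \sum_{i_1 < i_2} S_{i_1, i_2}\, \big| \beta_{i_1} - \beta_{i_2} \big| .
```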
The classifier that was trained using Equation (19) or Equation (25) can then be used for the classification of a new data sample x*. Note that an ordinary logistic regression classifier will use each feature separately, and it is therefore difficult to identify the features that are important. For example, in text classification there can be thousands of features (words), whereas an appropriate clustering of the words reduces the feature space by a third or more. Therefore, inspecting and interpreting the clustered feature space can be much easier.
A program according to the present embodiment need only be a program for causing a computer to execute steps S1 to S3 shown in
The program according to the present exemplary embodiment may be executed by a computer system constructed using a plurality of computers. In this case, for example, each computer may function as a different one of the train classifier unit 11 and the acquire clustering result unit 12.
Also, a computer that realizes the regression apparatus 10 by executing the program according to the present embodiment will be described with reference to the drawings.
As shown in
The CPU 111 carries out various calculations by expanding programs (codes) according to the present embodiment, which are stored in the storage device 113, to the main memory 112 and executing them in a predetermined sequence. The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory). Also, the program according to the present embodiment is provided in a state of being stored in a computer-readable storage medium 120. Note that the program according to the present embodiment may be distributed over the Internet, to which the computer is connected via the communication interface 117.
Also, specific examples of the storage device 113 include a semiconductor storage device such as a flash memory, in addition to a hard disk drive. The input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard or a mouse. The display controller 115 is connected to a display device 119 and controls display on the display device 119.
The data reader/writer 116 mediates data transmission between the CPU 111 and the storage medium 120, reads out programs from the storage medium 120, and writes results of processing performed by the computer 110 in the storage medium 120. The communication interface 117 mediates data transmission between the CPU 111 and another computer.
Also, specific examples of the storage medium 120 include a general-purpose semiconductor storage device such as CF (Compact Flash (registered trademark)) and SD (Secure Digital), a magnetic storage medium such as a flexible disk, and an optical storage medium such as a CD-ROM (Compact Disk Read Only Memory).
The regression apparatus 10 according to the present exemplary embodiment can also be realized using items of hardware corresponding to various components, rather than using the computer having the program installed therein. Furthermore, a part of the regression apparatus 10 may be realized by the program, and the remaining part of the regression apparatus 10 may be realized by hardware.
The above-described embodiment can be partially or entirely expressed by, but is not limited to, the following Supplementary Notes 1 to 9.
A regression apparatus for optimizing a joint regression and clustering criteria, the regression apparatus comprising:
a train classifier unit that trains a classifier with a weight vector or a weight matrix, using labeled training data, a similarity of features, a loss function characterizing regression quality, and a penalty encouraging the similarity of features, wherein the strength of the penalty is proportional to the similarity of features, and
an acquire clustering result unit that uses the trained classifier to identify feature clusters by grouping the features whose regression weights are equal.
The regression apparatus according to Supplementary Note 1,
wherein the loss function is the multi-logistic regression loss with a regression weight vector for each feature and includes a penalty,
the penalty is set for each pair of features, and consists of some distance measure between each pair of feature weights times the similarity between the features.
The regression apparatus according to Supplementary Note 1,
wherein the loss function has a weight for each cluster and an additional penalty; the additional penalty penalizes large weights and is smaller for larger clusters.
A regression method for optimizing a joint regression and clustering criteria, the regression method comprising:
(a) a step of training a classifier with a weight vector or a weight matrix, using labeled training data, a similarity of features, a loss function characterizing regression quality, and a penalty encouraging the similarity of features, wherein the strength of the penalty is proportional to the similarity of features,
(b) a step of, by using the trained classifier, identifying feature clusters by grouping the features whose regression weights are equal.
The regression method according to Supplementary Note 4,
wherein the loss function is the multi-logistic regression loss with a regression weight vector for each feature and includes a penalty,
the penalty is set for each pair of features, and consists of some distance measure between each pair of feature weights times the similarity between the features.
The regression method according to Supplementary Note 4,
wherein the loss function has a weight for each cluster and an additional penalty; the additional penalty penalizes large weights and is smaller for larger clusters.
A computer-readable recording medium having recorded therein a program for optimizing a joint regression and clustering criteria using a computer, the program including an instruction to cause the computer to execute:
(a) a step of training a classifier with a weight vector or a weight matrix, using labeled training data, a similarity of features, a loss function characterizing regression quality, and a penalty encouraging the similarity of features, wherein the strength of the penalty is proportional to the similarity of features,
(b) a step of, by using the trained classifier, identifying feature clusters by grouping the features whose regression weights are equal.
The computer-readable recording medium according to Supplementary Note 7,
wherein the loss function is the multi-logistic regression loss with a regression weight vector for each feature and includes a penalty,
the penalty is set for each pair of features, and consists of some distance measure between each pair of feature weights times the similarity between the features.
The computer-readable recording medium according to Supplementary Note 7,
wherein the loss function has a weight for each cluster and an additional penalty; the additional penalty penalizes large weights and is smaller for larger clusters.
Risk classification is a ubiquitous problem, ranging from detecting cyberattacks to detecting diseases and suspicious emails. Past incidents, resulting in labeled data, can be used to train a classifier and allow (early) future risk detection. However, in order to acquire new insights and easily interpretable results, it is crucial to analyze which combinations of factors (covariates) are indicative of the risks. By jointly clustering the covariates (e.g. words in a text classification task), the resulting classifier is easier to interpret and can help the human expert to formulate hypotheses about the types of risks (clusters of the covariates).
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2017/035745 | 9/29/2017 | WO | 00