The following relates to the machine learning arts and to applications of same such as multi-label classification, image denoising, and so forth.
In multi-view learning, an object can be described by two or more different feature sets. Each feature set corresponds to a “view” of the object.
By way of illustrative example, the object may be an electronic document, which may be described by a first feature set (first view) comprising a bag-of-words vector representing textual content of the document, and by a second feature set (second view) representing the document structure (its organization into books, sections, chapters, or so forth), and perhaps by a third feature set (third view) representing the images contained in the (illustrative multi-media) document, and so forth.
As another illustrative example, an object may be a three-dimensional human face, and the first view of the face may be a feature set describing a photograph of the face obtained for a certain pose and lighting condition, a second view of the face may be a feature set describing a photograph of the face obtained for a different pose and/or different lighting condition, and so forth.
As another illustrative example, an object may be a digitally recorded audio clip, and a first view may represent the digital recording characteristics (such as bit rate, sampling rate, or so forth) while a second view may represent audio characteristics (such as frequency spectrum, dynamic range, or so forth), while a third view may represent metadata associated with the audio clip (such as a title or filename, creation date, and so forth).
As another illustrative example, an object may be the psychological profile of a person, and a first view may be results of a personality test, a second view may be results of a gambling addiction test, a third view may be results of a schizophrenia screening test, and so forth.
In a multi-view learning task, V views of a set of n objects can be represented in general fashion as a set of prediction matrices {X_k}_{k=1}^V for the V views, where in general the prediction matrix X_k has a dimension n corresponding to the n objects and another dimension d_k corresponding to the number of features characterizing the k-th view. Observations of the various views of objects obtained by experiments, tests, recording data available on certain objects, or by other means can similarly be represented in general as a set of incomplete observation matrices {Y_k}_{k=1}^V, where the observation matrix Y_k analogously has a dimension n corresponding to the n objects and another dimension d_k corresponding to the number of features characterizing the k-th view. The observation matrices Y_k are generally incomplete in that only a small subset of the n objects are actually observed, and/or not all views (or not all features of a given view) of an observed object may be available. By way of illustrative example, in the illustrative human face learning task, photographs of a given face may be available for only some poses, and/or for only some lighting conditions, and it may be desired to predict the features of one of the unavailable photographs of the face.
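By way of a concrete, purely illustrative sketch of this representation, the incomplete observation matrices may be stored as arrays accompanied by Boolean masks marking which entries are observed; the array names, sizes, and the mask-based encoding below are assumptions for illustration rather than part of the disclosed method.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5            # number of objects (columns)
dims = [3, 4]    # number of features per view: d_1, d_2 (illustrative sizes)

# One observation matrix Y_k per view, with a Boolean mask marking the observed entries.
Y_full = [rng.normal(size=(dk, n)) for dk in dims]
observed = [rng.random((dk, n)) < 0.4 for dk in dims]   # roughly 40% of entries observed

# Unobserved entries are unknown; here they are simply zeroed out for storage.
Y = [np.where(mask, Yk, 0.0) for Yk, mask in zip(Y_full, observed)]

for k, (Yk, mask) in enumerate(zip(Y, observed), start=1):
    print(f"view {k}: shape {Yk.shape}, observed {mask.sum()} of {mask.size} entries")
```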
Disclosed herein are improved multi-view learning techniques that provide various advantages described below.
In some embodiments disclosed herein, a non-transitory storage medium stores instructions readable and executable by a computer to perform a method for generating predictions by operations including: determining optimized prediction matrices {X_k}_{k=1}^V for multi-view learning of V views where V≧2 by optimizing an objective function over a set of parameters including at least the prediction matrices X_1, . . . , X_V and a matrix X_0 comprising a concatenation of the prediction matrices X_1, . . . , X_V, wherein the objective function comprises a sum including at least λ_0∥X_0∥_* + Σ_{k=1}^V λ_k∥X_k∥_* + Σ_{k=1}^V E_k(X_k + P_kX_0; Y_k), where {Y_k}_{k=1}^V are incomplete observation matrices for the V views, P_kX_0 denotes the sub-matrix of X_0 corresponding to prediction matrix X_k, E_k is a cumulative loss summed over the observations of the observation matrix Y_k, ∥•∥_* denotes the trace norm, and {λ_k}_{k=0}^V are regularization parameters; and generating a prediction for view k of an object based on the optimized prediction matrix X_k. In some embodiments the set of parameters further includes the regularization parameters {λ_k}_{k=0}^V. In some such embodiments the optimizing of the objective comprises tuning the regularization parameters {λ_k}_{k=0}^V using a grid optimization. In some embodiments the set of parameters further includes sparse matrices {S_k}_{k=1}^V and the objective function comprises a sum including at least λ_0∥X_0∥_* + Σ_{k=1}^V λ_k∥X_k∥_* + Σ_{k=1}^V α_k∥S_k∥_{1,1} + Σ_{k=1}^V E_k(X_k + S_k + P_kX_0; Y_k), where ∥•∥_{1,1} denotes the element-wise ℓ_1 penalty and {α_k}_{k=1}^V are regularization parameters. In some embodiments the determining comprises optimizing the objective using the Alternating Direction Method of Multipliers (ADMM). In some embodiments the determining comprises optimizing the objective using an Augmented Lagrangian method in which the Lagrangian function is augmented by a quadratic penalty.
In some embodiments, an apparatus comprises the non-transitory storage medium of the immediately preceding paragraph, and a computer configured to read and execute instructions stored on the non-transitory storage medium to generate a prediction for a view k of an object i.
In some embodiments disclosed herein, a method comprises determining optimized prediction matrices for multi-view learning of V views where V≧2, and generating a prediction of a view of an object based on the optimized prediction matrix for that view. The optimized prediction matrices are determined by optimizing an objective function over a set of parameters including at least the prediction matrices for the V views and for a set of objects and an aggregation of the prediction matrices for the V views, wherein the objective function comprises a sum including at least (1) a loss function for each view comparing the prediction matrix for the view with an incomplete observation matrix for the view and (2) a penalty function for each view computed based on the prediction matrix for the view and (3) a penalty function computed based on the aggregation of the prediction matrices for the V views. The determining and generating operations are suitably performed by an electronic data processing device. In some embodiments the objective is optimized using an Augmented Lagrangian method in which the Lagrangian function is augmented by a quadratic penalty.
In some embodiments disclosed herein, an electronic data processing device is configured to perform a method including determining optimized prediction matrices for V views of n objects where V≧2, and generating a prediction of a view of an object based on the optimized prediction matrix for that view. The optimized prediction matrices are determined by optimizing an objective function over a set of parameters including at least the prediction matrices for the V views and a concatenated matrix comprising a concatenation of the prediction matrices for the V views, wherein the objective function comprises a sum including at least (1) a loss function for each view comparing the prediction matrix for the view with an incomplete observation matrix for the view and (2) a trace norm of the prediction matrix for each view and (3) a trace norm of the concatenated matrix. In some embodiments the set of parameters further includes a sparse matrix for each view and the objective function comprises said sum further including (4) an element-wise norm of the sparse matrix for each view. In some embodiments the set of parameters further includes regularization parameters scaling the trace norms of the prediction matrices for the V views and a regularization parameter scaling the trace norm of the concatenated matrix. In some embodiments the objective is optimized using an Augmented Lagrangian method in which the Lagrangian function is augmented by a quadratic penalty.
The disclosed multi-view learning approaches comprise convex formulations of multi-view completion. The following notation is employed. Let {(y_{1i}, y_{2i})}_{i=1}^n be n observed co-occurring data pairs corresponding to two views of an object. The numbers of features in View 1 (y_{1i}) and View 2 (y_{2i}) are respectively denoted by d_1 and d_2. The main reason for defining two vectors of observations rather than a single concatenated vector in the product space is that the nature of the data in each view might be different. For example, in a multi-lingual text application, the views suitably represent the features associated with two distinct languages. Another example is image labeling, where the first view may suitably correspond to the image signature features and the second view may suitably encode the image labels. In the illustrative examples presented herein, two-view problems are addressed, but the extension to an arbitrary number of views is straightforward.
In a suitable formulation, the observations {y_{1i}}_{i=1}^n and {y_{2i}}_{i=1}^n are stacked respectively into the matrices Y_1 := {y_{1ij}} ∈ ℝ^{d_1×n} and Y_2 := {y_{2ij}} ∈ ℝ^{d_2×n}. In the illustrative notation, the rows of the matrix correspond to features and the columns correspond to objects; however, it is straightforward to employ transpositions of this formulation. In predictive tasks, the goal is to predict missing elements in the observation matrices Y_1 and Y_2. Multi-view learning leverages the likelihood that the dependencies between the views provide useful information for predicting the missing entries in one view given the observed entries in both views.
With reference to
The observation matrices {Y_k}_{k=1}^V are incomplete, which motivates employing multi-view learning to predict missing elements in the observation matrices. In some applications, such as denoising, it may additionally or alternatively be desired to use multi-view learning to generate denoised values for existing observations. In the observation matrices Y_k, each observed element is indexed by a pair (i_{kt}, j_{kt}) of (row, column) indices in the k-th view.
The matrices construction module 12 also generates and initializes a set of prediction matrices {X_k}_{k=1}^V for multi-view learning of V views, where again V≧2 for multi-view learning and V=2 in the illustrative examples. For V=2, predictions are represented by the latent (prediction) matrices X_1 := {x_{1ij}} ∈ ℝ^{d_1×n} and X_2 := {x_{2ij}} ∈ ℝ^{d_2×n}; more generally, for the k-th view the prediction matrix is X_k := {x_{kij}} ∈ ℝ^{d_k×n}. The elements of the prediction matrices {X_k}_{k=1}^V may be initialized to random values (optionally constrained by known ranges for the various represented view features) or may utilize a priori information where such information is available. For example, in a cross-validation approach, some available observations may be omitted from the observation matrices Y_k and instead used to improve the initialization of the prediction matrices X_k.
The matrices construction module 12 also constructs a concatenated matrix, denoted herein as X_0, which comprises a concatenation of the prediction matrices {X_k}_{k=1}^V for the V views. In the illustrative notation in which the rows of the prediction matrix correspond to features and the columns correspond to objects, the concatenation is suitably a down concatenation in which X_0 = [X_1; . . . ; X_V] ∈ ℝ^{(d_1+ . . . +d_V)×n}, so that the columns correspond to the objects. However, in other notational formalisms, a right concatenation may be suitable.
If the value y_{kij} is not observed, the goal is to predict y_{kij} such that the loss e_k(x_{kij}; y_{kij}) is minimized on average. The view-specific losses e_k: ℝ×ℝ→ℝ are assumed to be convex in their first argument. Typical examples providing such convexity include the squared loss e(x,y) = ½(x−y)^2 for continuous observations and the logistic loss e(x,y) = log(1+e^{−xy}) for binary observations, y ∈ {−1,+1}. The cumulative training loss associated with view k is defined as E_k(X_k, Y_k) = Σ_{(i,j)∈Ω_k} e_k(x_{kij}, y_{kij}), where Ω_k denotes the set of observed (row, column) indices of the k-th view.
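For illustration, the view-specific losses and the cumulative loss over the observed entries Ω_k can be sketched as follows; this is a minimal sketch in which the function names and the Boolean-mask encoding of Ω_k are illustrative assumptions, not the patent's notation.

```python
import numpy as np

def squared_loss(x, y):
    """Squared loss 1/2*(x - y)^2 for continuous observations."""
    return 0.5 * (x - y) ** 2

def logistic_loss(x, y):
    """Logistic loss log(1 + exp(-x*y)) for binary observations y in {-1, +1}."""
    # np.logaddexp(0, -x*y) evaluates log(1 + exp(-x*y)) in a numerically stable way.
    return np.logaddexp(0.0, -x * y)

def cumulative_loss(X, Y, observed, loss=squared_loss):
    """E_k(X_k, Y_k): element-wise loss summed over the observed entries Omega_k."""
    return loss(X[observed], Y[observed]).sum()

# Tiny usage example for a single 3x4 view.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))            # predictions
Y = rng.normal(size=(3, 4))            # observations
observed = rng.random((3, 4)) < 0.5    # Omega_k encoded as a Boolean mask
print(cumulative_loss(X, Y, observed))
```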
Various convex multi-view matrix completion problems are considered herein. These various approaches are labeled I00, I0R, J00, J0R, JL0, or JLR. The sequence of three characters composing these labels has the following meaning. The first character (I or J) indicates whether the method treats the views independently or jointly. The second character (L or 0) indicates whether the method accounts for view-specific variations; "L" in this labeling scheme denotes low-rank, as nuclear norm penalties are considered. The third character (R or 0) indicates whether the method is robust, where robustness is facilitated by including an ℓ_1-penalized additional view-specific matrix. The various convex multi-view matrix completion problems considered herein are described below.
The first approach, denoted I00, is a baseline approach that treats the views as being independent, considering a separate nuclear norm penalty for each view. This yields a set of V independent objectives:

min_{X_k} λ_k∥X_k∥_* + E_k(X_k; Y_k),  k = 1, . . . , V   (1)

where ∥•∥_* denotes the nuclear norm (also known as the trace norm).
A second baseline method (see, e.g., Goldberg et al., "Transduction with matrix completion: Three birds with one stone", in NIPS (2010)), denoted J00, considers a nuclear norm penalty on the concatenated matrix X_0 = [X_1; X_2] ∈ ℝ^{(d_1+d_2)×n}. This approach yields the objective:

min_{X_0} λ_0∥X_0∥_* + Σ_{k=1}^2 E_k(P_kX_0; Y_k)   (2)
where P_k is a sub-matrix selection operator, so that P_1X_0 is the d_1×n matrix composed of the first d_1 rows of X_0 and P_2X_0 is the d_2×n matrix composed of the last d_2 rows of X_0. In Expression (2) a single regularization parameter λ_0 is employed, but in some cases it might be beneficial to weight the loss associated with each view differently. For example, in an image labeling application, it might be more important to predict the labels correctly than the features.
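A minimal sketch of the sub-matrix selection operator P_k as row slicing of the concatenated matrix is given below; the helper name and the list of per-view dimensions are illustrative assumptions.

```python
import numpy as np

def select_view(k, X0, dims):
    """P_k X0: select the rows of X0 corresponding to view k (1-based), dims = [d_1, ..., d_V]."""
    start = sum(dims[:k - 1])
    return X0[start:start + dims[k - 1], :]

# Example: X0 is the down-concatenation of a 3xN view-1 block and a 2xN view-2 block.
dims = [3, 2]
X0 = np.arange(5 * 4, dtype=float).reshape(5, 4)
print(select_view(1, X0, dims).shape)   # (3, 4): the first d_1 rows
print(select_view(2, X0, dims).shape)   # (2, 4): the last d_2 rows
```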
Compared to the I00 method, the nuclear norm penalty of the objective given in Expression (2) applies to X0 such that the matrix to complete is the concatenated matrix Y0=[Y1; Y2] (or, more generally, Y0=[Y1; . . . ; YV]). This enables information sharing across views, while preserving a view-specific loss to handle different data types.
The baseline approaches I00 and J00 are described for comparative purposes. Disclosed herein are improved multi-view learning approaches comprising convex formulations of multi-view completion, in which the objective includes both view-specific penalties and a cross-view penalty. In general terms, the objective is a function of a set of parameters including at least the prediction matrices X_1, . . . , X_V and the concatenated matrix X_0, and the objective function comprises a sum including at least: (1) a loss function for each view comparing the prediction matrix for the view with an incomplete observation matrix for the view; (2) a penalty function for each view computed based on the prediction matrix for the view; and (3) a penalty function computed based on the aggregation of the prediction matrices for the V views. The sum component (2) may be constructed as a penalty function for each view k = 1, . . . , V comprising a trace norm of the prediction matrix X_k for the view scaled by a regularization parameter. The sum component (3) may be constructed as a trace norm of the concatenated matrix X_0 scaled by a regularization parameter. In some such embodiments, the objective function comprises a sum including at least:

λ_0∥X_0∥_* + Σ_{k=1}^V λ_k∥X_k∥_* + Σ_{k=1}^V E_k(X_k + P_kX_0; Y_k)   (3)
In one such formulation, denoted herein as the JL0 method, each view is decomposed as the sum of a low-rank view-specific matrix X_k (the prediction matrix), as in the I00 method, and a sub-matrix P_kX_0 of the shared (i.e., concatenated) matrix X_0 of size (d_1+d_2)×n for the case of V=2, as in the J00 method. The resulting objective for the illustrative case of two views (V=2) is:

min_{X_0,X_1,X_2} λ_0∥X_0∥_* + Σ_{k=1}^2 [λ_k∥X_k∥_* + E_k(X_k + P_kX_0; Y_k)]   (4)
The objective of Expression (4), or more generally the objective over the parameter set {X_0, . . . , X_V} comprising the sum given in Expression (3), is convex jointly in X_0, X_1 and X_2 (or more generally is convex jointly in X_0, X_1, . . . , X_V). In these expressions, E_k is a cumulative loss summed over the observations of the observation matrix Y_k, ∥•∥_* denotes the trace norm, and {λ_k}_{k=0}^V are regularization parameters. As for many nuclear norm penalized problems, for sufficiently large regularization parameters, the matrices X_1, . . . , X_V and X_0 are of low rank at the minimum of the objective.
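For concreteness, a minimal sketch of evaluating the JL0 objective of Expression (4) with a squared loss is given below; the helper names, the mask-based encoding of the observed entries, and the example sizes are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def nuclear_norm(A):
    """Trace (nuclear) norm: the sum of the singular values of A."""
    return np.linalg.svd(A, compute_uv=False).sum()

def jl0_objective(X0, X, Y, observed, lam0, lam):
    """lam0*||X0||_* + sum_k [ lam_k*||X_k||_* + sum_{(i,j) in Omega_k} 1/2*(x_kij + (P_k X0)_ij - y_kij)^2 ]."""
    obj, start = lam0 * nuclear_norm(X0), 0
    for Xk, Yk, mask, lamk in zip(X, Y, observed, lam):
        dk = Xk.shape[0]
        PkX0 = X0[start:start + dk, :]        # P_k X0: rows of X0 belonging to view k
        start += dk
        resid = (Xk + PkX0 - Yk)[mask]        # residuals on the observed entries only
        obj += lamk * nuclear_norm(Xk) + 0.5 * (resid ** 2).sum()
    return obj

# Tiny example with two views of 4 objects.
rng = np.random.default_rng(0)
dims, n = [3, 2], 4
X = [rng.normal(size=(dk, n)) for dk in dims]
X0 = rng.normal(size=(sum(dims), n))
Y = [rng.normal(size=(dk, n)) for dk in dims]
observed = [rng.random((dk, n)) < 0.6 for dk in dims]
print(jl0_objective(X0, X, Y, observed, lam0=1.0, lam=[0.5, 0.5]))
```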
A variant formulation denoted herein as the JLR method improves on the JL0 method by enhancing robustness through inclusion of an ℓ_1-penalized additional view-specific matrix. Robustness can be integrated into the JL0 formulation so as to reach the JLR method by adding a sparse matrix S_k ∈ ℝ^{d_k×n} to each latent view representation, leading to the prediction of Y_k by P_kX_0 + X_k + S_k. The objective function for JLR in the case of two views (V=2) is defined as follows:

min_{X_0,X_1,X_2,S_1,S_2} λ_0∥X_0∥_* + Σ_{k=1}^2 [λ_k∥X_k∥_* + α_k∥S_k∥_{1,1} + E_k(X_k + S_k + P_kX_0; Y_k)]   (5)
where ∥•∥_{1,1} is the element-wise ℓ_1 penalty. The level of sparsity is controlled by view-specific regularization parameters α_1 and α_2. Extreme observed values y_{kij} will tend to be partly explained by the additional sparse variables s_{kij}. Again, the objective is jointly convex in all its arguments. While Expression (5) is appropriate for V=2, the objective for the more general case of V≧2 can again be written as a function of a set of parameters, but here with the set of parameters including the concatenated matrix X_0, the prediction matrices X_1, . . . , X_V, and also the sparse matrices {S_k}_{k=1}^V, and the objective function comprises a sum including at least:

λ_0∥X_0∥_* + Σ_{k=1}^V λ_k∥X_k∥_* + Σ_{k=1}^V α_k∥S_k∥_{1,1} + Σ_{k=1}^V E_k(X_k + S_k + P_kX_0; Y_k)   (6)
where again ∥•∥_{1,1} denotes the element-wise ℓ_1 penalty and {α_k}_{k=1}^V are regularization parameters.
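Analogously to the JL0 sketch earlier, the JLR sum can be evaluated with the sparse matrices entering both the element-wise ℓ_1 penalty and the prediction, as in the following sketch; again the names and sizes are illustrative assumptions and this is not asserted to be the patent's implementation.

```python
import numpy as np

def nuclear_norm(A):
    return np.linalg.svd(A, compute_uv=False).sum()

def jlr_objective(X0, X, S, Y, observed, lam0, lam, alpha):
    """lam0*||X0||_* + sum_k [ lam_k*||X_k||_* + alpha_k*||S_k||_{1,1}
    + sum_{(i,j) in Omega_k} 1/2*(x_kij + s_kij + (P_k X0)_ij - y_kij)^2 ]."""
    obj, start = lam0 * nuclear_norm(X0), 0
    for Xk, Sk, Yk, mask, lamk, ak in zip(X, S, Y, observed, lam, alpha):
        dk = Xk.shape[0]
        PkX0 = X0[start:start + dk, :]
        start += dk
        resid = (Xk + Sk + PkX0 - Yk)[mask]
        obj += lamk * nuclear_norm(Xk) + ak * np.abs(Sk).sum() + 0.5 * (resid ** 2).sum()
    return obj

# Tiny example with two views of 4 objects and zero-initialized sparse matrices.
rng = np.random.default_rng(0)
dims, n = [3, 2], 4
X = [rng.normal(size=(dk, n)) for dk in dims]
S = [np.zeros((dk, n)) for dk in dims]
X0 = rng.normal(size=(sum(dims), n))
Y = [rng.normal(size=(dk, n)) for dk in dims]
observed = [rng.random((dk, n)) < 0.6 for dk in dims]
print(jlr_objective(X0, X, S, Y, observed, 1.0, [0.5, 0.5], [0.1, 0.1]))
```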
With reference again to
ADMM is a variation of the Augmented Lagrangian method in which the Lagrangian function is augmented by a quadratic penalty term to increase robustness. See Bertsekas, "Constrained optimization and lagrange multiplier methods", Computer Science and Applied Mathematics, Boston: Academic Press (1982). ADMM ensures that the augmented objective remains separable if the original objective was separable, by considering a sequence of optimizations with respect to an adequate split of the variables. See Boyd, supra.
In a suitable approach, an auxiliary variable Z_k is introduced such that it is constrained to be equal to X_k + S_k + P_kX_0. The augmented Lagrangian of this problem can be written as:

L_μ(X_0, {X_k}, {S_k}, {Z_k}, {B_k}) = λ_0∥X_0∥_* + Σ_{k=1}^V [λ_k∥X_k∥_* + α_k∥S_k∥_{1,1} + E_k(Z_k; Y_k) + ⟨B_k, X_k + S_k + P_kX_0 − Z_k⟩ + (μ/2)∥X_k + S_k + P_kX_0 − Z_k∥_{2,2}^2]   (7)

where ∥•∥_{2,2} is the element-wise ℓ_2 norm (i.e., the Frobenius norm). Parameters B_k and μ>0 are respectively the Lagrange multipliers and the quadratic penalty parameter.
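In ADMM schemes of this kind, the trace norm and element-wise ℓ_1 terms are typically handled through their proximal operators, namely singular value thresholding and entry-wise soft thresholding. The following sketch shows these two standard building blocks; it is an illustration of the general technique and is not asserted to reproduce the patent's exact update rules.

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_* evaluated at A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(A, tau):
    """Entry-wise soft thresholding: the proximal operator of tau*||.||_{1,1} evaluated at A."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

# Example: shrinking a random matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 5))
print(np.linalg.matrix_rank(svt(A, 1.0)))    # thresholding typically lowers the rank
print((soft_threshold(A, 1.0) != 0).mean())  # fraction of entries surviving the shrinkage
```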
With continuing reference to
Depending on the type of loss, the optimization of the augmented Lagrangian with respect to Z_k is different. In the following, specialization of the ADMM algorithm for the squared and logistic loss functions, respectively, is described by way of illustrative examples; these can be readily generalized to any convex differentiable loss.
In the squared loss case, the minimization of the augmented Lagrangian with respect to Zk has a closed-form solution:
where 1_k is a matrix of ones and the projection operator selects the entries in Ω_k and sets the other entries to 0.
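A sketch consistent with this description is given below, in which the observed entries blend the observation with the current estimate while the unobserved entries are driven only by the quadratic penalty; the exact expression and sign conventions are assumptions reconstructed from the surrounding text rather than the patent's stated formula.

```python
import numpy as np

def update_Z_squared(Y, observed, C, B, mu):
    """Minimize sum_{Omega_k} 1/2*(Z - Y)^2 + <B, C - Z> + mu/2*||C - Z||_F^2 over Z,
    where C stands for X_k + S_k + P_k X0.  Element-wise closed form (assumed conventions):
      on observed entries:  Z = (Y + B + mu*C) / (1 + mu)
      elsewhere:            Z = C + B / mu
    which can be written using the matrix of ones 1_k and the projection onto Omega_k."""
    numer = np.where(observed, Y, 0.0) + B + mu * C
    denom = observed.astype(float) + mu        # projection of 1_k onto Omega_k, plus mu
    return numer / denom

# Tiny example.
rng = np.random.default_rng(0)
shape = (3, 4)
Y, C, B = rng.normal(size=shape), rng.normal(size=shape), rng.normal(size=shape)
observed = rng.random(shape) < 0.5
print(update_Z_squared(Y, observed, C, B, mu=1.0).shape)
```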
In the case of the logistic loss, the minimization of the augmented Lagrangian of Expression (7) with respect to Z_k has no analytical solution. However, the logistic loss can be upper-bounded by a quadratic majorization around a fixed point, with curvature τ, where τ is the Lipschitz constant of the gradient of the logistic function. This leads to a closed-form solution for the resulting surrogate problem.
Parameter 1/τ plays the role of a step size. See Toh et al., supra. In practice, the step size can be increased as long as the majorization bound inequality holds; a line search is then used to find a smaller value for τ satisfying the inequality.
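One workable form of this majorization step, reconstructed here under the same assumed sign conventions as the squared-loss sketch above (and therefore an illustration rather than the patent's stated solution), replaces the logistic loss around the current iterate by a quadratic with curvature τ, yielding a closed-form update; a backtracking search on τ keeps the majorization valid.

```python
import numpy as np

def logistic_grad(Z, Y, observed):
    """Gradient of sum_{Omega_k} log(1 + exp(-Z*Y)) with respect to Z (zero off Omega_k)."""
    G = -Y / (1.0 + np.exp(Y * Z))
    return np.where(observed, G, 0.0)

def update_Z_logistic(Z_prev, Y, observed, C, B, mu, tau):
    """One majorization step: the logistic loss is replaced around Z_prev by a quadratic with
    curvature tau, and the resulting surrogate of the augmented Lagrangian is minimized.
    C stands for X_k + S_k + P_k X0; 1/tau plays the role of a step size."""
    g = logistic_grad(Z_prev, Y, observed)
    return (tau * Z_prev - g + B + mu * C) / (tau + mu)

# Tiny example with binary observations in {-1, +1}.
rng = np.random.default_rng(0)
shape = (3, 4)
Y = np.sign(rng.normal(size=shape))
Z_prev, C, B = rng.normal(size=shape), rng.normal(size=shape), rng.normal(size=shape)
observed = rng.random(shape) < 0.5
print(update_Z_logistic(Z_prev, Y, observed, C, B, mu=1.0, tau=0.25).shape)
```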
With continuing reference to
With reference again to
It will further be appreciated that the disclosed multi-view learning techniques may be embodied as a non-transitory storage medium storing instructions readable and executable by the computer 14 or other electronic data processing device to perform the disclosed multi-view learning techniques. The non-transitory storage medium may, for example, comprise one or more of: a hard drive or other magnetic storage medium; a flash drive or other electronic storage medium; an optical disk or other optical storage medium; various combinations thereof; or so forth.
In the following, experimental tests are described, which were performed to assess performance of the JL0 method (using the objective of Expression (4)) and the JLR method (using the objective of Expression (5)). For comparison, the baseline J00 method using the objective of Expression (2) with V=2 was also tested. For further comparison, I0R and J0R methods were also tested. The I0R method corresponds to the baseline I00 approach using the set of V independent objectives given in Expression (1) with V=2, with the sparse matrices S1 and S2 added to facilitate robustness analogously to the described JLR approach. The J0R method corresponds to the baseline J00 approach using the objective of Expression (2) with V=2, with the sparse matrices S1 and S2 added to facilitate robustness analogously to the described JLR approach. The experimental tests were performed on synthetic data for an experimental matrix completion task, and on real-world data for experimental image denoising and multi-label classification tasks. In the following, the parameter tuning and evaluation criteria used in the experiments are described, followed by discussion of the results.
Parameter tuning (that is, optimizing the regularization parameters λ_0, λ_1, and λ_2 of the objective function) was performed separately from the ADMM algorithm, using a grid approach. That is, the parameter tuning component of optimizing the objective with respect to the set of parameters including the regularization parameters {λ_k}_{k=0}^{V=2} was performed using a grid optimization. In the experiments, five-fold cross-validation on a grid was employed to obtain the optimum values for the regularization parameters. However, to simplify the optimization, a slightly different formulation of the models was considered. For example, for the JLR method the objective was optimized with respect to λ and c, where 0<c<1, instead of with respect to λ_0, λ_1, and λ_2; the resulting objective function expresses the regularization weights of Expression (5) in terms of λ and c.
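The grid-style tuning with five-fold cross-validation can be sketched generically as follows; the fold construction, the (λ, c) grid, and the solver interface are placeholders assumed for illustration (a dummy scorer stands in for the ADMM solver), not the experimental code.

```python
import itertools
import numpy as np

def grid_search_cv(Y, observed, fit_and_score, lam_grid, c_grid, n_folds=5, seed=0):
    """Pick (lam, c) minimizing the average held-out error over n_folds splits of the
    observed entries.  fit_and_score(Y, train_mask, val_mask, lam, c) is a placeholder
    that would run the solver on train_mask and return the prediction error on val_mask."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(observed)
    rng.shuffle(idx)
    best, best_err = None, np.inf
    for lam, c in itertools.product(lam_grid, c_grid):
        errs = []
        for fold in np.array_split(idx, n_folds):
            val_mask = np.zeros(observed.size, dtype=bool)
            val_mask[fold] = True
            val_mask = val_mask.reshape(observed.shape)
            train_mask = observed & ~val_mask
            errs.append(fit_and_score(Y, train_mask, val_mask, lam, c))
        if np.mean(errs) < best_err:
            best, best_err = (lam, c), float(np.mean(errs))
    return best, best_err

# Dummy stand-in for the solver: "predict" the mean of the training entries.
def dummy_fit_and_score(Y, train_mask, val_mask, lam, c):
    return float(((Y[val_mask] - Y[train_mask].mean()) ** 2).mean())

rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 8))
observed = rng.random(Y.shape) < 0.7
print(grid_search_cv(Y, observed, dummy_fit_and_score, [0.1, 1.0], [0.25, 0.5, 0.75]))
```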
To evaluate the performance on matrix completion, normalized prediction test error (called “test error” herein) was used. One part of the data was used as training data, and the optimized prediction matrices Xk generated by this training were tested on the remaining part of the data and the prediction error reported. For multi-label classification performance, the transductive label error (i.e., the percentage of incorrectly predicted labels) and the relative feature reconstruction error were used. See Goldberg et al., “Transduction with matrix completion: Three birds with one stone”, in NIPS (2010).
Results comparing the prediction capabilities of the JLR, JL0, J0R, J00, and I00 methods on synthetic datasets are as follows. Randomly generated square matrices of size n were used in these experiments. Matrices X_0, X_1, and X_2 were generated with different ranks (r_0, r_1, and r_2) as a product UV^T, where U and V were generated randomly with Gaussian distribution and unit variance. Noise matrices E_1 and E_2 were likewise generated randomly with Gaussian distribution and unit variance. Sparse matrices S_1 and S_2 were generated by choosing a sparse support set of size k = 0.1·n^2 uniformly at random, with non-zero entries generated uniformly in the range [−a, a]. For each setting, 10 trials were repeated and the mean and standard deviation of the test error reported.
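The synthetic data construction just described can be sketched as follows; the sizes follow the Table 2 setting, while the ranks, noise scale, and outlier range are illustrative assumptions rather than the exact experimental constants.

```python
import numpy as np

def low_rank(d, n, r, rng):
    """Rank-r matrix generated as a product U @ V.T with Gaussian factors."""
    return rng.normal(size=(d, r)) @ rng.normal(size=(n, r)).T

def sparse_outliers(d, n, frac, a, rng):
    """Sparse matrix whose support is chosen at random, with non-zeros uniform in [-a, a]."""
    S = np.zeros((d, n))
    k = int(frac * d * n)
    S[rng.integers(0, d, size=k), rng.integers(0, n, size=k)] = rng.uniform(-a, a, size=k)
    return S

rng = np.random.default_rng(0)
n, d1, d2 = 200, 100, 100
r0, r1, r2 = 5, 3, 3                      # ranks of the shared and view-specific parts (assumed)

X0 = low_rank(d1 + d2, n, r0, rng)        # shared low-rank structure across both views
X1, X2 = low_rank(d1, n, r1, rng), low_rank(d2, n, r2, rng)   # view-specific low-rank parts
S1, S2 = sparse_outliers(d1, n, 0.1, 5.0, rng), sparse_outliers(d2, n, 0.1, 5.0, rng)
E1, E2 = rng.normal(size=(d1, n)), rng.normal(size=(d2, n))   # dense Gaussian noise

Y1 = X0[:d1] + X1 + S1 + E1               # view-1 observations before masking
Y2 = X0[d1:] + X2 + S2 + E2               # view-2 observations before masking
print(Y1.shape, Y2.shape)
```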
Tables 1 and 2 show the comparison of the JLR, JL0, J0R, J00, and I00 methods for two different settings. Table 1 shows test error performance for the synthetic datasets where n=2000 and d_1=d_2=1000. Table 2 shows test error performance for the synthetic datasets where n=200 and d_1=d_2=100. In Tables 1 and 2, each cell shows the mean and standard deviation of the test error over 10 simulations. The test prediction performance of JLR is seen to be superior compared to the other approaches. It is also seen that the training loss is lower in the JLR approach. Note that the stopping criterion for these tests was a fixed number of iterations, and the times in the CPU time column do not include the cross-validation time.
Image denoising test results are next presented. The performance of JLR for image denoising was evaluated and compared against J0R and I0R. The image denoising is based on the Extended Yale Face Database B available at cvc.yale.edu/projects/yalefacesB.html. This database contains face images from 28 individuals under 9 different poses and 64 different lighting conditions. Two different lighting conditions (+000E+00 and +000E+20) were defined as two views of a face. The intuition is that each view has low-rank latent structure (due to the view-specific lighting condition), while each image shares the same global structure (the same person with the same pose). Each image was down-sampled to 100×100, so that in the notation used herein the dimensions of the dataset are d_1=10000 and d_2=10000. Noise in the amount of 5% was added to randomly selected pixels of view 1 and view 2, and missing entries were also introduced in both views. The goal was to reconstruct the image by filling in missing entries as well as removing the noise.
Results are presented in Table 3, which tabulates the average test error (squared error) over five random train-test splits for the Yale Face Dataset processed as just described. The standard deviation was less than 10^−3 in all cases. It was found that the J0R method was successful in removing the noise, but the quality of the reconstruction was visually inferior to that of the JLR method, which effectively captured the specific low-rank variations of each image. The visual intuition was confirmed by the fact that the best performances were obtained by JLR and JL0. Quantitatively, JLR only slightly outperforms JL0, but there was substantial visual qualitative improvement.
The multi-label classification experiments are next presented. These experiments evaluated the applicability of the JLR method with a logistic loss on the second view in the context of a multi-label prediction task, and compared it with the approach of Goldberg et al., "Transduction with matrix completion: Three birds with one stone", in NIPS (2010). In this task, View 1 represents the feature matrix and View 2 the label matrix. In many practical situations, the feature matrix is partially observed. One way to address this is to first impute the missing data in the feature matrix and then proceed with the multi-label classification task. Another way is to treat the feature matrix and the label matrix as two views of the same object, and to treat the labels to be predicted as missing entries. For comparison, the J00 method using the approach of Goldberg et al., supra, and using ADMM were tested, along with the J0R, JL0, and JLR methods. Two different datasets were considered, both of which were also used in Goldberg et al., supra, namely the Yeast Micro-array data and the Music Emotion data available at mulan.sourceforge.net/datasets.html.
The Yeast dataset contains n=2417 samples in a d_1=103-dimensional space. Each sample can belong to one of d_2=14 gene functional classes, and the goal was to classify each gene based on its function. In the experiments, the percentage of observed values was varied between 40%, 60%, and 80% (denoted herein as π=40%, π=60%, and π=80%). Parameters were tuned by cross-validation optimizing the label prediction error. For each π, 10 repetitions were performed, and the mean and standard deviation (in parentheses) are reported in Table 4, where the first data row labeled "J00 (1)" is from Goldberg et al., supra, and the second data row labeled "J00 (2)" solves the same problem using the ADMM algorithm disclosed herein.
The left columns of Table 4 show the label prediction error on the Yeast dataset. It is seen that J00 using Goldberg et al. and the ADMM algorithm produce very similar results. A slightly lower label prediction error is obtained for J0R and JLR. The right columns in Table 4 show the relative feature reconstruction error. JLR outperforms the other algorithms in relative feature reconstruction error. This is believed to be due to JLR being a richer model that is better able to capture the underlying structure of the data.
The Music dataset consists of n=593 songs in d_1=72 dimensions (8 rhythmic and 64 timbre-based features), each one labeled with one or more of d_2=6 emotions (amazed-surprised, happy-pleased, relaxing-calm, quiet-still, sad-lonely, and angry-fearful). Features were automatically extracted from a 30-second audio clip. Table 5 presents the results for the same methods as were tested against the Yeast dataset, using the same table format and notation as Table 4.
For the label error percentage, and similar to the results for the Yeast dataset, the results for the Music dataset show that J00 performed using the method of Goldberg et al., supra, and J00 performed using ADMM have similar label error performance. In the Music dataset, it is seen that JLR and JL0 produce similar results, which suggests that the low-rank structure defined on the label matrix is sufficient to improve the prediction performance. The right columns of Table 5 again show the relative feature reconstruction error. Here it may be noted that J00 performed using the ADMM algorithm has better results in relative feature reconstruction error as compared to J00 performed using the method of Goldberg et al., supra, which suggests the efficiency of the ADMM implementation for J00. It is also seen that JLR outperforms the other algorithms in terms of relative feature reconstruction error.
With reference to
It will be appreciated that various of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. It will also be appreciated that various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.
Amini, et al., "Learning from multiple partially observed views—an application to multilingual text categorization," in NIPS (2010).
Bach, et al., "Kernel independent component analysis," Journal of Machine Learning Research, 3: pp. 1-48 (2002).
Bach, et al., "A Probabilistic Interpretation of Canonical Correlation Analysis," Technical Report 688, Dept. of Statistics, University of California, Berkeley, pp. 1-9 (Apr. 21, 2005).
Bertsekas, "Constrained optimization and lagrange multiplier methods," Computer Science and Applied Mathematics, Boston: Academic Press, pp. 95-157 (1982).
Borga, "Learning Multidimensional Signal Processing," Linkoping Studies in Science and Technology Dissertations, No. 531, Dept. of Electrical Engineering, Linkoping University, S-581 83 Linkoping, Sweden, pp. 1-181 (1998).
Boyd, et al., "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers," Foundations and Trends in Machine Learning, vol. 3, No. 1, pp. 1-122 (2010).
Cai, et al., "A Singular Value Thresholding Algorithm for Matrix Completion," Applied and Computational Mathematics, pp. 1-25 (Oct. 2008).
Candes, et al., "Exact Matrix Completion via Convex Optimization," Foundations of Computational Mathematics, 9: pp. 717-772 (2009).
Candes, et al., "Robust Principal Component Analysis?," Journal of the ACM, 58(3), pp. 1-56 (Feb. 2014).
Cannon, et al., "Robust nonlinear canonical correlation analysis," Non-linear Processes in Geophysics, 15, pp. 221-232 (2008).
Goldberg, et al., "Transduction with matrix completion: Three birds with one stone," in NIPS, pp. 1-7 (2010).
Hardoon, et al., "Canonical correlation analysis: An overview with application to learning methods," Neural Computation, 16(12): pp. 2639-2664 (2004).
Hotelling, "Relations between two sets of variates," Biometrika, 28 (3/4): pp. 321-377 (1936).
Jia, et al., "Factorized Latent Spaces with Structured Sparsity," NIPS, pp. 1-7 (2010).
Jolliffe, et al., "Principal Component Analysis," Second Edition, Springer-Verlag, pp. 1-4 (1986).
Lin, et al., "The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices," Technical Report, UIUC, pp. 1-18 (2009).
Salakhutdinov, et al., "Probabilistic Matrix Factorization," NIPS, pp. 1-6 (2008).
Sturm, "Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones," Optimization Methods and Software, vol. 11 (1-4), pp. 625-653 (1999).
Tipping, et al., "Probabilistic principal component analysis," Journal of the Royal Statistical Society: Series B, vol. 61, No. 3, pp. 611-622 (1999).
Toh, et al., "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization (6), pp. 615-640 (2010).
Virtanen, "Bayesian CCA via Group Sparsity," ICML, pp. 1-6 (2011).
White, et al., "Convex Multi-view Subspace Learning," NIPS, pp. 1-12 (2012).