System and method for processing images using online tensor robust principal component analysis

Information

  • Patent Grant
  • Patent Number
    10,217,018
  • Date Filed
    Tuesday, September 15, 2015
  • Date Issued
    Tuesday, February 26, 2019
Abstract
A set of input images is acquired sequentially as image tensors. A low-tubal rank tensor and a sparse tensor are initialized using the image tensor, wherein the low-tubal rank tensor is a tensor product of a low-rank spanning tensor basis and corresponding tensor coefficients. For each image, the image tensor, the tensor coefficients, and the sparse tensor are updated iteratively using the image tensor and the low-rank spanning basis from a previous iteration. The spanning tensor basis is updated using the tensor coefficients, the sparse tensor, and the low-tubal rank tensor, wherein the low-tubal rank tensor represents a set of output images and the sparse tensor represents a set of sparse images.
Description
FIELD OF THE INVENTION

This invention relates generally to computer vision, and more particularly to processing a sequence of images online.


BACKGROUND OF THE INVENTION

In many computer vision applications, images can be processed to detect objects, or to improve the quality of the input images by, e.g., background subtraction, removing or reducing unwanted artifacts, noise and occlusions. In image processing, principal component analysis (PCA) is commonly applied for dimensionality reduction. However, when the image data contains unintended artifacts, such as gross corruptions, occlusions or outliers, the conventional PCA can fail. To solve this problem, robust PCA (RPCA) models can be used.


An online recursive RPCA can separate data samples in an online mode, i.e., with only a previous estimate and newly acquired data. Unlike the conventional RPCA methods, which first save all the data samples and then process them, the online RPCA significantly reduces the required memory and improves computational efficiency and convergence.


For multidimensional data (tensors) of order greater than 2, it is common to embed the data into a vector space by vectorizing the data such that conventional matrix-based approaches can still be used. Although this vectorization process works well in most cases, it restricts the effectiveness of the tensor representation in extracting information from the multidimensional perspective.


Alternatively, tensor algebraic approaches exhibit significant advantages in preserving multidimensional information when dealing with high order data. However, it is very time-consuming for the tensor RPCA to operate in batch mode because all of the high dimensional data needs to be stored and processed.


SUMMARY OF THE INVENTION

Tensor robust principal component analysis (PCA) is used in many image processing applications such as background subtraction, denoising, and outlier and object detection. The embodiments of the invention provide an online tensor robust PCA in which multi-dimensional data, representing a set of images in the form of tensors, are processed sequentially. The tensor PCA updates the tensors based on the previous estimate and the newly acquired data.


Compared to the conventional tensor robust PCA operating in batch mode, the invention significantly reduces the required amount of memory and improves computational efficiency. In addition, the method is superior in convergence speed and performance compared to conventional batch mode approaches. For example, the performance is at least 10% better than for matrix-based online robust PCA methods according to a relative squared error, and the speed of convergence is at least three times faster than for the matrix-based online robust PCA methods.


To reduce memory and increase computational efficiency, we provide an online tensor RPCA algorithm, which extends an online matrix PCA method to high dimensional data (tensor). The online tensor RPCA is based in part on a tensor singular value decomposition (t-SVD) structure.


The key idea behind this tensor-algebraic framework is to construct group rings along the tensor tubes. For example, a 2-D array is regarded as a vector of tubes and a 3-D tensor as a matrix of tubes; such a tensor framework has been used in high-dimensional data compression and completion. The embodiments extend the batch tensor RPCA problem and provide the benefit of sequential data collection, which reduces the required memory and increases efficiency.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic of a system for enhancing images according to embodiments of the invention;



FIG. 2 is a flow diagram of a method for enhancing images according to embodiments of the invention;



FIG. 3 is a schematic of a multiplication operation (t-product) used by embodiments of the invention;



FIG. 4 is a schematic of an element of a free module generated by t-linear combination of a spanning basis and coefficients;



FIG. 5 is a schematic of the t-SVD X = U*S*V^T used by embodiments of the invention;



FIG. 6 is a block diagram of pseudocode for the online tensor robust PCA according to embodiments of the invention;



FIG. 7 is a block diagram of pseudocode for projecting data samples according to embodiments of the invention;



FIG. 8 is a block diagram of pseudocode for updating the spanning basis according to embodiments of the invention; and



FIG. 9 is a schematic of an input image separated into an enhanced image and a sparse cloud-only image according to embodiments of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS


FIG. 1 is a schematic of a system for processing images according to embodiments of our invention. A sensor 101, e.g., a camera in a satellite, sequentially captures a set of T input images 102 of a scene 103. The images can be obtained by a single moving sensor at time steps t. Sequential acquisition reduces memory requirements to store the images, because the images are processed online as they are acquired by the sensor and received by a processor. The images can overlap to facilitate registering the images with each other. The images can be gray scale images or color images. In addition, the images can be multi-temporal images or multi-angle view images acquired sequentially.


In the example application shown in FIG. 1, the sensor is arranged in a moving space or airborne platform (satellite, airplane or drone), and the scene 103 is ground terrain. The scene can include occlusions due to structures in the scene, such as buildings, and clouds between the scene and sensors. The goal is to produce a set of enhanced output images 104, without occlusions. As a by-product, the system also produces a set of sparse images 105 including just the occlusions, e.g., the clouds.


As shown in FIG. 2, the method operates in a processor 100, which can be connected to the sensor 101, either electrically or wirelessly. Prior to processing the images, a spanning tensor basis is initialized 222 as L0, and tensor coefficients R0 and a sparse tensor E0 are initialized 223, see FIG. 6 described below.


The set of input images 102 is acquired 210 by the processor either directly or indirectly, e.g., an image can be acquired 106 by a camera or a video camera, or be obtained by other means or from other sources, e.g., a memory transfer, or wired or wireless communication. For the purpose of the processing described herein, each image is represented by the image tensor Zt.


For each image at time step t, the following steps are performed. Data samples in the image are projected 220 to the tensor coefficients Rt and the sparse tensor Et using the previous spanning tensor basis Lt−1, where Rt(t, :, :) = R⃗t denotes the coefficients corresponding to the spanning basis Lt−1, and Et(:, t, :) = E⃗t, for t = 1, 2, . . . , T.


The spanning tensor basis is updated 225 by using the previous basis Lt−1 as the starting point. The updated spanning tensor basis Lt is saved for the next image to be processed. A low-rank tubal tensor Xt = Lt*Rt^T and the sparse tensor Et are updated 230 to produce a set of output images Xt 104 and a set of sparse images Et 105, so that Xt + Et = Zt.


Overview of Tensor Framework


We describe the tensor structure used by the embodiments of the invention, taking a third-order tensor as an example. Instead of vectorizing images of size n1×n3 into vectors of dimension n1n3, as in conventional image processing, we consider each image as a vector of tubal scalars normal to the image plane. All such vectors form a free module over a commutative ring with identity, and the free module behaves similarly to a vector space, with a well-defined basis and dimension.


Notations and Definitions


For a third-order tensor A of size n1×n2×n3, A(i, j, k) denotes the (i, j, k)th element of A and A(i, j, :) denotes the (i, j)th tubal-scalar. A(i, :, :) is the ith horizontal slice, A(:, j, :) is the jth lateral slice, and A(:, :, k) or A(k) denotes the kth frontal slice of A respectively.


t-Product


Let v⃗ ∈ ℝ^{1×1×n3} be an n3-tuple, oriented perpendicular to the image plane. As shown in FIG. 3, a multiplication operation (t-product) between two n3-tuples u⃗ and v⃗ via circular convolution results in another n3-tuple w⃗ represented by

w⃗(i) = (u⃗ * v⃗)(i) = Σ_{k=0}^{n3−1} u⃗(k) v⃗((i − k) mod n3),  (1)

where i=0, 1, . . . , n3−1.


Given two third-order tensors A ∈ ℝ^{n1×n2×n3} and B ∈ ℝ^{n2×n4×n3}, the result of the t-product of A and B is a third-order tensor C of size n1×n4×n3 defined as

C(i, l, :) = (A * B)(i, l, :) = Σ_{j=1}^{n2} A(i, j, :) * B(j, l, :),  (2)

where i=1, 2, . . . , n1 and l=1, 2, . . . , n4. This is consistent with matrix multiplication, with the t-product ‘*’ taking the place of the scalar multiplication between entries.
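For illustration, the following is a minimal numpy sketch (not the patent's pseudocode) of the t-product of Eqns. (1)-(2) and of the tensor transpose used below; it exploits the fact that circular convolution along the tubes becomes slice-wise matrix multiplication in the Fourier domain. The function names are illustrative only.

    import numpy as np

    def t_product(A, B):
        # A: n1 x n2 x n3, B: n2 x n4 x n3 -> C: n1 x n4 x n3 (Eqn. 2).
        # Circular convolution along tubes equals slice-wise matrix
        # products in the Fourier domain along the third mode.
        n3 = A.shape[2]
        A_hat = np.fft.fft(A, axis=2)
        B_hat = np.fft.fft(B, axis=2)
        C_hat = np.stack([A_hat[:, :, k] @ B_hat[:, :, k] for k in range(n3)], axis=2)
        return np.fft.ifft(C_hat, axis=2).real

    def t_transpose(A):
        # Tensor transpose: transpose each frontal slice and reverse the
        # order of slices 2 through n3 (standard t-SVD convention).
        return np.concatenate([A[:, :, :1], A[:, :, :0:-1]], axis=2).transpose(1, 0, 2)

For n3 = 1 the t-product reduces to ordinary matrix multiplication.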


Commutative Ring


Under the defined multiplication (t-product) and addition, the set of n3-tuples forms a commutative ring 𝕂(ℝ^{n3}) with identity e⃗ = [1, 0, 0, . . . , 0].


Free-Module over the Commutative Ring


Let 𝕂_{n3}^{n1} be the set of all 2-D lateral slices of size n1×1×n3. Every element of 𝕂_{n3}^{n1} can be characterized as a vector of tubal scalars. Because for any element X⃗ ∈ 𝕂_{n3}^{n1} and coefficient v⃗ ∈ 𝕂(ℝ^{n3}), Y⃗ = X⃗ * v⃗ is also an element of 𝕂_{n3}^{n1}, the set 𝕂_{n3}^{n1} is closed under the tubal-scalar multiplication.


Moreover, 𝕂_{n3}^{n1} is a free module of dimension n1 over the commutative ring 𝕂(ℝ^{n3}). We can construct a spanning basis {B⃗_1, B⃗_2, . . . , B⃗_{n1}} for this module using the relation between the Fourier transform and circular convolution.



FIG. 4 shows that, given the spanning basis, any element X⃗ ∈ 𝕂_{n3}^{n1} can be uniquely represented as a t-linear combination with tubal-scalar coefficients c⃗_i:

X⃗ = Σ_{i=1}^{n1} B⃗_i * c⃗_i.  (3)
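As a small illustration of Eqn. (3), reusing the t_product sketch above (the spanning set chosen here is the standard one built from the identity tube, not the Fourier-based construction mentioned above; all names are illustrative):

    import numpy as np

    n1, n3 = 4, 5
    X = np.random.randn(n1, 1, n3)              # an element of the free module
    # standard spanning set: the i-th element carries the identity tube e
    # at tube position i and zeros elsewhere
    B = np.zeros((n1, n1, 1, n3))
    for i in range(n1):
        B[i, i, 0, 0] = 1.0
    # t-linear combination with coefficient tubes c_i = X(i, 1, :)
    X_rec = sum(t_product(B[i], X[i:i + 1, :, :]) for i in range(n1))
    assert np.allclose(X, X_rec)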


Tensor-PCA and Tensor Singular Value Decomposition (t-SVD)


Similar to the matrix PCA, which identifies a lower-dimensional subspace that approximately contains the data, we consider a tensor PCA for higher-order tensor data. We focus on third-order tensors. Suppose the 2-D data samples come from a lower-dimensional free submodule of the free module 𝕂_{n3}^{n1}, where a free submodule is a subset of 𝕂_{n3}^{n1} with a spanning basis of dimension d < n1. Our goal is to identify the free submodule containing the 2-D data samples.


t-SVD


Given n2 2-D data samples X⃗_1, . . . , X⃗_{n2} of size n1×n3, we arrange the samples as lateral slices to form a 3-D tensor X of size n1×n2×n3. Then, the t-SVD method is used to determine the spanning basis (principal components) of the submodule.


As shown in FIG. 5, t-SVD is defined as

X = U * S * V^T,  (4)

where U ∈ ℝ^{n1×d×n3} and V ∈ ℝ^{n2×d×n3} are called orthogonal tensors, satisfying U^T * U = I and V^T * V = I, where I is the identity tensor whose frontal slices are all zero except the first one, which is an identity matrix, and the superscript T is the tensor transpose operator.


S ∈ ℝ^{d×d×n3} is a tensor whose frontal slices are diagonal matrices. The tubal scalars S(i, i, :), i=1, 2, . . . , d, are called the singular tubes, and d is the tensor tubal rank.


Based on the relation between circular convolution and the discrete Fourier transform (DFT), we can determine the t-SVD via SVDs in the Fourier domain. Let X̂ be the DFT of the tensor X along the third dimension, X̂ = fft(X, [ ], 3). Given the SVDs in the Fourier domain


[Û(:, :, k), Ŝ(:, :, k), V̂(:, :, k)] = SVD(X̂(:, :, k)), for k=1, . . . , n3, we can determine the t-SVD in Eqn. (4) by

U = ifft(Û, [ ], 3), S = ifft(Ŝ, [ ], 3), V = ifft(V̂, [ ], 3),  (5)

where fft and ifft represent the fast Fourier transform and its inverse, respectively.
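A minimal numpy sketch of Eqns. (4)-(5) follows (illustrative only, not the patent's procedure): it computes a full, untruncated t-SVD and enforces the conjugate symmetry of the Fourier-domain slices so that the returned factors are real.

    import numpy as np

    def t_svd(X):
        # X: n1 x n2 x n3 real tensor; returns U, S, V with X = U * S * V^T
        # in the t-product sense (Eqn. 4), via slice-wise SVDs in the
        # Fourier domain (Eqn. 5).
        n1, n2, n3 = X.shape
        m = min(n1, n2)
        X_hat = np.fft.fft(X, axis=2)
        U_hat = np.zeros((n1, n1, n3), dtype=complex)
        S_hat = np.zeros((n1, n2, n3), dtype=complex)
        V_hat = np.zeros((n2, n2, n3), dtype=complex)
        for k in range(n3 // 2 + 1):                   # remaining slices follow
            u, s, vh = np.linalg.svd(X_hat[:, :, k])   # from conjugate symmetry
            U_hat[:, :, k] = u
            S_hat[:m, :m, k] = np.diag(s)
            V_hat[:, :, k] = vh.conj().T
            if 0 < k < (n3 + 1) // 2:                  # mirror slice n3 - k
                U_hat[:, :, n3 - k] = u.conj()
                S_hat[:m, :m, n3 - k] = np.diag(s)
                V_hat[:, :, n3 - k] = vh.T
        U = np.fft.ifft(U_hat, axis=2).real
        S = np.fft.ifft(S_hat, axis=2).real
        V = np.fft.ifft(V_hat, axis=2).real
        return U, S, V

With the t_product and t_transpose sketches above, t_product(t_product(U, S), t_transpose(V)) reconstructs X up to round-off.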


Note that many properties of the matrix SVD are retained in the t-SVD, an important one being that the truncated t-SVD gives a provably optimal dimension reduction.


Online Tensor Robust PCA


Now we consider the problem of recovering a tensor lying in a low-dimensional free submodule from sparsely corrupted data. Suppose we have a third-order tensor Z, which can be decomposed as

Z=X+E,  (6)

where X is a tensor with low tensor tubal rank and E is a sparse tensor. The problem of recovering X and E separately, termed tensor RPCA, can be formulated as an optimization problem












min_{X,E} ||X||_TNN + λ||E||_1,   s.t.   Z = X + E,  (7)








where λ > 0 is a predetermined weighting factor, ||X||_TNN denotes the tensor nuclear norm, defined as the summation of all the singular values of the tensor X in the t-SVD sense, and ||E||_1 = Σ_{i,j,k} |E(i, j, k)|.
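As a small, hedged numpy sketch (one common convention; some works scale the nuclear norm by 1/n3), the two terms of Eqn. (7) can be evaluated as follows; the function names are illustrative.

    import numpy as np

    def tensor_nuclear_norm(X):
        # Sum of the singular values of every frontal slice of X in the
        # Fourier domain, i.e. the nuclear norm of the corresponding
        # block-diagonal Fourier-domain matrix.
        X_hat = np.fft.fft(X, axis=2)
        return sum(np.linalg.svd(X_hat[:, :, k], compute_uv=False).sum()
                   for k in range(X.shape[2]))

    def l1_norm(E):
        # ||E||_1 = sum of absolute values of all entries.
        return np.abs(E).sum()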


Note that Eqn. (7) is equivalent to the following problem,












min_{X,E} (1/2)||Z − X − E||_F^2 + λ1||X||_TNN + λ2||E||_1,  (8)








with λ1, λ2>0.


Now, we describe an implementation of tensor PCA that operates online. Suppose the 2-D data samples Z(:, i, :), i=1, 2, . . . , T representing the set of images 102 are acquired 210 sequentially. Our goal is to estimate the spanning basis (principal components) of X online as the images are received at the processor 100, and separate the sparse tensor concurrently. In order to proceed, we rely on the following lemma.


For a third-order tensor X ∈ ℝ^{n1×n2×n3}, suppose its tensor tubal rank is upper bounded by r; then we have












||X||_TNN = inf_{L ∈ ℝ^{n1×r×n3}, R ∈ ℝ^{n2×r×n3}} { (n3/2)(||L||_F^2 + ||R||_F^2) : X = L * R^T }.  (9)







Using the above lemma, we re-write Eqn. (8) as













min_{L,R,E} (1/2)||Z − L * R^T − E||_F^2 + (n3 λ1/2)(||L||_F^2 + ||R||_F^2) + λ2||E||_1,
s.t. X = L * R^T,  (10)








where L ∈ ℝ^{n1×r×n3} and R ∈ ℝ^{n2×r×n3}.


For sequentially acquired data {Z⃗_1, Z⃗_2, . . . , Z⃗_T}, Z⃗_t ∈ ℝ^{n1×1×n3}, we define a loss function for each sample based on Eqn. (10) as













ℓ(Z⃗_i, L) = min_{R⃗,E⃗} (1/2)||Z⃗_i − L * R⃗^T − E⃗||_F^2 + (n3 λ1/2)||R⃗||_F^2 + λ2||E⃗||_1.  (11)








FIG. 6 shows the pseudocode of Algorithm 1 for solving the online tensor RPCA problem described above. FIG. 7 shows the pseudocode of Algorithm 2 for projecting 220 the data samples, i.e., the image tensors Zt, used in Algorithm 1. FIG. 8 shows the pseudocode of Algorithm 3 for updating 225 the spanning tensor basis used in Algorithm 1. All variables used by these algorithms are defined herein.


Input to Algorithm 1 includes the sequentially acquired data and the number of time rounds T. For simplicity, for a tensor A we use Â to denote fft(A, [ ], 3), and Ā ∈ ℝ^{n1n3×n2n3} to denote the block-diagonal matrix of Â defined by

Ā = blkdiag(Â) = diag(Â^(1), Â^(2), . . . , Â^(n3)).  (12)
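A short numpy sketch of Eqn. (12) follows (illustrative; the function name is not from the patent):

    import numpy as np

    def block_diag_fourier(A):
        # Block-diagonal matrix of the Fourier-domain frontal slices, Eqn. (12).
        n1, n2, n3 = A.shape
        A_hat = np.fft.fft(A, axis=2)
        D = np.zeros((n1 * n3, n2 * n3), dtype=complex)
        for k in range(n3):
            D[k * n1:(k + 1) * n1, k * n2:(k + 1) * n2] = A_hat[:, :, k]
        return D

One useful property is that the t-product corresponds to multiplication of these block-diagonal matrices, which is what makes the Fourier-domain updates below possible.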


One key idea of our online tensor RPCA algorithm is that, for each new image Zt, we minimize a loss function over Zt given the previous estimate of the spanning tensor basis Lt−1, to produce the optimal Rt and Et. Then, we alternately use the latest estimated components to update 225 the spanning basis Lt by minimizing a cumulative loss.


Specifically, Rt and Et are optimized in step 3 of Algorithm 1, with details given in Algorithm 2. In the data projection 220 step in Algorithm 2, Sλ[·] is a soft-thresholding operator defined by











S_λ[x] = x − λ, if x > λ;  x + λ, if x < −λ;  0, otherwise.  (13)
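The following numpy sketch illustrates one way to carry out the projection 220 for a single image; it is an assumption-laden illustration, not the patent's Algorithm 2 of FIG. 7. The coefficients are obtained by slice-wise ridge regression in the Fourier domain and the sparse part by the soft-thresholding of Eqn. (13), alternating for a fixed number of iterations. Here the coefficient is kept in the r×1×n3 orientation used for the accumulator updates below; all names are illustrative.

    import numpy as np

    def soft_threshold(X, lam):
        # Elementwise soft-thresholding operator S_lambda[.] of Eqn. (13).
        return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

    def project_sample(Zvec, L, lam1, lam2, n_iters=20):
        # Given the current basis L (n1 x r x n3) and one image tensor
        # Zvec (n1 x 1 x n3), alternately estimate the coefficient tube
        # vector Rvec (r x 1 x n3) and the sparse part Evec, cf. Eqn. (11).
        n1, r, n3 = L.shape
        L_hat = np.fft.fft(L, axis=2)
        E = np.zeros_like(Zvec)
        R_hat = np.zeros((r, 1, n3), dtype=complex)
        for _ in range(n_iters):
            # coefficient update: slice-wise ridge regression in Fourier domain
            D_hat = np.fft.fft(Zvec - E, axis=2)
            for k in range(n3):
                Lk = L_hat[:, :, k]
                R_hat[:, :, k] = np.linalg.solve(
                    Lk.conj().T @ Lk + n3 * lam1 * np.eye(r),
                    Lk.conj().T @ D_hat[:, :, k])
            LR = np.fft.ifft(np.einsum('ijk,jlk->ilk', L_hat, R_hat), axis=2).real
            # sparse update: soft-threshold the residual (Eqn. 13)
            E = soft_threshold(Zvec - LR, lam2)
        Rvec = np.fft.ifft(R_hat, axis=2).real
        return Rvec, E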







To update 225 the spanning basis Lt, we have














L_t = argmin_L Σ_{i=1}^{t} [ (1/2)||Z_i − L * R_i^T − E_i||_F^2 + (n3 λ1/2)||R_i||_F^2 + λ2||E_i||_1 ] + (n3 λ1/2)||L||_F^2
    = argmin_L (1/2)||Z_t − E_t − L * R_t^T||_F^2 + (n3 λ1/2)||L||_F^2
    = argmin_L̄ (1/2)||Z̄_t − Ē_t − L̄ R̄_t^T||_F^2 + (n3 λ1/2)||L̄||_F^2
    = argmin_L̄ (1/2) tr[ (Z̄_t − Ē_t − L̄ R̄_t^T)^T (Z̄_t − Ē_t − L̄ R̄_t^T) ] + (n3 λ1/2) tr(L̄^T L̄)
    = argmin_L̄ (1/2) tr[ L̄ (R̄_t^T R̄_t + n3 λ1 I) L̄^T ] − tr[ L̄^T (Z̄_t − Ē_t) R̄_t ],  (14)








where tr(·) is the trace operator, and the overbar denotes the block-diagonal Fourier-domain matrix defined in Eqn. (12).


Let At = At−1 + R⃗t * R⃗t^T and Bt = Bt−1 + (Z⃗t − E⃗t) * R⃗t^T, where R⃗t ∈ ℝ^{r×1×n3} and E⃗t ∈ ℝ^{n1×1×n3}, as indicated in step 4 of Algorithm 1. We update At and Bt each time new data arrive and save the updated At and Bt, so that we can update the spanning basis Lt in the Fourier domain with block-coordinate descent, as indicated in step 5 of Algorithm 1, with details in Algorithm 3. Note that our algorithm needs prior information, namely an estimated upper bound r on the tubal rank of the overall data samples.
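A hedged numpy sketch of the accumulator and basis updates follows (an illustration of steps 4-5 of Algorithm 1 and of Eqn. (14), not the patent's exact pseudocode in FIGS. 6 and 8); it performs one pass of block-coordinate descent on each frontal slice in the Fourier domain and reuses t_product and t_transpose from the earlier sketches.

    import numpy as np

    def update_accumulators(A, B, Zvec, Evec, Rvec):
        # Step 4: A_t = A_{t-1} + R_t * R_t^T,  B_t = B_{t-1} + (Z_t - E_t) * R_t^T.
        A = A + t_product(Rvec, t_transpose(Rvec))
        B = B + t_product(Zvec - Evec, t_transpose(Rvec))
        return A, B

    def update_spanning_basis(L, A, B, lam1):
        # Step 5 / Eqn. (14): one pass of block-coordinate descent over the r
        # basis columns, done independently for each Fourier-domain slice.
        n1, r, n3 = L.shape
        L_hat = np.fft.fft(L, axis=2)
        A_hat = np.fft.fft(A, axis=2)
        B_hat = np.fft.fft(B, axis=2)
        for k in range(n3):
            Ak = A_hat[:, :, k] + n3 * lam1 * np.eye(r)
            for j in range(r):
                L_hat[:, j, k] += (B_hat[:, j, k]
                                   - L_hat[:, :, k] @ Ak[:, j]) / Ak[j, j]
        return np.fft.ifft(L_hat, axis=2).real

With project_sample from the earlier sketch, one online step then reads: Rvec, Evec = project_sample(Zvec, L, lam1, lam2); A, B = update_accumulators(A, B, Zvec, Evec, Rvec); L = update_spanning_basis(L, A, B, lam1); and the low-rank output slice is t_product(L, Rvec). These helper names are hypothetical, not taken from the patent.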


For the batch tensor robust PCA, all the data samples up to image T, i.e., all entries of {Z_i}, i = 1, . . . , T, are required. Therefore, the memory requirement for the batch tensor robust PCA is n1n3T.


For the online tensor robust PCA, we need to save Lt−1 ∈ ℝ^{n1×r×n3}, RT ∈ ℝ^{T×r×n3} (AT can be determined from RT), and BT ∈ ℝ^{n1×r×n3}. Therefore, the total storage requirement is n3rT + n1n3r, which is much smaller than that of the batch tensor robust PCA when r<<T. In other words, the memory requirement for processing the data samples online is about r/T of that of processing the data using the batch-mode tensor robust PCA, where r is the rank of the low-rank tubal tensor, and T is the number of input images.
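As a purely illustrative numerical example (the values are hypothetical, not from the patent): for images of size n1×n3 = 512×512, T = 1000 images, and tubal rank r = 10, the batch method stores about n1n3T ≈ 2.6×10^8 entries, while the online method stores about n3rT + n1n3r = 5.1×10^6 + 2.6×10^6 ≈ 7.7×10^6 entries, roughly 3% of the batch requirement.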


Other Image Processing Applications


The invention can also be used for other applications. In the case of background subtraction, also known as foreground detection, a foreground of an image is extracted for further processing such as object detection and recognition, e.g., of pedestrians, vehicles, etc. Background subtraction can be used for detecting moving objects in a sequence of images (video). Background subtraction provides important cues for numerous applications in computer vision, for example surveillance tracking or human pose estimation.


In the processing method according to embodiments of the invention, the output background images are constructed from the low-rank tubal tensor X, and the foreground images are constructed from the sparse tensor E.


In the case of noise reduction, the reduced-noise images are derived from the low-rank tubal tensor X. The sparse tensor E, representing the noise, can essentially be discarded.


Although the invention has been described by way of examples of preferred embodiments, it is understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Claims
  • 1. A method for online processing of a sequence of images of a scene, wherein the sequence of images represents an image tensor, comprising: acquiring the sequence of images of the scene from at least one sensor from an input interface device, such that the acquired sequences of images of the scene include occlusions caused by structures in the scene, wherein the structures include buildings, clouds, or both, between the scene and the at least one sensor; initializing a low-tubal rank tensor and a sparse tensor using a first image of the sequence of images, and storing the low-tubal rank tensor and the sparse tensor in a memory, wherein the low-tubal rank tensor is a tensor product of a low-rank spanning tensor basis and corresponding tensor coefficients; iteratively updating a previously stored low-tubal rank tensor and a previously stored sparse tensor from a previous iteration stored in memory, by using each sequential image of the sequentially acquired images, to obtain an updated low-tubal rank tensor and an updated sparse tensor, wherein the updated low-tubal rank tensor is a tensor product of an updated low-rank spanning tensor basis and updated corresponding tensor coefficients; and outputting one of the updated low-tubal rank tensor, the updated sparse tensor, or both, upon completion of the iteration over all the sequentially acquired images, wherein the steps are performed in a processor in communication with the memory.
  • 2. The method of claim 1, wherein the low-tubal rank tensor represents a background of the scene shared across all the sequentially acquired images, and the sparse tensor represents objects in the scene from one or more occlusions in an image of the sequentially acquired images.
  • 3. The method of claim 1, wherein the updated low-tubal rank tensor represents a set of output images and the updated sparse tensor represents a set of sparse images.
  • 4. The method of claim 1, wherein the first image is a set of images within the sequence of images used for initializing the low-tubal rank tensor and the sparse tensor.
  • 5. The method of claim 1, wherein the images are multi-temporal images or multi-angle view images.
  • 6. The method of claim 1, wherein the image tensor is decomposed into the low-tubal rank tensor and the sparse tensor using tensor robust principal component analysis (PCA).
  • 7. The method of claim 1, wherein the low-tubal rank tensor represents a background in the scene, and the sparse tensor represents a foreground in the scene.
  • 8. The method of claim 1, wherein the low-tubal rank tensor represents reduced noise images, and the sparse tensor represents noise.
  • 9. The method of claim 1, wherein the at least one sensor is moving when the sequences of images of the scene are acquired.
  • 10. The method of claim 9, wherein the at least one sensor is arranged in a satellite or an airplane.
  • 11. The method of claim 1, wherein the at least one sensor is two or more sensors, such that the sequences of images of the scene acquired by the two or more sensors include occlusions caused by clouds between the two or more sensors and scene, and structures in the scene, and wherein the set of output images represent the scene without the occlusions, and the output set of sparse images represent only the occlusions.
  • 12. A system for processing images, wherein each image is represented as an image tensor, comprising: an input interface to receive sequentially a set of input images acquired by at least one sensor, such that the sequentially acquired set of input images of the scene include occlusions caused by structures in the scene, wherein the structures include buildings, clouds, or both, between the scene and the at least one sensor; and a processor in communication with a memory, the processor configured to: initialize a low-tubal rank tensor and a sparse tensor using a first image of the sequence of images, and store the low-tubal rank tensor and the sparse tensor in the memory; iteratively update a previously stored low-tubal rank tensor and a previously stored sparse tensor from a previous iteration stored in memory, by using each sequential image of the sequentially acquired images, to obtain an updated low-tubal rank tensor and an updated sparse tensor; and output one of the updated low-tubal rank tensor, the updated sparse tensor, or both, upon completion of the iteration over all the sequentially acquired images.
  • 13. The system of claim 12, wherein the low-tubal rank tensor represents a background of the scene shared across all the sequentially acquired images, and the sparse tensor represents objects in the scene from one or more occlusions in an image of the sequentially acquired images.
  • 14. The system of claim 12, wherein the updated low-tubal rank tensor represents a set of output images and the updated sparse tensor represents a set of sparse images.
  • 15. The system of claim 12, wherein the first image is a set of images within the sequence of images used for initializing the low-tubal rank tensor and the sparse tensor.
  • 16. The system of claim 12, wherein the low-tubal rank tensor is a tensor product of a low-rank spanning tensor basis and corresponding tensor coefficients, and wherein the updated low-tubal rank tensor is a tensor product of an updated low-rank spanning tensor basis and updated corresponding tensor coefficients.
  • 17. The system of claim 12, wherein the image tensor is decomposed into the low-tubal rank tensor and the sparse tensor using tensor robust principal component analysis (PCA).
  • 18. The system of claim 12, wherein the low-tubal rank tensor represents a background in the scene, and the sparse tensor represents a foreground in the scene.
  • 19. The system of claim 12, wherein the low-tubal rank tensor represents reduced noise images, and the sparse tensor represents noise.
  • 20. A method for online processing of a sequence of images of a scene, wherein the sequence of images represents an image tensor, comprising: acquiring the sequence of images of the scene from at least two sensors from an input interface device, such that the acquired sequences of images of the scene include occlusions caused by clouds between the at least two sensors and the scene, and by structures in the scene; initializing a low-tubal rank tensor and a sparse tensor using a first image of the acquired sequence of images, and storing the low-tubal rank tensor and the sparse tensor in a memory, wherein the low-tubal rank tensor is a tensor product of a low-rank spanning tensor basis and corresponding tensor coefficients; iteratively updating a previously stored low-tubal rank tensor and a previously stored sparse tensor from a previous iteration stored in memory, by using each sequential image of the sequentially acquired images, to obtain an updated low-tubal rank tensor and an updated sparse tensor, wherein the updated low-tubal rank tensor is a tensor product of an updated low-rank spanning tensor basis and updated corresponding tensor coefficients; and outputting one of the updated low-tubal rank tensor, the updated sparse tensor, or both, upon completion of the iteration over all the sequentially acquired images, wherein the steps are performed in a processor in communication with the memory.
US Referenced Citations (9)
Number Name Date Kind
20060165308 Chakraborty Jul 2006 A1
20080247608 Vasilescu Oct 2008 A1
20140181171 Dourbal Jun 2014 A1
20150074158 Kimmel Mar 2015 A1
20150301208 Lewis Oct 2015 A1
20160013773 Dourbal Jan 2016 A1
20160232175 Zhou Aug 2016 A1
20160299243 Jin Oct 2016 A1
20170076180 Liu Mar 2017 A1
Non-Patent Literature Citations (7)
Entry
Lu et al., “Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization”, IEEE, 2016.
Qiu et al., “Recursive Projected Sparse Matrix Recovery (REPROSMR) With Application in Real-Time Video Layer Separation”, IEEE, 2014.
Jiashi Feng, Huan Xu, and Shuicheng Yan, “Online robust PCA via stochastic optimization,” in Advances in Neural Information Processing Systems 26, pp. 404-412. Curran Associates, Inc., 2013.
Zemin Zhang, Gregory Ely, Shuchin Aeron, Ning Hao, and Misha Elena Kilmer, “Novel methods for multilinear data completion and de-noising based on tensor-SVD,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3842-3849.
Misha E. Kilmer and Carla D. Martin, “Factorization strategies for third-order tensors,” Linear Algebra and its Applications, vol. 435, No. 3, pp. 641-658, Aug. 2011.
Carmeliza Navasca, Michael Opperman, Timothy Penderghest, and Christino Taman, “Tensors as module homomorphisms over group rings,” CoRR, vol. abs/1005.1894, 2010.
Kilmer et al. “Third-Order Tensors as Operators on Matrices: A Theoretical and Computational Framework with Applications in Imaging,” SIAM. J. Matrix Anal. & Appl., vol. 34, Issue 1, 148-172. Feb. 28, 2013.
Related Publications (1)
Number Date Country
20170076180 A1 Mar 2017 US