1. Field of the Invention
This invention relates generally to the estimation of the system transfer function for certain linear systems, for example the estimation of the pupil image function matrix for plenoptic imaging systems.
2. Description of the Related Art
As an example of one class of systems, the plenoptic imaging system has recently received increased attention. It can be used to recompute images with a different focus or from a different point of view of an object, based on digital processing of the captured plenoptic image. The plenoptic imaging system also finds application in multi-modal imaging, using a multi-modal filter array in the plane of the pupil aperture. Each filter is imaged at the sensor, effectively producing a multiplexed image of the object for each imaging modality at the filter plane. Other applications for plenoptic imaging systems include varying depth of field imaging and high dynamic range imaging.
Many of these applications depend strongly on an accurate characterization of the plenoptic imaging system. This is especially true for applications that utilize more advanced image processing, such as deconvolution and super-resolution. For example, if the point response of a plenoptic imaging system can be accurately estimated, then an inverse problem can be solved to obtain higher resolution images from the captured plenoptic images.
The behavior of a plenoptic imaging system can be calculated based on the design data for the system. However, the physical system will deviate from the design data and even small deviations can affect the overall performance of the system. For example, the use of micro-elements, such as microlens arrays, increases the sensitivity to deviations and imposes a stricter requirement on the characterization of the plenoptic imaging system. Thus, calibration of physical systems, rather than modeling based on design data, is desirable in certain situations.
The point response for a plenoptic imaging system will be referred to as the “pupil image function” (PIF) of the plenoptic imaging system, as described in more detail below. Unlike conventional optical imaging systems, the PIF for a plenoptic imaging system is strongly spatially variant. Therefore, the PIF cannot be estimated for only a single field point, with the assumption that the PIF will then be the same or similar for all other field points. Rather, the PIF preferably is estimated over the entire field of view of the plenoptic imaging system. In one approach, a point source object is physically translated across the entire field of view, and the system's response is observed separately at each field point. The PIF is then assembled from the observed images. However, this approach is very time consuming.
Thus, there is a need for better methods for estimating the PIF of a plenoptic imaging system and, more generally, for estimating the system transfer function for a certain class of linear systems.
The present invention overcomes the limitations of the prior art by taking advantage of the low rank property of the PIF matrix. In one approach, the low rank is used advantageously by identifying a subspace for the PIF matrix and then estimating the PIF matrix within that subspace. This can lead to a significant reduction in the number of objects (e.g., calibration patterns) used to estimate the PIF matrix.
One aspect concerns producing an estimate Â of the PIF matrix A for a plenoptic imaging system. The plenoptic imaging system can be characterized by y=Ax+e, where x is an object, e is noise and y is the plenoptic image resulting from object x. A set of calibration objects X is selected and applied to the plenoptic imaging system. The resulting plenoptic images Y are observed. The PIF estimate Â is calculated from the plenoptic images Y and calibration objects X, taking advantage of the fact that A has low rank. In one approach, a subspace of A is identified and then Â is estimated within the subspace. For example, the subspace may be identified based on the assumption that colspan(A)=rowspan(A).
A specific implementation is based on the singular value decomposition (SVD) of Y. Let SVD(Y) = WΛZᵀ. Then, in one approach, Â = Y(V̂ᵀX)†V̂ᵀ, where V̂ = W(:,1:k̂) and k̂ is greater than or equal to the rank of A. Preferably, k̂ is an estimate of the rank of A. Furthermore, the calibration objects X are selected as random calibration patterns, such as Gaussian or Bernoulli independent and identically distributed (i.i.d.) calibration patterns. This approach can be computationally efficient and reduces the number of calibration objects in X.
A similar approach can be applied to other linear systems that have a system transfer matrix with similar characteristics, for example if colspan(A)=rowspan(A).
Other aspects of the invention include methods, devices, systems, software, improvements, components, applications and other aspects related to the above.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The invention has other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
The figures and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed. Consider first the plenoptic imaging system as an example.
Sample Configuration.
The spatial coordinates (ξ,η) will be used at the object plane and (t, w) at the sensor plane. Note that the different components do not have to be designed exactly as shown in
Ignoring the filter module 125 for the moment, in imaging subsystem 1, the object 150 is imaged by the primary lens 110 to produce an image that will be referred to as the “primary image.” This primary lens 110 may be a camera imaging lens, microscope objective lens or any other such imaging system. The lenslet array 120 is placed approximately at the location of the primary image. Each lenslet then images the pupil of the primary lens to the sensor plane. This is imaging subsystem 2, which partially overlaps with imaging subsystem 1. The “image” created at the sensor array 130 will be referred to as the “plenoptic image” in order to avoid confusion with the “primary image.”
The plenoptic image can be divided into an array of subimages, corresponding to each of the lenslets. Note, however, that the subimages are images of the pupil of imaging subsystem 1, and not of the object 150. In
The plenoptic imaging system can be modeled by

I_f[T,W] = Σ_{M,N} PIF[T,W; M,N] · I_o[M,N], (1A)

where I_f[T,W] is the intensity at sensor element (T,W); I_o[M,N] is the intensity from object element (M,N); and PIF[T,W; M,N] is the plenoptic imaging system response, which we refer to as the pupil image function or PIF. Here (T,W) is the discretized version of the sensor coordinates (t,w) and (M,N) is the discretized version of the object coordinates (ξ,η). Eq. 1A can be written more compactly as
y=Ax+e, (1B)
where x is the vectorized object Io, y is the vectorized plenoptic image If, e is noise, and A is the two-dimensional PIF matrix.
In the following, we address the calibration problem, namely the estimation of the PIF matrix A. We denote the ground truth PIF matrix as an N×N matrix A. To estimate A, we begin by choosing a set of calibration patterns as objects. Each calibration pattern, which itself is a vectorized object, is denoted as x_i ∈ ℝ^N. We denote the number of calibration patterns as m, and denote the N×m matrix formed by stacking the x_i as X = [x₁, …, x_m]. Given these calibration patterns X, the corresponding observations Y are obtained by
Y=AX+E, (2)
where E is an N×m matrix representing noise. The goal of the calibration process is to estimate A from the calibration patterns X and the corresponding plenoptic images Y. In a brute force approach, N calibration patterns can be used to estimate the PIF matrix A.
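One concrete reading of the brute-force approach is to probe the system with one impulse per object element, i.e., to take X as the N×N identity, so that the noiseless observations directly give the columns of A. A minimal numerical sketch (the matrix A here is a random stand-in, not an actual PIF matrix):

```python
import numpy as np

N = 49                                # number of object elements
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N))       # stand-in for the true PIF matrix
X = np.eye(N)                         # brute force: one impulse per column
Y = A @ X                             # noiseless plenoptic observations
A_brute = Y                           # column i of Y is column i of A
assert np.allclose(A_brute, A)
```

This requires m = N calibration patterns, which is exactly the cost that the low-rank properties discussed next allow us to avoid.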
As described in the following sections, the underlying PIF matrix A exhibits a low-rank structure. Additionally, the column span of A coincides with the row span of A. We can use these properties to accurately estimate A using a reduced number m of calibration patterns. In general, it is desirable to reduce m as this will allow us to reduce the calibration time.
Properties of PIF Matrix A
First consider some of the relevant properties of A. To do so, we will make use of an example PIF matrix as shown in
Four Quadrant Symmetry.
To see how the symmetry about x=25 affects A, focus on the top half portion of A, and denote this matrix as Atop, i.e., Atop=A[1:25,:]. Let us further divide Atop into three portions: Atop=[AL aC AR] as shown by the dashed lines in
rowspan(A)=colspan(A).
Another property of A is that the column span of A (denoted colspan(A)) coincides with the row span of A (denoted rowspan(A)). Formally, this property means that when writing the singular value decomposition (SVD) of A as A = UΣVᵀ and assuming that the singular values are ordered such that σ₁ ≥ … ≥ σ₄₉, we will find that the matrix UᵀV is block diagonal. In particular,
rowspan(A) = colspan(V[:,1:25]) = colspan(U[:,1:25]) = colspan(A) (4)
This is an important property that can be used to advantage.
Generalization.
The properties of A described above are common among PIF matrices of systems with radial symmetry of the main lens and microlens imaging systems when the microlens system is positioned to image the pupil of the main lens. See Appendix A for a formal proof. For example,
In order to see whether or not the column span of A is equivalent to the row span of A,
In addition, the rank of the PIF matrix remains almost constant as N grows. For example, for a PIF matrix of size 2209×2209, we found that the rank was only 256, only slightly higher than the rank of 245 for the 729×729 matrix. For a PIF matrix of size 4489×4489, we found the rank to be only 265. Therefore, if we can design an algorithm whose required number m of calibration patterns depends on the rank of the PIF matrix, we will be able to recover the underlying PIF matrix using far fewer than N calibration patterns.
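The ranks quoted above are numerical ranks. A small helper for estimating the numerical rank of a matrix from its singular value decay (the tolerance is an illustrative choice, not a value from this work):

```python
import numpy as np

def numerical_rank(A, rel_tol=1e-10):
    """Number of singular values above rel_tol times the largest one."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))
```

Applied to a simulated PIF matrix, this is one way to obtain a rank estimate for the algorithms described below.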
Estimating the PIF Matrix
In this section we present approaches to estimating the PIF matrix A.
Noiseless Case.
First consider the noiseless case such that we have Y=AX. For this example, use the 49×49 PIF matrix of
Y = AX = U[:,1:25] Σ[1:25,1:25] V[:,1:25]ᵀ X (5)
Let us assume for now that we know V[:,1:25] and that X is full rank. Then we can recover A as:
Â = Y(V[:,1:25]ᵀX)†V[:,1:25]ᵀ (6)
where the hat denotes that it is an estimate. When m ≥ 25, so that V[:,1:25]ᵀX has full row rank, the recovery is exact.
However, the approach described above requires that we know V[:,1:25], an N×k matrix whose k columns (the rows of V[:,1:25]ᵀ) span the row space of A. Note that the choice of V is not unique: we may choose any V̂ which satisfies rowspan(V̂ᵀ) = rowspan(A). To do this, we make use of the fact that rowspan(A) = colspan(A). This means that instead of attempting to acquire the row span of A directly, we may equivalently obtain V̂ via the column span of A. Identifying the column span is a simpler problem because colspan(A) = colspan(Y). Overall we have,
colspan(Y)=colspan(A)=rowspan(A) (7)
This says we are able to infer the row span of A via the column span of Y. The basis vectors that span the column space of Y can be computed via the SVD Y = WΛZᵀ, setting V̂ to the k left singular vectors of Y with the largest singular values, which means setting V̂ to the first k column vectors of W: V̂ = W[:,1:k]. Using these vectors we can follow the above discussion and project our measurements onto V̂ᵀX. This is summarized in Table 1 shown below.
Note that we do not need to compute the estimates of the rows one by one. Instead, we can estimate the entire matrix A at once by applying the pseudo-inverse to the entire observation matrix Y. Also note that one input is the rank parameter k̂. That parameter can be estimated, when the lens files for the plenoptic imaging system are available, through simulation of the system matrix.
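The procedure of Table 1 (subspace identification followed by matrix estimation) can be sketched in numpy as follows; the function name and array conventions are ours, and this is only a minimal sketch of the approach, not the patented implementation:

```python
import numpy as np

def estimate_pif(Y, X, k_hat):
    """Estimate A from observations Y = A X (N x m) and calibration
    patterns X (N x m), assuming rank(A) <= k_hat and
    colspan(A) == rowspan(A)."""
    # Step 1 (subspace identification): the top-k left singular
    # vectors of Y span colspan(Y) = colspan(A) = rowspan(A).
    W, _, _ = np.linalg.svd(Y, full_matrices=False)
    V_hat = W[:, :k_hat]
    # Step 2 (matrix estimation): A_hat = Y (V_hat^T X)^dagger V_hat^T.
    A_hat = Y @ np.linalg.pinv(V_hat.T @ X) @ V_hat.T
    return A_hat
```

For a low-rank A with colspan(A) = rowspan(A), a generic X, and m ≥ k̂, this recovers A exactly in the noiseless case.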
Noisy Case.
Now consider when there is noise in the measurements, i.e., Y = AX+E. Let us first see how the presence of noise affects the subspace identification step 442. Since there is added noise in the measurements, we will no longer have colspan(Y) = colspan(A). However, according to results from perturbation theory, it is reasonable to expect that colspan(Y) ≈ colspan(A). Therefore, even in the presence of noise we keep the subspace identification step 442 the same as in the noiseless case.
Next, consider the matrix estimation step 444. The main objective of this step is to project the measurements onto the row space of V̂ᵀX without amplifying noise. To do so, it is important for the matrix V̂ᵀX to have a “good” condition number, i.e., a condition number close to 1. One way to ensure this is to populate X with random entries so that X satisfies the Restricted Isometry Property (RIP) for matrices and/or vectors. This property is a fundamental condition in the compressive sensing framework, and is defined for matrices in B. Recht, M. Fazel, and P. A. Parrilo, “Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization,” SIAM Rev., 52(3), pp. 471-501, which is incorporated by reference. It has been shown that X will satisfy the RIP with high probability when it is populated with (i) i.i.d. Gaussian entries, i.e., X_ij ~ N(0, 1/m) where N( ) is the normal distribution; (ii) i.i.d. symmetric Bernoulli entries taking ±sqrt(1/m), each with probability ½; or (iii) i.i.d. random variables taking ±sqrt(3/m), each with probability ⅙, and zero with probability ⅔. Therefore, we can populate X with random entries drawn from any of these three distributions (i)-(iii). The RIP for vectors is an analog of the RIP for matrices, and it has also been shown that the above three distributions satisfy the RIP for vectors given a sufficient number of measurements. In one approach, we keep the matrix estimation step 444 the same, but use calibration patterns generated with Gaussian random variables.
Table 2 shows an estimation approach for the noisy case. Table 2 differs from Table 1 only in the selection of calibration objects. Note that the full-rank condition on X is not mentioned in Table 2, because a random Gaussian matrix is full-rank with probability 1. Further note that the condition m ≥ k is the minimum number of measurements required for V̂ᵀX to have full row rank, and it does not guarantee the RIP of X.
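The three RIP-friendly random ensembles (i)-(iii) described above can be generated as follows (a sketch; the function name and option names are ours):

```python
import numpy as np

def calibration_patterns(n, m, dist="gaussian", seed=None):
    """Return an n x m matrix of m random calibration patterns of length n."""
    rng = np.random.default_rng(seed)
    if dist == "gaussian":      # (i) X_ij ~ N(0, 1/m)
        return rng.standard_normal((n, m)) / np.sqrt(m)
    if dist == "bernoulli":     # (ii) +/- sqrt(1/m), each with probability 1/2
        return rng.choice([-1.0, 1.0], size=(n, m)) / np.sqrt(m)
    if dist == "sparse":        # (iii) +/- sqrt(3/m) w.p. 1/6, zero w.p. 2/3
        signs = rng.choice([-1.0, 0.0, 1.0], size=(n, m), p=[1/6, 2/3, 1/6])
        return signs * np.sqrt(3.0 / m)
    raise ValueError(f"unknown distribution: {dist}")
```

Each ensemble normalizes the entry variance to 1/m, so the columns of X have approximately unit norm.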
In addition, we have observed that the number of calibration patterns should scale linearly with the rank of A, rather than linearly with N. The simulation results in the next section support this observation: we are able to obtain accurate estimates of A using a number m of calibration objects that is only slightly higher than the rank of A and much smaller than N. Recalling that the rank of A remains almost constant as N increases, this implies that we can significantly reduce the number of calibration patterns even when N gets very large.
Simulation Results
In this section we present simulation results to recover a 729×729 PIF matrix A (i.e., N=729) using random calibration patterns X. In the following simulations we populate X with Gaussian entries, where each entry of X follows the distribution N(0, 1/m) described above.
Then, we compute Y=AX+E. We compare the results from the approach shown in Table 2 to a low-rank recovery algorithm presented in M. Yuan, A. Ekici, Z. Lu and R. Monteiro, “Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 69, no. 3, pp. 329-346, 2007. For a given noise level or SNR, we increase the number of calibration patterns m from 2 to 739, and for each m we run the two estimation approaches to recover A given Y and X. Then, we compute the squared normalized distance between A and Â, i.e., ‖A−Â‖²_F/N².
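A toy version of this experiment, with a synthetic low-rank A standing in for the PIF matrix (the dimensions, noise level, and matrix construction are illustrative choices, not those of the simulation in this work):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, m, sigma = 100, 8, 30, 1e-3                  # illustrative sizes
Q, _ = np.linalg.qr(rng.standard_normal((N, k)))
A = Q @ np.diag(rng.uniform(1.0, 2.0, k)) @ Q.T    # low rank, colspan == rowspan
X = rng.standard_normal((N, m)) / np.sqrt(m)       # Gaussian calibration patterns
Y = A @ X + sigma * rng.standard_normal((N, m))    # noisy observations
W, _, _ = np.linalg.svd(Y, full_matrices=False)
V_hat = W[:, :k]                                   # subspace identification
A_hat = Y @ np.linalg.pinv(V_hat.T @ X) @ V_hat.T  # matrix estimation
err = np.linalg.norm(A - A_hat, "fro") ** 2 / N**2 # squared normalized distance
```

Here m is only slightly larger than the rank k and far below N, mirroring the scaling observed in the simulations.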
The results are shown in
In
The low-rank minimization approach, however, is not able to recover A as accurately as our approach; its error decreases only roughly linearly with m. Overall, we were able to decrease the number of calibration patterns significantly below N while simultaneously producing accurate estimates of A.
Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples and aspects of the invention. It should be appreciated that the scope of the invention includes other embodiments not discussed in detail above. For example, other systems which have a transfer system matrix A with the properties shown in Eqn. (7) above (i.e., colspan(A)=rowspan(A)), can also be estimated using the approach described above. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus of the present invention disclosed herein without departing from the spirit and scope of the invention as defined in the appended claims. Therefore, the scope of the invention should be determined by the appended claims and their legal equivalents.
In alternate embodiments, the invention is implemented in computer hardware, firmware, software, and/or combinations thereof. Apparatus of the invention can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps of the invention can be performed by a programmable processor executing a program of instructions to perform functions of the invention by operating on input data and generating output. The invention can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits) and other forms of hardware.
Symmetry
The PIF function characterizes the propagation of light originating from an object point (ξ, η) through a plenoptic system onto a location (t, w) at the sensor plane. In one wave optics formulation, the PIF function is given by
where ℱ[ƒ] denotes the Fourier transform of the function ƒ.
In the above equations, P1 describes the pupil function of the main lens with focal length ƒ1, and P2 is the pupil function of the microlens with diameter D2 and focal length ƒ2. The object distance is z1, the distance between the main lens and microlens plane is z2, and the distance between the microlens plane and sensor plane is z3. The parameter k denotes the wave number.
Assuming a circular aperture on the lenslet to eliminate crosstalk from light passing through neighboring lenslets, the PIF function for an on-axis lenslet (i.e., μ=0, ν=0) reduces to
We obtain the following symmetry between object and sensor points for a main lens system with h1(u,v)=h1(−u,−v) and a microlens system with P2 (u,v)=P2(−u,−v):
For the configuration that the microlens system focuses an image of the pupil onto the sensor plane
the PIF function reduces to
In the case that the main lens is radially symmetric, the function h₁ is radially symmetric and real-valued, since it is the Fourier transform of a radially symmetric function: ℱ(ƒ(r)) = 2π∫ƒ(r) r J₀(ρr) dr, where J₀ is the zero-order Bessel function.
Therefore, the PIF function itself is the Fourier transform of a real function, i.e.
As a result we obtain the following symmetry relationship for radially symmetric lens systems and a focused microlens system:
PIF(t,w,ξ,η) = PIF(t,w,−ξ,−η) = PIF(−t,−w,ξ,η) = PIF(−t,−w,−ξ,−η). (14)
Linear System Model for Image Formation
Discrete Image Formation Via Linear Model with Transfer Matrix.
For given sampling intervals Δp, Δq at the sensor, we can define the discretized version of the PIF function as the function 𝒫: ℤ⁴ → ℝ with

𝒫(p,q,r,l) = PIF(pΔp, qΔq, rΔr, lΔl) (15)

which describes the intensity response at sensor location (pΔp, qΔq) to a point source located at (rΔr, lΔl).
Under the assumption of a finite number of lenslets, the response 𝒫(p,q,r₀,l₀) for a fixed object location (r₀,l₀) has finite support and can be represented by a response vector. Stacking these responses for all object locations yields the linear model
y = P·x, (16)
where x contains the intensities of samples in the object region [−M/2·Δr, M/2·Δr]×[−N/2·Δl, N/2·Δl], vectorized into a vector in ℝ^{(M+1)(N+1)}; y contains the intensities at the sensor plane locations in [−T/2·Δp, T/2·Δp]×[−W/2·Δq, W/2·Δq], vectorized into a vector in ℝ^{(T+1)(W+1)}; and P is the (T+1)(W+1)×(M+1)(N+1) matrix whose columns are the vectorized optical responses at the sensor, one column per object location.
Symmetry Properties of the PIF Matrix.
Using the symmetry properties of 𝒫 from Eq. (14), we derive the following symmetry of P:
P = J_μ P and P = P J_ν, (17)
where J_μ ∈ ℝ^{μ×μ} and J_ν ∈ ℝ^{ν×ν} are counterdiagonal matrices, with μ = (T+1)(W+1) the number of sensor pixels and ν = (X+1)(Y+1) the number of object locations (T×W are the sensor pixel dimensions; X×Y are the object location dimensions). A counterdiagonal matrix is a matrix that is 1 on the counterdiagonal and zero everywhere else. Therefore, P is a special case of a centrosymmetric matrix. The following is a short proof of the symmetry properties in Eq. (17).
Proof.
Without loss of generality, we consider the on-axis lenslet, i.e., μ = ν = 0. The mapping between object and sensor coordinates for a given object location (r,l), in its vectorized form, is denoted by the vector b[r,l] of dimension (W+1)(T+1), for p ∈ [0,T], q ∈ [0,W], r ∈ [−X/2, X/2], l ∈ [−Y/2, Y/2]. The index k of an entry b_k of the vector b[r,l] is given by k = (W+1)p + q, due to the row-wise placement of the vectorized system responses. Replacing p by (T−p) and q by (W−q) yields the index k̃ = (W+1)(T−p) + (W−q), and with the sensor symmetry characteristic from Eq. (14) we obtain
b_k̃ = b_{(W+1)(T−p)+(W−q)} = b_{WT+W+T−((W+1)p+q)} = b_{(W+1)(T+1)−1−k} (19)
As a consequence, we obtain the flip symmetry of the vector b[r,l], expressed as
b[r,l]=J·b[r,l] (20)
where J is a counterdiagonal matrix of dimension (W+1)(T+1). Consequently, we obtain the following flip/mirror symmetry across the vertical and horizontal axes of P.
P=JmP with m=(W+1)(T+1). (21)
With a similar argument we obtain the symmetry
P=PJn with n=(X+1)(Y+1). (22)
using the property from Eq. (14).
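The symmetry relations just proven are easy to check numerically. A small sketch (toy dimensions; the construction of P simply symmetrizes a random matrix so that the relations of Eqs. (21)-(22) hold by design):

```python
import numpy as np

def counterdiagonal(n):
    """Counterdiagonal matrix J: J @ v reverses the vector v."""
    return np.fliplr(np.eye(n))

m, n = 6, 4                              # toy sensor and object dimensions
J_m, J_n = counterdiagonal(m), counterdiagonal(n)
B = np.random.default_rng(1).standard_normal((m, n))
# enforce the row-flip and column-flip symmetries of Eqs. (21)-(22)
P = (B + J_m @ B) / 2                    # now P = J_m P
P = (P + P @ J_n) / 2                    # now also P = P J_n
assert np.allclose(P, J_m @ P)           # Eq. (21)
assert np.allclose(P, P @ J_n)           # Eq. (22)
assert np.allclose(P, J_m @ P @ J_n)     # hence P is centrosymmetric
```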
Now we analyze the reconstruction of an object from sensor data, where the reconstruction is performed through the pseudoinverse of the PIF matrix. Given the singular value decomposition P = UΣV*, the centrosymmetric property of P results in P = J_μ P J_ν = (J_μ U)Σ(J_ν V)* = UΣV*.
The pseudoinverse of P is given by
P⁺ = VΣ⁺U* (23)
with the property that the pseudoinverse of a centrosymmetric matrix is centrosymmetric, i.e., P⁺ = J_ν P⁺ J_μ. The reconstruction of an object vector x from sensor data y using the pseudoinverse is given by
x̂ = P⁺y (24)
The reconstruction vector x̂ has the following symmetry property, given the symmetry properties of P from Eq. (17):
J_ν x̂ = J_ν P⁺ y = J_ν P⁺ J_μ P x = P⁺ P x = P⁺ y = x̂ (25)
That means that for a given, not necessarily symmetric, object x we obtain a symmetric reconstruction x̂.
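The pseudoinverse and reconstruction symmetries of Eqs. (23)-(25) can also be verified numerically with the same kind of toy construction (dimensions and matrices are illustrative, not an actual PIF matrix):

```python
import numpy as np

def counterdiag(n):
    """Counterdiagonal matrix J: ones on the counterdiagonal."""
    return np.fliplr(np.eye(n))

rng = np.random.default_rng(2)
mu, nu = 8, 6                            # toy sensor and object dimensions
J_mu, J_nu = counterdiag(mu), counterdiag(nu)
B = rng.standard_normal((mu, nu))
P = (B + J_mu @ B) / 2                   # P = J_mu P
P = (P + P @ J_nu) / 2                   # P = P J_nu (hence centrosymmetric)

P_pinv = np.linalg.pinv(P)
# the pseudoinverse of a centrosymmetric matrix is centrosymmetric
assert np.allclose(P_pinv, J_nu @ P_pinv @ J_mu)

x = rng.standard_normal(nu)              # arbitrary, not symmetric, object
y = P @ x                                # sensor data
x_hat = P_pinv @ y                       # reconstruction, Eq. (24)
assert np.allclose(J_nu @ x_hat, x_hat)  # Eq. (25): x_hat is flip-symmetric
```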
P is a Special Centrosymmetric Matrix.
In this section we assume the matrix P to be a square m×m matrix. From Eqs. (21)-(22), we obtain the following form.
where J is an m×m counterdiagonal matrix, making P a special case of a centrosymmetric matrix. Any centrosymmetric matrix A can be written as
Setting C=JB, the matrix P satisfies this condition.
We now show that the rank of P is equal to the rank of B. We know that a centrosymmetric matrix A is orthogonally similar to a block diagonal matrix
with transformation matrix
That means P is orthogonally similar to
As a consequence, the rank of P is equal to the rank of B.
Now we show the block-diagonal structure of the matrix UᵀV. Let P be expressed by the singular value decomposition P = UΣVᵀ for orthogonal matrices U, V and diagonal matrix Σ. Since P is orthogonally similar to
the rank of P is equal to the rank of B. That means
with Σ₁ being a diagonal matrix of rank ≤ m/2. We now define the singular value decomposition of B as
B = U_B Σ_B V_Bᵀ (28)
Now we can construct the matrices U, V such that they will provide a suitable SVD for P. Let
Then the matrix
has the property CJ = C and JC = C. If we set U₁ = U_B and V₁ = V_B, then we obtain
The matrices U and V are orthogonal if UᵀU = VᵀV = UUᵀ = VVᵀ = I_m. With
we obtain orthogonality of the matrices U and V if U₁, U₂, V₁, V₂ are orthogonal matrices. Since U₁ = U_B and V₁ = V_B, we have orthogonality of U₁ and V₁. The matrices U₂ and V₂ can be chosen as any orthogonal matrices.
The matrix UᵀV then results in
(utilizing the properties JJ = I and Jᵀ = J), which shows its block-diagonal structure.
Number | Name | Date | Kind |
---|---|---|---|
8761534 | Shroff et al. | Jun 2014 | B2 |
8824779 | Smyth | Sep 2014 | B1 |
9185272 | Ebe | Nov 2015 | B2 |
9288389 | Shroff | Mar 2016 | B2 |
9307146 | Shroff | Apr 2016 | B2 |
9392153 | Myhre | Jul 2016 | B2 |
20030202573 | Yamaguchi et al. | Oct 2003 | A1 |
20040145810 | Ranalli | Jul 2004 | A1 |
20050141649 | Tanabe | Jun 2005 | A1 |
20110161059 | Jain et al. | Jun 2011 | A1 |
20120062760 | Klapp et al. | Mar 2012 | A1 |
20130113981 | Knight et al. | May 2013 | A1 |
20130127901 | Georgiev et al. | May 2013 | A1 |
20130215236 | Shroff et al. | Aug 2013 | A1 |
20130216123 | Shroff et al. | Aug 2013 | A1 |
20130216124 | Shroff et al. | Aug 2013 | A1 |
20130216125 | Shroff et al. | Aug 2013 | A1 |
20130235261 | Berkner | Sep 2013 | A1 |
20130300912 | Tosic | Nov 2013 | A1 |
20140139705 | Ebe | May 2014 | A1 |
20140241615 | Shroff et al. | Aug 2014 | A1 |
20150181091 | Myhre | Jun 2015 | A1 |
Entry |
---|
Bishop et al., “Light Field Superresolution,” 2009. |
Broxton, M. et al., “Wave Optics Theory and 3-D Deconvolution for the Light Field Microscope,” Optics Express, Oct. 21, 2013, pp. 25418-25439, vol. 21, No. 21. |
Cai, J. et al., “A Singular Value Thresholding Algorithm for Matrix Completion,” SIAM J. on Optimization, Mar. 2010, pp. 1956-1982, vol. 20, No. 4. |
Candes, E. J. et al., “Decoding by Linear Programming,” IEEE Transactions on Information Theory, Dec. 2005, pp. 4203-4215, vol. 51, No. 12. |
Davis, C. et al., “The Rotation of Eigenvectors by a Perturbation, III,” SIAM J. Numer. Anal., Mar. 1970, pp. 1-46, vol. 7. |
Delbracio, M. et al., “The Non-Parametric Sub-Pixel Local Point Spread Function Estimation Is a Well Posed Problem,” International Journal of Computer Vision, Jun. 2011, pp. 175-194, vol. 96, No. 2. |
Donoho, D. L., “Compressed Sensing,” IEEE Transactions on Information Theory, Apr. 2006, pp. 1289-1306, vol. 52, No. 4. |
Eftekhari, A. et al., “The Restricted Isometry Property for Random Block Diagonal Matrices,” Applied and Computational Harmonic Analysis, 2015, pp. 1-31, vol. 38. |
Lee, K. et al., “ADMiRA: Atomic Decomposition for Minimum Rank Approximation,” IEEE Transactions on Information Theory, Sep. 2010, pp. 4402-4416, vol. 56, No. 9. |
Needell, D. et al., “CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples,” Commun. ACM, Dec. 2010, pp. 93-100, vol. 53, No. 12. |
Recht, B. et al., “Guaranteed Minimum-Rank Solutions of Linear Matrix Equations Via Nuclear Norm Minimization,” SIAM Review, Sep. 2010, pp. 471-501, vol. 52, No. 3. |
Shroff, S. A. et al., “Image Formation Analysis and High Resolution Image Reconstruction for Plenoptic Imaging Systems,” Applied Optics, Apr. 1, 2013, pp. D22-D31, vol. 52, No. 10. |
Tosic, I. et al., “Dictionary Learning for Incoherent Sampling with Applications to Plenoptic Imaging,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013, 5 pages. |
Waters, A. E. et al., “SpaRCS: Recovering Low-Rank and Sparse Matrices from Compressive Measurements,” in Advances in Neural Information Processing Systems, 2011, pp. 1089-1097. |
Wedin, P., “Perturbation Bounds in Connection with Singular Value Decomposition,” BIT Numerical Mathematics, 1972, pp. 99-111, vol. 12, No. 1. |
Yuan, M. et al., “Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2007, pp. 329-346, vol. 69, No. 3. |
Number | Date | Country | |
---|---|---|---|
20150215604 A1 | Jul 2015 | US |