Compressive Sense Imaging

Information

  • Patent Application
  • Publication Number
    20150382026
  • Date Filed
    June 30, 2014
  • Date Published
    December 31, 2015
Abstract
Systems and methods for compressive sense imaging are provided. In one aspect, incident light reflecting from an object is received via an aperture array and a sensor and intermediate compressive measurements are generated using compressive sequence matrices that are determined based on the properties of the aperture array and the sensor. The intermediate compressive measurements are further processed to generate compressive measurements representing the compressed image of the object. An uncompressed image of the object is generated from the compressive measurements using a determined reconstruction matrix that is different from the sequence matrices used to acquire the intermediate compressive measurements.
Description
CROSS-REFERENCE

The present application references subject matter of the following U.S. applications, each of which is incorporated by reference herein in its entirety: U.S. application Ser. No. 13/658,904 filed on Oct. 24, 2012 and entitled “Resolution and Focus Enhancement”; U.S. application Ser. No. 13/658,900 filed on Oct. 24, 2012 and entitled “Lensless Compressive Image Acquisition”; U.S. application Ser. No. 13/367,413 filed on Feb. 7, 2012 and entitled “Lensless Compressive Image Acquisition”; and, U.S. application Ser. No. 12/894,855 filed on Sep. 30, 2010 and entitled “Apparatus and Method for Generating Compressive Measurements of Video Using Spatial and Temporal Integration”, which issued as U.S. Pat. No. 8,644,376 on Feb. 4, 2014.


TECHNICAL FIELD

This disclosure is directed to systems and methods for compressive sense image processing.


BACKGROUND

This section introduces aspects that may be helpful in facilitating a better understanding of the systems and methods disclosed herein. Accordingly, the statements of this section are to be read in this light and are not to be understood or interpreted as admissions about what is or is not in the prior art.


Digital image/video cameras acquire and process a significant amount of raw data. In order to store or transmit image data efficiently, the raw pixel data for each of the N pixels of an N-pixel image is first captured and then typically compressed using a suitable compression algorithm for storage and/or transmission. Although compression after capturing the raw data for each of the N pixels of the image is generally useful for reducing the size of the image (or video) captured by the camera, it requires significant computational resources and time. In addition, compression of the raw pixel data does not always meaningfully reduce the size of the captured images.


A more recent approach, known as compressive sense imaging, acquires compressed image (or video) data using random projections, without first collecting the raw data for all of the N pixels of an N-pixel image. For example, a compressive measurement basis is applied to obtain a series of compressive measurements which represent the encoded (i.e., compressed) image. Since far fewer compressive measurements are acquired than the N raw pixel values of a desired N-pixel image, this approach can significantly reduce, or even eliminate, the need for applying compression after the raw data is captured.
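By way of illustration only, the following Python sketch shows the basic compressive-acquisition idea described above: M measurements of an N-pixel scene obtained through a random binary measurement basis. The sizes, the random basis, and the variable names are assumptions of the sketch, not part of the disclosure.

import numpy as np

# Minimal sketch of compressive acquisition (illustrative only): an N-pixel scene
# is encoded into M << N measurements by a random binary projection.
rng = np.random.default_rng(0)

N = 64 * 64                                 # pixels in a hypothetical scene
M = 512                                     # compressive measurements, M << N

x = rng.random(N)                           # stand-in for the vectorized N-pixel image
Phi = rng.choice([0.0, 1.0], size=(M, N))   # random binary measurement basis, one row per measurement

y = Phi @ x                                 # the M compressive measurements encode the scene
print(y.shape)                              # (512,) -- far fewer values than the 4096 raw pixels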


BRIEF SUMMARY

Systems and methods for compressive sense imaging are provided. In some embodiments, incident light reflecting from an object and passing through an aperture array is detected by a sensor. Intermediate compressive measurements are generated based on the output of the sensor, using compressive sequence matrices that are determined based on the properties of the aperture array and the sensor. The intermediate compressive measurements are further processed to generate compressive measurements representing the compressed image of the object. An uncompressed image of the object is generated from the compressive measurements using a determined reconstruction matrix that is different from the sequence matrices used to acquire the intermediate compressive measurements.


In one aspect, a compressive sense imaging system and method includes generating a plurality of sequence matrices; determining a plurality of intermediate compressive measurements using the plurality of sequence matrices; and, generating a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.


In some aspects, the system and method includes generating an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.


In some aspects, the system and method includes determining a kernel matrix based on properties of an aperture array of aperture elements and a sensor, and, generating a sensing matrix using the kernel matrix and a reconstruction basis matrix.


In some aspects, the system and method includes decomposing the sensing matrix to generate the plurality of sequence matrices.


In some aspects, the system and method includes determining a sensitivity function for the sensor; determining at least one characteristic function for at least one of the aperture elements of the aperture array; computing a kernel function by performing a convolution operation using the sensitivity function and the at least one characteristic function; and, determining the kernel matrix using the kernel function and an image.


In some aspects, the system and method includes applying a sparsifying operator to generate the uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.


In some aspects, the system and method includes selectively enabling or disabling one or more aperture elements of an aperture array based on at least one basis in a sequence matrix to determine at least one of the plurality of intermediate compressive measurements during a time period, where the at least one of the plurality of intermediate compressive measurements is determined based on an aggregated sum of light detected by the sensor during the time period.


In some aspects, the aperture array is an array of micro-mirrors. In some aspects, the aperture array is an array of LCD elements.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an example of a compressive sense imaging system in accordance with various aspects of the disclosure.



FIG. 2 illustrates an example of a camera unit for acquiring compressive measurements of an object using a sequence matrix in accordance with one aspect of the disclosure.



FIG. 3 illustrates an example process for compressive sense imaging in accordance with various aspects of the disclosure.



FIG. 4 illustrates an example apparatus for implementing aspects of the disclosure.





DETAILED DESCRIPTION

Various aspects of the disclosure are described below with reference to the accompanying drawings, in which like numerals refer to like elements in the description of the figures. The description and drawings merely illustrate the principles of the disclosure; various structures, systems and devices are described and depicted in the drawings for purposes of explanation only, so as not to obscure the present invention with details that are well known to those skilled in the art, who will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles and are included within the spirit and scope of the disclosure.


As used herein, the term, “or” refers to a non-exclusive or, unless otherwise indicated (e.g., “or else” or “or in the alternative”). Furthermore, words used to describe a relationship between elements should be broadly construed to include a direct relationship or the presence of intervening elements unless otherwise indicated. For example, when an element is referred to as being “connected” or “coupled” to another element, the element may be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Similarly, words such as “between”, “adjacent”, and the like should be interpreted in a like fashion.


The singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes” and “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.



FIG. 1 illustrates a schematic example of a compressive imaging acquisition and reconstruction system 100 (“system 100”). Incident light 105 reflecting from an object 110 is received by the camera unit 115, which generates a plurality of intermediate compressive measurements using a determined number of compressive sequence matrices 120. The intermediate compressive measurements are further processed to generate compressive measurements 125 representing the compressed image of the object 110. The compressive measurements 125 representing the compressed image of the object 110 may be stored (or transmitted) by a storage/transmission unit 130. The reconstruction unit 135 generates an uncompressed image 140 (e.g., for display on a display unit) of the object 110 from the compressive measurements 125 using a determined reconstruction matrix 150.


Although the units are shown separately in FIG. 1, this is merely to aid understanding of the disclosure. In other aspects, the functionality of any or all of the units described above may be implemented using a fewer or greater number of units. Furthermore, the functionality attributed to the various units may be implemented by a single processing device or distributed amongst multiple processing devices. Some examples of suitable processing devices include cameras, camera systems, mobile phones, personal computer systems, tablets, set-top boxes, smart phones, or any type of computing device configured to acquire, process, or output data.


In one embodiment, a single processing device may be configured to provide the functionality of each of the units of system 100. The single processing device may include, for example, a memory storing one or more instructions, and a processor for executing the one or more instructions, which, upon execution, may configure the processor to provide functionality ascribed to the units. The single processing device may include other components typically found in computing devices, such as one or more input/output components for inputting or outputting information to/from the processing device, including a camera, a display, a keyboard, a mouse, network adapter, etc.


In another embodiment, a local processing device may be provided at a first location that is communicatively interconnected with a remote processing device at a remote location via a network. The local processing device may be configured with the functionality to generate and provide the compressive measurements 125 of the local object 110 to a remote processing device over the network. The remote processing device, in turn, may be configured to receive the compressive measurements from the local processing device, to generate the reconstructed image 140 from the compressive measurements 125 using the reconstruction basis matrix 150, and to display the reconstructed image to a remote user in accordance with the aspects described below. The local processing device and the remote processing device may each be implemented using an apparatus similar to the single processing device, and may include a memory storing one or more instructions, a processor for executing the one or more instructions, and various input/output components as in the case of the single processing device. The network may be an intranet, the Internet, or any type or combination of one or more wired or wireless networks.



FIG. 2 illustrates an example of a lensless camera unit 115 for acquiring compressive measurements 125 representing the compressed image of the object 110 using compressive sense imaging. Although a particular embodiment of the lensless camera unit 115 is described, this is not to be construed as a limitation, and the principles of the disclosure may be applied to other embodiments of compressive sense imaging systems.


Incident light 105 reflected off the object 110 is received at the camera unit 115 where the light 105 is selectively permitted to pass through an aperture array 220 of N individual aperture elements and strike a sensor 230. The camera unit 115 processes the output of the sensor 230 to produce intermediate compressive measurements using a plurality of sequence matrices that are determined based on one or more properties of the aperture array 220 and the sensor 230. The compressive measurements 125 collectively represent the compressed image of the object 110 and are determined using the intermediate compressive measurements.


To achieve compression, the number M of the compressive measurements 125 that are acquired as the compressed image of the object 110 is typically significantly less than the N raw data values that are acquired in a conventional camera system having an N-pixel sensor for generating an N-pixel image, thus reducing or eliminating the need for conventional compression of the raw data values after acquisition. In practice, the number of compressive measurements M may be pre-selected relative to the N aperture elements of the array 220 based upon a desired balance between the level of compression and the quality of the N-pixel image 140 that is reconstructed using the M compressive measurements.


The example array 220 illustrated in FIG. 2 is a two dimensional, 8×8 array of sixty-four (N=64) discrete aperture elements, which are arranged in two dimensional row and column format such that individual elements of the array 220 may be uniquely identified using a tabular notation form “[row, column]”. Thus, the first element in the first row of array 220 is exemplarily referenced as 220[1,1], and the last element in the last row of the array 220 is referenced as 220[8,8].


In practice, the size and format of the array 220 may have a significantly greater (or fewer) number of elements, depending on the desired resolution of the image 140. By way of example only, the array 220 may be a 640×480 (N=307,200) element array for a desired image resolution of 640×480 pixels for the image 140, or may be a 1920×1080 (N=2,073,600) element array for a correspondingly greater desired resolution of the image 140.


The overall transmittance of light 105 passing through the array 220 and reaching the sensor 230 at a given time may be varied by setting the transmittance of one or more of the individual aperture elements of the array. For example, the overall transmittance of array 220 may be adjusted by selectively and individually changing the transmittance of one or more of the aperture elements 220[1,1] to 220[8,8] to increase or decrease the amount of light 105 passing through the array 220 and reaching the sensor 230 at a given time.


Aperture elements that are fully opened (e.g., fully enabled or activated) allow light 105 to pass through those opened elements and reach the sensor 230, whereas aperture elements that are fully closed (e.g., fully disabled or deactivated) prevent or block light 105 from passing through the closed elements of the array 220 and reaching the photon detector 230. The aperture elements may also be partially opened (or partially closed) to allow only some, but not all, of the light 105 to reach the sensor 230 via the partially opened (or partially closed) elements. Thus, the collective state of the individual aperture elements (e.g., opened, closed, or partially opened or closed) determines the overall transmittance of the aperture array 220 and therefore the amount of light 105 reaching the sensor 230 at a given time.


In one embodiment, the aperture array 220 is a micro-mirror array of N individually selectable micro-mirrors. In another embodiment, the aperture array 220 may be an N-element LCD array. In other embodiments, the aperture array 220 may be any suitable array of electronic or optical components having selectively controllable transmittance.


The camera unit 115 is configured to generate intermediate compressive measurements by selectively adjusting the overall transmittance of the aperture array 220 in accordance with compressive bases information in a plurality of sequence matrices. Each of the intermediate compressive measurements may be understood as the determined sum (or aggregate) of the light 105 reaching the sensor 230 through the array 220 during a particular time when particular ones of the N aperture elements of the array 220 are selectively opened and closed (either fully or partially) in accordance with a pattern indicated by a particular compressive basis of a sequence matrix 120.


One feature of the present disclosure is that M intermediate compressive measurements are acquired using each of S sequence matrices that are determined as described further below. Since S≧2, at least 2M intermediate compressive measurements are determined, and these are processed into the M compressive measurements 125 representing the compressed image of the object 110 as described further below. The M compressive measurements 125 are used in conjunction with the reconstruction matrix 150 to reconstruct or generate the uncompressed image 140 of the object 110. Another feature of the present disclosure is that the sequence matrices are determined based on a kernel function, where the kernel function is determined based on the properties of the array 220 and the sensor 230. These and other aspects of the present disclosure are described in detail further below.


In general, a determined sequence matrix 120 is a set of M compressive bases b1, b2, . . . bM, each of which is applied in turn to the array 220 to produce a respective one of M intermediate compressive measurements. Each measurement basis b1, b2, . . . bM in the sequence matrix 120 is itself an array of N values corresponding to the number N of aperture elements of the array 220, as indicated mathematically below:







[ b1[1]   b1[2]   . . .   b1[N] ]
[ b2[1]   b2[2]   . . .   b2[N] ]
[ b3[1]   b3[2]   . . .   b3[N] ]
[  . . .    . . .    . . .   . . .  ]
[ bM[1]   bM[2]   . . .   bM[N] ]






For example, in the embodiment illustrated in FIG. 2, each compressive basis bk (k ∈ [1 . . . M]) of a given sequence matrix 120 is a set of values bk[1] to bk[64], where each value is normalized to the interval [0,1] as described later below. Accordingly, each value of a given compressive basis may be a “0”, a “1”, or a real value between “0” and “1”, which respectively determines the corresponding state (e.g., fully closed, fully opened, or a state in-between) of a respective aperture element in the 8×8 aperture array 220.


A given compressive basis bk is applied to the array 220 to produce a corresponding intermediate compressive measurement for a time tk as follows. The respective values bk[1] to bk[64] are used to set the state (fully opened, fully closed or partially opened or closed) of the corresponding elements of array 220, and the detected sum or aggregate of light 105 reaching the sensor 230 is determined as the value of the corresponding intermediate compressive measurement. A total number of M×S intermediate compressive measurements are produced in this manner, where M is the number of compressive bases in each sequence matrix 120 and S is the number of sequence matrices (where S≧2).
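By way of illustration only, the following Python sketch models the acquisition of a single intermediate compressive measurement under the simplifying assumption that the sensor reports the sum of the light transmitted by the aperture elements; the 8×8 array size, the random scene and basis, and the variable names are assumptions of the sketch.

import numpy as np

# Idealized model of one intermediate compressive measurement: the basis values in
# [0, 1] set the transmittance of each aperture element, and the sensor aggregates
# whatever light passes through during the exposure for that basis.
rng = np.random.default_rng(1)

N = 8 * 8
scene = rng.random(N)        # light arriving at each of the N aperture elements (hypothetical)
b_k = rng.random(N)          # one compressive basis b_k, values normalized to [0, 1]

intermediate_measurement = float(np.sum(b_k * scene))   # aggregated sum detected by the sensor
print(intermediate_measurement)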


An example operation of system 100 is now described in conjunction with the process 300 of FIG. 3. As an overview for aiding the reader, steps 302-308 describe the determination of the sequence matrices 120. Step 310 describes determination of the compressive measurements 125 representing the compressed image of the object 110 from the intermediate compressive measurements acquired using the sequence matrices 120. Step 312 describes generating the uncompressed image of the object 110 from the compressive measurements 125 using the reconstruction matrix 150.


It is to be understood that the steps described below are merely illustrative and that existing steps may be modified or omitted, additional steps may be added, and the order of certain steps may be altered.


Turning now to the process 300 of FIG. 3, the determination of the sequence matrices 120 begins in step 302 with the computation of an N×N kernel matrix K that is determined based on the geometry and properties of the array 220 and the sensor 230. The kernel matrix may be determined as follows.


In one embodiment, the kernel matrix is computed based on a sensitivity function for the sensor 230 and a characteristic function of the array 220. First, a sensitivity function F(x,y) of the sensor 230 is determined, where F(x,y) is the response of the sensor 230 when light strikes a point x,y on the sensor in Cartesian coordinates. Preferably, but not necessarily, the sensor 230 is selected such that it has a large sensing area and a uniform (or close to uniform) sensitivity function F(x,y), such that the sensor response (or, in other words, the sensor sensitivity) does not vary (or does not vary very much) based on where the light strikes the sensor.


Next, a characteristic function is defined for each of the aperture elements of the array, such that the characteristic function E(x,y) of a given aperture element E is E(x,y)=1 if a point x,y in Cartesian coordinates falls within the area of the aperture element, and E(x,y)=0 if the point x,y lies outside the area of the aperture element.


Next, a kernel function k(x,y) is defined using the sensitivity function of the sensor 230 and the characteristic function of the aperture elements of the array 220 as k(x,y)=E*F, where the * operator indicates a two-dimensional (2D) convolution. A discrete kernel function k(row, column) is determined as:






k(row, column) = ∫∫E[row,column] k(x, y) dx dy,


where E[row,column] identifies a particular aperture element E of the array 220 using the [row, column] notation, and the double integral is taken over the area of that aperture element.


It is noted here that alternatively, in another embodiment, the discrete kernel function may also be obtained by calibrating the camera unit 115 using a point lighting source (e.g., a laser source or another lighting source that is in effect a point lighting source with respect to the camera unit 115).


Finally, the N×N kernel matrix K is computed from the discrete kernel function as:






K·I1D = ( k(row, column) * I2D )1D,


where the subscript 1D indicates the one-dimensional (1D) vector form of a 2D array, I is any N-pixel image, I2D is its 2D form, and I1D is its 1D vector form.
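As a rough, non-limiting illustration of step 302, the following Python sketch builds a kernel matrix K for an assumed 8×8 aperture array with square elements sampled on a coarse sub-grid and a uniform sensor sensitivity; the grid resolution, the element model, and the helper names are assumptions of the sketch rather than part of the disclosure.

import numpy as np
from scipy.signal import convolve2d

# Sketch of step 302 under simplifying assumptions: an 8x8 aperture array with
# square elements sampled on a small sub-grid, and a uniform sensor sensitivity.
rows = cols = 8
N = rows * cols
sub = 4                                        # sub-samples per aperture element (assumption)

F = np.ones((sub, sub)) / sub**2               # sensitivity function F(x, y): uniform sensor
E = np.ones((sub, sub))                        # characteristic function E(x, y) of one element

k_xy = convolve2d(E, F, mode="full")           # kernel function k(x, y) = E * F (2D convolution)

# Discrete kernel k(row, column): integrate k(x, y) over element-sized cells.
r, c = k_xy.shape
r_cells, c_cells = -(-r // sub), -(-c // sub)  # ceiling division
k_pad = np.zeros((r_cells * sub, c_cells * sub))
k_pad[:r, :c] = k_xy
k_disc = k_pad.reshape(r_cells, sub, c_cells, sub).sum(axis=(1, 3))

# Kernel matrix K, defined so that K @ I_1D equals (k_disc convolved with I_2D) flattened.
K = np.zeros((N, N))
for j in range(N):
    unit = np.zeros((rows, cols))
    unit.flat[j] = 1.0
    K[:, j] = convolve2d(unit, k_disc, mode="same").ravel()

# Sanity check of the defining relation on a random test image.
I_2d = np.random.default_rng(2).random((rows, cols))
assert np.allclose(K @ I_2d.ravel(), convolve2d(I_2d, k_disc, mode="same").ravel())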


In step 304, the determination of the sequence matrices 120 continues by specifying the reconstruction matrix 150. The reconstruction matrix may be any M×N matrix that has a property suitable for use in compressive sense imaging, such as, for example, the Restricted Isometry Property. In one embodiment, accordingly, the reconstruction matrix 150 is an M×N matrix whose rows are selected from a randomly or pseudo-randomly permuted N×N Hadamard matrix, which has the known properties that its entries are either +1 or −1 and its rows are mutually orthogonal.
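A minimal Python sketch of this construction is shown below, assuming N is a power of two so that the Hadamard constructor in scipy applies; the sizes N=64 and M=32 and the random seed are illustrative only.

import numpy as np
from scipy.linalg import hadamard

# Sketch of step 304: an M x N reconstruction matrix R whose rows are taken from a
# randomly permuted N x N Hadamard matrix (entries +1/-1, rows mutually orthogonal).
rng = np.random.default_rng(3)
N, M = 64, 32                               # illustrative sizes; hadamard() needs N to be a power of 2

H = hadamard(N)                             # N x N Hadamard matrix
perm = rng.permutation(N)                   # random column permutation
row_pick = rng.choice(N, size=M, replace=False)

R = H[:, perm][row_pick, :]                 # M selected rows of the permuted Hadamard matrix
assert np.allclose(R @ R.T, N * np.eye(M))  # the selected rows remain mutually orthogonal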


In step 306, the determination of the sequence matrices 120 continues by computing an M×N sensing matrix A, where the sensing matrix is computed as:






A = [αij] = R·K⁻¹,


where R is the M×N reconstruction matrix computed in step 304, K⁻¹ is the N×N inverse of the N×N kernel matrix K that was determined in step 302 based on the properties of the sensor 230 and the array 220, and [αij] are the values of the sensing matrix A for i=1, . . . , M and j=1, . . . , N.
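The product itself is straightforward; the short Python sketch below uses random stand-ins with the appropriate shapes for R and K (in place of the matrices from steps 302 and 304) simply to show the computation and the typical range of the resulting values.

import numpy as np

# Sketch of step 306: A = R K^-1. R and K here are stand-ins with the right shapes;
# K is made strictly diagonally dominant so that it is guaranteed to be invertible.
rng = np.random.default_rng(4)
M, N = 32, 64

R = np.where(rng.random((M, N)) < 0.5, 1.0, -1.0)   # stand-in +1/-1 reconstruction matrix
K = np.eye(N) + rng.random((N, N)) / N              # stand-in invertible kernel matrix

A = R @ np.linalg.inv(K)                            # sensing matrix A = [a_ij] = R K^-1
print(A.shape, float(A.min()), float(A.max()))      # entries generally fall outside [0, 1]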


It is pointed out that while the sensing matrix A is an M×N matrix that is determined based on the properties of the array 220 and the sensor 230, it is not suitable for use as a sequence matrix 120 directly. This is because, as will be apparent at least from the negative values of the reconstruction matrix R, one or more of the values [αij] of the sensing matrix A do not satisfy 0≦αij≦1. In fact, the sensing matrix A may include large negative and positive values, which are impractical (or perhaps not possible) to use as a pattern for setting the condition of the aperture elements of the array 220.


As a result, in step 308, the sensing matrix A is further decomposed into the sequence matrices 120, whose values all lie within the interval [0,1], as follows. It is also noted that while the description below is provided for sequence matrices having values within the interval [0,1], the disclosure is equally applicable to decomposing the sensing matrix A into sequence matrices having values within other ranges.


Given the sensing matrix A, define:


A+ = [aij+], where aij+ = aij for aij > 0, and aij+ = 0 otherwise,


and


A− = [aij−], where aij− = −aij for aij < 0, and aij− = 0 otherwise,


for i=1, . . . , M and j=1, . . . , N.


Next, A+ is decomposed into P+ number of M×N sequence matrices Ak+ = [aij(k)+], where i=1, . . . , M, j=1, . . . , N, and k=1, . . . , P+, using the following pseudo-code algorithm:


 for i = 1, . . . , M, j = 1, . . . , N
   let p = 0, aij(0)+ = 0
   while aij+ − Σk=1..p aij(k)+ > 1
     aij(p+1)+ = clip( aij+ − Σk=1..p aij(k)+ , 1 )
     p ← p + 1
   end
   aij(p+1)+ = aij+ − Σk=1..p aij(k)+
   P+(i, j) = p + 1
 end

 define P+ = max i,j P+(i, j)

 for k = 1, . . . , P+
   Ak+ = [aij(k)+], where aij(k)+ = 0 if k > P+(i, j)


where clip(x, μ) = x if 0 ≦ x ≦ μ, and clip(x, μ) = μ otherwise.


Next, the matrix A− may be similarly decomposed into P− number of M×N sequence matrices Ak− = [aij(k)−], where i=1, . . . , M, j=1, . . . , N, and k=1, . . . , P−, based on the algorithm above.


It is noted that all of the values of each of the resulting P+ number of M×N sequence matrices Ak+ = [aij(k)+] satisfy 0 ≦ aij(k)+ ≦ 1, and, similarly, all of the values of each of the resulting P− number of M×N sequence matrices Ak− = [aij(k)−] also satisfy 0 ≦ aij(k)− ≦ 1.
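The following Python sketch is one possible vectorized rendering of the positive/negative split and the clip-and-subtract decomposition above; the helper name, the stand-in sensing matrix, and the random seed are assumptions of the sketch. It also checks the identity between A and the sequence matrices that is stated in the equation below.

import numpy as np

def decompose_to_sequence_matrices(B):
    """Peel a non-negative matrix B into matrices with entries in [0, 1] whose sum is B."""
    parts, remainder = [], B.astype(float).copy()
    while np.any(remainder > 0):
        part = np.clip(remainder, 0.0, 1.0)     # clip(x, 1): take at most 1 from each entry
        parts.append(part)
        remainder = remainder - part
    return parts or [np.zeros_like(B, dtype=float)]

rng = np.random.default_rng(5)
A = rng.normal(scale=3.0, size=(4, 6))          # stand-in sensing matrix with large +/- values

A_plus = np.where(A > 0, A, 0.0)                # A+ keeps the positive entries of A
A_minus = np.where(A < 0, -A, 0.0)              # A- keeps the magnitudes of the negative entries

seq_plus = decompose_to_sequence_matrices(A_plus)     # P+ sequence matrices Ak+
seq_minus = decompose_to_sequence_matrices(A_minus)   # P- sequence matrices Ak-

# Every sequence matrix is a usable aperture pattern (values in [0, 1]), and the
# decomposition reproduces the sensing matrix: A = sum(Ak+) - sum(Ak-).
assert all(np.all((S >= 0) & (S <= 1)) for S in seq_plus + seq_minus)
assert np.allclose(A, sum(seq_plus) - sum(seq_minus))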


The decomposition of the sensing matrix into the sequence matrices described above leads to the equation:






A = Σk=1..P+ Ak+ − Σk=1..P− Ak−







In step 310, each of the determined sequence matrices Ak+ and Ak− is applied to the array 220 to acquire the intermediate compressive measurements as described previously. For example, in one embodiment, each M×N sequence matrix Ak+ (k=1, . . . , P+) is applied to the array 220 to generate a measurement vector yk+ of the corresponding set of M intermediate compressive measurements. Similarly, each M×N sequence matrix Ak− (k=1, . . . , P−) is also applied to the array 220 to generate a measurement vector yk− of the corresponding set of M intermediate compressive measurements.
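The following Python sketch simulates this acquisition, and previews the combination performed in step 312 below, under an assumed idealized linear model in which applying an aperture pattern a (one row of a sequence matrix) yields the measurement a·(K·I) for a vectorized scene I; the stand-in matrices, the peel helper, and the random seed are illustrative assumptions.

import numpy as np

# Idealized simulation of steps 310-312 with stand-in matrices (assumption: the
# measurement for aperture pattern a is a . (K @ I), where I is the vectorized scene).
rng = np.random.default_rng(6)
M, N = 32, 64

R = np.where(rng.random((M, N)) < 0.5, 1.0, -1.0)   # stand-in +1/-1 reconstruction matrix
K = np.eye(N) + rng.random((N, N)) / N              # stand-in invertible kernel matrix
I = rng.random(N)                                   # stand-in vectorized scene

A = R @ np.linalg.inv(K)                            # sensing matrix of step 306
A_plus, A_minus = np.where(A > 0, A, 0.0), np.where(A < 0, -A, 0.0)

def peel(B):
    parts, rem = [], B.copy()
    while np.any(rem > 0):
        part = np.clip(rem, 0.0, 1.0)
        parts.append(part)
        rem = rem - part
    return parts

y_plus = [Ak @ (K @ I) for Ak in peel(A_plus)]      # measurement vector yk+ per sequence matrix Ak+
y_minus = [Ak @ (K @ I) for Ak in peel(A_minus)]    # measurement vector yk- per sequence matrix Ak-
y = sum(y_plus) - sum(y_minus)                      # combined compressive measurements 125

# Because A = R K^-1, the combined measurements reduce to R @ I, which is exactly the
# data-consistency constraint used by the reconstruction in step 312.
assert np.allclose(y, R @ I)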


In step 312, the process includes determining the compressive measurements 125 representing the compressed image of the object 110 from the intermediate compressive measurements determined in step 310, and reconstructing the uncompressed image of the object 110 from the compressive measurements 125.


In particular, the M compressive measurements 125 are determined using the intermediate compressive measurement vectors yk+ and yk− as:






y = Σk=1..P+ yk+ − Σk=1..P− yk−







The uncompressed image I of the object 110 may be determined using the compressive measurements 125 and the reconstruction matrix 150 as:


min ∥W·I∥1, subject to: R·I = y = Σk=1..P+ yk+ − Σk=1..P− yk−,


where W is a sparsifying operator, I is the one-dimensional vector representation of the N-valued image 140, R is the reconstruction basis matrix determined in step 304, and y = [Y1, Y2, Y3, . . . , YM] is the column vector of the compressive measurements 125 obtained from the intermediate compressive measurements acquired using the sequence matrices. The sparsifying operator W may be generated, for example, by using wavelets or by using total variation.
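As a simplified, non-limiting stand-in for this reconstruction step, the Python sketch below solves an l1-regularized least-squares surrogate of the constrained problem by ISTA, with the sparsifying operator W taken to be the identity for brevity (the disclosure instead contemplates wavelets or total variation); the sparse test scene, the regularization weight, and the iteration count are assumptions of the sketch.

import numpy as np

# Simplified stand-in for step 312: l1-regularized least squares solved by ISTA.
rng = np.random.default_rng(7)
M, N = 32, 64

R = np.where(rng.random((M, N)) < 0.5, 1.0, -1.0)   # stand-in +1/-1 reconstruction matrix
I_true = np.zeros(N)
I_true[rng.choice(N, size=6, replace=False)] = rng.random(6)   # sparse test scene
y = R @ I_true                                      # ideal, noiseless compressive measurements

lam = 0.05                                          # l1 regularization weight (assumption)
step = 1.0 / np.linalg.norm(R, 2) ** 2              # step size from the spectral norm of R
I_hat = np.zeros(N)
for _ in range(5000):
    grad = R.T @ (R @ I_hat - y)                    # gradient of 0.5 * ||R I - y||^2
    z = I_hat - step * grad
    I_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding

print(float(np.max(np.abs(I_hat - I_true))))        # small residual for a sufficiently sparse scene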


Steps 304 to 312 of the process described above may be repeated or performed once per image or video frame. Step 302 need not be repeated unless a different kernel matrix K is desired, for example, if there is a change in the array 220 or the sensor 230.


The present disclosure is believed to provide a number of advantages. To begin with, it describes an improved lensless camera unit suitable for compressive sense imaging that produces better images in low light, with a higher signal-to-noise ratio, due to the larger number of measurements (at least 2×M) acquired using the array 220 to produce the M compressive measurements. In addition, the measurements are acquired in a manner that takes the particular properties of the aperture array and the sensor into account. Furthermore, the present disclosure is suited for imaging in all spectra of light, including the visible and the invisible spectrum. Finally, the present disclosure also provides for capturing images that are sharper (e.g., having a greater amount of detail) for a given sensor geometry and size, particularly for sensors and aperture arrays that are relatively large, which are otherwise known to produce soft (relatively blurrier) images.


It will be appreciated that one or more aspects of the disclosure may be implemented using hardware, software, or a combination thereof. FIG. 4 depicts a high-level block diagram of an example processing device or apparatus 400 suitable for implementing one or more aspects of the disclosure. Apparatus 400 comprises a processor 402 that is communicatively interconnected with various input/output devices 404 and a memory 406.


The processor 402 may be any type of processor such as a general purpose central processing unit (“CPU”) or a dedicated microprocessor such as an embedded microcontroller or a digital signal processor (“DSP”). The input/output devices 404 may be any peripheral device operating under the control of the processor 402 and configured to input data into or output data from the apparatus 400 in accordance with the disclosure, such as, for example, a lens or lensless camera or video capture device, which may include an aperture array and a sensor. The input/output devices 404 may also include conventional network adapters, data ports, and various user interface devices such as a keyboard, a keypad, a mouse, or a display.


Memory 406 may be any type of memory suitable for storing electronic information, including data and instructions executable by the processor 402. Memory 406 may be implemented, for example, as one or more combinations of a random access memory (RAM), read only memory (ROM), flash memory, hard disk drive memory, compact-disk memory, optical memory, etc. In addition, apparatus 400 may also include an operating system, queue managers, device drivers, or one or more network protocols, which may be stored, in one embodiment, in memory 406 and executed by the processor 402.


The memory 406 may include non-transitory memory storing executable instructions and data, which instructions, upon execution by the processor 402, may configure apparatus 400 to perform the functionality in accordance with the various aspects and steps described above. In some embodiments, the processor 402 may be configured, upon execution of the instructions, to communicate with, control, or implement all or a part of the functionality with respect to the acquisition or the reconstruction of the compressive measurements as described above. The processor may be configured to determine the sequence matrices, the intermediate compressive measurements, the compressive measurements, and to generate the uncompressed images or video using a determined reconstruction matrix as described above.


In some embodiments, the processor 402 may also be configured to communicate with and/or control another apparatus 400 to which it is interconnected via, for example a network. In such cases, the functionality disclosed herein may be integrated into each standalone apparatus 400 or may be distributed between one or more apparatus 400. In some embodiments, the processor 402 may also be configured as a plurality of interconnected processors that are situated in different locations and communicatively interconnected with each other (e.g., in a cloud computing environment).


While a particular apparatus configuration is shown in FIG. 4, it will be appreciated that the present disclosure is not limited to any particular implementation. For example, in some embodiments, all or a part of the functionality disclosed herein may be implemented using one or more application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or the like.


Although aspects herein have been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure. It is therefore to be understood that numerous modifications can be made to the illustrative embodiments and that other arrangements can be devised without departing from the spirit and scope of the disclosure.

Claims
  • 1. A compressive sense imaging system, the system comprising: a processing device configured to: generate a plurality of sequence matrices; determine a plurality of intermediate compressive measurements using the plurality of sequence matrices; and, generate a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.
  • 2. The compressive sense imaging system of claim 1, wherein the processing device is further configured to: generate an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
  • 3. The compressive sense imaging system of claim 1, wherein the processing device is further configured to: determine a kernel matrix based on properties of an aperture array of aperture elements and a sensor, and, generate a sensing matrix using the kernel matrix and a reconstruction basis matrix.
  • 4. The compressive sense imaging system of claim 3, wherein the processing device is configured to: decompose the sensing matrix to generate the plurality of sequence matrices.
  • 5. The compressive sense imaging system of claim 3, wherein the processing device is configured to: determine a sensitivity function for the sensor; determine at least one characteristic function for at least one of the aperture elements of the aperture array; compute a kernel function by performing a convolution operation using the sensitivity function and the at least one characteristic function; and, determine the kernel matrix using the kernel function and an image.
  • 6. The compressive sense imaging system of claim 2, wherein the processing device is further configured to: apply a sparsifying operator to generate the uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
  • 7. The compressive sense imaging system of claim 1, further comprising: a lensless camera unit including an aperture array of aperture elements and a sensor for detecting light passing through the aperture elements of the aperture array.
  • 8. The compressive sense imaging system of claim 7, wherein the processing device is further configured to: selectively enable or disable one or more of the aperture elements of the aperture array based on at least one basis in a sequence matrix to acquire at least one of the plurality of intermediate compressive measurements during a time period, the at least one of the plurality of intermediate compressive measurements being determined based on an aggregated sum of light detected by the sensor during the time period.
  • 9. The compressive sense imaging system of claim 7, wherein the aperture array is a micro-mirror array.
  • 10. The compressive sense imaging system of claim 7, wherein the aperture array is a LCD array.
  • 11. A method for compressive sense imaging, the method comprising: generating, using a processor, a plurality of sequence matrices; determining a plurality of intermediate compressive measurements using the plurality of sequence matrices; and, generating a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.
  • 12. The method of claim 11, further comprising: generating an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
  • 13. The method of claim 11, further comprising: determining a kernel matrix based on properties of an aperture array of aperture elements and a sensor, and, generating a sensing matrix using the kernel matrix and a reconstruction basis matrix.
  • 14. The method of claim 13, further comprising: decomposing the sensing matrix to generate the plurality of sequence matrices.
  • 15. The method of claim 13, further comprising: determining a sensitivity function for the sensor; determining at least one characteristic function for at least one of the aperture elements of the aperture array; computing a kernel function by performing a convolution operation using the sensitivity function and the at least one characteristic function; and, determining the kernel matrix using the kernel function and an image.
  • 16. The method of claim 12, further comprising: applying a sparsifying operator to generate the uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
  • 17. The method of claim 11, further comprising: selectively enabling or disabling one or more aperture elements of an aperture array based on at least one basis in a sequence matrix to determine at least one of the plurality of intermediate compressive measurements during a time period, the at least one of the plurality of intermediate compressive measurements being determined based on an aggregated sum of light detected by a sensor during the time period.
  • 18. A non-transitory computer-readable medium including one or more instructions for configuring a processor for: generating a plurality of sequence matrices; determining a plurality of intermediate compressive measurements using the plurality of sequence matrices; and, generating a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.
  • 19. The non-transitory computer-readable medium of claim 18, including one or more instructions for further configuring the processor for: generating an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
  • 20. The non-transitory computer-readable medium of claim 18, including one or more instructions for further configuring the processor for: determining a kernel matrix based on properties of an aperture array of aperture elements and a sensor; generating a sensing matrix using the kernel matrix and a reconstruction basis matrix; and, decomposing the sensing matrix to generate the plurality of sequence matrices.