Method And Apparatus For Compressive Sensing With Reduced Compression Complexity

Information

  • Patent Application
  • Publication Number: 20120203810
  • Date Filed: February 04, 2011
  • Date Published: August 09, 2012
Abstract
Various methods and devices are provided to address the need for reduced compression complexity in the area of compressive sensing. In one method, a vector x is compressed to obtain a vector y according to y=ΦRDx, where ΦRD=UΦRM, ΦRM is a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code, and U is a unitary matrix from the real or complex Clifford group G. In another method, vector y is decompressed to obtain vector x also according to y=ΦRDx. In some embodiments, decompression may involve computing y′=U−1y and then determining the vector x using the computed y′.
Description
FIELD OF THE INVENTION

The present invention relates generally to compressive sensing techniques and, in particular, to reducing compression complexity.


BACKGROUND OF THE INVENTION

This section introduces aspects that may help facilitate a better understanding of the inventions. Accordingly, the statements of this section are to be read in this light and are not to be understood as admissions about what is prior art or what is not prior art.


A compressive sensing scheme allows compression of a sparse vector x of real or complex numbers (that is, a vector whose entries are primarily zeros, only a few being non-zero) into a short vector y. The vector x can then be reconstructed from y with high accuracy. Such compressive sensing schemes have numerous applications.


Typically the number of entries of y (say M) is much smaller than the number of entries of x (say N). The number N/M is the compression ratio. Thus, instead of keeping in memory (or instead of transmitting, working with, etc.) N real (complex) numbers we have to keep only M real (complex) numbers.


Below is a list of references that are referred to throughout the present specification:

  • [1] A. R. Calderbank, S. Howard, S. Jafarpour, “Sparse reconstruction via the Reed-Muller Sieve,” IEEE International Symposium on Information Theory, pp. 1973-1977, 2010.
  • [2] A. R. Calderbank, S. Howard, S. Jafarpour, “Construction of a Large Class of Deterministic Sensing Matrices That Satisfy a Statistical Isometry Property,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 358-374, 2010.
  • [3] A. R. Calderbank, E. Rains, P. W. Shor, N. J. A. Sloane, “Quantum Error Correction Via Codes Over GF(4),” IEEE Trans. on Information Theory, vol. 44, pp. 1369-1387, 1998.


The compressive sensing scheme proposed in [1,2] has good performance. In particular, it has a good compression ratio N/M, it affords a low-complexity decompression algorithm (i.e., reconstruction of x from y), and it has a good accuracy of decompression. However, it does have a high compression complexity, that is, the complexity of computing y from x.


Thus, new techniques that are able to reduce compression complexity would meet a need and advance compression technology in general.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a logic flow diagram of compression-related functionality in accordance with various embodiments of the present invention.



FIG. 2 is a logic flow diagram of decompression-related functionality in accordance with various embodiments of the present invention.



FIG. 3 is a block diagram depiction of an apparatus in accordance with various embodiments of the present invention.





Specific embodiments of the present invention are disclosed below with reference to FIGS. 1-3. Both the description and the illustrations have been drafted with the intent to enhance understanding. For example, the dimensions of some of the figure elements may be exaggerated relative to other elements, and well-known elements that are beneficial or even necessary to a commercially successful implementation may not be depicted so that a less obstructed and a more clear presentation of embodiments may be achieved. In addition, although the logic flow diagrams above are described and shown with reference to specific steps performed in a specific order, some of these steps may be omitted or some of these steps may be combined, sub-divided, or reordered without departing from the scope of the claims. Thus, unless specifically indicated, the order and grouping of steps is not a limitation of other embodiments that may lie within the scope of the claims.


Simplicity and clarity in both illustration and description are sought to effectively enable a person of skill in the art to make, use, and best practice the present invention in view of what is already known in the art. One of skill in the art will appreciate that various modifications and changes may be made to the specific embodiments described below without departing from the spirit and scope of the present invention. Thus, the specification and drawings are to be regarded as illustrative and exemplary rather than restrictive or all-encompassing, and all such modifications to the specific embodiments described below are intended to be included within the scope of the present invention.


SUMMARY OF THE INVENTION

Various methods and devices are provided to address the need for reduced compression complexity in the area of compressive sensing. In one method, a vector x is compressed to obtain a vector y according to y=ΦRDx, where ΦRD=UΦRM, ΦRM is a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code, and U is a unitary matrix from the real or complex Clifford group G. In another method, vector y is decompressed to obtain vector x also according to y=ΦRDx. In some embodiments, decompression may involve computing y′=U−1y and then determining the vector x using the computed y′. An article of manufacture is also provided, the article comprising a processor-readable storage medium storing one or more software programs which when executed by one or more processors perform the steps of any of these methods.


A first and a second apparatus are also provided. Both apparatuses include interface circuitry and a processing device, coupled to the interface circuitry. In the first apparatus, the processing device is adapted to compress a vector x to obtain a vector y according to y=ΦRDx, wherein ΦRD=UΦRM, ΦRM being a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code and U being a unitary matrix from the real or complex Clifford group G. In the second apparatus, the processing device is adapted to decompress a vector y to obtain a vector x according to y=ΦRDx, wherein ΦRD=UΦRM, ΦRM being a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code and U being a unitary matrix from the real or complex Clifford group G.


DETAILED DESCRIPTION OF EMBODIMENTS

To provide a greater degree of detail in making and using various aspects of the present invention, a description of our approach to reducing compression complexity and a description of certain, quite specific, embodiments follows for the sake of example.


The approach described herein is able to reduce the complexity of the compressive sensing scheme proposed by Calderbank et al. [1,2], while not sacrificing performance. In particular, our approach is able to achieve a significantly smaller (i.e., approximately 30% less) compression complexity (the complexity of computing y from x) while exhibiting the same performance.


A compressive sensing scheme is organized as follows. We would like to compress a vector x=(x1, . . . , xN) that has only a few nonzero components. In other words, it is a priori known that only a few entries xj of x are not zeros. To do this, we compute the vector





y=Φx,  (1)


where Φ is an M×N compressive sensing matrix. Typically, N is much larger than M. If the matrix Φ satisfies certain properties, then the vector x can be reconstructed from the vector y with high accuracy.


It is known that a randomly chosen matrix Φ provides a good compression ratio N/M and good accuracy of decompression (reconstruction of x from y). At the same time, if one uses a random Φ, the decompression complexity is high.
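For illustration only, the following short sketch shows compression according to equation (1) with a randomly drawn sensing matrix; the sizes and the Gaussian choice of Φ are our own illustrative assumptions, not part of the schemes discussed here.

```python
# Illustrative sketch only: compress a sparse vector x into y = Phi x, as in (1),
# using a randomly drawn M x N sensing matrix Phi. Sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
N, M, sparsity = 1024, 128, 5                     # N >> M; compression ratio N/M = 8

x = np.zeros(N)
support = rng.choice(N, size=sparsity, replace=False)
x[support] = rng.standard_normal(sparsity)        # sparse vector: only a few nonzeros

Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # dense random sensing matrix
y = Phi @ x                                       # compressed vector of length M
print(x.size, y.size, N / M)                      # 1024 128 8.0
```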


Recently, Calderbank et al. [1,2] suggested using the well-known second-order Reed-Muller error-correcting codes RM(2,m) for construction of the matrix Φ. The code RM(2,m) consists of

2^k, k = 1 + m + m(m−1)/2,  (2)

2^m-tuples with entries 1 or −1. For instance, RM(2,3) consists of

2^k = 128, k = 1 + 3 + 3(3−1)/2 = 7,

2^3 = 8-tuples such that each tuple has an even number of −1s. The following 8-tuples:

(1 1 1 1 1 1 1 1), (1 1 1 −1 1 1 1 −1), (1 1 1 1 −1 −1 −1 −1)

are typical instances of 8-tuples from RM(2,3).
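To illustrate the counts above, the following sketch enumerates RM(2,m) in its ±1 form by evaluating all Boolean polynomials of degree at most 2 in m variables. The helper name rm2_codewords is ours; the sketch follows the ±1 description given here rather than any particular construction in [1,2].

```python
# A sketch that enumerates the second-order Reed-Muller code RM(2, m) in +/-1 form
# by evaluating all Boolean polynomials of degree <= 2 over the 2^m binary points.
# For m = 3 it yields 2^7 = 128 tuples of length 8, each with an even number of -1s.
import itertools
import numpy as np

def rm2_codewords(m):
    """Yield the 2^k codewords of RM(2, m), k = 1 + m + m(m-1)/2, as +/-1 vectors."""
    points = np.array(list(itertools.product([0, 1], repeat=m)), dtype=int)
    # Monomials of degree <= 2: the constant 1, each v_i, and each v_i*v_j (i < j).
    monomials = [np.ones(len(points), dtype=int)]
    monomials += [points[:, i] for i in range(m)]
    monomials += [points[:, i] * points[:, j] for i in range(m) for j in range(i + 1, m)]
    G = np.array(monomials)                        # k x 2^m generator matrix over F_2
    for coeffs in itertools.product([0, 1], repeat=G.shape[0]):
        bits = np.array(coeffs) @ G % 2            # binary codeword of length 2^m
        yield 1 - 2 * bits                         # map bit 0 -> +1, bit 1 -> -1

if __name__ == "__main__":
    words = list(rm2_codewords(3))
    print(len(words))                                          # 128
    print(all(int((w == -1).sum()) % 2 == 0 for w in words))   # True: even number of -1s
```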


In [1,2], Calderbank et al. also suggest using subcodes of RM(2,m) for construction of the matrix Φ. In particular, they suggest using the Kerdock code and the Delsarte-Goethals codes, which are subcodes of RM(2,m) codes, for construction of the matrix Φ. We will consider only RM(2,m) codes; however, subcodes of RM(2,m) can be treated in a similar way.


In [1,2] it is suggested to use 2^m-tuples from RM(2,m) as columns of the compressive sensing matrix ΦRM. Thus, we have M=2^m. If all 2^k 2^m-tuples from RM(2,m) are used as columns of ΦRM, we have N=2^k and the compression ratio is N/M=2^k/2^m, where k is defined in (2).


In [1,2] it is also suggested to use only some 2^m-tuples from RM(2,m) to form columns of a compressive sensing matrix Φ. In this case, we will have N<2^k. Hence, in this case the compression ratio becomes smaller, but it is shown in [1,2] that the quality of decompression becomes better. Below, we consider only the case when all 2^k 2^m-tuples from RM(2,m) are used to form columns of the compressive sensing matrix ΦRM. At the same time, we would like to point out that the proposed approach can be applied in exactly the same way if only some of the 2^k 2^m-tuples are used to form a compressive sensing matrix, as in the sketch below.
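Continuing the sketch above (again with our own helper names), the matrix ΦRM described here can be formed by taking the 2^k codewords as columns:

```python
# A sketch that forms Phi_RM with the 2^k +/-1 codewords of RM(2, m) as its columns,
# reusing rm2_codewords() from the previous sketch. M = 2^m rows, N = 2^k columns.
import numpy as np

def build_phi_rm(m):
    columns = list(rm2_codewords(m))
    return np.array(columns, dtype=float).T        # M x N compressive sensing matrix

if __name__ == "__main__":
    phi_rm = build_phi_rm(3)
    print(phi_rm.shape)                            # (8, 128): M = 2^3, N = 2^7
    print(np.count_nonzero(phi_rm) == phi_rm.size) # True: every entry is +1 or -1
```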


In [1,2] it is shown that the matrix ΦRM gives a good compression ratio N/M=2^k/2^m, affords a low-complexity decompression algorithm (reconstruction of x from y), and has a good accuracy of reconstruction of x. The disadvantage of the matrix ΦRM is that all its entries are nonzero. Hence, the compression of x into y requires about M·N summations of real numbers. Typically, the numbers M and N can be very large (thousands or even tens of thousands). Therefore, the complexity M·N becomes prohibitively large for many practical applications.


We therefore address the following problem: how to construct a reduced-density compressive sensing matrix ΦRD (that is, a matrix with many zero entries) that has the same advantages as the matrix ΦRM, namely the same compression ratio, the same simple decompression algorithms, and the same accuracy of decompression.


Obviously, such a matrix ΦRD would have the same advantages as ΦRM and, in addition, a smaller complexity of compression (computing y from x). In particular, if ΦRD has t zero entries, the complexity of computing y is approximately M·N−t instead of M·N. Thus, we get a pure gain: the same performance with smaller complexity.


We suggest taking a unitary matrix U from the real (or complex) Clifford group G defined, for instance, in [3] (Section II) and the references therein. In [3] the real Clifford group is denoted by LR and the complex Clifford group is denoted by L. Although the group was not invented by the authors of [3], it is described there with references to other papers where this group has also been considered. Below we consider only the real Clifford group; the case of the complex Clifford group can be treated similarly. According to the description in [3], the real Clifford group is generated by unitary matrices from the sets S1 and S2 defined below.


1. S1={matrices of the form

I2⊗ . . . ⊗I2⊗H2⊗I2⊗ . . . ⊗I2 (m factors)},

where

I2 = (1 0; 0 1)

is the 2×2 identity matrix,

H2 = (1/√2)(1 1; 1 −1),

and ⊗ denotes the tensor product of matrices.


2. Let A be an m×m binary symmetric matrix and let j=(j0, . . . , jm−1) be the binary representation of an integer j, 0≦j≦2^m−1. For instance,

A = (0 1 1; 1 0 1; 1 1 1),

and if j=6 then j=(0,1,1). Denote by jT the transpose of the vector j.

S2={diagonal matrices with diagonal (d0, d1, . . . , d_(2^m−1)), dj=(−1)^(jAjT)}


In other words, a matrix U from G can be obtained as U=U1U2, U1∈S1, U2∈S2. We further suggest forming the compressive sensing matrix ΦRD by





ΦRD=UΦRM  (3)


and using it for compression of sparse vectors x according to (1). Typically, the matrix ΦRD has reduced density; in other words, ΦRD has many zeros (typically, approximately 30% of its entries are zeros). Since ΦRD has many zeros, the compression y=ΦRDx can be computed with small complexity. Since U is unitary, the matrix ΦRD has exactly the same compression capabilities as the matrix ΦRM. Moreover, ΦRD retains the other advantages of ΦRM.
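The construction can be sketched as follows, for illustration only. The helper names are ours; build_phi_rm() is the sketch given earlier, the Kronecker pattern corresponds to the m=3 example that follows, and A is the example matrix given above.

```python
# A sketch of the reduced-density construction (3): U1 is a tensor product of I2/H2
# factors (as in the S1 example below), U2 is a diagonal matrix with entries
# d_j = (-1)^(j A j^T) (set S2), U = U1 U2, and Phi_RD = U Phi_RM.
import numpy as np
from functools import reduce

I2 = np.eye(2)
H2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def s1_element(pattern):
    """Kronecker product of I2/H2 factors, e.g. "IHH" gives I2 (x) H2 (x) H2."""
    return reduce(np.kron, [{"I": I2, "H": H2}[c] for c in pattern])

def s2_element(A):
    """Diagonal matrix with entries d_j = (-1)^(j A j^T), j in binary, A symmetric."""
    m = A.shape[0]
    diag = []
    for j in range(2 ** m):
        bits = np.array([(j >> b) & 1 for b in range(m)])
        diag.append((-1.0) ** int(bits @ A @ bits % 2))
    return np.diag(diag)

if __name__ == "__main__":
    m = 3
    U1 = s1_element("IHH")                         # the m = 3 example matrix below
    U2 = s2_element(np.array([[0, 1, 1], [1, 0, 1], [1, 1, 1]]))  # A from the text
    U = U1 @ U2                                    # unitary (here real orthogonal)
    phi_rm = build_phi_rm(m)
    phi_rd = U @ phi_rm                            # reduced-density sensing matrix
    zeros = int(np.isclose(phi_rd, 0.0).sum())
    print(zeros, zeros / phi_rd.size)              # a sizeable fraction of zero entries
```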


Now consider an example. It is easy to check that in the case m=3 the matrix

U1 = (1/2)·( 1  1  1  1  0  0  0  0
             1 −1  1 −1  0  0  0  0
             1  1 −1 −1  0  0  0  0
             1 −1 −1  1  0  0  0  0
             0  0  0  0  1  1  1  1
             0  0  0  0  1 −1  1 −1
             0  0  0  0  1  1 −1 −1
             0  0  0  0  1 −1 −1  1 )
belongs to the set S1 and that the identity matrix I8 belongs to the set S2. Hence, we can form the matrix U=U1I8=U1. Applying the matrix U to ΦRM we obtain a matrix ΦRD with 384 zero entries, which is about 37% of the total number of entries (8*128=1024) of the matrices ΦRM and ΦRD.


As mentioned above, it is shown in [1,2] that the matrix ΦRM has relatively simple decompression algorithms. We propose the following decompression algorithm for matrix ΦRD. Let y=ΦRDx. For reconstruction of x from y, we can use the following simple algorithm:





Compute y′=U−1y=U−1ΦRDx=U−1UΦRMx=ΦRMx


Then, use any of the algorithms suggested in [1,2], or elsewhere, for reconstruction of x from y′.
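A minimal sketch of this front-end step follows, reusing U, phi_rm, and phi_rd from the sketch above; the actual recovery of x from y′ would come from the algorithms of [1,2], which are not reproduced here.

```python
# Decompression front-end sketch: U is real and unitary here, so U^-1 = U^T, and
# y' = U^-1 y equals Phi_RM x. Reuses U, phi_rm, phi_rd from the earlier sketch.
import numpy as np

x = np.zeros(phi_rd.shape[1])
x[[3, 40, 77]] = [1.5, -2.0, 0.5]                  # an illustrative sparse vector

y = phi_rd @ x                                     # compression with Phi_RD
y_prime = U.T @ y                                  # y' = U^-1 y
print(np.allclose(y_prime, phi_rm @ x))            # True: y' = Phi_RM x, as claimed
# x would now be recovered from y' with any reconstruction algorithm of [1,2].
```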


Compressive sensing, in general, has quite a few applications. For instance, if we transmit a movie, then typically frame Fj and frame Fj+1 differ only in a few pixels. So we can compute the vector x=Fj+1−Fj, which is equal to zero everywhere except where the pixels in Fj and Fj+1 differ. Next, we can compress the sparse vector x into a short vector y and transmit y instead of transmitting x or the full frame Fj+1.
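As a toy illustration of this use (the helper names are ours, and compress() stands for any map of the form x → Φx, such as x → ΦRDx from the earlier sketches):

```python
# Toy sketch of the video example: consecutive frames differ in only a few pixels,
# so the frame difference is sparse and is what gets compressed and transmitted.
import numpy as np

def frame_update(frame_prev, frame_next, compress):
    x = (frame_next - frame_prev).ravel()          # sparse difference vector
    return compress(x)                             # short vector y to transmit

# Example with the reduced-density matrix from the earlier sketches (frames are
# assumed to have phi_rd.shape[1] = N pixels in total):
# y = frame_update(F_j, F_j_plus_1, lambda x: phi_rd @ x)
```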


This is just one example of how compressive sensing may be used. Generally, any application in which sparse vectors represent information that is stored or transmitted may benefit from this type of compression. The compressive sensing schemes suggested in [1,2] are very attractive for such practical applications. However, our approach can provide a 30-50% reduction in compression complexity, without any performance loss, over the schemes from [1,2].


The detailed and, at times, very specific description above is provided to effectively enable a person of skill in the art to make, use, and best practice the present invention in view of what is already known in the art. In the examples, specifics are provided for the purpose of illustrating possible embodiments of the present invention and should not be interpreted as restricting or limiting the scope of the broader inventive concepts.


Aspects of additional embodiments of the present invention can be understood with reference to FIGS. 1-3. Diagram 100 of FIG. 1 is a logic flow diagram of functionality in accordance with various embodiments of the present invention. In the method depicted in diagram 100, a vector x is compressed (101) to obtain a vector y according to y=ΦRDx, where ΦRD=UΦRM, ΦRM is a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code, and U is a unitary matrix from the real or complex Clifford group G. Depending on the embodiment, vector y may be stored (102) as a compressed form of vector x and/or vector y may be transmitted to convey the information of vector x. This transmission may take any form of communication. For example, vector y may be transmitted wirelessly, transmitted via a communication bus or network, or conveyed by some combination of these forms.


Diagram 200 of FIG. 2 is also a logic flow diagram of functionality in accordance with various embodiments of the present invention. In the method depicted in diagram 200, communication conveying a vector y may be received (201) or vector y may be obtained from a storage device in which vector y was stored as a compressed form of vector x. Vector y is decompressed (202) to obtain vector x according to y=ΦRDx. In some embodiments, decompression may involve computing y′=U−1y and then determining the vector x using the computed y′.


A person of skill in the art would readily recognize that steps of various above-described methods can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, e.g., digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of said above-described methods. The program storage devices may be, e.g., digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. The embodiments are also intended to cover computers programmed to perform said steps of the above-described methods.


Diagram 300 of FIG. 3 is a block diagram depiction of an apparatus in accordance with various embodiments of the present invention. Diagram 300 depicts an apparatus 310 that includes a processing device 301 and interface circuitry 302. Depending on the embodiment, interface circuitry 302 may interface with a storage device 303, a transmission device 304, and/or a receiving device 305. While apparatus 310 may perform only compression or only decompression operations and while it may operate without storage device 303, transmission device 304, or receiving device 305, for the sake of illustration, embodiments will be described in which apparatus 310 performs both compression and decompression and in which at least one of the devices 303-305 is also included.


In some embodiments, processing device 301 compresses a vector x to obtain a vector y according to y=ΦRDx, wherein ΦRD=UΦRM, ΦRM being a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code and U being a unitary matrix from the real or complex Clifford group G. Vector y may then be stored in storage device 303 and/or transmitted via transmission device 304.


In some embodiments, communication conveying a vector y may be received by receiving device 305 or vector y may be obtained from storage device 303 in which vector y has been stored as a compressed form of vector x. Vector y is decompressed by processing device 301 to obtain vector x according to y=ΦRDx. In some embodiments, decompression may involve computing y′=U−1y and then determining the vector x using the computed y′.


The functions of the various elements shown in the FIGs., including any functional blocks labeled as “processing device”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processing device” or “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, a network processor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a read only memory (ROM) for storing software, a random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the FIGs. are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.


Moreover, storage device 303 may comprise virtually any device able to store information, depending on the embodiment. This would include, without limitation, all varieties of memory devices and magnetic and optical storage devices. Similarly, since communication may take any form (e.g., wireless, electrical, and/or optical), transmission device 304 and receiving device 305 may comprise any device able to either transmit or receive communication, according to the needs of the particular embodiment.


Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments of the present invention. However, the benefits, advantages, solutions to problems, and any element(s) that may cause or result in such benefits, advantages, or solutions, or cause such benefits, advantages, or solutions to become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims.


As used herein and in the appended claims, the term “comprises,” “comprising,” or any other variation thereof is intended to refer to a non-exclusive inclusion, such that a process, method, article of manufacture, or apparatus that comprises a list of elements does not include only those elements in the list, but may include other elements not expressly listed or inherent to such process, method, article of manufacture, or apparatus. The terms a or an, as used herein, are defined as one or more than one. The term plurality, as used herein, is defined as two or more than two. The term another, as used herein, is defined as at least a second or more. Unless otherwise indicated herein, the use of relational terms, if any, such as first and second, top and bottom, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.


The terms including and/or having, as used herein, are defined as comprising (i.e., open language). The term coupled, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. Terminology derived from the word “indicating” (e.g., “indicates” and “indication”) is intended to encompass all the various techniques available for communicating or referencing the object/information being indicated. Some, but not all, examples of techniques available for communicating or referencing the object/information being indicated include the conveyance of the object/information being indicated, the conveyance of an identifier of the object/information being indicated, the conveyance of information used to generate the object/information being indicated, the conveyance of some part or portion of the object/information being indicated, the conveyance of some derivation of the object/information being indicated, and the conveyance of some symbol representing the object/information being indicated.

Claims
  • 1. A method comprising: compressing a vector x to obtain a vector y according to y=ΦRDx, wherein ΦRD=UΦRM, ΦRM being a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code and U being a unitary matrix from the real or complex Clifford group G.
  • 2. The method of claim 1, further comprising storing vector y as a compressed form of vector x.
  • 3. The method of claim 1, further comprising transmitting vector y to convey the information of vector x.
  • 4. An article of manufacture comprising a processor-readable storage medium storing one or more software programs which when executed by one or more processors performs the steps of the method of claim 1.
  • 5. A method comprising: decompressing a vector y to obtain a vector x according to y=ΦRDx, wherein ΦRD=UΦRM, ΦRM being a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code and U being a unitary matrix from the real or complex Clifford group G.
  • 6. The method of claim 5, wherein decompressing the vector y to obtain the vector x comprises: computing y′=U−1y; and determining the vector x using the computed y′.
  • 7. The method of claim 5, further comprising obtaining vector y from a storage device in which vector y was stored as a compressed form of vector x.
  • 8. The method of claim 5, further comprising receiving communication conveying vector y.
  • 9. An article of manufacture comprising a processor-readable storage medium storing one or more software programs which when executed by one or more processors performs the steps of the method of claim 5.
  • 10. An apparatus comprising: interface circuitry; and a processing device, coupled to the interface circuitry, adapted to compress a vector x to obtain a vector y according to y=ΦRDx, wherein ΦRD=UΦRM, ΦRM being a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code and U being a unitary matrix from the real or complex Clifford group G.
  • 11. The apparatus of claim 10, further comprising a storage device, operatively coupled to the interface circuitry, adapted to store vector y as a compressed form of vector x.
  • 12. The apparatus of claim 10, further comprising a transmission device, operatively coupled to the interface circuitry, adapted to transmit vector y to convey the information of vector x.
  • 13. An apparatus comprising: interface circuitry; and a processing device, coupled to the interface circuitry, adapted to decompress a vector y to obtain a vector x according to y=ΦRDx, wherein ΦRD=UΦRM, ΦRM being a compressive sensing matrix constructed using a second-order Reed-Muller code or a subcode of a second-order Reed-Muller code and U being a unitary matrix from the real or complex Clifford group G.
  • 14. The apparatus of claim 13, wherein being adapted to decompress the vector y to obtain the vector x comprises being adapted to compute y′=U−1y and to determine the vector x using the computed y′.
  • 15. The apparatus of claim 13, further comprising a storage device, operatively coupled to the interface circuitry, adapted to store vector y as a compressed form of vector x.
  • 16. The apparatus of claim 13, further comprising a receiving device, operatively coupled to the interface circuitry, adapted to receive communication conveying vector y.