This disclosure generally relates to processing detector output. More particularly, this disclosure relates to devices and methods for altering a resolution or focus of a detector output.
Various devices are known for detecting a selected input. For example, a variety of cameras and other imaging devices are used for image acquisition. Conventional cameras were, for many years, based on capturing images on film. More recently, devices such as cameras have included digital imaging components. Many contemporary digital image or video devices are configured for acquiring and compressing large amounts of raw image or video data.
One drawback associated with many digital systems is that they require significant computational capabilities. Another potential drawback is that multiple expensive sensors may be required. Efforts to increase the resolution of detecting devices such as cameras typically include adding more components to obtain more pixels for an image. It is typically not desirable to increase the cost or complexity of a device by introducing additional components. Moreover, many scenarios involve physical and practical limitations that prevent achieving a desired detection or image-gathering capability.
An exemplary system includes at least one detector configured to provide an output based on a detected input. A plurality of input control elements control the input detected by the detector. A processor is configured to determine at least one point spread function based on a condition of the detector, a condition of the input control elements and a selected distance associated with the output. The processor is configured to generate data based on the output and the at least one point spread function, the generated data having at least one selected aspect.
An exemplary detector output enhancement method includes determining an output of at least one detector. The output is dependent on a condition of a plurality of input control elements configured to control input detected by the detector. At least one point spread function is determined based on a condition of the detector, a condition of the input control elements and a selected distance associated with the output. Data is generated based on the output and the at least one point spread function, the generated data having at least one selected aspect.
Various embodiments and their features will become apparent to those skilled in the art from the following detailed description of an exemplary embodiment. The drawings that accompany the detailed description can be briefly described as follows.
In one example, the processor 26 is configured to achieve a selected resolution of the data. In another example, the processor 26 is configured to achieve a selected focus of the data. In another example, the processor 26 is configured to achieve a selected resolution and focus of the generated data. In the following description, the generated data comprises an image and the processor 26 is capable of achieving a desired resolution of the image, a desired focus of the image or both.
The processor 26 is configured to generate the image with at least one selected aspect. The processor 26 in the illustrated example is configured to achieve a desired or selected resolution of the image. The processor 26 is capable of enhancing the resolution of the image beyond the resolution dictated by the physical limitations of the shutter elements 30. As schematically shown at 38, the processor 26 uses at least one point spread function associated with at least one of the shutter elements 30 for purposes of enhancing the resolution of the image.
There are known techniques for determining a point spread function. For example, the detector output, the detector size, a distance between the detector 22 and the shutter elements 30, a distance to the object 32 and the dimensions or size of the shutter elements provide enough information to determine a point spread function. In this example, a point spread function may be determined for each shutter element utilized at a particular instant for gathering image data.
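As one illustration, a simple geometric model can produce such a point spread function. The Python sketch below treats the detector as a point and each shutter element as a square aperture: a scene point contributes to the detector only if the straight ray between them passes through the open element. The function name, parameters and the pinhole-style geometry are illustrative assumptions rather than the specific construction of this example.

```python
import numpy as np

def shutter_psf(xs, ys, elem_center, elem_size, d_det, d_obj):
    """Binary geometric PSF of one square shutter element (a sketch).

    xs, ys      : 1-D arrays of scene-plane sample coordinates
    elem_center : (cx, cy) of the element in the shutter plane
    elem_size   : side length of the square element
    d_det       : detector-to-shutter distance
    d_obj       : shutter-to-object distance
    """
    X, Y = np.meshgrid(xs, ys)
    # Where the ray from scene point (X, Y) to a point detector at the
    # origin pierces the shutter plane (similar triangles).
    px = X * d_det / (d_det + d_obj)
    py = Y * d_det / (d_det + d_obj)
    half = elem_size / 2.0
    inside = (np.abs(px - elem_center[0]) <= half) & \
             (np.abs(py - elem_center[1]) <= half)
    return inside.astype(float)
```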
For purposes of discussion, let $I(x,y)$ be the image 52, $j$ be an index of the input control (i.e., shutter) elements 30 and $g_j(x,y)$ be the point spread function associated with each shutter element 30. In examples where there are multiple detectors, $n$ may be the index for each detector and $y_n$ the measurements made by the detectors. In such a case, the measurements made by the plurality of detectors can be described by the following equation:
$$y_n = \sum_j a_{nj} \int g_j(x,y)\, I(x,y)\, dx\, dy, \qquad (1)$$

which can be rewritten as

$$y_n = \int G_n(x,y)\, I(x,y)\, dx\, dy, \qquad (2)$$

where $G_n(x,y) = \sum_j a_{nj}\, g_j(x,y)$ is referred to as the collective point spread function associated with all of the shutter elements 30 that were open or active during the detector measurements.
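As a concrete illustration of how equations (1) and (2) discretize, the following sketch forms the collective point spread functions $G_n$ and the measurements $y_n$ from a shutter-pattern matrix and per-element point spread functions (which could come, for instance, from the shutter_psf sketch above). The function name, array shapes and the pixel-sum approximation of the integral are assumptions for illustration.

```python
import numpy as np

def measurements(a, psfs, image, dx, dy):
    """Discrete version of equations (1) and (2) -- a sketch.

    a     : (num_shots, num_elements) shutter-pattern matrix a_nj
    psfs  : (num_elements, H, W) per-element PSFs g_j on a pixel grid
    image : (H, W) scene I(x, y) sampled on the same grid
    dx,dy : grid spacing, so a pixel sum approximates the integral
    """
    # Collective PSF for each shot: G_n = sum_j a_nj * g_j.
    G = np.tensordot(a, psfs, axes=([1], [0]))              # (num_shots, H, W)
    # Measurement y_n = integral of G_n * I, approximated by a pixel sum.
    y = np.tensordot(G, image, axes=([1, 2], [0, 1])) * dx * dy
    return y, G
```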
In this example the processor reconstructs the image with finer resolution according to the following:

$$I(x,y) = \arg\min_{I} \left\{ \int \lvert W(I(x,y)) \rvert\, dx\, dy \;\middle|\; \int G_n(x,y)\, I(x,y)\, dx\, dy = y_n,\ n = 1, 2, \ldots \right\} \qquad (3)$$
where $W$ is a sparsifying operator. Reconstructing the image in this way allows for achieving a selected resolution of the image. Any desired resolution may be obtained by quantizing $x$ and $y$ and replacing the integration with summation in equation (3).
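A hedged sketch of such a discretized reconstruction follows. The equality-constrained problem of equation (3) is relaxed to a penalized least-squares problem and solved with ISTA (iterative soft thresholding), with an orthonormal 2-D DCT standing in for the sparsifying operator $W$; the solver, the choice of DCT and all parameter values are illustrative assumptions, not the specific method of this example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct(G, y, shape, lam=1e-3, n_iter=500):
    """Sketch of the quantized form of equation (3).

    Relaxes  min ||W(I)||_1  s.t.  sum(G_n * I) = y_n  to
    min 0.5 * ||A x - y||^2 + lam * ||W x||_1 and solves it by ISTA,
    where A stacks the flattened collective PSFs G_n row by row and
    W is an orthonormal 2-D DCT (so its proximal step is exact).
    """
    A = G.reshape(G.shape[0], -1)                # (num_shots, H*W)
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the data term
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L            # gradient step on 0.5*||Ax - y||^2
        c = dctn(x.reshape(shape), norm='ortho')             # to DCT domain
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0)  # soft threshold
        x = idctn(c, norm='ortho').ravel()                   # back to image domain
    return x.reshape(shape)
```

Quantizing $x$ and $y$ more finely simply enlarges `shape` here; as noted below, resolution beyond the constant region of $G_n(x,y)$ adds no information.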
The maximum resolution possible with this technique is set by the region 56 over which the collective point spread function $G_n(x,y)$ has a constant value; the region 56 determines the minimum pixel size of the reconstructed image. Selecting a finer resolution (i.e., a pixel size smaller than the region 56) does not provide any additional information because the collective point spread function has a constant value within the region 56.
It is desirable to utilize the region 56 for an increased resolution because the point spread function has a constant value in that region. If the point spread function does not have a constant value over a given region, the image information recovered from the detector output in that region may be blurred.
In examples wherein the processor 26 is also capable of adjusting a focus of the image, the processor 26 utilizes a point spread function to achieve a desired focus of the image, for example, on a selected object within the image. Given information regarding the distance between a particular object and the detector 22, the processor 26 is able to determine a point spread function that is based upon the distance to that object. Utilizing that point spread function for reconstructing the image focuses the image on the object associated with that point spread function.
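Building on the shutter_psf sketch above, refocusing then amounts to recomputing the per-element point spread functions at the distance of the object of interest and reconstructing with the resulting collective point spread function. All geometry values below are hypothetical.

```python
import numpy as np

# Hypothetical geometry (illustrative values only): three shutter
# elements, and candidate objects at two different distances.
xs = ys = np.linspace(-1.0, 1.0, 64)
centers = [(-0.02, 0.0), (0.0, 0.0), (0.02, 0.0)]
elem_size, d_det = 0.01, 0.05

# The same open elements produce different PSFs for objects at
# different distances (shutter_psf is the sketch defined earlier).
psfs_near = np.stack([shutter_psf(xs, ys, c, elem_size, d_det, 0.5)
                      for c in centers])
psfs_far = np.stack([shutter_psf(xs, ys, c, elem_size, d_det, 5.0)
                     for c in centers])
# Building G_n from psfs_near and rerunning the reconstruction focuses
# the image at the near distance; psfs_far focuses it at the far one.
```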
In the illustrated example, the detecting system 20 comprises a compressive measurement camera that measures visual information, whether for a still image or moving video (e.g., a sequence of images). The output 28 of the detector 22 may be stored in various manners in a selected location, which may be remote from the detector 22. The measured visual information is later used by the processor 26, which may be located remotely from the detector 22 or incorporated into the same device, to reconstruct the image (or video). The processor 26 uses an appropriate point spread function that depends on the geometry or condition of the compressive measurement camera and the desired focal point of the image (or video) being reconstructed (or generated). This approach allows for achieving a selected resolution of the image (or video), a selected focus of the image (or video), or both.
The preceding description is exemplary rather than limiting in nature. Variations and modifications to the disclosed examples that do not necessarily depart from the essence of the disclosed embodiments may become apparent to those skilled in the art. The scope of legal protection can only be determined by studying the following claims.