Reference is made to commonly assigned, co-pending U.S. patent application Ser. No. 12/612,135, filed Nov. 4, 2009, entitled “Image deblurring using a combined differential image”, by Sen Wang, et al.; co-pending U.S. patent application Ser. No. 12/770,810, filed Apr. 30, 2010, entitled “Range measurement using coded aperture”, by Paul J. Kane, et al.; co-pending U.S. patent application Ser. No. 12/770,822, filed Apr. 30, 2010, entitled “Range measurement using multiple coded apertures”, by Paul J. Kane, et al.; co-pending U.S. patent application Ser. No. 12/770,830, filed Apr. 30, 2010, entitled “Range measurement using a zoom camera”, by Paul J. Kane, et al.; co-pending U.S. patent application Ser. No. 12/770,894, filed Apr. 30, 2010, entitled “Digital camera with coded aperture rangefinder”, by Paul J. Kane, et al.; and co-pending U.S. patent application Ser. No. 12/770,919, filed Apr. 30, 2010, entitled “Range measurement using symmetric coded apertures”, by Paul J. Kane, et al., the disclosures of which are all incorporated herein by reference.
The present invention relates to an image capture device that is capable of determining range information for objects in a scene, and in particular a capture device that uses a coded aperture and computational algorithms to efficiently determine the range information.
Optical imaging systems are designed to create a focused image of scene objects over a specified range of distances. The image is in sharpest focus in a two dimensional (2D) plane in space, called the focal or image plane. From geometrical optics, a perfect focal relationship between a scene object and the image plane exists only for pairs of object and image distances that obey the thin lens equation:

1/f = 1/s + 1/s′, (1)

where f is the focal length of the lens, s is the distance from the object to the lens, and s′ is the distance from the lens to the image plane. This equation holds for a single thin lens, but it is well known that thick lenses, compound lenses and more complex optical systems can be modeled as a single thin lens with an effective focal length f. Alternatively, complex systems can be modeled using the construct of principal planes, with the object and image distances s, s′ measured from these planes and using the effective focal length in the above equation, hereafter referred to as the lens equation.
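Purely as an illustration of the lens equation, the following sketch computes the image distance s′ from a given focal length and object distance; the function name and the millimeter units are arbitrary choices, not part of the present description.

```python
# A minimal sketch of the lens equation 1/f = 1/s + 1/s': solve for the image
# distance s' given the focal length f and the object distance s.
def image_distance(f_mm, s_mm):
    """Image distance s' (mm) for a thin lens of focal length f_mm focused on
    an object at distance s_mm, from 1/s' = 1/f - 1/s."""
    if s_mm <= f_mm:
        raise ValueError("an object inside the focal length forms no real image")
    return 1.0 / (1.0 / f_mm - 1.0 / s_mm)

# Example: a 50 mm lens focused on an object 2 m away gives s' of about 51.3 mm.
print(image_distance(50.0, 2000.0))
```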
It is also known that once a system is focused on an object at distance s1, in general only objects at this distance are in sharp focus at the corresponding image plane located at distance s1′. An object at a different distance s2 produces its sharpest image at the corresponding image distance s2′, determined by the lens equation. If the system is focused at s1, an object at s2 produces a defocused, blurred image at the image plane located at s1′. The degree of blur depends on the difference between the two object distances s1 and s2, the focal length f of the lens, and the aperture of the lens as measured by the f-number, denoted f/#. For example, consider an on-axis object point P1 imaged through a lens of focal length f and aperture diameter D.
As the on-axis point P1 moves farther from the lens, tending towards infinity, it is clear from the lens equation that s1′=f. This leads to the usual definition of the f-number as f/#=f/D. At finite distances, the working f-number can be defined as (f/#)w=s1′/D. In either case, it is clear that the f-number is an angular measure of the cone of light reaching the image plane, which in turn is related to the diameter of the blur circle d. In fact, it is known that the blur circle diameter d is directly related to the amount of defocus, that is, to the separation between the ideal image distance for the object and the actual location of the image plane; this relation is referred to hereafter as the blur circle equation.
Given that the focal length f and f-number of a lens or optical system are accurately measured, and given that the diameter of the blur circle d is measured for various objects in a two dimensional image plane, in principle one can obtain depth information for objects in the scene by inverting the above blur circle equation, and applying the lens equation to relate the object and image distances. Unfortunately, such an approach is limited by the assumptions of geometrical optics. It predicts that the out-of-focus image of each object point is a uniform circular disk. In practice, diffraction effects, combined with lens aberrations, lead to a more complicated light distribution that is more accurately characterized by a point spread function (psf), a 2D function that specifies the intensity of the light in the image plane due to a point light source at a corresponding location in the object plane. Much attention has been devoted to the problems of measuring and reversing the effects of the psf on images captured from scenes containing objects spread over a variety of distances from the camera.
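The geometrical-optics picture above can be illustrated with a short calculation. The similar-triangles relation d = D·|s2′ − s1′|/s2′ is used here as a stand-in for the blur circle equation; the exact form used in the present description is not reproduced, so both the relation and the numbers below are illustrative assumptions.

```python
# Illustrative only: geometrical-optics blur circle diameter for a point at
# distance s2 when a lens of aperture diameter D is focused at s1, using the
# similar-triangles relation d = D * |s2' - s1'| / s2' as a stand-in for the
# blur circle equation referenced above.
def blur_circle_diameter(f, D, s1, s2):
    s1p = 1.0 / (1.0 / f - 1.0 / s1)   # image distance for the focused object
    s2p = 1.0 / (1.0 / f - 1.0 / s2)   # image distance for the defocused object
    return D * abs(s2p - s1p) / s2p

# A 50 mm f/2 lens (D = 25 mm) focused at 2 m: an object at 3 m produces a blur
# circle of roughly 0.21 mm on the sensor.
print(blur_circle_diameter(50.0, 25.0, 2000.0, 3000.0))
```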
For example, Bove (V. M. Bove, Pictorial Applications for Range Sensing Cameras, SPIE vol. 901, pp. 10-17, 1988) models the defocusing process as a convolution of the image intensities with a depth-dependent psf:
idef(x,y)=i(x,y)*h(x,y,z), (3)
where idef(x,y) is the defocused image, i(x,y) is the in-focus image, h(x,y,z) is the depth-dependent psf and * denotes convolution. In the spatial frequency domain, this is written:
Idef(Vx,Vy)=I(Vx,Vy)·H(Vx,Vy,z), (4)
where Idef(Vx,Vy) is the Fourier transform of the defocused image, I(Vx, Vy) is the Fourier transform of the in-focus image, and H(Vx,Vy,z) is the Fourier transform of the depth-dependent psf. Bove assumes that the psf is circularly symmetric, i.e. h(x,y,z)=h(r,z) and H(Vx,Vy,z)=H(ρ,z), where r and ρ are radii in the spatial and spatial frequency domains, respectively. In Bove's method of extracting depth information from defocus, two images are captured, one with a small camera aperture (long depth of focus) and one with a large camera aperture (small depth of focus). The Discrete Fourier Transform (DFT) is applied to corresponding blocks of pixels in the two images, followed by a radial average of the resulting power spectra within each block. Then the radially averaged power spectra of the long and short depth of field (DOF) images are used to compute an estimate for H(ρ, z) at corresponding blocks. This assumes that each block represents a scene element at a different distance z from the camera, and therefore the average value of the spectrum is computed at a series of radial distances from the origin in frequency space, over the 360 degree angle. The system is calibrated using a scene containing objects at known distances [z1, z2, . . . zn] to characterize H(ρ, z), which is then taken as an estimate of the rotationally-symmetric frequency spectrum of the spatially varying psf. This spectrum is then applied in a regression equation to solve for the local blur circle diameter. A regression of the blur circle diameter vs. distance z then leads to a depth or range map for the image, with a resolution corresponding to the size of the blocks chosen for the DFT. Although this method applies knowledge of the measured psf, in the end it relies on a single parameter, the blur circle diameter, to characterize the depth of objects in the scene.
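The block-wise spectral analysis attributed to Bove can be sketched as follows; the block size, the number of radial bins, and the use of a simple ratio of power spectra are implementation assumptions, not details taken from the cited paper.

```python
# Rough sketch: DFT corresponding blocks of the small-aperture (long DOF) and
# large-aperture (short DOF) captures, radially average the power spectra, and
# take their ratio as an estimate of |H(rho,z)|^2 for that block.
import numpy as np

def radial_average(power, nbins=16):
    """Average a 2D power spectrum over annuli of spatial-frequency radius rho."""
    h, w = power.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    rho = np.hypot(fx, fy).ravel()
    idx = np.minimum((rho / rho.max() * nbins).astype(int), nbins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return sums / np.maximum(counts, 1)

def block_defocus_spectrum(block_large_ap, block_small_ap, nbins=16):
    """Radially averaged estimate of |H(rho,z)|^2 for one block."""
    p_def = np.abs(np.fft.fft2(block_large_ap)) ** 2
    p_ref = np.abs(np.fft.fft2(block_small_ap)) ** 2
    return radial_average(p_def, nbins) / (radial_average(p_ref, nbins) + 1e-12)
```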
Other methods which infer depth from defocus seek to control the behavior of the psf as a function of defocus, i.e. the behavior of h(x,y,z) as a function of z, in a predictable way. By producing a controlled depth-dependent blurring function, this information is used to deblur the image and infer the depth of scene objects based on the results of the deblurring operations. There are two main parts to this problem: the control of the psf behavior and deblurring of the image, given the psf as a function of defocus.
The psf behavior is controlled by placing a mask into the optical system, typically at the plane of the aperture stop. For example, Veeraraghavan et al. (Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing, ACM Transactions on Graphics 26 (3), July 2007, paper 69) place a coded mask at the lens aperture, so that the defocus psf takes on a scaled version of the mask pattern, and the depth of scene objects is then inferred by deconvolving the image with psfs of different scales.
In practice, finding a unique solution to the deconvolution problem is known to be challenging. Veeraraghavan et al. solve the problem by first assuming that the scene is composed of discrete depth layers, and then forming an estimate of the number of layers in the scene. The scale of the psf is then estimated for each layer separately, using the model
h(x,y,z)=m(k(z)x/w,k(z)y/w) (5)
where m(x,y) is the mask transmittance function, k(z) is the number of pixels in the psf at depth z, and w is the number of cells in the 2D mask. The authors apply a model for the distribution of image gradients, along with Eq. (5) for the psf, to deconvolve the entire image once for each assumed depth layer in the scene. The deconvolution results are acceptable only in regions where the assumed psf scale matches the local defocus, which thereby indicates the corresponding depth of those regions. These results are limited in scope to systems behaving according to the mask scaling model of Eq. (5), and to masks composed of uniform, square cells.
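Eq. (5) can be illustrated with a small sketch that rescales a mask transmittance function to the psf size k(z) expected at a given depth; the resampling method (bilinear zoom) and the unit-sum normalization are assumptions made for this example.

```python
# Sketch of the mask-scaling psf model of Eq. (5): the psf at depth z is the
# w-cell mask transmittance function m(x,y) rescaled to span k(z) pixels.
import numpy as np
from scipy.ndimage import zoom

def psf_from_mask(mask, k_z):
    """Scale a square mask transmittance function to a k(z) x k(z) psf."""
    w = mask.shape[0]
    psf = zoom(mask.astype(float), k_z / w, order=1)   # bilinear resampling
    psf = np.clip(psf, 0.0, None)
    return psf / psf.sum()

# Example: a 7x7 coded mask whose psf spans 21 pixels at some depth z.
mask = (np.random.rand(7, 7) > 0.5).astype(float)
print(psf_from_mask(mask, 21).shape)   # (21, 21)
```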
Levin et al. (Image and Depth from a Conventional Camera with a Coded Aperture, ACM Transactions on Graphics 26 (3), July 2007, paper 70) follow an approach similar to Veeraraghavan; however, Levin et al. rely on direct photography of a test pattern at a series of defocused image planes to infer the psf as a function of defocus. Levin et al. also investigated a number of different mask designs in an attempt to arrive at an optimum coded aperture. They assume a Gaussian distribution of sparse image gradients, along with a Gaussian noise model, in their deconvolution algorithm. Therefore, the optimized coded aperture solution is dependent on assumptions made in the deconvolution algorithm.
The solutions proposed by both Veeraraghavan and Levin have the feature that they proceed by performing a sequence of deconvolutions with a single kernel over the entire image area, followed by subsequent image processing to combine the results into a depth map.
Hiura and Matsuyama in Depth Measurement by the Multi-Focus Camera, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 1998, pp. 953-959, disclose digital camera-based methods for depth measurement using identification of edge points and coded aperture techniques. The coded aperture techniques employ Fourier or deconvolution analysis. In all cases, the methods employed require multiple digital image captures by the camera sensor at different focal planes.
In accordance with the present invention, there is provided a method of using an image capture device to identify range information for objects in a scene, comprising:
a) providing an image capture device having an image sensor, a coded aperture, and a lens;
b) using the image capture device to capture a digital image of the scene on the image sensor from light passing through the lens and the coded aperture, the scene having a plurality of objects;
c) dividing the digital image into a set of blocks;
d) assigning a point spread function (psf) value to each of the blocks;
e) combining contiguous blocks in accordance with their psf values;
f) producing a set of blur parameters based upon the psf values of the combined blocks and the psf values of the remaining blocks;
g) producing a set of deblurred images based upon the captured image and each of the blur parameters; and
h) using the set of deblurred images to determine the range information for the objects in the scene.
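By way of illustration only, the following sketch carries out steps e) through h) given a per-block psf index map such as step d) would produce (for instance from the edge- or sharpness-based estimators described later). The Tikhonov-regularized Fourier deconvolution and the reconstruction-energy test used to pick each group's depth are simplifying assumptions, not the specific algorithms of the present method.

```python
# Illustrative sketch of steps e)-h): group blocks by psf index, deblur the
# capture with each stored psf, and assign to each group the depth whose
# deconvolution best explains the captured pixels (a Gaussian-prior style
# reconstruction-energy criterion, assumed for this sketch).
import numpy as np
from scipy.ndimage import label
from scipy.signal import fftconvolve

def _deconv(img, psf, eps=1e-2):
    """Tikhonov-regularized Fourier deconvolution (stand-in for step g)."""
    pad = np.zeros(img.shape)
    ky, kx = psf.shape
    pad[:ky, :kx] = psf
    # center the kernel so the deconvolution introduces no spatial shift
    pad = np.roll(pad, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H) /
                                (np.abs(H) ** 2 + eps)))

def range_from_blocks(captured, block_psf_index, psfs, depths, block=32, eps=1e-2):
    """captured: 2D image; block_psf_index: per-block psf indices from step d);
    psfs, depths: stored psfs and their calibrated depths."""
    deblurred = [_deconv(captured, p, eps) for p in psfs]                   # step g)
    reblurred = [fftconvolve(d, p, mode="same") for d, p in zip(deblurred, psfs)]
    range_map = np.zeros(block_psf_index.shape, dtype=float)
    for m in range(len(psfs)):
        groups, n = label(block_psf_index == m)                             # step e)
        for g in range(1, n + 1):
            mask = np.zeros(captured.shape, dtype=bool)
            for j, i in np.argwhere(groups == g):
                mask[j * block:(j + 1) * block, i * block:(i + 1) * block] = True
            # step h): residual plus regularization energy per candidate depth
            errs = [np.mean((captured[mask] - reblurred[c][mask]) ** 2
                            + eps * deblurred[c][mask] ** 2)
                    for c in range(len(psfs))]
            range_map[groups == g] = depths[int(np.argmin(errs))]
    return range_map
```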
It is a feature of the present invention that depth information is derived from captured digital images by detecting image blocks of similar point spread function value in an initial estimate and grouping these blocks together for further refinement and assignment of range values to scene objects. Using this method, improved range values are obtained and processing time and complexity are reduced.
In the following description, some arrangements of the present invention will be described in terms that would ordinarily be implemented as software programs. Those skilled in the art will readily recognize that the equivalent of such software can also be constructed in hardware. Because image manipulation algorithms and systems are well known, the present description will be directed in particular to algorithms and systems forming part of, or cooperating more directly with, the method in accordance with the present invention. Other aspects of such algorithms and systems, together with hardware and software for producing and otherwise processing the image signals involved therewith, not specifically shown or described herein are selected from such systems, algorithms, components, and elements known in the art. Given the system as described according to the invention in the following, software not specifically shown, suggested, or described herein that is useful for implementation of the invention is conventional and within the ordinary skill in such arts.
The invention is inclusive of combinations of the arrangements described herein. References to “a particular arrangement” and the like refer to features that are present in at least one arrangement of the invention. Separate references to “an arrangement” or “particular arrangements” or the like do not necessarily refer to the same arrangement or arrangements; however, such arrangements are not mutually exclusive, unless so indicated or as are readily apparent to one of skill in the art. The use of singular or plural in referring to the “method” or “methods” and the like is not limiting. It should be noted that, unless otherwise explicitly noted or required by context, the word “or” is used in this disclosure in a non-exclusive sense.
An image capture device includes one or more image capture devices that implement the methods of the various arrangements of the present invention, including the example image capture devices described herein. The phrases “image capture device” or “capture device” are intended to include any device including a lens which forms a focused image of a scene at an image plane, wherein an electronic image sensor is located at the image plane for the purposes of recording and digitizing the image, and which further includes a coded aperture or mask located between the scene or object plane and the image plane. These include a digital camera, cellular phone, digital video camera, surveillance camera, web camera, television camera, multimedia device, or any other device for recording images.
The step of dividing the digital image into a set of blocks 400 partitions the captured image into an array of blocks of pixels, each block covering a contiguous region of the image.
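One straightforward realization of the block division 400, with an assumed block size, is sketched below; partial blocks at the image border are simply cropped in this sketch.

```python
# View an H x W image as an (H//B) x (W//B) grid of B x B pixel blocks.
import numpy as np

def divide_into_blocks(image, B=32):
    H, W = image.shape
    H, W = H - H % B, W - W % B                     # drop partial edge blocks
    return image[:H, :W].reshape(H // B, B, W // B, B).swapaxes(1, 2)

# blocks[j, i] is the B x B block in block-row j, block-column i.
```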
The step of assigning a psf value to each of the blocks 500 in the image is based on the following principles. It is assumed, as explained in the Background, that the effect of defocus on the image of a scene object located out of the plane of focus is modeled in the image plane by a depth-dependent psf h(x,y,z). It is further assumed that the depth variations within the area of a particular scene object are minor compared to the depth differences between scene objects, so that the former can be neglected.
Since the camera contains a coded aperture, to a good approximation the psf can be assumed to resemble a scaled version of the coded aperture mask transmittance function. In any case, the psf as a function of defocus is measured under controlled conditions prior to image capture.
The step of assigning a psf value to each of the blocks 500 in the image includes analyzing the image content to determine which of the measured psfs matches the defocus present in each block of the captured image. Before this is done, an initial psf estimate is formed. There are a variety of methods available for accomplishing this task. In one arrangement, each block is analyzed for edge content, using an edge detector such as the Sobel, Canny or Prewitt methods known in the art. Once edges are located, a psf estimate is derived which, when convolved with a perfect edge profile, matches the edge profile in the captured image. The psf estimate is derived by Fourier Transformation of the Spatial Frequency Response, which in turn is computed from the edge profile using methods known in the art. One such method is outlined in Slanted-Edge MTF for Digital Camera and Scanner Analysis by P. D. Burns, Proc. IS&T 2000 PICS Conference, pp. 135-138. Several one-dimensional edge profile analyses, oriented in different directions, yield slices through the two-dimensional psf, in accordance with the Projection-slice Theorem (The Fourier Transform and Its Applications, 3rd ed., R. Bracewell, McGraw-Hill, Boston, 2000). From the series of slices, the full 2D psf estimate is constructed. This psf is then compared to the measured psfs, which in some arrangements are stored in the camera memory 230, and the closest matching psf becomes the psf estimate for that block.
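A stripped-down version of the edge-based estimate is sketched below: the edge spread function extracted along the normal of a detected edge is differentiated to give a line spread function, which corresponds to a projection of the 2D psf, and that projection is compared with the same projection of each stored psf. The sub-pixel binning of the slanted-edge method and the multi-orientation reconstruction of the full 2D psf are omitted, and the matching criterion is an assumption made for the sketch.

```python
# Sketch: estimate a psf projection from an edge profile and match it against
# the stored psfs.  esf is a 1D edge profile sampled across a detected edge.
import numpy as np

def lsf_from_edge_profile(esf):
    """Differentiate the edge spread function to get the line spread function."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    return lsf / (abs(lsf.sum()) + 1e-12)           # normalize out edge contrast

def match_edge_to_stored_psfs(esf, stored_psfs, axis=0):
    """Index of the stored psf whose projection best matches the measured LSF."""
    lsf = lsf_from_edge_profile(esf)
    errs = []
    for psf in stored_psfs:
        proj = psf.sum(axis=axis)                   # projection of the 2D psf
        proj = np.interp(np.linspace(0.0, 1.0, lsf.size),
                         np.linspace(0.0, 1.0, proj.size), proj)
        proj = proj / (proj.sum() + 1e-12)
        errs.append(np.mean((proj - lsf) ** 2))
    return int(np.argmin(errs))
```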
In another arrangement, a coded aperture having circular symmetry is used, which results in the psf having circular symmetry. In this case, a single edge at any orientation is sufficient to infer the two-dimensional psf.
In yet another arrangement, each block is analyzed using a frequency-content or sharpness metric. Sharpness metrics are described by Seales and Cutts in Active Camera Control from Compressed Image Streams, SPIE Proceedings vol. 2589, 50-57 (1995). In that reference, a sharpness metric is computed from the Discrete Cosine Transform coefficients of an image area. The metric obtained in a particular block is compared to a table of focus metrics previously computed over a set of representative images, at the same series of defocus positions used in the calibration of the camera, using the same tabulated psfs. The closest match to the tabulated focus metrics then yields the associated psf, which becomes the psf estimate for that block. In other arrangements, sharpness metrics are computed from Fourier Transform coefficients, or coefficients from other mathematical transforms, and a similar analysis is performed.
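The sharpness-metric lookup can be sketched as follows; the specific metric here (the fraction of non-DC DCT energy outside the lowest-frequency coefficients) is a plausible stand-in rather than the exact metric of Seales and Cutts, and the metric table is assumed to have been built during camera calibration.

```python
# A DCT-based sharpness metric per block, looked up against a calibration table
# of metric values tabulated at the same defocus positions as the stored psfs.
import numpy as np
from scipy.fft import dctn

def dct_sharpness(block, low=4):
    c = dctn(np.asarray(block, dtype=float), norm="ortho")
    total = np.sum(c ** 2) - c[0, 0] ** 2            # all non-DC energy
    low_freq = np.sum(c[:low, :low] ** 2) - c[0, 0] ** 2
    return 1.0 - low_freq / (total + 1e-12)          # near 0 = blurry, near 1 = sharp

def psf_index_from_sharpness(block, metric_table):
    """metric_table[m]: sharpness previously tabulated for stored psf m."""
    s = dct_sharpness(block)
    return int(np.argmin([abs(s - t) for t in metric_table]))
```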
Once the psf estimates for each block are obtained, the final value for the psf in a particular block is assigned by comparison to the measured values obtained at the relevant object and defocus distances. Although the psf is a continuous function of the depth z, as explained previously, the measurement of the psf is performed for a series of discrete object focus and focal range, or depth, positions. A psf estimate ĥk(x,y) associated with a particular block of index k may be very close to one value of the depth-dependent psf, h(x,y,zm), or may fall in between two values of the depth-dependent psf, h(x,y,zm) and h(x,y,zn), where m and n correspond to two depth positions at which the psf has been characterized. In one arrangement, the mean square errors (MSE) between the psf estimate and each of the candidate values of the depth-dependent psf are computed as follows:

MSEm = Σi,j [ĥk(i,j) - h(i,j,zm)]^2, (6a)

MSEn = Σi,j [ĥk(i,j) - h(i,j,zn)]^2. (6b)
Here the spatial coordinates x, y have been replaced by pixel indices i, j, which refer to the discretely sampled pixel locations in the image, and the sums are carried out over the kth block. Using Eqs. (6a) and (6b), the psf value for the kth block is assigned as h(x,y,zm) if MSEm ≤ MSEn, and as h(x,y,zn) otherwise.
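In code, the comparison of Eqs. (6a) and (6b) generalizes directly to an arbitrary number of candidate psfs; the sketch below assumes the block's psf estimate and the stored psfs are sampled on the same pixel grid.

```python
# Assign the block psf: the stored psf with the smallest mean square error
# against the block's psf estimate, per Eqs. (6a)-(6b).
import numpy as np

def assign_block_psf(psf_estimate, candidate_psfs):
    errs = [np.mean((psf_estimate - h_m) ** 2) for h_m in candidate_psfs]
    return int(np.argmin(errs))
```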
Once the psf values have been assigned for each block in the image, contiguous blocks are combined according to their psf values 600. This means that adjoining blocks having the same psf value are combined into groups. Since the psf varies slowly within the boundaries of scene objects as compared to between scene objects, the groups of similar psfs tend to cluster around and overlay the individual scene objects.
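Combining step 600 amounts to connected-component labeling on the block grid; a sketch using 4-connectivity (an assumed choice) follows.

```python
# Merge adjoining blocks that share a psf index into groups via connected
# component labeling, one labeling pass per distinct psf index.
import numpy as np
from scipy.ndimage import label

def group_blocks(psf_index_map):
    """Return a block-grid array of group labels (1..n_groups) and the count."""
    groups = np.zeros_like(psf_index_map)
    n_groups = 0
    four_conn = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
    for m in np.unique(psf_index_map):
        lbl, n = label(psf_index_map == m, structure=four_conn)
        groups[lbl > 0] = lbl[lbl > 0] + n_groups
        n_groups += n
    return groups, n_groups
```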
With the psf values and groupings determined, one can correlate each psf value with the measured data that describes the depth-dependent psf of the camera system, which in turn is dominated by the coded aperture mask. The psf dependence on defocus then leads directly to a range map estimate. However, it is an object of the present invention to proceed through additional steps which, when executed, lead to an improved range map. This is useful because the psf values, formed over local regions in the captured image, are affected by noise in the capture process, as well as by statistical estimation error, depending on the size of the blocks chosen.
The step of producing blur parameters 700 refers to forming a representation of the psf hk(x,y) for each grouping of index k. Producing the blur parameters 700 includes forming a digitized representation of the psf, specified by discrete code values in a two dimensional matrix. It also includes deriving mathematical parameters from a regression or curve fitting analysis that has been applied to the psf data, such that the psf values for a given pixel location are readily computed from the parameters and the known regression or fitting function. The blur parameters [psf1, psf2, . . . psfm] can be stored inside the camera in a memory 230, along with the measured psf data [p1, p2, . . . pn]. Such memory can include computer disk, ROM, RAM or any other electronic memory known in the art. Such memory can reside inside the camera, or in a computer or other device electronically linked to the camera.
In some arrangements, blur parameters are formed for each grouping of index k and for all remaining individual blocks, i.e. for each block in the image. It may be of interest to obtain a range estimate for each and every block in the image, depending on the resolution desired and the variations of the depth across the x,y plane. In other arrangements, certain blocks or groupings may be deemed of higher interest, so that one or more blocks not assigned to groupings are not assigned blur parameters.
The digitally represented psfs 245 are used in a deconvolution operation 810 to provide a set of deblurred images 820. The captured image 305 is deconvolved K×M times, once for each combination of the K groups in the image 305 and the M digitally represented psfs, to create a set of deblurred images 820. The deblurred image set 820, whose elements are denoted [I11, I12, . . . IKM], is then further processed with reference to the original captured image 305, to determine the range information 900 for the objects in the scene.
Any method of deconvolution can be used in step 810 to create the set of deblurred images 820. In one arrangement, the method of deconvolution described in the previously referenced U.S. patent application Ser. No. 12/612,135, entitled “Image deblurring using a combined differential image”, is applied. This is an iterative algorithm in which a candidate deblurred image, initially set to the captured image 305, is deblurred in steps and compared to a reference image until an error criterion is met. Other methods known in the art include the Wiener and Richardson-Lucy methods.
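As one of the deconvolution options mentioned above, a plain Richardson-Lucy iteration is sketched here (the combined-differential-image method of the referenced application is not reproduced). The iteration count and the uniform initial estimate are arbitrary choices, and the input image and psf are assumed non-negative with the psf normalized to unit sum.

```python
# Richardson-Lucy deconvolution used to build the deblurred image set 820,
# one deconvolution per stored psf.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                       # adjoint (flipped) kernel
    for _ in range(n_iter):
        reblur = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblur + eps)
        estimate = estimate * fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# deblurred_set = [richardson_lucy(captured_image, p) for p in stored_psfs]
```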
In some arrangements employing iterative deblurring algorithms, where the captured image 305 is part of an image sequence, a difference image is computed between the current and previous captured images in the image sequence. If the difference between successive images in the sequence is small, the iterative deblurring algorithm for the current image in the sequence can begin with the last iteration of the deblurred image from the previous frame in the image sequence. In some arrangements, this deblurred image can be saved and the deblurring step omitted until a significant difference in the captured image sequence is detected. In some arrangements, selected regions of the current captured image are deblurred if significant changes in the captured image sequence are detected in those regions. In other arrangements, the range information is determined only for selected regions or objects in the scene where a significant difference in the sequence is detected, thus saving processing time.
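The sequence shortcut described above reduces, in essence, to a frame-difference test; the mean-absolute-difference measure and the threshold value below are assumptions made for illustration.

```python
# Re-run the costly deblurring only when the new frame differs meaningfully
# from the previous one.
import numpy as np

def needs_redeblur(current_frame, previous_frame, threshold=2.0):
    diff = np.mean(np.abs(np.asarray(current_frame, dtype=float) -
                          np.asarray(previous_frame, dtype=float)))
    return diff > threshold
```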
The deblurred image set 820 can be limited by using a subset of blur parameters from the stored set 245. This permits a reduction of the processing time needed to deblur the images and to assign the range values. The set of blur parameters used (and hence the deblurred image set 820 created) can be limited in increment (i.e. sub-sampled) or in extent (i.e. restricted in range). If a digital image sequence is processed, the set of blur parameters used can be the same for each image in the sequence, or can differ from image to image.
Alternatively, instead of sub-setting or sub-sampling the blur parameters from the stored set, a reduced deblurred image set is created by combining images corresponding to range values within selected range intervals. This might be done to improve the precision of depth estimates in a highly textured or highly complex scene which is difficult to segment. For example, let zm, where m=1, 2, . . . M denote the set of range values at which the psf data 245 and corresponding blur parameters have been measured. Let Ikm(x,y) denote the deblurred image corresponding to group k at range value m, and let Ĩkm(Vx,Vy) denote its Fourier transform. For example, if the range values are divided into M equal intervals, each containing N range values, a reduced deblurred image set for the kth group is defined as:
In other arrangements, the range values are divided into M unequal groups. In another arrangement, a reduced blurred image set is defined by writing Eq. (14) in the Fourier domain and taking the inverse Fourier transform. In yet another arrangement, a reduced blurred image set is defined for the kth group, using a spatial frequency dependent weighting criterion. In one arrangement this is computed in the Fourier domain using an equation such as:
where w(Vx,Vy) is a spatial frequency weighting function. Such a weighting function is useful, for example, in emphasizing spatial frequency intervals where the signal-to-noise ratio is most favorable, or where the spatial frequencies are most visible to the human observer. In some arrangements, the spatial frequency weighting function is the same for each of the M range intervals; however, in other arrangements, the spatial frequency weighting function is different for some of the intervals.
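Since the exact form of Eq. (14) and of its frequency-weighted variant is not reproduced above, the sketch below shows one plausible realization: the deblurred images whose range values fall within the same interval are averaged, optionally with a spatial-frequency weighting w(Vx,Vy) applied in the Fourier domain.

```python
# Combine deblurred images within each range interval, with an optional
# Fourier-domain spatial-frequency weighting.
import numpy as np

def reduce_deblurred_set(deblurred, n_per_interval, weight=None):
    """deblurred: list of 2D images ordered by range value; weight: optional
    2D array w(Vx,Vy) laid out like the output of np.fft.fft2."""
    reduced = []
    usable = len(deblurred) - len(deblurred) % n_per_interval
    for start in range(0, usable, n_per_interval):
        group = deblurred[start:start + n_per_interval]
        if weight is None:
            reduced.append(np.mean(group, axis=0))
        else:
            spectra = [np.fft.fft2(img) * weight for img in group]
            reduced.append(np.real(np.fft.ifft2(np.mean(spectra, axis=0))))
    return reduced
```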
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
3708619 | Martin | Jan 1973 | A |
3971065 | Bayer | Jul 1976 | A |
4876591 | Muramatsu | Oct 1989 | A |
6011875 | Laben et al. | Jan 2000 | A |
6097835 | Lindgren | Aug 2000 | A |
6549288 | Migdal et al. | Apr 2003 | B1 |
7239342 | Kingetsu et al. | Jul 2007 | B2 |
7340099 | Zhang | Mar 2008 | B2 |
7834929 | Okawara | Nov 2010 | B2 |
8017899 | Levenets et al. | Sep 2011 | B2 |
8305485 | Kane et al. | Nov 2012 | B2 |
8330852 | Kane et al. | Dec 2012 | B2 |
8379120 | Wang et al. | Feb 2013 | B2 |
20020075990 | Lanza et al. | Jun 2002 | A1 |
20020149691 | Pereira et al. | Oct 2002 | A1 |
20050030625 | Cattin-Liebl | Feb 2005 | A1 |
20060017837 | Sorek et al. | Jan 2006 | A1 |
20060093234 | Silverstein | May 2006 | A1 |
20060157640 | Perlman et al. | Jul 2006 | A1 |
20060187308 | Lim et al. | Aug 2006 | A1 |
20070046807 | Hamilton, Jr. et al. | Mar 2007 | A1 |
20070223831 | Mei et al. | Sep 2007 | A1 |
20080025627 | Freeman et al. | Jan 2008 | A1 |
20080030592 | Border et al. | Feb 2008 | A1 |
20080100717 | Kwon | May 2008 | A1 |
20080218611 | Parulski et al. | Sep 2008 | A1 |
20080218612 | Border et al. | Sep 2008 | A1 |
20080219654 | Border et al. | Sep 2008 | A1 |
20080240607 | Sun et al. | Oct 2008 | A1 |
20090016481 | Slinger | Jan 2009 | A1 |
20090020714 | Slinger | Jan 2009 | A1 |
20090028451 | Slinger et al. | Jan 2009 | A1 |
20090090868 | Payne | Apr 2009 | A1 |
20090091633 | Tamaru | Apr 2009 | A1 |
20100034429 | Drouin et al. | Feb 2010 | A1 |
20100073518 | Yeh | Mar 2010 | A1 |
20100110179 | Zalevsky et al. | May 2010 | A1 |
20100201798 | Ren et al. | Aug 2010 | A1 |
20100215219 | Chang et al. | Aug 2010 | A1 |
20100220212 | Perlman et al. | Sep 2010 | A1 |
20100265316 | Sali et al. | Oct 2010 | A1 |
20110102642 | Wang et al. | May 2011 | A1 |
20110267477 | Kane et al. | Nov 2011 | A1 |
20110267485 | Kane et al. | Nov 2011 | A1 |
20110267486 | Kane et al. | Nov 2011 | A1 |
20110267507 | Kane et al. | Nov 2011 | A1 |
20110267508 | Kane et al. | Nov 2011 | A1 |
Number | Date | Country |
---|---|---|
WO 2010015086 | Feb 2010 | WO |
Entry |
---|
V.M. Bove, Pictorial Applications for Range Sensing Cameras, SPIE vol. 901, pp. 10-17, 1988. |
J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill, SanFrancisco, 1968, pp. 113-117. |
Veeraraghavan et al., Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing, ACM Transactions on Graphics 26 (3), Jul. 2007, paper 69. |
Levin et al., Image and Depth from a Conventional Camera with a Coded Aperture, ACM Transactions on Graphics 26 (3), Jul. 2007, paper 70. |
Hiura and Matsuyama, Depth Measurement by the Multi-Focus Camera, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 1998, pp. 953-959. |
P. D. Burns, Slanted-Edge MTF for Digital Camera and Scanner Analysis, Proc. IS&T 2000 PICS Conference, pp. 135-138. |
Seales and Cutts, Active Camera Control from Compressed Image Streams, SPIE Proceedings, vol. 2589, 50-57 (1995). |
U.S. Appl. No. 12/612,135, filed Nov. 4, 2009, Wang, et al. |
A. N. Rajagopalan, et al., “An MRF Model-Based Approach to Simultaneous Recovery of Depth and Restoration from Defocused Images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Service Center, Los Alamitos, CA, US, vol. 21, No. 7, Jul. 1, 1999, pp. 577-589, XP000832594, ISSN: 0162-8828, DOI: 10.1109/34.777369, the whole document. |
Ben-Ezra, et al., “Motion deblurring using hybrid imaging,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 1, pp. 657-664, 2003. |
Busboom, et al., “Coded Aperture Imaging with Multiple Measurements,” Journal of the Optical Society of America: May 1997, 14:5, 1058-1065. |
Donatelli et al., “Improved image deblurring with anti-reflective boundary conditions and re-blurring,” Inverse Problems, vol. 22, pp. 2035-2053, 2006. |
Dr. Arthur Cox, A Survey of Zoom Lenses, SPIE vol. 3129, 0277-786X/97, 1997. |
Fergus et al., “Removing camera shake from a single photograph,” ACM Transactions on Graphics, vol. 25, pp. 787-794, 2006. |
International Search Report and Written Opinion received in corresponding PCT Application No. PCT/US2011/034039, dated Jun. 24, 2011. |
L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astronomical Journal, vol. 79, pp. 745-754, 1974. |
Levin et al., “Understanding and evaluating blind deconvolution algorithms,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2009. |
Liu, et al., “Simultaneous image formation and motion blur restoration via multiple capture,” Proc. International Conference Acoustics, Speech, Signal Processing, pp. 1841-1844, 2001. |
Raskar, et al., “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Transactions on Graphics, Vol. 25, pp. 795-804, 2006. |
Rav-Acha, et al., “Two motion-blurred images are better than one,” Pattern Recognition Letters, vol. 36, pp. 211-217, 2005. |
S. K. Nayar, et al., “Real Time Focus Range Sensor,” IEEE Transactions of Pattern Analysis and Machine Intelligence, vol. 18, No. 12, Dec. 1996, p. 1186-1198. |
Sazbon, D., et al., “Qualitative Real-Time Range Extraction for Preplanned Scene Partitioning Using Laser Beam Coding,” Apr. 29, 2005, Pattern Recognition Letters, vol. 26, 2005. |
Shan et al., “High-quality motion deblurring from a single image,” ACM Transactions on Graphics, vol. 27 pp. 1-10, 2008. |
W. H. Richardson, “Bayesian-based iterative method of image restoration,” Journal of the Optical Society of America, vol. 62, pp. 55-59, 1972. |
Yuan et al., “Image deblurring with blurred/noisy image pairs,” ACM Transactions on Graphics, vol. 26, Iss. 3, 2007. |
Yuan et al., “Progressive inter-scale and intra-scale non-blind image deconvolution,” ACM Transactions on Graphics, vol. 27, Iss. 3, 2008. |
Number | Date | Country | |
---|---|---|---|
20120076362 A1 | Mar 2012 | US |