The present invention relates to an image capture device that is capable of determining range information for objects in a scene, and in particular a method for using a capture device with a coded aperture and novel computational algorithms to more efficiently determine the range information.
Optical imaging systems are designed to create a focused image of scene objects over a specified range of distances. The image is in sharpest focus in a two dimensional (2D) plane in the image space, called the focal or image plane. From geometrical optics, a perfect focal relationship between a scene object and the image plane exists only for combinations of object and image distances that obey the thin lens equation:

1/f=1/s+1/s′, (1)
where f is the focal length of the lens, s is the distance from the object to the lens, and s′ is the distance from the lens to the image plane. This equation holds for a single thin lens, but it is well known that thick lenses, compound lenses and more complex optical systems are modeled as a single thin lens with an effective focal length f. Alternatively, complex systems are modeled using the construct of principal planes, with the object and image distances s, s′ measured from these planes, and using the effective focal length in the above equation, hereafter referred to as the lens equation.
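For illustration only (not part of the claimed method), the lens equation can be evaluated with a short Python sketch; the function name and the units are arbitrary choices:

    def image_distance(f, s):
        """Solve the thin lens equation 1/f = 1/s + 1/s' for the image distance s'.
        f and s must be given in the same units (e.g., millimetres)."""
        if s <= f:
            raise ValueError("the object must lie beyond the front focal point")
        return 1.0 / (1.0 / f - 1.0 / s)

    # Example: a 50 mm lens imaging an object 2 m away forms the image
    # roughly 51.3 mm behind the lens.
    print(image_distance(50.0, 2000.0))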
It is also known that once a system is focused on an object at distance s1, in general only objects at this distance are in sharp focus at the corresponding image plane located at distance s1′. An object at a different distance s2 produces its sharpest image at the corresponding image distance s2′, determined by the lens equation. If the system is focused at s1, an object at s2 produces a defocused, blurred image at the image plane located at s1′. The degree of blur depends on the difference between the two object distances, s1 and s2, the focal length f of the lens, and the aperture of the lens as measured by the f-number, denoted f/#.

As an on-axis point P1 moves farther from the lens, tending towards infinity, it is clear from the lens equation that s1′ approaches f. This leads to the usual definition of the f-number as f/#=f/D, where D is the diameter of the lens aperture. At finite distances, the working f-number is defined as (f/#)w=s1′/D. In either case, the f-number is an angular measure of the cone of light reaching the image plane, which in turn is related to the diameter of the blur circle d. In fact, it can be shown that
By accurately measuring the focal length and f-number of a lens, and the diameter d of the blur circle for various objects in a two dimensional image plane, it is in principle possible to obtain depth information for objects in the scene by inverting Eq. (2) and applying the lens equation to relate the object and image distances. This requires careful calibration of the optical system at one or more known object distances, at which point the remaining task is the accurate determination of the blur circle diameter d.
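Although Eq. (2) is not reproduced here, the following Python sketch illustrates the underlying idea using a standard geometrical-optics approximation for the blur circle diameter (a similar-triangle argument for an ideal thin lens); the exact expression of Eq. (2) may differ, and the numerical values are illustrative only:

    def blur_circle_diameter(f, f_number, s_focus, s_object):
        """Approximate blur circle diameter on the sensor for a point at s_object
        when the lens is focused at s_focus (geometrical optics, thin lens).
        All distances share the same units; the result is in those units."""
        D = f / f_number                                 # aperture diameter
        s1p = 1.0 / (1.0 / f - 1.0 / s_focus)            # image distance of the focus plane
        s2p = 1.0 / (1.0 / f - 1.0 / s_object)           # image distance of the object
        return D * abs(s2p - s1p) / s2p                  # similar-triangle estimate

    # A 50 mm f/2 lens focused at 2 m: a point 3 m away blurs to roughly 0.2 mm.
    print(blur_circle_diameter(50.0, 2.0, 2000.0, 3000.0))

Numerically inverting such a relation for the object distance, given a measured blur diameter d and a calibrated system, is the ranging step described above.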
The above discussion establishes the principles behind passive optical ranging methods based on focus. That is, methods based on existing illumination (passive) that analyze the degree of focus of scene objects, and relate this to their distance from the camera. Such methods are divided into two categories: depth from defocus methods assume that the camera is focused once, and that a single image is captured and analyzed for depth, while depth from focus methods assume that multiple images are captured at different focus positions, and the parameters of the different camera settings are used to infer the depth of scene objects.
The method presented above provides insight into the problem of depth recovery, but unfortunately is oversimplified and not robust in practice. Based on geometrical optics, it predicts that the out-of-focus image of each object point is a uniform circular disk or blur circle. In practice, diffraction effects and lens aberrations lead to a more complicated light distribution, characterized by a point spread function (psf), specifying the intensity of the light at any point (x,y) in the image plane due to a point light source in the object plane. As explained by Bove (V. M. Bove, Pictorial Applications for Range Sensing Cameras, SPIE vol. 901, pp. 10-17, 1988), the defocusing process is more accurately modeled as a convolution of the image intensities with a depth-dependent psf:
idef(x,y;z)=i(x,y)*h(x,y;z), (3)
where idef(x,y;z) is the defocused image, i(x,y) is the in-focus image, h(x,y;z) is the depth-dependent psf and * denotes convolution. In the Fourier domain, this is written:
Idef(vx,vy)=I(vx,vy)H(vx,vy;z), (4)
where Idef(vx, vy) is the Fourier transform of the defocused image, I(vx, vy) is the Fourier transform of the in-focus image, and H(vx, vy; z) is the Fourier transform of the depth-dependent psf. Note that the Fourier transform of the psf is the Optical Transfer Function, or OTF. Bove describes a depth-from-focus method, in which it is assumed that the psf is circularly symmetric, i.e. h(x, y; z)=h(r; z) and H(vx, vy; z)=H(ρ; z), where r and ρ are radii in the spatial and spatial frequency domains, respectively. Two images are captured, one with a small camera aperture (long depth of focus) and one with a large camera aperture (small depth of focus). The Discrete Fourier Transform (DFT) is taken of corresponding windowed blocks in the two images, followed by a radial average of the resulting power spectra, meaning that an average value of the spectrum is computed at a series of radial distances from the origin in frequency space, over the full 360 degree angle. The radially averaged power spectra of the long and short depth of field (DOF) images are then used to compute an estimate of H(ρ; z) at corresponding windowed blocks, assuming that each block represents a scene element at a different distance z from the camera. The system is calibrated using a scene containing objects at known distances [z1, z2, . . . zn] to characterize H(ρ; z), which is then related to the blur circle diameter. A regression of the blur circle diameter vs. distance z then leads to a depth or range map for the image, with a resolution corresponding to the size of the blocks chosen for the DFT.
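The radial averaging step described by Bove might be sketched as follows in Python; the choice of a square block, a Hanning window, and integer-radius binning are assumptions made purely for illustration:

    import numpy as np

    def radial_power_spectrum(block):
        """Radially averaged power spectrum of a (square) windowed image block."""
        n = block.shape[0]
        window = np.outer(np.hanning(n), np.hanning(n))            # reduce block edge effects
        power = np.abs(np.fft.fftshift(np.fft.fft2(block * window))) ** 2
        y, x = np.indices(power.shape)
        r = np.hypot(x - n // 2, y - n // 2).astype(int)           # radius of each frequency sample
        radial_sum = np.bincount(r.ravel(), weights=power.ravel())
        counts = np.bincount(r.ravel())
        return radial_sum / np.maximum(counts, 1)                  # average power at each radius

The ratio of the radially averaged spectra computed from the large- and small-aperture captures of the same block then yields, after calibration, an estimate related to H(ρ; z) for that block.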
Methods based on blur circle regression have been shown to produce reliable depth estimates. Depth resolution is limited by the fact that the blur circle diameter changes rapidly near focus, but very slowly away from focus, and the behavior is asymmetric with respect to the focal position. Also, despite the fact that the method is based on analysis of the point spread function, it relies on a single metric (blur circle diameter) derived from the psf.
Other depth from defocus methods seek to engineer the behavior of the psf as a function of defocus in a predictable way. By producing a controlled, depth-dependent blurring function, these methods obtain information that is used to deblur the image and to infer the depth of scene objects based on the results of the deblurring operations. There are two main parts to this problem: controlling the psf behavior, and deblurring the image given the psf as a function of defocus.
The psf behavior is controlled by placing a mask into the optical system, typically at the plane of the aperture stop. For example, Veeraraghavan et al place a coded mask at the aperture of a camera lens, so that the defocused psf is approximately a scaled version of the mask transmittance pattern.
In practice, it is well known that finding a unique solution to a deconvolution problem is challenging. Veeraraghavan et al solve the problem by first assuming that the scene is composed of discrete depth layers, and then forming an estimate of the number of layers in the scene. The scale of the psf is then estimated for each layer separately, using the model
h(x,y,z)=m(k(z)x/w,k(z)y/w), (5)
where m(x,y) is the mask transmittance function, k(z) is the number of pixels in the psf at depth z, and w is the number of cells in the 2D mask. The authors apply a model for the distribution of image gradients, along with Eq. (5) for the psf, to deconvolve the image once for each assumed depth layer in the scene. The deconvolution results are acceptable only where the assumed psf scale matches the actual psf scale of the scene layer, thereby indicating the corresponding depth of that region. These results are limited in scope to systems behaving according to the mask scaling model of Eq. (5), and to masks composed of uniform, square cells.
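A minimal sketch of the scaling model of Eq. (5), in which the psf at a given depth is the mask transmittance pattern resampled to k(z) pixels on a side; the use of scipy.ndimage.zoom, the normalization, and the toy mask are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import zoom

    def psf_from_mask(mask, k_z):
        """Resample a w x w mask transmittance pattern to a k(z) x k(z) pixel psf,
        per h(x, y, z) = m(k(z) x / w, k(z) y / w), and normalize it to unit sum."""
        w = mask.shape[0]
        psf = zoom(mask.astype(float), k_z / float(w), order=1)   # rescale the mask pattern
        return psf / psf.sum()

    # Toy example: a 7 x 7 binary pattern standing in for a coded mask,
    # spread over 21 pixels at some assumed depth.
    mask = np.ones((7, 7))
    mask[1::2, ::2] = 0
    h = psf_from_mask(mask, 21)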
Levin et al (Image and Depth from a Conventional Camera with a Coded Aperture, ACM Transactions on Graphics 26 (3), July 2007, paper 70) follow an approach similar to that of Veeraraghavan et al; however, Levin et al rely on direct photography of a test pattern at a series of defocused image planes to infer the psf as a function of defocus. Also, Levin et al investigated a number of different mask designs in an attempt to arrive at an optimum coded aperture. They assume a Gaussian distribution of sparse image gradients, along with a Gaussian noise model, in their deconvolution algorithm. Therefore, the optimized coded aperture solution is dependent on assumptions made in the deconvolution analysis.
The coded aperture method has shown promise for determining the range of objects using a single lens camera system. However, there is still a need for methods that can produce accurate ranging results with a variety of coded aperture designs, across a variety of image content.
The present invention represents a method for using an image capture device to identify range information for objects in a scene, comprising:
a) providing an image capture device having an image sensor, a coded aperture, and a lens;
b) storing in a memory a set of blur parameters derived from range calibration data;
c) capturing an image of the scene having a plurality of objects;
d) providing a set of deblurred images using the captured image and each of the blur parameters from the stored set by:
i) initializing a candidate deblurred image;
ii) determining a plurality of differential images representing differences between neighboring pixels in the candidate deblurred image;
iii) determining a combined differential image by combining the differential images;
iv) updating the candidate deblurred image responsive to the captured image, the blur parameters, the candidate deblurred image, and the combined differential image; and
v) repeating steps ii)-iv) until a convergence criterion is satisfied; and
e) using the set of deblurred images to determine the range information for the objects in the scene.
This invention has the advantage that it produces improved range estimates based on a novel deconvolution algorithm that is robust to the precise nature of the deconvolution kernel, and is therefore applicable to a wider variety of coded aperture designs. It has the additional advantage that it is based upon deblurred images having fewer ringing artifacts than those produced by prior art deblurring algorithms, which leads to improved range estimates.
In the following description, some arrangements of the present invention will be described in terms that would ordinarily be implemented as software programs. Those skilled in the art will readily recognize that the equivalent of such software can also be constructed in hardware. Because image manipulation algorithms and systems are well known, the present description will be directed in particular to algorithms and systems forming part of, or cooperating more directly with, the method in accordance with the present invention. Other aspects of such algorithms and systems, together with hardware and software for producing and otherwise processing the image signals involved therewith, not specifically shown or described herein are selected from such systems, algorithms, components, and elements known in the art. Given the system as described according to the invention in the following, software not specifically shown, suggested, or described herein that is useful for implementation of the invention is conventional and within the ordinary skill in such arts.
The invention is inclusive of combinations of the arrangements described herein. References to “a particular arrangement” and the like refer to features that are present in at least one arrangement of the invention. Separate references to “an arrangement” or “particular arrangements” or the like do not necessarily refer to the same arrangement or arrangements; however, such arrangements are not mutually exclusive, unless so indicated or as are readily apparent to one of skill in the art. The use of singular or plural in referring to the “method” or “methods” and the like is not limiting. It should be noted that, unless otherwise explicitly noted or required by context, the word “or” is used in this disclosure in a non-exclusive sense.
An image capture device includes one or more image capture devices that implement the methods of the various arrangements of the present invention, including the example image capture devices described herein. The phrases “image capture device” or “capture device” are intended to include any device including a lens which forms a focused image of a scene at an image plane, wherein an electronic image sensor is located at the image plane for the purposes of recording and digitizing the image, and which further includes a coded aperture or mask located between the scene or object plane and the image plane. These include a digital camera, cellular phone, digital video camera, surveillance camera, web camera, television camera, multimedia device, or any other device for recording images.
The step of storing in a memory 60 a set of blur parameters refers to storing a representation of the psf of the image capture device for a series of object distances and defocus distances. Storing the blur parameters includes storing a digitized representation of the psf, specified by discrete code values in a two dimensional matrix. It also includes storing mathematical parameters derived from a regression or fitting function that has been applied to the psf data, such that the psf values for a given (x,y,z) location are readily computed from the parameters and the known regression or fitting function. Such memory can include computer disk, ROM, RAM or any other electronic memory known in the art. Such memory can reside inside the camera, or in a computer or other device electronically linked to the camera.
The digitally represented psfs 49 are used in a deconvolution operation to provide 80 a set of deblurred images 81. The captured image 72 is deconvolved m times, once for each of m elements in the set 49, to create a set of m deblurred images 81. The deblurred image set 81, whose elements are denoted [I1, I2, . . . Im], is then further processed with reference to the original captured image 72, to determine the range information for the objects in the scene.
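In outline, and assuming a hypothetical helper deblur_map that stands for the iterative deblurring procedure sketched later in this description, the deblurred image set might be generated as follows:

    def build_deblurred_set(captured, psf_set, lam=2e-3):
        """Deconvolve the captured image once per stored psf, giving the set
        of deblurred images [I1, I2, ..., Im]."""
        return [deblur_map(captured, psf, lam) for psf in psf_set]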
The step of providing a set of deblurred images 80 will now be described in further detail with reference to the following steps.
Next an initialize candidate deblurred image step 104 is used to initialize a candidate deblurred image 107 using the captured image 72. In a preferred embodiment of the present invention, the candidate deblurred image 107 is initialized by simply setting it equal to the captured image 72. Optionally, any deconvolution algorithm known to those skilled in the art can be used to process the captured image 72 using the blur kernel 106, and the candidate deblurred image 107 is then initialized by setting it equal to the processed image. Examples of such deconvolution algorithms include conventional frequency domain filtering algorithms and the well-known Richardson-Lucy (RL) deconvolution method described in the background section. In other arrangements, where the captured image 72 is part of an image sequence, a difference image is computed between the current and previous images in the sequence, and the candidate deblurred image is initialized with reference to this difference image. For example, if the difference between successive images in the sequence is currently small, the candidate deblurred image is not reinitialized from its previous state, saving processing time; the reinitialization is deferred until a significant difference in the sequence is detected. In other arrangements, only selected regions of the candidate deblurred image are reinitialized, if significant changes in the sequence are detected in only those regions. In another arrangement, the range information is only determined for selected regions or objects in the scene where a significant difference in the sequence is detected, thus saving processing time.
Next a compute differential images step 108 is used to determine a plurality of differential images 109. The differential images 109 can include differential images computed by calculating numerical derivatives in different directions (e.g., x and y) and with different distance intervals (e.g., Δx=1, 2, 3). A compute combined differential image step 110 is used to form a combined differential image 111 by combining the differential images 109.
Next an update candidate deblurred image step 112 is used to compute a new candidate deblurred image 113 responsive to the captured image 72, the blur kernel 106, the candidate deblurred image 107, and the combined differential image 111. As will be described in more detail later, in a preferred embodiment of the present invention, the update candidate deblurred image step 112 employs a Bayesian inference method using Maximum-A-Posteriori (MAP) estimation.
Next, a convergence test 114 is used to determine whether the deblurring algorithm has converged by applying a convergence criterion 115. The convergence criterion 115 is specified in any appropriate way known to those skilled in the art. In a preferred embodiment of the present invention, the convergence criterion 115 specifies that the algorithm is terminated if the mean square difference between the new candidate deblurred image 113 and the candidate deblurred image 107 is less than a predetermined threshold. Alternate forms of convergence criteria are well known to those skilled in the art. As an example, the convergence criterion 115 is satisfied when the algorithm is repeated for a predetermined number of iterations. Alternatively, the convergence criterion 115 can specify that the algorithm is terminated if the mean square difference between the new candidate deblurred image 113 and the candidate deblurred image 107 is less than a predetermined threshold, but is terminated after the algorithm is repeated for a predetermined number of iterations even if the mean square difference condition is not satisfied.
If the convergence criterion 115 has not been satisfied, the candidate deblurred image 107 is updated to be equal to the new candidate deblurred image 113. If the convergence criterion 115 has been satisfied, a deblurred image 116 is set to be equal to the new candidate deblurred image 113. A store deblurred image step 117 is then used to store the resulting deblurred image 116 in a processor-accessible memory. The processor-accessible memory is any type of digital storage such as RAM or a hard disk.
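The loop formed by steps 104 through 117 can be summarized by the following Python sketch; the helper functions differential_images, combine_differentials and update_candidate are hypothetical placeholders for the steps described below, and the threshold and iteration limit are illustrative choices:

    import numpy as np

    def deblur_map(captured, kernel, lam=2e-3, threshold=1e-6, max_iters=50):
        """Iterative MAP deblurring loop (steps 104-117, in outline)."""
        candidate = captured.astype(float).copy()        # step 104: initialize with the captured image
        for _ in range(max_iters):
            offsets, diffs = differential_images(candidate)       # step 108: differential images
            combined = combine_differentials(offsets, diffs)      # step 110: combined differential image
            new_candidate = update_candidate(captured, kernel,
                                             candidate, combined, lam)  # step 112: MAP update
            # convergence test 114 / criterion 115: mean square difference below threshold
            converged = np.mean((new_candidate - candidate) ** 2) < threshold
            candidate = new_candidate
            if converged:
                break
        return candidate        # deblurred image 116, ready to be stored (step 117)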
In a preferred embodiment of the present invention, the deblurred image 116 is determined using a Bayesian inference method with Maximum-A-Posteriori (MAP) estimation. Using this method, the deblurred image 116 is determined by defining an energy function of the form:
E(L)=(L⊗K−B)2+λD(L) (6)

where L is the deblurred image 116, K is the blur kernel 106, B is the blurred image (i.e., the captured image 72), ⊗ is the convolution operator, D(L) is the combined differential image 111, and λ is a weighting coefficient.
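For a given candidate L, the energy of Eq. (6) might be evaluated as follows; the combined differential image D is assumed to have been computed already, and 'same'-size convolution is used as a simple stand-in for the convolution operator:

    import numpy as np
    from scipy.signal import fftconvolve

    def energy(L, K, B, D, lam):
        """Evaluate E(L) = (L conv K - B)^2 + lam * D(L), summed over all pixels."""
        residual = fftconvolve(L, K, mode='same') - B    # image fidelity (likelihood) term
        return np.sum(residual ** 2) + lam * np.sum(D)   # plus the weighted image prior term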
In a preferred embodiment of the present invention the combined differential image 111 is computed using the following equation:

D(L)=Σj wj(∂jL)2 (7)

where j is an index value, ∂j is a differential operator corresponding to the jth index, and wj is a pixel-dependent weighting factor which will be described in more detail later.
The index j is used to identify a neighboring pixel for the purpose of calculating a difference value. In a preferred embodiment of the present invention, difference values are calculated for a 5×5 window of pixels centered on a particular pixel.
The differential operator ∂j determines a difference between the pixel value for the current pixel and the pixel value located at the relative position specified by the index j. For example, ∂6L would correspond to a differential image determined by taking the difference between each pixel in the deblurred image L and a corresponding pixel that is 1 row above and 2 columns to the left. In equation form this would be given by:
∂jL=L(x,y)−L(x−Δxj,y−Δyj) (8)
where Δxj and Δyj are the column and row offsets corresponding to the jth index, respectively. It will generally be desirable for the set of differential images ∂jL to include one or more horizontal differential images representing differences between neighboring pixels in the horizontal direction and one or more vertical differential images representing differences between neighboring pixels in the vertical direction, as well as one or more diagonal differential images representing differences between neighboring pixels in a diagonal direction.
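The differential images of Eq. (8) might be generated as in the following sketch, with the index j running over the non-zero offsets of a 5×5 window; circular shifts at the image borders are a simplifying assumption:

    import numpy as np

    def differential_images(L, half_window=2):
        """Differential images of Eq. (8): for each non-zero offset (dx, dy) in a
        (2*half_window+1)-square window, return d_j L = L(x, y) - L(x - dx, y - dy)."""
        offsets, diffs = [], []
        for dy in range(-half_window, half_window + 1):
            for dx in range(-half_window, half_window + 1):
                if dx == 0 and dy == 0:
                    continue
                shifted = np.roll(np.roll(L, dy, axis=0), dx, axis=1)   # L(x - dx, y - dy)
                offsets.append((dx, dy))
                diffs.append(L - shifted)
        return offsets, diffs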
In a preferred embodiment of the present invention, the pixel-dependent weighting factor wj is determined using the following equation:
wj=(wd)j(wp)j (9)
where (wd)j is a distance weighting factor for the jth differential image, and (wp)j is a pixel-dependent weighting factor for the jth differential image.
The distance weighting factor (wd)j weights each differential image depending on the distance between the pixels being differenced:
(wd)j=G(d) (10)
where d=√(Δxj2+Δyj2) is the distance between the pixels being differenced, and G(·) is a weighting function. In a preferred embodiment, the weighting function G(·) falls off as a Gaussian function, so that differential images with larger distances are weighted less than differential images with smaller distances.
The pixel-dependent weighting factor (wp)j weights the pixels in each differential image depending on their magnitude. For reasons discussed in the aforementioned article “Image and depth from a conventional camera with a coded aperture” by Levin et al., it is desirable for the pixel-dependent weighting factor (wp)j to be determined using the equation:
(wp)j=|∂jL|α−2 (11)
where |·| is the absolute value operator and α is a constant (e.g., 0.8). During the optimization process, a set of differential images ∂jL is calculated for each iteration, using the estimate of L determined for the previous iteration.
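Combining Eqs. (9) through (11), the weights and the combined differential image might be formed as follows; the Gaussian width, the small constant added to avoid division by zero, and the weighted sum of squared differences (matching Eq. (7) above) are illustrative assumptions:

    import numpy as np

    def combine_differentials(offsets, diffs, alpha=0.8, sigma=1.0, eps=1e-3):
        """Combined differential image D(L) = sum_j w_j (d_j L)^2, with the
        weights w_j = (wd)_j (wp)_j of Eqs. (9)-(11)."""
        combined = np.zeros_like(diffs[0])
        for (dx, dy), dL in zip(offsets, diffs):
            distance = np.hypot(dx, dy)                          # distance between differenced pixels
            wd = np.exp(-distance ** 2 / (2.0 * sigma ** 2))     # Eq. (10): Gaussian falloff with distance
            wp = (np.abs(dL) + eps) ** (alpha - 2.0)             # Eq. (11): magnitude weighting (eps avoids 1/0)
            combined += wd * wp * dL ** 2
        return combined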
The first term in the energy function given in Eq. (6) is an image fidelity term. In the nomenclature of Bayesian inference, it is often referred to as a “likelihood” term. It is seen that this term will be small when there is a small difference between the blurred image B (the captured image 72) and a blurred version of the candidate deblurred image (L) which has been convolved with the blur kernel 106 (K).
The second term in the energy function given in Eq. (6) is an image differential term. This term is often referred to as an “image prior.” The second term will have low energy when the magnitude of the combined differential image 111 is small. This reflects the fact that a sharper image will generally have more pixels with low gradient values as the width of blurred edges is decreased.
The update candidate deblurred image step 112 computes the new candidate deblurred image 113 by reducing the energy function given in Eq. (6) using optimization methods that are well known to those skilled in the art. In a preferred embodiment of the present invention, the optimization problem is formulated as a PDE given by:

∂E(L)/∂L=0,
which is solved using conventional PDE solvers. In a preferred embodiment of the present invention, a PDE solver is used in which the PDE is converted to a linear equation form that is solved using a conventional linear equation solver, such as a conjugate gradient algorithm. For more details on such PDE solvers, refer to the aforementioned article by Levin et al. It should be noted that even though the combined differential image 111 is a function of the deblurred image L, it is held constant during the process of computing the new candidate deblurred image 113. Once the new candidate deblurred image 113 has been determined, it is used in the next iteration to determine an updated combined differential image 111.
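One possible realization of this step, under simplified (circular) boundary handling, expresses ∂E(L)/∂L=0 as a linear system and solves it with a conjugate gradient routine; the function name, the passing of the per-offset weight arrays wj (rather than the already-combined image), and the use of scipy are assumptions for illustration:

    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.sparse.linalg import LinearOperator, cg

    def solve_linear_update(B, K, offsets, weights, lam, x0=None, cg_iters=50):
        """Solve dE/dL = 0 with the weights w_j held fixed, i.e. the linear system
        (K^T K + lam * sum_j d_j^T w_j d_j) L = K^T B, by conjugate gradients."""
        B = np.asarray(B, dtype=float)
        shape = B.shape
        K_flip = K[::-1, ::-1]                          # flipped kernel approximates K^T

        def shift(img, dx, dy):                         # img(x - dx, y - dy), circular boundary
            return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

        def apply_A(x):
            L = x.reshape(shape)
            out = fftconvolve(fftconvolve(L, K, mode='same'), K_flip, mode='same')
            for (dx, dy), w in zip(offsets, weights):
                dL = L - shift(L, dx, dy)                             # Eq. (8)
                out = out + lam * (w * dL - shift(w * dL, -dx, -dy))  # adjoint of the difference
            return out.ravel()

        A = LinearOperator((B.size, B.size), matvec=apply_A, dtype=float)
        b = fftconvolve(B, K_flip, mode='same').ravel()               # right-hand side K^T B
        L_new, _ = cg(A, b, x0=None if x0 is None else x0.ravel(), maxiter=cg_iters)
        return L_new.reshape(shape)

Here weights is a list containing one per-pixel array wj per offset; in a full implementation these would be recomputed from the current candidate deblurred image at every iteration, exactly as described above.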
The deblurred image set 81 can be intentionally limited by using only a subset of the blur parameters from the stored set. This can be done for a variety of reasons, such as reducing the processing time needed to arrive at the range values 91, or taking advantage of other information from the camera 40 indicating that the full range of blur parameters is not necessary. The set of blur parameters used (and hence the deblurred image set 81 created) can be limited in increment (i.e. subsampled) or in extent (i.e. restricted in range). If a digital image sequence is processed, the set of blur parameters used can be the same for each image in the sequence, or can differ from image to image.
Alternatively, instead of subsetting or subsampling the blur parameters from the stored set, a reduced deblurred image set is created by combining images corresponding to range values within selected range intervals. This might be done to improve the precision of depth estimates in a highly textured or highly complex scene that is difficult to segment. For example, let zm, where m=1, 2, . . . M, denote the set of range values at which the psf data [psf1, psf2, . . . psfM] and corresponding blur parameters have been measured. Let îm(x,y) denote the deblurred image corresponding to range value zm, and let Îm(vx,vy) denote its Fourier transform. If the range values are divided into equal groups or intervals, each containing the same number of range values, a reduced deblurred image set is defined by combining the deblurred images îm(x,y) within each interval.
In other arrangements, the range values are divided into unequal groups. In another arrangement, a reduced deblurred image set is defined by forming the combination in the Fourier domain and taking the inverse Fourier transform. In yet another arrangement, a reduced deblurred image set is defined using a spatial frequency dependent weighting criterion, in which a spatial frequency weighting function w(vx, vy) is applied to the Fourier transforms Îm(vx,vy) before they are combined. Such a weighting function is useful, for example, in emphasizing spatial frequency intervals where the signal-to-noise ratio is most favorable, or where the spatial frequencies are most visible to the human observer. In some arrangements, the spatial frequency weighting function is the same for each of the range intervals; in other arrangements the spatial frequency weighting function is different for some or all of the intervals.
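By way of illustration, a reduced deblurred image set might be formed as follows, combining by simple averaging within each interval and optionally applying a spatial frequency weighting in the Fourier domain; the equal-sized grouping and the averaging are assumptions, not requirements of the method:

    import numpy as np

    def reduce_deblurred_set(deblurred, group_size, freq_weight=None):
        """Combine deblurred images within consecutive range intervals. If
        freq_weight (an array over spatial frequencies) is supplied, the images
        are weighted and combined in the Fourier domain, then transformed back."""
        reduced = []
        for start in range(0, len(deblurred), group_size):
            group = deblurred[start:start + group_size]
            if freq_weight is None:
                reduced.append(np.mean(group, axis=0))          # simple average within the interval
            else:
                spectra = [np.fft.fft2(img) * freq_weight for img in group]
                reduced.append(np.real(np.fft.ifft2(np.mean(spectra, axis=0))))
        return reduced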
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention.
Reference is made to commonly assigned, co-pending U.S. patent application Ser. No. 12/612,135, filed Nov. 4, 2009, entitled Image deblurring using a combined differential image, by Sen Weng, et al, co-pending U.S. patent application Ser. No. ______ filed concurrently herewith and entitled Range measurement using multiple coded apertures, by Paul J. Kane, et al, co-pending U.S. patent application Ser. No. ______ filed concurrently herewith and entitled Range measurement using a zoom camera, by Paul J. Kane, et al, co-pending U.S. patent application Ser. No. ______, filed concurrently herewith and entitled Digital camera with coded aperture rangefinder, by Paul J. Kane, et al, and co-pending U.S. patent application Ser. No. ______, filed concurrently herewith and entitled Range measurement using symmetric coded apertures, by Paul J. Kane, et al, all of which are incorporated herein by reference.