1. Technical Field
The present disclosure relates to a light-field camera that extracts and records the direction of light rays using microlenses.
2. Description of Related Art
In recent years, refocusable light-field cameras have become available; such a camera integrates an optical system and an image sensor, and can focus on a desired position after shooting to generate an image at any given focal position. A light-field camera is disclosed, for example, in the Non-Patent Literature Ren Ng et al., "Light Field Photography with a Hand-Held Plenoptic Camera", Stanford Tech Report CTSR 2005-2.
The light-field camera comprises a main lens, a microlens array, and an image sensor. Light from a subject passes through the main lens and then through the microlens array before being incident on the image sensor. Unlike in a typical camera, the signal recorded at the light-receiving surface of the image sensor includes information about the traveling direction of the light as well as its intensity, because the directions of the light rays are identified and recorded by the image sensor.
As such, refocusing can be performed to generate an image at any given focal position after shooting. For example, projecting the pixels that convert light received by the image sensor into electrical signals onto a virtual image plane along the directions of the light rays makes it possible to generate a refocused image as if the image sensor had been placed on the virtual image plane.
The present disclosure provides an image capture device that improves image resolution when a refocused image at any given focal position is generated by reconstructing an image with a light-field camera.
The image capture device of the present disclosure, which is capable of recording light information including a traveling direction of light and intensity of the light in the traveling direction, includes a main lens, an image sensor, a microlens array that is placed between the main lens and the image sensor and has a predetermined vertical rotation angle relative to the image sensor, and a signal processing unit for generating a refocused image on a virtual image plane at any given focal position using the light information.
The image capture device of the present disclosure can improve the image resolution when the refocused image is generated by reconstructing an image with the light-field camera.
An exemplary embodiment will now be described in detail with reference to the drawings as appropriate. However, unnecessarily detailed description may be omitted. For example, detailed description of well-known matters and redundant description of substantially the same configurations may be omitted. These omissions are made to prevent the following description from becoming unnecessarily redundant and to ease understanding by those skilled in the art.
Note that, the following description and the accompanying drawings are provided to allow any person skilled in the art to fully understand the present disclosure, and that it is not intended to limit the subject matter described in the claims by the following description and the accompanying drawings.
A first exemplary embodiment is described below with reference to the drawings.
[1-1 Relationship Between Ray Centers and Image Resolution]
The relationship between ray centers and image resolution is described first. Ray centers play an important role when a light-field camera processes the pixels that convert light received by the image sensor into electrical signals, reconstructs an image, and generates a refocused image. A "ray center" is the point at which a light ray projected from the image sensor along its direction intersects the virtual image plane on which an image is reconstructed for any given focal position. Each pixel of the image to be reconstructed is therefore complemented using the ray center in its vicinity, chosen from among the ray centers projected from the image sensor onto the virtual image plane along the directions of the light rays. Here, the number of ray centers is fixed by the number of pixels of the image sensor, so if the ray centers gather at certain points on the virtual image plane, the resolution of the reconstructed image is reduced in regions where the density of ray centers on the virtual image plane is low.
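This complementing step can be sketched as follows. The sketch is illustrative only and is not taken from the disclosure; the function name, data layout, and intensity values are hypothetical, and the disclosure does not specify a particular interpolation scheme beyond using a nearby ray center.

```python
def reconstruct_pixel(pixel_pos, ray_centers):
    """Return the intensity of the ray center nearest to pixel_pos.

    pixel_pos   -- (x, y) of the pixel to reconstruct on the virtual image plane
    ray_centers -- list of ((x, y), intensity) tuples projected onto that plane
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    # Each reconstructed pixel is complemented by a nearby ray center, so
    # regions where ray centers are sparse lose effective resolution.
    _, intensity = min(ray_centers, key=lambda rc: dist2(rc[0], pixel_pos))
    return intensity

# Hypothetical example: three ray centers on the virtual image plane.
centers = [((0.0, 0.0), 10), ((1.0, 0.0), 20), ((0.0, 1.0), 30)]
print(reconstruct_pixel((0.9, 0.1), centers))  # nearest center is (1.0, 0.0)
```

Because the number of ray centers is fixed by the sensor's pixel count, this nearest-center lookup degrades wherever the projected centers cluster, which is the resolution issue the disclosure addresses.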
[1-2 Configuration of Light-Field Camera]
A light-field camera as an image capture device is described in the first exemplary embodiment.
Light from subject 101 passes through main lens 102 and microlens array 103 and is recorded by image sensor 104; at this time, not only the intensity of the light but also its traveling direction is recorded simultaneously in each pixel of image sensor 104.
The pixels, which convert the light received in image sensor 104 into electrical signals, transmit the electrical signals to signal processing unit 320. When virtual image plane 105 is set by virtually disposing image sensor 104 on any given plane in space in order to reconstruct an image at any given focal position, signal processing unit 320 calculates the positions of ray centers 106 projected from the pixels of image sensor 104 onto virtual image plane 105 along the directions of the light rays. The image is then reconstructed using ray centers 106, and a refocused image is thus generated on virtual image plane 105.
When distribution 210 of the ray centers is compared with distribution 220 of the ray centers, it can be seen that rotating microlens array 103 relative to image sensor 104 changes how uniformly the ray centers are distributed on virtual image plane 105.
In light-field camera 100 configured as above, an optimum rotation angle of microlens array 103 relative to image sensor 104 has been calculated, which is described below.
[1-3 Optimum Rotation Angle]
[1-3-1 Position of Ray Center]
A method of calculating a position of a ray center on virtual image plane 105 is first described.
A coordinate of center position 402, i.e., a position at which a dashed line extending horizontally from a center position of i-th microlens 401 of microlens array 103 toward image sensor 104 intersects with image sensor 104, is represented as follows:
(MLA[i].Cx, MLA[i].Cy)  [EQ 1]
Pixel 403 is any given pixel of image sensor 104, and direction vector 404 from center position 402 to pixel 403 is represented as follows:
(DirX,DirY) [EQ 2]
where d is a diameter of i-th microlens 401.
Assuming the light passes through i-th microlens 401 and travels in a straight line, coordinate 405 of the ray center projected onto virtual image plane 105 from pixel 403 of image sensor 104 is represented as follows:
where dd is a width of the light ray on virtual image plane 105. Assuming the light ray captured through diameter d of i-th microlens 401 is collected in pixel 403 of image sensor 104, dd is equal to d. Thus, the position of the ray center and the width of the light ray projected from each pixel of image sensor 104 onto virtual image plane 105 can be calculated.
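Under the straight-line-propagation assumption above, the projection of a pixel onto the virtual image plane might be sketched as follows. The equation for coordinate 405 is not reproduced in this text, so the scaling used here (the in-plane offset growing in proportion to the distance to the virtual image plane) and the parameter names z and L are assumptions for illustration, not the disclosure's formula.

```python
def ray_center_on_virtual_plane(mla_center, pixel, z, L):
    """Project a sensor pixel onto the virtual image plane along its ray.

    Assumes straight-line propagation through the microlens (a sketch; the
    disclosure's own equation for coordinate 405 is not reproduced here).

    mla_center -- (MLA[i].Cx, MLA[i].Cy): microlens center projected on the sensor
    pixel      -- (x, y) of the sensor pixel
    z          -- assumed distance from the image sensor to the virtual image plane
    L          -- assumed distance from the microlens array to the image sensor
    """
    dir_x = pixel[0] - mla_center[0]   # direction vector (DirX, DirY), cf. EQ 2
    dir_y = pixel[1] - mla_center[1]
    # Extend the ray past the sensor by z; the in-plane offset scales by z / L.
    scale = 1.0 + z / L
    return (mla_center[0] + dir_x * scale, mla_center[1] + dir_y * scale)

print(ray_center_on_virtual_plane((0.0, 0.0), (1.0, 2.0), 1.0, 1.0))  # (2.0, 4.0)
```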
[1-3-2. Calculation of Cost Value]
Evaluation is performed using a cost function in order to calculate an optimum vertical rotation angle of microlens array 103 relative to image sensor 104. Specifically, the vertical rotation angle of microlens array 103 relative to image sensor 104 is varied from 0 to 30 degrees in increments of 0.1 degrees, and a cost value is calculated for each rotation angle using the cost function. The optimum vertical rotation angle of microlens array 103 relative to image sensor 104 has been found by comparing the cost values calculated in this evaluation.
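The angle sweep described above can be sketched as a simple grid search. The helper name and the placeholder cost curve are hypothetical; in the disclosure, the cost for each angle would come from the cost function evaluated over all envisioned virtual image planes.

```python
def find_optimum_angle(cost_for_angle, start=0.0, stop=30.0, step=0.1):
    """Sweep the vertical rotation angle from `start` to `stop` degrees in
    `step` increments and return the angle with the lowest cost value.

    cost_for_angle -- callable mapping an angle (degrees) to its cost value.
    """
    best_angle, best_cost = None, float("inf")
    n_steps = int(round((stop - start) / step))
    for i in range(n_steps + 1):
        angle = start + i * step  # recompute each time to avoid float drift
        cost = cost_for_angle(angle)
        if cost < best_cost:
            best_angle, best_cost = angle, cost
    return best_angle, best_cost

# Hypothetical cost curve with a minimum near 6.6 degrees (illustration only).
angle, cost = find_optimum_angle(lambda a: (a - 6.6) ** 2)
print(round(angle, 1))  # 6.6
```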
Here, a procedure is described that calculates a cost value using the cost function with respect to any given vertical rotation angle of microlens array 103 relative to image sensor 104.
The cost values for any given vertical rotation angle of microlens array 103 relative to image sensor 104 are calculated for all of the virtual image planes envisioned, that is, all virtual image planes 105 at refocusing distances determined at predetermined intervals within a predetermined focal length from image sensor 104.
(S501) Cost values, variables, etc. of the cost function are first initialized to zero.
(S502) It is then determined whether processing for all of the virtual image planes envisioned has been completed. When all processing has been completed (when Yes), the cost values and the angle at which the cost values were calculated are output and the process is terminated. When processing for all the virtual image planes envisioned has not been completed (when No), the process proceeds to step S503.
(S503) A virtual image plane at a refocusing distance of interest is set at the predetermined interval.
(S504) It is then determined whether the cost values have been calculated for all pixels within a specified range on the set virtual image plane. When the cost values have been calculated for all the pixels (when Yes), the process returns to step S502. When the cost values have not been calculated for all the pixels within the specified range (when No), the process proceeds to step S505.
(S505) A position of a pixel within the specified range on the virtual image plane for which calculation is not performed is obtained.
(S506) The ray center nearest to the pixel position obtained in step S505 is searched for among the pre-calculated positions of the ray centers projected from the pixels of image sensor 104 onto virtual image plane 105, and the position of that ray center is identified.
(S507) The distance between the position of the ray center identified in step S506 and the pixel position obtained in step S505 is obtained as the cost value, and the process returns to step S504.
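Steps S501 to S507 above can be sketched as nested loops. Names are illustrative; the ray-center positions would come from the pre-calculation described in step S506, and the squared distance follows the minimum-distance-square-error idea of EQ 6.

```python
def total_cost(virtual_planes, pixels_in_range, ray_centers_for_plane):
    """Cost evaluation following steps S501-S507 of the flowchart.

    virtual_planes        -- iterable of the virtual image planes envisioned
                             (refocusing distances at predetermined intervals)
    pixels_in_range       -- (x, y) pixel positions within the specified range
    ray_centers_for_plane -- callable returning the pre-calculated ray-center
                             positions on a given virtual image plane
    """
    def dist2(p, q):
        # Minimum squared distance error, cf. EQ 6.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    cost = 0.0                                      # S501: initialize
    for plane in virtual_planes:                    # S502/S503: next plane
        centers = ray_centers_for_plane(plane)
        for pixel in pixels_in_range:               # S504/S505: next pixel
            # S506: nearest ray center; S507: add its distance as the cost value.
            cost += min(dist2(pixel, c) for c in centers)
    return cost

# Hypothetical example: one plane, two pixels, two ray centers.
planes = ["f0"]
pixels = [(0.0, 0.0), (1.0, 1.0)]
print(total_cost(planes, pixels, lambda f: [(0.0, 0.5), (1.0, 0.5)]))  # 0.5
```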
[1-3-3. Example of Cost Function]
The cost function to calculate a cost value will now be described in detail.
Representing a position of a pixel of interest within specified range R on virtual image plane 105 as P(r), P(r) is defined as follows:

P(r) = (x_r, y_r)  [EQ 4]
Representing a position of virtual image plane 105 as f, position Ray(f, n) of the ray center obtained when the n-th pixel of image sensor 104 is projected onto virtual image plane 105 at position f is defined as follows:

Ray(f, n) = (x_{f,n}, y_{f,n})  [EQ 5]
Here, if distance Dist(P(r), Ray(f, n)) between position P(r) of the pixel of interest and position Ray(f, n) of the ray center is defined, for example, as a minimum distance square error, distance Dist(P(r), Ray(f, n)) is represented as follows:
Dist(P(r), Ray(f, n)) = (x_r − x_{f,n})^2 + (y_r − y_{f,n})^2  [EQ 6]
Cost function Cost(focus, R, N) can be defined as follows:

Cost(focus, R, N) = Σ_{f∈focus} Σ_{r∈R} min_{n∈N} Dist(P(r), Ray(f, n))  [EQ 7]

where focus is the set of all the virtual image planes envisioned, R is the specified range, and N is the set of all pixels of image sensor 104.
The cost function defined by EQ 7 evaluates, for all the virtual image planes 105 envisioned, the distances between the respective positions of the pixels to be reconstructed within specified range R, i.e., within the range of the image to be reconstructed, and the positions of the ray centers used for reconstruction. EQ 7 is based on the idea that the smaller these overall distances are, the higher the density of the ray centers on virtual image plane 105 is, and the higher the resolution that can be obtained.
[1-3-4. Calculation of Optimum Rotation Angle]
The optimum vertical rotation angle of microlens array 103 relative to image sensor 104 has been found using the cost function defined by EQ 7, as described below.
The mean value of the cost function is obtained by dividing the calculated cost value by the number of pixels on virtual image plane 105 used for the calculation. The lower the mean value of the cost function, the higher the resolution at which the image can be reconstructed.
The relationship of the mean value of the cost function with respect to the refocusing distance, i.e., a distance from image sensor 104 to virtual image plane 105, when vertical rotation angles of microlens array 103 relative to image sensor 104 are about 0 degrees and about 6.6 degrees will now be described.
Thus, it can be visually confirmed from
It also can be found from
Thus, the vertical rotation angle of microlens array 103 relative to image sensor 104 improves the resolution of the refocused image in reconstructing an image.
While the vertical rotation angle of microlens array 103 relative to image sensor 104 is described above as 0 degrees, the vertical rotation angle may in practice be up to about 0.2 degrees, for example, due to limited accuracy in manufacturing. In the exemplary embodiment, however, the vertical rotation angle of microlens array 103 relative to image sensor 104 is intended to be a rotation angle greater than or equal to about 1 degree, independent of such manufacturing tolerances.
Furthermore, an optimum rotation angle for microlens array 103 in another configuration also has been found.
[1-4. Advantageous Effects]
As described above, the image capture device of the present disclosure, which is capable of recording light information including the traveling direction of light and the intensity of the light in the traveling direction, includes the main lens, the image sensor, the microlens array that is placed between the main lens and the image sensor and has a predetermined vertical rotation angle relative to the image sensor, and the signal processing unit for generating a refocused image on the virtual image plane at any given focal position using the light information.
The image capture device of the present disclosure can improve the image resolution when the refocused image is generated by reconstructing an image accordingly.
The image capture device of the present disclosure is applicable to a light-field camera and, in particular, to light-field cameras for use in a vehicle camera, surveillance camera, digital camera, movie camera, wearable camera, etc.
Foreign Application Priority Data: 2014-040455, filed Mar 2014, JP (national).
Related Application Data: parent application PCT/JP2014/006065, filed Dec 2014 (US); child application 15194694 (US).