This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2013-057093 filed on Mar. 19, 2013 in Japan, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to a microlens array unit and a solid state imaging device.
Various techniques, such as a technique using reference light and a stereo ranging technique using a plurality of cameras, are known as imaging techniques for obtaining the depth-direction distance to a subject as two-dimensional array information. In particular, in recent years, imaging devices capable of obtaining distance information at relatively low cost have been increasingly needed to serve as newly developed consumer input devices.
An example of distance imaging methods that do not use any reference light, in order to save system costs, is a triangulation method using parallaxes. Stereo cameras and compound-eye cameras are known as devices using such a method. These cameras, however, each use two or more camera elements, which causes such problems as an increase in system size and an increase in failure rate due to the larger number of system components.
A known structure of imaging devices capable of obtaining distance information has a microlens array arranged above the pixels, so that a few pixels are located below each corresponding microlens. With such a structure, images with parallaxes can be obtained in units of pixel blocks. The parallaxes make it possible to estimate the distance to a subject and to perform re-focusing based on such distance information.
If a microlens array is formed of microlenses of the same type having the same focal length, the magnification of a reconstructed image is determined by the distance to the subject. For this reason, it is not possible to obtain, using such a microlens array, an image formed with another magnification for the same distance to the subject. In order to vary the reconstruction magnification, the imaging device must be equipped with a voice coil motor or other actuator.
FIGS. 6(a) and 6(b) are diagrams showing a third specific example of the microlens array.
FIGS. 7(a) and 7(b) are diagrams showing a fourth specific example of the microlens array.
FIGS. 8(a) and 8(b) are diagrams showing a fifth specific example of the microlens array.
FIGS. 9(a) and 9(b) are diagrams showing a sixth specific example of the microlens array.
FIGS. 14(a) to 14(c) are cross-sectional views for explaining a method of producing a microlens array.
FIGS. 16(a) and 16(b) are diagrams for explaining a method for obtaining a deep focus image and a method for obtaining a refocused image, respectively.
FIGS. 26(a) and 26(b) are diagrams for explaining how to obtain microlens images in which a subject for which the polarization main axis is to be measured is imaged in a number of microlenses.
FIGS. 28(a) and 28(b) are diagrams for explaining image matching between 0° polarizing-axis images.
A microlens array unit according to an embodiment includes: a substrate; a first group of microlenses including first microlenses having a convex shape and a first focal length, the first group of microlenses being arranged on the substrate; and a second group of microlenses including second microlenses having a convex shape and a second focal length that is different from the first focal length, the second group of microlenses being arranged on the substrate, a first imaging plane of the first group of microlenses and a second imaging plane of the second group of microlenses being parallel to each other, a distance between the first imaging plane and the second imaging plane in a direction perpendicular to the first imaging plane being 20% or less of the first focal length, and images of the first microlenses projected on the substrate not overlapping images of the second microlenses projected on the substrate.
Embodiments will now be explained with reference to the accompanying drawings.
How the present inventors reached the present invention will be described before the embodiments of the present invention are described. Some imaging devices have an optical structure capable of obtaining distance information and a re-focusing effect, in which a microlens array is arranged above pixels so that some pixels are located below each microlens. In such imaging devices, the resolution of a reconstructed image abruptly changes depending on the distance to the subject. As a result, if there are two or more subjects, each at a different distance from the imaging device, a reconstructed image with a low resolution may be obtained for some subjects that are not in focus at the selected distance. Conversely, an imaging device having an optical structure capable of obtaining a reconstructed image with a high resolution for all subjects cannot easily provide a re-focusing effect. The present inventors considered that if the aforementioned drawbacks were solved, and a high-resolution image and an image with a re-focusing effect could be obtained simultaneously, the field to which such camera techniques are applied could be widened and the convenience of camera users improved. In the following embodiments, a microlens array and a solid-state imaging element capable of obtaining a high-resolution image and an image with a re-focusing effect, and an imaging device using such elements, will be proposed.
The imaging module unit 10 includes an imaging optics 12, a microlens array 14, an imaging element 16, and an imaging circuit 18. The imaging optics 12 functions as an image capturing optical system to capture light from a subject and transfer it into the imaging element 16. The imaging element 16 functions as an element for converting the light captured by the imaging optics 12 to signal charges, and includes a plurality of pixels (for example, photodiodes serving as photoelectric conversion elements) arranged in a two-dimensional array form. The microlens array 14 includes microlenses.
The optical system of this embodiment functions as a system for reducing and re-forming, on the pixel blocks corresponding to the respective microlenses of the microlens array 14, images from light rays that are focused on an image plane by the imaging optics 12. The imaging circuit 18 includes a drive circuit unit (not shown) that drives the respective pixels of the pixel array of the imaging element 16, and a pixel signal processing circuit unit (not shown) that processes signals outputted from the pixel region. The drive circuit unit and the pixel signal processing circuit unit may be combined with each other to form a drive and processing circuit. In the embodiments described below, the imaging circuit 18 includes a drive and processing circuit. The drive circuit unit includes, for example, a vertical selection circuit that sequentially selects pixels to be driven in the vertical direction in units of horizontal lines (rows), a horizontal selection circuit that sequentially selects pixels to be driven in units of columns, and a timing generator (TG) circuit that drives the selection circuits with several types of pulses. The pixel signal processing circuit unit includes such circuits as an analog-to-digital conversion circuit that converts analog electric signals from the pixel region to digital signals, a gain adjustment and amplifier circuit that performs gain adjustment and amplifying operations, and a digital signal processing circuit that corrects the digital signals.
The ISP 20 includes a camera module interface (I/F) 22, an image capturing unit 24, a signal processing unit 26, and a driver I/F 28. A RAW image obtained by an imaging operation performed by the imaging module unit 10 is captured through the camera module I/F 22 into the image capturing unit 24. The signal processing unit 26 performs a signal processing operation on the RAW image captured into the image capturing unit 24. The driver I/F 28 outputs, to a display driver that is not shown, image signals having been subjected to the signal processing operation at the signal processing unit 26. The display driver displays the image formed by the solid state imaging device.
(Details of Optical System)
Herein, for easy explanation, the subject 100 side relative to a plane passing through the center of the imaging lens 12 and perpendicular to the optical axis is defined as the front side, and the imaging element 16 side is defined as the back side. In the optical system, the microlens array 14 has a role of dividing the light rays from the imaging lens 12 into images of respective visual points, which are formed on the imaging element 16.
As shown in
(Details of Microlens Array)
The microlens array (hereinafter also referred to as the “microlens array unit”) 14 of the solid state imaging device according to the first embodiment will be described below with reference to
The groups of microlenses 14a1, 14a2 are arranged on the substrate 14b in such a manner that the images projected by the group of microlenses 14a1 onto a plane parallel to the first surface of the substrate 14b do not overlap the images projected by the group of microlenses 14a2 onto a plane parallel to the first surface of the substrate 14b. The microlenses in each group of microlenses 14a1, 14a2 are arranged so that the projected images are in a square arrangement or hexagonal arrangement. The imaging plane 90 is on the top surface of the imaging element. Thus, the imaging element 16 faces the second surface of the substrate 14b.
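The non-overlap condition for the square arrangement can be roughly illustrated as follows; this is an illustrative sketch only, and the lens count, pitch, and diameter values are hypothetical, not taken from the embodiment. The two groups sit on interleaved square grids, and the projected images are approximated as circles.

```python
import itertools, math

def interleaved_centers(n=4, pitch=2.0):
    """Centers for two microlens groups on one substrate: group 14a1 on a
    square grid, group 14a2 offset by half a pitch in both directions
    (checkerboard interleave). Values are hypothetical."""
    grid = [(i * pitch, j * pitch) for i, j in itertools.product(range(n), repeat=2)]
    offset = [(x + pitch / 2, y + pitch / 2) for x, y in grid]
    return grid, offset

def projections_disjoint(group1, group2, diameter):
    """Projected images (approximated as circles of the given diameter)
    do not overlap iff every cross-group center distance >= diameter."""
    return all(math.hypot(x1 - x2, y1 - y2) >= diameter
               for x1, y1 in group1 for x2, y2 in group2)
```

In this interleaved layout the nearest cross-group center distance is pitch/√2, so projected diameters up to that bound keep the two groups' images disjoint.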
Incidentally, the term “convex” means projecting relative to the surface of the substrate 14b, on which the group of microlens is located, and the term “concave” means being depressed relative to such a surface.
The microlens array 14 is, for example, transmissive to light, and for example, transparent. The substrate 14b is formed of, for example, glass or resin. The groups of microlenses 14a1, 14a2 are formed of resin, for example.
Like the first specific example, each of the specific examples described below has a first imaging plane and a second imaging plane that are parallel to each other, the distance between the first imaging plane and the second imaging plane in a direction perpendicular to the first imaging plane being 20% or less, preferably 10% or less, of the first focal length. The microlenses of the first group of microlenses 14a1 and the second group of microlenses 14a2 are arranged so that the images thereof projected on the substrate 14b do not overlap each other.
As in the case of the first specific example, the group of microlenses 14a1 having a longer focal length in the microlens array 14 of the second specific example forms images on the same imaging plane (not shown) as the groups of microlenses 14a2, 14a3 having a shorter focal length.
As in the case of the first specific example, the group of microlenses 14a1 having a longer focal length in the microlens array 14 of the third specific example forms images on the same imaging plane (not shown) as the group of microlenses 14a2 having a shorter focal length.
As in the case of the first specific example, the groups of microlenses 14a1, 14a4 to provide a longer focal length of the microlens array 14 of the fourth specific example form images on the same imaging plane (not shown) as the groups of microlenses 14a2, 14a3 to provide a shorter focal length.
As in the case of the first specific example, the groups of microlenses 14a1, 14a4 to provide a shorter focal length of the microlens array 14 of the fifth specific example form images on the same imaging plane (not shown) as the groups of microlenses 14a2, 14a3 to provide a longer focal length.
As in the case of the first specific example, the group of microlenses 14a4 of the first microlens sub-array and the group of microlenses 14a4 of the second microlens sub-array form images on the same imaging plane (not shown) as the groups of microlenses 14a2, 14a3 of the first microlens sub-array and the groups of microlenses 14a2, 14a3 of the second microlens sub-array.
Incidentally, all the microlenses of the first to the sixth specific examples have the same effective f-number in order to prevent the overlapping of images.
Next, the positional relationship between the shorter-focal-length microlenses and the longer-focal-length microlenses in the microlens array 14 will be described in detail with reference to
The case where the microlens array 14 includes two or more microlens sub-arrays will be described with reference to
Next, microlens images obtained by the imaging element 16 will be described with reference to
Next, a method of producing the microlens array 14 of this embodiment will be described with reference to
First, a transparent substrate 14b of, for example, glass, and molds 521, 522 for forming microlenses are prepared. The molds 521, 522 each have depressions corresponding to the convex microlenses 14a1, 14a2 (
A resin 14a is applied to both the sides of the substrate 14b, and the molds 521, 522 are pressed to the resin 14a. As a result, the resin 14a is shaped to form the microlenses 14a1, 14a2 (
The molds 521, 522 are then separated from the resin 14a to form the microlens array 14 (
If the positions of the molds 521, 522 are determined in advance in such a production method, the microlens array 14 can be formed with a higher positioning accuracy, and the number of steps of producing the microlens array can be reduced.
(Method of Obtaining Two Images Having Different Reconstruction Magnifications)
Next, a method for obtaining two images having different reconstruction magnifications will be described, taking the optical system shown in
where A denotes the distance between the subject 100 and the imaging optics (imaging lens) 12, B denotes the distance between the imaging lens 12 and the imaging plane 70, and f denotes the focal length of the imaging lens 12. When the distance A between the imaging lens 12 and the subject 100 changes, the value of the distance B between the imaging lens 12 and the imaging plane 70 also changes according to the expression (3). It is assumed in this embodiment that the distance E between the imaging lens 12 and the microlens array 14 is fixed. The equation E = B + C can be derived from the positional relationship in the optical system. Since E is a fixed value, a change in the value of B leads to a change in the value of C. It can be understood from the following expression (4) relating to microlenses that the change in the value of C further leads to a change in the value of D.
As a result, a virtual image formed on the imaging plane 70 by the imaging lens 12 is reduced by each microlens by the magnification N (N=D/C), which is further expressed by the following expression (5):
It can be understood from the expression (5) that the reduction rate of each microlens image depends on the distance A to the subject 100. Therefore, in order to reconstruct the original two-dimensional image, the microlens images are enlarged by 1/N times, caused to overlap, and combined. In this manner, a reconstructed image that is in focus at the distance A can be obtained. During overlapping, the images are slightly shifted from each other for the portions other than the portion in focus at the distance A, which brings about a kind of "blur" effect.
Such an operation, which adjusts the focus of an image obtained from captured microlens images to an arbitrarily selected position, is called "refocusing." If the reconstruction magnification greatly changes depending on the distance A to the subject, an image with heavy blur between subjects at different distances (a refocused image) is obtained. Conversely, if the variation in reconstruction magnification is slight with respect to the distance A to the subject, an image with slight blur between subjects at different distances (a deep focus image) is obtained.
The expression (6) can be obtained by transforming the expression (5). As is clear from the expression (6), the reconstruction magnification 1/N is inversely proportional to the focal length g of the microlenses.
Accordingly, the relationship between the distance A to the subject and the reconstruction magnification 1/N varies depending on the values of the focal length g, g′ of the microlenses, as shown in
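Although expressions (3) to (6) are not reproduced in this text, the dependence of 1/N on A can be sketched numerically from the relations stated above (1/A + 1/B = 1/f, E = B + C, the microlens imaging equation 1/C + 1/D = 1/g, and N = D/C, which together give 1/N = C/g − 1). The focal lengths and distances below are illustrative values, not those of the embodiment.

```python
def inverse_magnification(A, f=4.0, E=4.35, g=0.05):
    """Reconstruction magnification 1/N as a function of subject distance A
    (all lengths in mm; f, E, g are illustrative values).
    1/A + 1/B = 1/f  ->  B;  E = B + C  ->  C;  1/N = C/g - 1."""
    B = A * f / (A - f)   # main-lens image distance
    C = E - B             # image plane 70 to microlens array
    return C / g - 1.0    # from 1/C + 1/D = 1/g and N = D/C
```

Evaluating this for a shorter and a longer microlens focal length shows the behavior described above: d(1/N)/dC = 1/g, so the longer focal length gives the smaller variation of 1/N over a range of subject distances (the deep-focus case), and the shorter focal length the larger variation (the refocusing case).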
(Methods for Obtaining Deep Focus Image and Refocused Image)
Next, methods for obtaining a deep focus image and a refocused image will be described below.
First, a method for obtaining a deep focus image will be described. It is assumed here that the microlens array 14 of the optical system includes two types of microlenses, longer-focal-length microlenses and shorter-focal-length microlenses. In this case, an image reconstructed from longer-focal-length microlens images represented by the g′ graph in
In contrast, an image reconstructed from shorter-focal-length microlens images becomes a refocused image since the variation in reconstruction magnification 1/N with respect to the distance A to the subject is great. The deep focus image can be obtained by causing the longer-focal-length microlens images 64 to overlap each other as shown in
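The overlap-and-combine step can be sketched as follows; this is a minimal illustration assuming grayscale microlens images, an integer enlargement factor 1/N, and nearest-neighbour zoom for brevity (all names and values are illustrative, not from the embodiment).

```python
import numpy as np

def overlap_add(tiles, centers, inv_n, shape):
    """Enlarge each microlens image by 1/N (integer factor inv_n,
    nearest-neighbour zoom), place it at its microlens center on the
    output canvas, and average wherever the enlarged images overlap."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for tile, (cy, cx) in zip(tiles, centers):
        big = np.kron(tile, np.ones((inv_n, inv_n)))  # integer-factor zoom
        h, w = big.shape
        y0, x0 = cy - h // 2, cx - w // 2
        acc[y0:y0 + h, x0:x0 + w] += big
        cnt[y0:y0 + h, x0:x0 + w] += 1
    cnt[cnt == 0] = 1  # avoid division by zero outside all images
    return acc / cnt
```

Where two enlarged microlens images overlap, their pixel values are averaged; portions not in focus land at slightly shifted positions, producing the blur effect described above.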
Next, the brightness of the microlenses is corrected by a known method, e.g., the method disclosed in JP 2012-186764 A (step S4). Thereafter, the microlens images are extracted (step S5). In the extraction operation, the region of interest (ROI) such as a circular region having its center at the central coordinate of the microlens images is set, and the pixel value data within the region are stored in a memory or the like. The central coordinate of the microlens images used for the extraction operation is pre-stored in the memory. The central coordinate data of the microlens images can be obtained by capturing a calibration image of such a subject as a white planar light source, and using a binarization process and a contour fitting process.
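The center-coordinate calibration described above can be sketched as follows, with a simple threshold-plus-centroid computation standing in for the binarization and contour-fitting processes (the function name and threshold are assumptions for illustration).

```python
import numpy as np

def microlens_center(img, thresh_ratio=0.5):
    """Estimate one microlens-image center from a calibration frame of a
    white planar light source: binarize at a fraction of the peak value,
    then take the centroid of the bright region (a simple stand-in for
    the contour-fitting step)."""
    mask = img > thresh_ratio * img.max()   # binarization
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()             # centroid (x, y)
```

In practice this would be run once per microlens on the calibration image and the resulting central coordinates stored in the memory for the extraction operation.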
Subsequently, the central positions of the microlenses are rearranged and the microlens pixel positions are corrected using a known method, e.g., the method disclosed in JP 2012-186764 A (steps S6 and S7).
Thereafter, the microlens images are enlarged (step S8). An enlarged microlens image is generated by applying an interpolation method such as bilinear or bicubic interpolation to the ROI pixel value data stored in the memory. Subsequently, the pixels are rearranged by using a known method, e.g., the method disclosed in JP 2012-186764 A. Then, whether there is a region in which microlens images overlap or not is determined with respect to the rearranged pixels (step S9). If there is no overlapping region, the process ends. If there is an overlapping region, the process proceeds to step S10 to synthesize an image, and then ends.
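The bilinear enlargement of step S8 can be sketched as follows; this is a minimal grayscale implementation for illustration (the bicubic variant is omitted).

```python
import numpy as np

def enlarge_bilinear(tile, scale):
    """Enlarge a grayscale ROI by bilinear interpolation: sample the tile
    on a finer grid, blending the four neighbouring pixels of each sample."""
    h, w = tile.shape
    H, W = int(round(h * scale)), int(round(w * scale))
    ys = np.linspace(0.0, h - 1, H)
    xs = np.linspace(0.0, w - 1, W)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return (tile[np.ix_(y0, x0)] * (1 - wy) * (1 - wx)
            + tile[np.ix_(y0, x1)] * (1 - wy) * wx
            + tile[np.ix_(y1, x0)] * wy * (1 - wx)
            + tile[np.ix_(y1, x1)] * wy * wx)
```

The scale factor here corresponds to the 1/N enlargement of the reconstruction step; in the pipeline above it would be applied to each stored ROI before pixel rearrangement.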
(Method for Obtaining Distance to Subject)
Next, a method for obtaining the distance to a subject will be described.
As mentioned above regarding the expression (5), when the distance A to the subject 100 changes in the optical system shown in
can be obtained. Thus, if the reduction magnification N of the microlenses is calculated based on such a method by means of image matching, and if the values of D, E, and f are known, the value of A can be calculated backward from the expression (7).
In the optical system shown in
Therefore, in this case, the relationship between A and N can be expressed by the expression (9):
The reduction magnification N can be expressed as follows from the geometric relationship among light rays, where Δ′ denotes the image shift amount between microlenses, and L denotes the distance between the centers of microlenses.
Therefore, in order to obtain the reduction magnification N, the image shift amount between microlenses is obtained by image matching using such evaluation values as the sum of squared differences (SSD) and the sum of absolute differences (SAD). In this method, the image shift amount for microlens images with the same focal length can be obtained using a common image matching method. The image shift amount for microlens images with different focal lengths can also be obtained using a similar image matching method after correcting the reduction rates of the images and enlarging the microlens images by a known method.
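The SSD-based matching step can be sketched as follows, assuming integer horizontal shifts between grayscale microlens images (the function name and window handling are illustrative); the resulting shift estimate supplies the Δ′ term used to compute N.

```python
import numpy as np

def shift_by_ssd(ref, tgt, max_shift):
    """Integer horizontal image shift between two microlens images,
    found by minimising the sum of squared differences (SSD) over a
    central window that stays in bounds for every candidate shift."""
    m = max_shift
    best, best_err = 0, np.inf
    for s in range(-m, m + 1):
        a = ref[:, m:ref.shape[1] - m]
        b = tgt[:, m + s:tgt.shape[1] - m + s]
        err = float(np.sum((a - b) ** 2))
        if err < best_err:
            best, best_err = s, err
    return best
```

The sum of absolute differences variant replaces the squared term with `np.abs(a - b)`; sub-pixel refinement (e.g., parabola fitting around the minimum) is omitted here.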
(Optical System Combined with Polarizing Plates)
Next, an optical system combined with polarizing plates will be described in detail.
The polarizing plate array 90 is used for obtaining information on polarization direction, which may differ for each microlens image. The polarizing plate array 90 includes polarizing plates 91 arranged in an array form as shown in
The polarizing plate array 90 shown in each of
(Method for Obtaining Images with Polarization Angles)
Next, a method for obtaining images with polarization angles will be described.
(Method of Calculating Polarization Information from Captured Image)
Next, a method of calculating polarization information from captured images will be described.
Then, whether the distance information should be computed or not is determined (step S26). If the distance information computing is performed, polarized image matching, which will be described later, is performed (step S27). If the distance information computing is not performed, the process proceeds to step S28, where whether auto-focusing has been performed or not is determined. If the auto-focusing has been performed, the process proceeds to step S27, and the polarized image matching is performed. If the auto-focusing has not been performed, the process proceeds to step S29, and the microlens images are enlarged. After the polarized image matching is performed at step S27, the process also proceeds to step S29, and the microlens images are enlarged.
Thereafter, whether there is an overlapping portion in the enlarged microlens images or not is determined; if there is an overlapping portion, the pixels are rearranged, and if not, nothing is performed (step S30). If the pixels have not been rearranged, the process ends. If the pixels have been rearranged, the process proceeds to step S31, and polarization angle fitting is performed on each pixel. Then, a two-dimensional image with polarization angle is formed using pixels having been subjected to the polarization angle fitting (step S32), and the process ends. The two-dimensional image is reconstructed by enlarging the respective microlens images with the central positions thereof being fixed, and adding the overlapping pixel values.
The polarized image matching will be described below. First, see the pixel 66 in the microlens images shown in
The light intensity of each microlens image depends on the polarizing axis of the corresponding polarizing plate. Accordingly, the intensities of the light rays emitted from the same point of the subject and passing through polarizing plates having different polarizing axes can be measured at the same time. As an example, it is assumed that the polarizing axes relating to the microlens images overlapping on the pixel 66 are 0°, 45°, 90°, and 135°. The light intensities of the respective polarized microlens images can be obtained from the microlens images before the enlarging and synthesizing operation. The polarization main axis θ can be obtained from a polarization curve determined by fitting based on the polarizing axis angles of the polarizing plates and the measured light intensities, as shown in
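The polarization-angle fitting can be sketched as follows, assuming the common cosine model I(φ) = a + b·cos(2(φ − θ)) for the polarization curve; the model and function name are assumptions for illustration, not taken from this text.

```python
import math

def polarization_main_axis(i0, i45, i90, i135):
    """Main-axis angle theta (degrees, in [0, 180)) fitted to
    I(phi) = a + b*cos(2*(phi - theta)) from the four intensities
    measured through 0/45/90/135-degree polarizing plates."""
    s1 = i0 - i90     # = 2*b*cos(2*theta)
    s2 = i45 - i135   # = 2*b*sin(2*theta)
    return math.degrees(0.5 * math.atan2(s2, s1)) % 180.0
```

With the four polarizing axes above, the differences i0 − i90 and i45 − i135 isolate the cos(2θ) and sin(2θ) components of the curve, so θ follows from a single atan2 without an iterative fit.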
The polarization main axes relating to
An optical system including microlenses with different focal lengths, as in the above optical system, may be used in such a manner that a high-resolution two-dimensional image is formed based on polarization information using the longer-focal-length microlens images, and more accurate polarization information is computed using the shorter-focal-length microlens images. Thus, it is possible to obtain a two-dimensional polarization angle distribution and depth information more accurately.
When the distance to a subject is obtained using an optical system combined with a polarizing plate array, microlens images formed by polarizing plates with the same polarizing axis are compared with each other in order to prevent the mismatching caused by comparing images formed by polarizing plates with different polarizing axes. For example, image matching can be performed for images having the 0° polarizing axis as shown in
As described above, according to the first embodiment, it is possible to provide a microlens array unit and a solid state imaging device capable of providing high-resolution images and images with a re-focusing effect.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Number | Date | Country | Kind |
---|---|---|---|
2013-057093 | Mar 2013 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
8675118 | Ryu | Mar 2014 | B2 |
20080112635 | Kondo et al. | May 2008 | A1 |
20090200623 | Qian et al. | Aug 2009 | A1 |
20100021167 | Aota et al. | Jan 2010 | A1 |
20110122308 | Duparre | May 2011 | A1 |
20120050589 | Ueno et al. | Mar 2012 | A1 |
20120057020 | Kobayashi et al. | Mar 2012 | A1 |
20120062771 | Ueno et al. | Mar 2012 | A1 |
20120218448 | Ueno et al. | Aug 2012 | A1 |
20120218454 | Suzuki et al. | Aug 2012 | A1 |
20120229683 | Kobayashi et al. | Sep 2012 | A1 |
20130075585 | Kobayashi et al. | Mar 2013 | A1 |
20130075586 | Ueno et al. | Mar 2013 | A1 |
20130075587 | Suzuki et al. | Mar 2013 | A1 |
20130075849 | Suzuki et al. | Mar 2013 | A1 |
20130128092 | Ogasahara et al. | May 2013 | A1 |
20130240709 | Ueno et al. | Sep 2013 | A1 |
20130242161 | Kobayashi et al. | Sep 2013 | A1 |
20130308197 | Duparre | Nov 2013 | A1 |
20140240559 | Ueno et al. | Aug 2014 | A1 |
20140284746 | Suzuki et al. | Sep 2014 | A1 |
20140285703 | Kizu et al. | Sep 2014 | A1 |
20140285708 | Kwon et al. | Sep 2014 | A1 |
Number | Date | Country |
---|---|---|
2012-186764 | Sep 2012 | JP |
2013-145897 | Jul 2013 | JP |
Entry |
---|
U.S. Appl. No. 14/162,122, filed Jan. 23, 2014, Risako Ueno et al. |
U.S. Appl. No. 13/827,237, filed Mar. 14, 2013, Kazuhiro Suzuki et al. |
T. Georgiev et al. “Reducing Plenoptic Camera Artifacts”, Computer Graphics Forum, vol. 29, No. 6, 2010, 14 pages. |
The Extended European Search Report issued Jul. 18, 2014, in Application No. / Patent No. 14156126.6-1562. |
Todor Georgiev, et al., “Using Focused Plenoptic Cameras for Rich Image Capture”, IEEE Computer Graphics and Applications, IEEE Service Center, vol. 31, No. 1, XP011341207, Jan. 1, 2011, pp. 62-73. |
Number | Date | Country | |
---|---|---|---|
20140285693 A1 | Sep 2014 | US |