The present application relates to an image pickup device such as a camera and also relates to a distance measuring device.
Recently, an image pickup device that produces stereoscopic images of a subject using a plurality of imaging optical systems has been used in actual products such as digital still cameras, movie cameras, endoscope cameras for medical inspection and treatment, and various other cameras. Meanwhile, a distance measuring device has been used as a device for determining the distance to an object (i.e., the object of range finding) based on the parallax between multiple imaging optical systems. Specifically, such a device has been used to determine the distance between running cars and as part of an autofocusing system for cameras or a three-dimensional shape measuring system.
Such an image pickup device obtains a left-eye image and a right-eye image using two imaging optical systems which are arranged side by side horizontally. On the other hand, such a distance measuring device determines the distance to the object by carrying out triangulation based on the parallax between the left- and right-eye images.
Such an image pickup device or distance measuring device needs to use two image capturing sections, and therefore, the overall size of the device and its manufacturing cost will both increase, which is a problem.
Thus, to overcome such a problem, an image pickup device which obtains stereoscopic images using a single imaging optical system has been disclosed (in Patent Documents Nos. 1 and 2, for example).
In contrast to these conventional technologies, however, there is an increasing demand for an image pickup device that can obtain an image with an even higher resolution without using a dedicated image sensor.
A non-limiting exemplary embodiment of the present application provides an image pickup device which can obtain an image with an even higher resolution without using any dedicated image sensor.
An image pickup device according to an aspect of the present invention includes: a lens optical system which includes a first pupil region and a second pupil region that is different from the first pupil region; an image sensor with multiple groups of pixels, in each of which first, second, third and fourth pixels, on which light that has passed through the lens optical system is incident, are arranged in two rows and two columns on an image capturing plane; and an array of optical elements which is arranged between the lens optical system and the image sensor and which includes a plurality of optical elements. The multiple groups of pixels are arranged in first and second directions on the image capturing plane. The first and second pixels have a first spectral transmittance characteristic, the third pixel has a second spectral transmittance characteristic, the fourth pixel has a third spectral transmittance characteristic, and in each group of pixels, the first and second pixels are arranged at mutually different positions in the second direction. In the array of optical elements, each of the plurality of optical elements is arranged at such a position as to face groups of pixels that are arranged in a row in the first direction among the multiple groups of pixels.
An image pickup device according to an aspect of the present invention can obtain high-resolution color images for stereoscopic viewing using a single image pickup system. In addition, since an image sensor with an ordinary Bayer arrangement can be used, the initial equipment cost can be cut down.
The present inventors examined the image pickup devices disclosed in Patent Documents Nos. 1 and 2. As a result, we discovered that although Patent Documents Nos. 1 and 2 do disclose an embodiment that uses a color image sensor with an existing Bayer arrangement, each of those patent documents adopts an arrangement in which a single optical element of a lenticular lens covers four columns of pixels.
In another example, Patent Document No. 2 adopts an arrangement in which a single optical element of a lenticular lens covers two columns of pixels.
Thus, to overcome such a problem, the present inventors invented a novel image pickup device which can obtain high-resolution color images for stereoscopic viewing using a single imaging optical system. An aspect of the present invention may be outlined as follows.
An image pickup device according to an aspect of the present invention includes: a lens optical system which includes a first pupil region and a second pupil region that is different from the first pupil region; an image sensor with multiple groups of pixels, in each of which first, second, third and fourth pixels, on which light that has passed through the lens optical system is incident, are arranged in two rows and two columns on an image capturing plane; and an array of optical elements which is arranged between the lens optical system and the image sensor and which includes a plurality of optical elements. The multiple groups of pixels are arranged in first and second directions on the image capturing plane. The first and second pixels have a first spectral transmittance characteristic, the third pixel has a second spectral transmittance characteristic, the fourth pixel has a third spectral transmittance characteristic, and in each group of pixels, the first and second pixels are arranged at mutually different positions in the second direction. In the array of optical elements, each of the plurality of optical elements is arranged at such a position as to face groups of pixels that are arranged in a row in the first direction among the multiple groups of pixels.
On a plane which is parallel to the image capturing plane of the image sensor, the first and second pupil regions may be arranged at mutually different positions in the second direction.
The array of optical elements may make light that has passed through the first pupil region incident on the first and third pixels and may also make light that has passed through the second pupil region incident on the second and fourth pixels.
The image pickup device may further include a signal processing section. The signal processing section may receive first and second pieces of image information that have been generated by the first and second pixels, respectively, may extract the magnitude of parallax between the first and second pieces of image information, and may generate first and second color images with parallax based on the first, second, third and fourth pixels and the magnitude of parallax.
The image pickup device may generate the first and second color images by shifting, by the magnitude of parallax, third and fourth pieces of image information that have been generated by the third and fourth pixels.
The first color image may include, as its components, the first piece of image information, the third piece of image information, and a piece of image information obtained by shifting the fourth piece of image information by the magnitude of parallax, and the second color image may include, as its components, the second piece of image information, the fourth piece of image information, and a piece of image information obtained by shifting the third piece of image information by the magnitude of parallax.
The image pickup device may generate the first and second color images by shifting, by the magnitude of parallax, the first, second, third and fourth pieces of image information that have been generated by the first, second, third and fourth pixels, respectively.
The first color image may include, as its components, the first piece of image information, the third piece of image information, and a piece of image information obtained by shifting the second and fourth pieces of image information by the magnitude of parallax, and the second color image may include, as its components, the second piece of image information, the fourth piece of image information, and a piece of image information obtained by shifting the first and third pieces of image information by the magnitude of parallax.
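The composition of the first and second color images described above can be sketched as follows. This is a minimal illustration only: the helper `shift_rows`, the zero-fill edge handling, and the sign of the shift applied to each image are assumptions not specified in the text.

```python
import numpy as np

def shift_rows(img, px):
    """Shift image information along the parallax (y) direction by px rows.
    Hypothetical helper: the text does not specify edge handling, so
    rows shifted in from outside the frame are simply filled with zeros."""
    out = np.zeros_like(img)
    if px == 0:
        out[:] = img
    elif px > 0:
        out[px:] = img[:-px]
    else:
        out[:px] = img[-px:]
    return out

def make_color_images(g1, g2, r, b, px):
    """Compose the first and second color images as described above:
    the first color image uses the P1 (green) and P3 (red) information
    plus the P4 (blue) information shifted by the magnitude of parallax;
    the second uses the P2 (green) and P4 (blue) information plus the
    shifted P3 (red) information. The shift directions (+px and -px)
    are an assumption; they depend on the pupil-division geometry."""
    first = np.stack([r, g1, shift_rows(b, px)], axis=-1)    # R, G, B planes
    second = np.stack([shift_rows(r, -px), g2, b], axis=-1)  # R, G, B planes
    return first, second
```

In this sketch each argument is a 2-D array of one pixel kind's image information, and the result is a pair of 3-channel color images with parallax.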
The first, second, third and fourth pixels of the image sensor may be arranged in a Bayer arrangement pattern.
The first and second pupil regions may have been divided using the optical axis of the lens optical system as the center of their boundary.
The array of optical elements may be a lenticular lens.
The array of optical elements may be a micro lens array. Each of the plurality of optical elements may include a plurality of micro lenses that are arranged in the first direction. And each of the plurality of micro lenses may be arranged at such a position as to face two pixels that are arranged in the second direction.
The lens optical system may be an image-space telecentric optical system.
The lens optical system may be an image-space non-telecentric optical system, and the arrangement of the array of optical elements may be offset with respect to the arrangement of pixels of the image sensor outside of the optical axis of the lens optical system.
The array of optical elements may have been formed on the image sensor.
The image pickup device may further include a micro lens which is arranged between the array of optical elements and the image sensor, and the array of optical elements may have been formed over the image sensor with the micro lens interposed.
The image pickup device may further include a liquid crystal shutter array which changes the positions of the first and second pupil regions.
Each of liquid crystal shutters in the liquid crystal shutter array may have a variable transmittance.
The lens optical system may further include reflective members 1A and 1B which make light incident on the first pupil region and reflective members 2A and 2B which make light incident on the second pupil region.
The image pickup device may further include a relay optical system.
The image pickup device may further include a stop, which may make light that has come from the subject incident on the first and second pupil regions.
A distance measuring device according to an aspect of the present invention includes: an image pickup device according to any of the embodiments described above; and a second signal processing section which measures the distance to the subject.
An image pickup system according to an aspect of the present invention includes an image pickup device according to any of the embodiments described above and a signal processor. The signal processor receives first and second pieces of image information that have been generated by the first and second pixels, respectively, extracts the magnitude of parallax between the first and second pieces of image information, and generates first and second color images with parallax based on the first, second, third and fourth pixels and the magnitude of parallax.
Hereinafter, embodiments of an image pickup device according to the present invention will be described with reference to the accompanying drawings.
The lens optical system L includes a stop S and an objective lens L1 which light that has passed through the stop S enters. The lens optical system L has a region (pupil region) D1 and another region (pupil region) D2 which is arranged at a different position from the region D1.
Part (a) is an enlarged view of the array of optical elements K and the image sensor N. Part (b) shows the relative positions of the optical elements M in the array of optical elements K and the pixels on the image sensor N. The image sensor N has a plurality of pixels which are arranged on the image capturing plane Ni.
The array of optical elements K is arranged so that each single optical element M thereof is associated with two rows of pixels on the image capturing plane Ni. In other words, each single optical element M is arranged at a position corresponding to a group of pixels that are arranged in a row in the x direction among those multiple groups of pixels Pg. And in a plan view perpendicular to the optical axis, each single optical element M overlaps with a group of pixels Pg that are arranged in a row in the x direction. The array of optical elements K makes most of the light that has passed through the region D1 incident on the pixels P1 and P3 on the image sensor N and makes most of the light that has passed through the region D2 incident on the pixels P2 and P4 on the image sensor N.
The array of optical elements K has the function of selectively determining the outgoing direction of an incoming light beam according to its angle of incidence. That is why the array of optical elements K can make light incident onto pixels on the image capturing plane Ni so that the pattern of the light beams incident on the image capturing plane Ni corresponds to the regions D1 and D2 that have been divided by the stop S. To make light beams incident on those pixels in this manner, various parameters such as the refractive index of the array of optical elements K, the distance from the image capturing plane Ni, and the radius of curvature on the surface of the optical elements M just need to be set appropriately.
On the image capturing plane Ni, micro lenses Ms are arranged so as to cover the surface of respective pixels.
The pixels P1 and P2 are provided with a filter having a first spectral transmittance characteristic so as to mostly transmit a light beam falling within the color green wavelength range and absorb a light beam falling within any other wavelength range. Meanwhile, the pixel P3 is provided with a filter having a second spectral transmittance characteristic so as to mostly transmit a light beam falling within the color red wavelength range and absorb a light beam falling within any other wavelength range. And the pixel P4 is provided with a filter having a third spectral transmittance characteristic so as to mostly transmit a light beam falling within the color blue wavelength range and absorb a light beam falling within any other wavelength range.
The pixels P1 and P3 are alternately arranged in the x direction, as are the pixels P2 and P4. The pixels P1 and P4 are alternately arranged in the y direction, as are the pixels P2 and P3. The pixels P1 and P3 are arranged on the same row, as are the pixels P2 and P4, and the row of the pixels P1 and P3 and the row of the pixels P2 and P4 alternate in the y direction. In this manner, these pixels form a Bayer arrangement. As will be described later, these pixels do not have to form a Bayer arrangement but may be arranged in any other pattern as long as the pixels P1 and P2, which have the same spectral transmittance characteristic, are arranged at mutually different locations in the y direction (i.e., in the direction in which parallax is produced) within each group of pixels Pg.
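As a concrete illustration, the sketch below encodes one 2×2 group of pixels Pg and checks the properties just described. The `GROUP` and `COLOR` names and the (y, x) indexing convention are assumptions made for illustration.

```python
# One 2x2 group of pixels Pg as described above (indexing assumption:
# GROUP[y][x]). The row of P1 and P3 and the row of P4 and P2 alternate
# in the y direction, forming an ordinary Bayer arrangement.
GROUP = [["P1", "P3"],
         ["P4", "P2"]]
# Spectral transmittance characteristics: P1 and P2 are green,
# P3 is red, and P4 is blue.
COLOR = {"P1": "G", "P2": "G", "P3": "R", "P4": "B"}

def pixel_at(y, x):
    """Kind of pixel at image-capturing-plane coordinates (y, x)."""
    return GROUP[y % 2][x % 2]

# P1 and P2 share the same (green) spectral characteristic but sit at
# different y positions, i.e. in the direction in which parallax is produced.
assert COLOR[pixel_at(0, 0)] == COLOR[pixel_at(1, 1)] == "G"
assert pixel_at(0, 0) != pixel_at(1, 1)
```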
In this embodiment, all of these pixels P1, P2, P3 and P4 have the same shape on the image capturing plane Ni. For example, the first kind of pixels P1 and the second kind of pixels P2 have the same rectangular shape and have the same area. Also, if these pixels are looked at on a row basis, the respective locations of each pair of pixels on two rows that are adjacent to each other in the y direction do not shift from each other in the x direction. Likewise, if these pixels are looked at on a column basis, the respective locations of each pair of pixels on two columns that are adjacent to each other in the x direction do not shift from each other in the y direction.
Next, the flow in which the first signal processing section C1 generates the first and second color images will be described.
In this case, the image information G1 and the image information G2 have the same spectral information and represent two images produced by imaging light beams that have passed through mutually different pupil regions. As a result, images with parallax are generated. This parallax Px is extracted in Step 102.
where x and y represent the coordinates on the image capturing plane and I0 and I1 respectively represent the intensity values in the base and reference images, of which the locations are specified by the coordinates in the parentheses.
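The parallax extraction in Step 102 can be sketched as block matching between the base image (from the pixels P1) and the reference image (from the pixels P2). Since Equation (1) is not reproduced here, the evaluation function below is assumed to be the sum of absolute differences (SAD), one common pattern-matching measure; the block size and search range are likewise illustrative.

```python
import numpy as np

def extract_parallax(i0, i1, y, x, block=3, max_shift=8):
    """Estimate the parallax at (y, x) by block matching between the
    base image i0 and the reference image i1. The evaluation function
    is assumed to be SAD; the patent's Equation (1) may differ."""
    h = block // 2
    base = i0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_shift, best_cost = 0, float("inf")
    # Search only along the y direction, in which parallax is produced.
    for s in range(max_shift + 1):
        ref = i1[y - h + s:y + h + 1 + s, x - h:x + h + 1].astype(float)
        if ref.shape != base.shape:
            break  # reference block ran off the image
        cost = np.abs(base - ref).sum()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift
```

Repeating this at every pixel location yields a parallax map over the whole image area.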
Next, in Step 103A, the first and second color images are generated.
Now let us compare the amount of information of an image in a predetermined area to a comparative example.
FIGS. 7(a) and 7(b) show the image information, yet to be interpolated, for the first and second color images that have been extracted from the 4×4 pixel image area at the lower left corner.
Comparing this embodiment to the comparative example, it can be seen that the amount of information for the green image is the same but the amount of information for the red and blue images according to this embodiment is twice as large as in the comparative example. Consequently, the resolution of the color images to be generated by interpolation can be increased.
As can be seen, according to this embodiment, first and second color images with a high resolution can be obtained by using a single image pickup system. The first and second color images can be handled as an image to be viewed with the right eye and an image to be viewed with the left eye, respectively. Consequently, by displaying the first and second color images on a 3D monitor, the object can be viewed as a stereoscopic image.
In addition, since images to make the viewer sense a stereoscopic image of the subject can be obtained by using a single image pickup system, there is no need to adjust the characteristics or positions of multiple imaging optical systems unlike an image pickup device with multiple imaging optical systems.
On top of that, since an image sensor with an ordinary Bayer arrangement may be used as an image sensor according to this embodiment, photomasks for forming color filters with a dedicated filter arrangement no longer need to be used, and therefore, the initial equipment cost can be cut down.
Optionally, processing steps similar to the ones described above may also be performed.
Also, the optical system of the image pickup device of this embodiment may be an image-space telecentric optical system. In that case, even if the angle of view changes, the principal ray will also be incident on the array of optical elements K at an angle of incidence of nearly zero degrees. As a result, crosstalk between the bundle of rays impinging on the pixels P1 and P3 and the bundle of rays impinging on the pixels P2 and P4 can be reduced over the entire image capturing area.
Furthermore, in the embodiment described above, the optical elements M of the array of optical elements K are supposed to form a lenticular lens. However, the optical elements M may also be an array of micro lenses, each covering two pixels.
According to a second embodiment, an array of optical elements is arranged on the image capturing plane, which is a major difference from the first embodiment described above. In the following description of this second embodiment, common features between this and the first embodiments will not be described in detail all over again.
Part (a) is an enlarged view of the array of optical elements K and the image sensor N according to this embodiment. In this embodiment, the optical elements Md of the array of optical elements K have been formed on the image capturing plane Ni of the image sensor N. As in the first embodiment described above, a number of pixels are arranged in columns and rows on the image capturing plane Ni, and those pixels are associated with a single optical element Md. In this embodiment, light beams which have been transmitted through different regions of the stop S can be guided to different pixels as in the first embodiment described above.
According to a third embodiment, the regions D1 and D2 are spaced apart from each other with a predetermined gap left between them unlike the first and second embodiments described above. In the following description of this third embodiment, common features between this and first embodiments will not be described in detail all over again.
FIG. 10(a) is a front view of the stop S′ as viewed from the subject side. The regions D1 and D2 defined by the stop S′ both have a circular shape and are separated from each other. V1′ and V2′ indicate the respective centers of mass of the regions D1 and D2, and the distance B′ between V1′ and V2′ corresponds to the base line length in viewing with the right and left eyes. According to this embodiment, the base line length B′ can be greater than the base line length B of the first embodiment.
Optionally, the aperture shapes of the regions D1 and D2 may be elliptical as in the stop S″ shown in FIG. 10(b). By adopting such an elliptical aperture shape, the quantity of light passing through each region can be increased and eventually the sensitivity of the image can be increased compared to the circular apertures shown in FIG. 10(a).
According to a fourth embodiment, the positions of the regions D1 and D2 separated by the stop can be changed, which is a major difference from the third embodiment described above. In the following description of this fourth embodiment, common features between this and third embodiments will not be described in detail all over again.
According to this fourth embodiment, the stop Sv is implemented as a liquid crystal shutter array, and the positions of the regions D1 and D2 can be changed by switching the aperture positions of the liquid crystal shutter array.
Since the aperture positions can be changed, the depth of the image can be appropriately selected according to the subject distance.
In this embodiment, the base line length is supposed to be changeable in three stages.
According to a fifth embodiment, the positions of the regions D1 and D2 that are separated from each other by the stop can be changed with an even higher resolution, which is a major difference from the fourth embodiment described above.
According to this fifth embodiment, the stop Sv′ is implemented as a liquid crystal shutter array.
In each of these sub-regions Su1, Su2 and Su3, the transmittance of the liquid crystal shutters can be set not just to ON or OFF but to intermediate values as well.
To increase the resolution of the base line length using two-stage liquid crystal shutters to be either turned ON or OFF as is done in the fourth embodiment, the number of the liquid crystal shutters to provide needs to be increased, too. However, the larger the number of liquid crystal shutters to provide, the lower the aperture ratio of the liquid crystal shutters and the lower the transmittances of the regions D1 and D2. As a result, the sensitivity of the image also decreases eventually, which is not beneficial.
On the other hand, by using multi-stage liquid crystal shutters as is done in this embodiment, the resolution of the base line length can be increased even with a small number of liquid crystal shutters. In addition, since the decrease in the aperture ratio of the liquid crystal shutters can be minimized, the decrease in the transmittance of the regions D1 and D2 can be minimized, too. Consequently, the decrease in the sensitivity of the image can be avoided.
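The effect of multi-stage transmittances on the base line length can be illustrated numerically: the center of mass of a pupil region made up of sub-regions is the transmittance-weighted average of their positions. The positions and transmittance values below are illustrative, not taken from the text.

```python
def aperture_centroid(positions, transmittances):
    """Transmittance-weighted center of mass of the sub-regions that make
    up one pupil region. Positions and transmittance values here are
    illustrative assumptions, not values from the patent."""
    total = sum(transmittances)
    return sum(p * t for p, t in zip(positions, transmittances)) / total

# Three sub-regions Su1, Su2, Su3 at illustrative y positions.
pos = [1.0, 2.0, 3.0]

# Two-stage (ON/OFF) shutters can realize only a few discrete centroids:
on_off = aperture_centroid(pos, [1, 1, 0])
# Multi-stage transmittances reach intermediate centroids as well,
# so the base line length can be tuned with a finer resolution:
graded = aperture_centroid(pos, [1.0, 0.5, 0.25])
```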
According to a sixth embodiment, reflective members (reflective surfaces) 1A and 1B to make light incident on the region D1 are arranged on the lens optical system L, which is a major difference from the first through fifth embodiments. In the following description of this sixth embodiment, common features between this and first embodiments will not be described in detail all over again.
Part (a) is a schematic representation illustrating an optical system for an image pickup device A according to this sixth embodiment.
According to this embodiment, high-resolution color images for stereoscopic viewing can be obtained using a single lens optical system. In addition, by folding the optical paths leading to the respective regions D1 and D2 at the reflective surfaces, the base line length can be extended, and the image can have its depth increased when viewed as a stereoscopic image on a 3D monitor.
Although the reflective surfaces are implemented as mirrors in the illustrated example, other reflective members may be used as well.
Still alternatively, a concave lens may be arranged in front of each of the reflective surfaces J1a and J2a.
In this description, a “single image pickup system” refers herein to an optical system which is configured so that a lens optical system's objective lens (except the array of optical elements) produces an image on a single primary imaging plane, which refers herein to a plane on which the light that has entered that objective lens produces an image for the first time. The same statement applies to every embodiment other than this. In this embodiment, in both of
According to a seventh embodiment, the lens optical system includes an objective lens and a relay optical system, which is a major difference from the first through sixth embodiments described above. In the following description of this seventh embodiment, common features between this and first embodiments will not be described in detail all over again.
If a stereoscopic image is viewed with a pair of optical systems arranged as in a conventional method, the optical properties of a pair of lens optical systems need to be matched to each other, as do the optical properties of a pair of relay optical systems. However, since such an optical system needs a great many lenses, it is very difficult to match their properties between the optical systems. In contrast, since a single optical system is used according to this embodiment as described above, the properties of the optical systems no longer need to be matched to each other, and therefore, the assembly process can be simplified.
Although the relay optical system LL consists of two relay lenses LL1 and LL2 in the illustrated example, the number of relay lenses is not limited to two.
According to this embodiment, high-resolution color images for stereoscopic viewing can also be obtained using a single image pickup system. Specifically, in this embodiment, the first relay lens LL1 produces an intermediate image Im2 based on the intermediate image Im1 that has been produced by the objective lens L, and the second relay lens LL2 produces an image on the image capturing plane Ni based on the intermediate image Im2. In the objective lens L, the intermediate image Im1 is produced on the primary imaging plane. According to this embodiment, an image is produced by the objective lens L on a single primary imaging plane.
According to an eighth embodiment, a signal processing section which measures the distance to the object is used, which is a major difference from the first through seventh embodiments described above.
The second signal processing section C2 calculates the distance to the subject based on the parallax Px that has been extracted by the first signal processing section.
Hereinafter, it will be described how to calculate the distance to the subject based on the parallax Px extracted.
FIGS. 17(a) and 17(b) conceptually illustrate the principle of rangefinding according to this embodiment. In this example, an ideal optical system including a thin lens is supposed to be used to describe the basic principle of rangefinding simply.
Based on these Equations (2) and (3), the subject distance a can be calculated by the following Equation (4):
In Equation (4), the focal length f and the base line length B are already known, the parallax δ is extracted by the pattern matching described above, and the distance e from the principal point to the image capturing plane Ni varies according to the setting of the focusing length. However, if the focusing length is fixed, then e also becomes a constant, and therefore, the subject distance a can be calculated.
Also, if e=f is substituted into Equation (4), which corresponds to setting the focusing length to infinity, then Equation (4) is transformed into the following Equation (5):
This Equation (5) becomes the same as the equation of triangulation that uses a pair of imaging optical systems that are arranged parallel to each other.
By making these calculations, the distance to a subject captured at an arbitrary position on a given image, or information about the distance to the subject over the entire image, can be obtained using a single imaging optical system.
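Since Equation (5) is described in the text as the standard triangulation equation for a pair of parallel imaging optical systems, its usual form a = B·f/δ is assumed in the sketch below; the numerical values are illustrative only, not taken from the patent.

```python
def subject_distance(f, B, delta):
    """Subject distance from parallax under the infinity-focus condition
    (e = f). The standard triangulation form a = B * f / delta is assumed
    here, since Equation (5) itself is not reproduced in the text."""
    return B * f / delta

# Illustrative numbers (not from the patent): a 5 mm focal length,
# a 2 mm base line length, and 0.01 mm of parallax on the image
# capturing plane give the subject distance in millimeters.
a = subject_distance(f=5.0, B=2.0, delta=0.01)
```

As Equation (4) suggests, a larger parallax corresponds to a nearer subject, and the accuracy of the estimate improves as the base line length B grows.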
In the first through eighth embodiments described above, the objective lens L1 is supposed to be a single lens. However, the objective lens L1 may also be made up of either multiple groups of lenses or multiple lenses.
Furthermore, in the first through eighth embodiments described above, the lens optical system L is supposed to be an image-space telecentric optical system. However, the lens optical system L may also be an image-space non-telecentric optical system as well.
Also, although the pixel arrangement of the image sensor is supposed to be a Bayer arrangement in the embodiments described above, other pixel arrangements may also be adopted.
Even if such a pixel arrangement is adopted, the first and second color images can also be generated following the flow described above.
Each of the first through seventh embodiments is an image pickup device including a first signal processing section C1, and the eighth embodiment is an image pickup device further including a second signal processing section C2. However, an image pickup device according to the present invention may include none of these signal processing sections. In that case, the processing that should be carried out by the first and second signal processing sections C1 and C2 may be performed by a PC provided outside of the image pickup device. That is to say, the present invention may also be implemented as a system including an image pickup device with an objective lens L, an array of optical elements K and an image sensor N and an external signal processor.
An image pickup device according to the present disclosure can be used effectively as a digital still camera or a digital camcorder, for example. In addition, the image pickup device of the present disclosure is also applicable to a distance measuring device for monitoring the environment surrounding a car or the crew in a car, and to viewing stereoscopic images on, or entering 3D information into, game consoles, PCs, mobile telecommunications devices, endoscopes, and so on.
Number | Date | Country | Kind
---|---|---|---
2012-021695 | Feb 2012 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP2013/000565 | 2/1/2013 | WO | 00 | 10/1/2013