This application is a national stage application of International Patent Application No. PCT/JP2015/004474, filed Sep. 3, 2015, which claims the benefit of Japanese Patent Application No. 2014-188472, filed on Sep. 17, 2014, and Japanese Patent Application No. 2015-155151, filed on Aug. 5, 2015, which are hereby incorporated by reference herein in their entireties.
The present invention relates to a technique for calculating the positional shift amount between images.
A known depth measuring apparatus measures depth by calculating a positional shift amount (also called “parallax”), which is a relative positional shift amount between two images having different points of view (hereafter called “image A” and “image B”). To calculate the positional shift amount, an area-based corresponding-point search technique called “template matching” is often used. In template matching, either image A or image B is set as a base image, and the other image is set as a reference image. A base area around a target point (also called a “base window”) is set on the base image, and a reference area around a reference point corresponding to the target point (also called a “reference window”) is set on the reference image. The base area and the reference area are collectively called “matching windows”. While the reference point is moved sequentially, the reference point at which the similarity between the image in the base area and the image in the reference area is highest (at which the correlation is highest) is searched for, and the positional shift amount is calculated from the relative positional shift amount between the target point and the reference point. Generally, if the size of the base area is small, a calculation error occurs in the positional shift amount because the mathematical operation is local. Hence, a relatively large area size is used.
The depth (distance) to an object can be calculated by converting the positional shift amount into a defocus amount or into an object depth using a conversion coefficient. This allows measuring the depth at high speed and with high accuracy, since it is unnecessary to move the lens to measure the depth.
The depth measurement accuracy improves as the positional shift amount is determined more accurately. Factors that cause an error in the positional shift amount are changes of the positional shift amount among the pixels in the base area, and noise generated in the process of acquiring the image data. To minimize the influence of changes of the positional shift amount within the base area, the base area must be small. If the base area is small, however, a calculation error in the positional shift amount may be generated by the influence of noise or by the existence of similar image patterns.
In Japanese Patent Application Laid-Open No. 2011-013706, the positional shift amount is calculated for each scanning line (e.g., a horizontal line), and the positional shift amount for the adjacent scanning line is calculated based on the already calculated positional shift amount data. For this purpose, a method has been proposed of setting the base area independently for each pixel so that a boundary where the calculated positional shift amount changes is not included.
In Japanese Patent Application Laid-Open No. H10-283474, a method is proposed of decreasing the size of the base area in steps and gradually limiting the search range in which a corresponding point is searched for.
A problem of the positional shift amount calculation method disclosed in Japanese Patent Application Laid-Open No. 2011-013706, however, is that the memory amount and computation amount required for calculating the positional shift amount are large. This is because the positional shift amount of a spatially adjacent area must be calculated and evaluated in advance to determine the size of the base area. Furthermore, in a case when the positional shift amount changes continuously, the base area becomes small, since the base area is set in a range where the positional shift amount is approximately the same, and a calculation error may occur in the positional shift amount. In other words, the depth may be miscalculated depending on how the object depth changes.
A problem of the positional shift amount calculation method disclosed in Japanese Patent Application Laid-Open No. H10-283474 is that the computation amount is large, since a plurality of base areas is set at each pixel position and the correlation degree is evaluated for each of them.
With the foregoing in view, it is an object of the present invention to provide a technique that can calculate the positional shift amount with high accuracy by a simple operation.
A first aspect of the present invention is to provide a positional shift amount calculation apparatus that calculates a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the apparatus having a calculation unit adapted to calculate the positional shift amount based on data within a predetermined area out of first image data representing the first image and second image data representing the second image, and a setting unit adapted to set a relative size of the area to the first and second image data, and in this positional shift amount calculation apparatus, the calculation unit being adapted to calculate a first positional shift amount using the first image data and the second image data in the area having a first size which is preset, the setting unit being adapted to set a second size of the area based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system, and the calculation unit being adapted to calculate a second positional shift amount using the first image data and the second image data in the area having the second size.
A second aspect of the present invention is to provide a positional shift amount calculation method for a positional shift amount calculation apparatus to calculate a positional shift amount, which is a relative positional shift amount between a first image based on a luminous flux that has passed through a first imaging optical system, and a second image, the method having a first calculation step of calculating a first positional shift amount based on data within an area having a predetermined first size, out of first image data representing the first image and second image data representing the second image, a setting step of setting a second size, which is a relative size of the area to the first and second image data, based on the size of the first positional shift amount and an optical characteristic of the first imaging optical system, and a second calculation step of calculating a second positional shift amount using the first image data and the second image data in the area having the second size.
According to the present invention, the positional shift amount can be calculated with high accuracy by a simple operation.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Embodiments of the present invention will now be described with reference to the drawings. In the following description, a digital camera is described as an example of an imaging apparatus that includes a depth calculation apparatus (positional shift calculation apparatus), but application of the present invention is not limited to this. For example, the positional shift amount calculation apparatus of the present invention can be applied to a digital depth measuring instrument.
In the description with reference to the drawings, the same segment is, as a rule, denoted by the same reference number even if the figure number is different, and redundant description is minimized.
<Configuration of a Digital Camera>
The imaging optical system 120 is a photographing lens of the digital camera 100, and has a function to form an image of the object on the imaging element 101, which is an imaging surface. The imaging optical system 120 is constituted by a plurality of lens groups (not illustrated) and a diaphragm (not illustrated), and has an exit pupil 103 at a position apart from the imaging element 101 by a predetermined distance. Reference number 140 in
An operation example of this digital camera 100 will now be described with reference to
<Configuration of Imaging Element>
The imaging element 101 is constituted by a CMOS (Complementary Metal-Oxide Semiconductor) or a CCD (Charge-Coupled Device). The object image is formed on the imaging element 101 via the imaging optical system 120, and the imaging element 101 performs photoelectric conversion on the received luminous flux, and generates image data based on the object image. The imaging element 101 according to this embodiment will now be described in detail with reference to
<Principle of Depth Measurement>
In each pixel constituting the pixel group 150 of this embodiment, two photoelectric conversion units (a first photoelectric conversion unit 161 and a second photoelectric conversion unit 162), whose shapes are symmetric in the xy cross section, are disposed in the light receiving layer (203 in
The plurality of first photoelectric conversion units 161 disposed in the pixels performs photoelectric conversion on the received luminous flux and generates the first image data. In the same manner, the plurality of second photoelectric conversion units 162 disposed in the pixels performs photoelectric conversion on the received luminous flux and generates the second image data. From the first image data, the intensity distribution of the first image (image A), which the luminous flux that has mainly passed through the first pupil area forms on the imaging element 101, can be acquired. From the second image data, the intensity distribution of the second image (image B), which the luminous flux that has mainly passed through the second pupil area forms on the imaging element 101, can be acquired. Therefore, the relative positional shift amount between the first image and the second image is the positional shift amount between image A and image B. By calculating this positional shift amount according to a later-mentioned method and converting it into the defocus amount using a conversion coefficient, the depth (distance) to the object can be calculated.
<Description on Depth Calculation Procedure>
The depth calculation procedure of this embodiment will now be described with reference to
In step S1, the imaging element 101 acquires the first image data and the second image data, and transfers the acquired data to the depth calculation unit 102.
In step S2, the light quantity balance correction processing is performed to correct the balance of the light quantity between the first image data and the second image data. To correct the light quantity balance, a known method can be used. For example, a coefficient to correct the light quantity balance between the first image data and the second image data is calculated based on an image acquired by photographing a uniform surface light source in advance using the digital camera 100.
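One common way to realize such a correction is a per-pixel gain derived from the images of the uniform surface light source; the routine below is a minimal 1-D sketch under that assumption, with all names and values illustrative rather than taken from the embodiment.

```python
def balance_gain(flat_a, flat_b):
    """Per-pixel gain mapping image-B levels onto image-A levels,
    derived from two captures of a uniform surface light source."""
    return [a / b for a, b in zip(flat_a, flat_b)]

def apply_balance(image_b, gain):
    """Apply the pre-computed gain to second image data."""
    return [v * g for v, g in zip(image_b, gain)]

flat_a = [100.0, 100.0, 100.0]
flat_b = [80.0, 100.0, 125.0]
gain = balance_gain(flat_a, flat_b)
print(apply_balance([40.0, 50.0, 50.0], gain))  # → [50.0, 50.0, 40.0]
```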
In step S3, the depth calculation unit 102 calculates the positional shift amount based on the first image data and the second image data. The calculation method for the positional shift amount will be described later with reference to
In step S4, the depth calculation unit 102 converts the positional shift amount into an image-side defocus amount using a predetermined conversion coefficient. The image-side defocus amount is a distance from an estimated focal position (imaging element surface) to the focal position of the imaging optical system 120.
The calculation method for the conversion coefficient that is used for converting the positional shift amount into the image-side defocus amount will now be described with reference to
In this embodiment, the positional shift amount is converted into the image-side defocus amount using Expression 1, but the positional shift amount may be converted into the image-side defocus amount by a different method. For example, based on the assumption that the base line length w is sufficiently larger than the positional shift amount r in Expression 1, a gain value Gain may be calculated using Expression 2, and the positional shift amount may be converted into the image-side defocus amount based on Expression 3.
Gain=L/w (2)
ΔL=Gain·r (3)
By using Expression 3, the positional shift amount can be easily converted into the image-side defocus amount, and the computation amount to calculate the object depth can be reduced. A lookup table may also be used to convert the positional shift amount into the image-side defocus amount. In this case, as well, the computation amount to calculate the object depth can be reduced.
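As a sketch, the simplified conversion of Expressions 2 and 3 can be written as follows; the function name and the numeric values are illustrative, not taken from the embodiment.

```python
def defocus_from_shift(r, pupil_distance, baseline):
    """Image-side defocus amount via the simplified conversion:
    Gain = L / w     (Expression 2)
    dL   = Gain * r  (Expression 3)
    where L is the exit-pupil distance and w is the base line length."""
    gain = pupil_distance / baseline
    return gain * r

# Illustrative values: L = 50 mm, w = 5 mm, shift r = 0.02 mm.
print(defocus_from_shift(r=0.02, pupil_distance=50.0, baseline=5.0))  # → 0.2
```

Because the gain is fixed for a given optical configuration, it can be computed once and reused, which is the computation saving mentioned above.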
In
In step S5, the image-side defocus amount calculated in step S4 is converted into the object depth based on the image forming relationship of the imaging optical system (object depth calculation processing). The conversion into the object depth may also be performed by a different method. For example, the image-side defocus amount is converted into the object-side defocus amount, and the depth to the object is calculated as the sum of the object-side defocus amount and the object-side focal position, which is calculated based on the focal length of the imaging optical system 120. The object-side defocus amount can be calculated using the image-side defocus amount and the longitudinal magnification of the imaging optical system 120.
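A minimal sketch of such an object-depth conversion, assuming a thin-lens imaging relation (the image forming relationship of the actual imaging optical system 120 may be more complex):

```python
def object_depth(image_distance, focal_length):
    """Object-side distance from the image-side distance using the
    thin-lens relation 1/f = 1/a + 1/b (an illustrative sketch)."""
    return 1.0 / (1.0 / focal_length - 1.0 / image_distance)

# f = 50 mm, in-focus image plane at 52 mm; an image-side defocus of
# +1 mm moves the image plane to 53 mm, i.e. a nearer object.
print(object_depth(52.0, 50.0))        # ≈ 1300 mm
print(object_depth(52.0 + 1.0, 50.0))  # ≈ 883 mm
```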
In the depth calculation procedure of this embodiment, the positional shift amount is converted into the image-side defocus amount in step S4, and then the image-side defocus amount is converted into the object depth in step S5. However, the processing executed after calculating the positional shift amount may be other than the above-mentioned processing. As mentioned above, the image-side defocus amount and the object-side defocus amount, or the image-side defocus amount and the object depth, can be converted into each other using the image forming relationship of the imaging optical system 120. Therefore, the positional shift amount may be directly converted into the object-side defocus amount or the object depth, without being converted into the image-side defocus amount. In either case, the defocus amount (image-side and/or object-side) and the object depth can be accurately calculated by accurately calculating the positional shift amount.
In this embodiment, the image-side defocus amount is converted into the object depth in step S5, but step S5 need not always be executed, and the depth calculation procedure may end at step S4. In other words, the image-side defocus amount may be the final output. The blur amount of the object in the final image depends on the image-side defocus amount: as the image-side defocus amount of the object becomes greater, a more blurred image is photographed. To perform refocusing processing for adjusting the focal position in image processing at a subsequent step, it is sufficient if the image-side defocus amount is known, and conversion into the object depth is unnecessary. As mentioned above, the image-side defocus amount can be converted into or from the object-side defocus amount or the positional shift amount. Hence, the final output may be the object-side defocus amount or the positional shift amount.
<Factor for Generating Positional Shift Amount Error>
A calculation method for the positional shift amount will be described first with reference to
Now, a factor that generates the positional shift amount error will be described. In
Contrast does not deteriorate very much in an area near the focal position of the imaging optical system 120. Hence, a high contrast object image can be acquired near the focal position. As the position of the object moves away from the focal position of the imaging optical system 120 (as the object is defocused), the contrast drops, and the contrast of the acquired image also decreases. If the defocus amount is plotted on the abscissa and the positional shift amount error is plotted on the ordinate, as shown in
If the positional shift amount changes within the base area, a bimodal correlation value curve, having two minimum values, is acquired, as shown in
<Detailed Description on Positional Shift Amount Calculation Method>
The depth calculation unit 102 of this embodiment and the positional shift amount calculation procedure S3 will now be described in detail with reference to
The depth calculation unit 102 is constituted by a positional shift amount calculation unit 602, a base area setting unit 603, and a depth conversion unit 604. The positional shift amount calculation unit 602 calculates the positional shift amount of the first image data and the second image data stored in the image storage unit 104 using a base area having a predetermined size, or a base area having a size set by the base area setting unit 603. The base area setting unit 603 receives the positional shift amount (first positional shift amount) from the positional shift amount calculation unit 602, and outputs the size of the base area corresponding to this positional shift amount to the positional shift amount calculation unit 602. The first image data and the second image data, on which light quantity balance correction has been performed, as described with reference to step S2 in
In step S3-1 in
In step S3-2 in
In
In step S3-3 in
By the above processing, the positional shift amount calculation procedure S3 completes. Then, the depth conversion unit 604 converts the second positional shift amount into the object depth by the method described in step S4 and S5 in
<Reason why Changes of Positional Shift Amount and Influence of Noise can be Reduced>
The reason why changes of the positional shift amount in the base area and influence of noise generated upon acquiring image signals can be reduced by the depth calculation method executed by the depth calculation unit 102 of this embodiment will be described with reference to
If the defocus amount is large, as in the case of the object 702 (
If the absolute value of the first positional shift amount is large (that is, if the defocus amount is large), the depth calculation unit 102 of this embodiment sets a large base area (second base area), and calculates the positional shift amount again. In other words, if the contrast of the image acquired via the imaging optical system 120 is low, and the changes of the positional shift amount are gentle, a large base area is set.
If the absolute value of the first positional shift amount is greater than a predetermined threshold, the depth calculation unit 102 of this embodiment sets a larger second base area. This makes it unnecessary to calculate the positional shift amount of spatially adjacent pixels in advance, and both the influence of changes of the positional shift amount and the influence of noise generated upon acquiring the image signals can be reduced by a simple operation. Furthermore, the base area is set according to the optical characteristic of the imaging optical system 120. Hence, the dependency of the calculated object depth on the changes of the positional shift amount can be reduced, and the object depth can be accurately measured.
The depth calculation unit 102 of this embodiment sets a larger size for the second base area when the absolute value of the first positional shift amount is greater than a predetermined threshold 610, as shown in
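A stepwise size rule of this kind can be sketched as follows; the thresholds and window widths are illustrative, and the text above only requires that the second base area grow when the absolute value of the first positional shift amount exceeds a threshold.

```python
def second_area_size(first_shift, thresholds=(2.0, 4.0), sizes=(9, 13, 17)):
    """Widen the second base area stepwise as the absolute value of the
    first positional shift amount grows (larger defocus -> larger blur
    -> larger matching window). Thresholds and widths are illustrative."""
    r = abs(first_shift)
    for t, s in zip(thresholds, sizes):
        if r <= t:
            return s
    return sizes[-1]

print(second_area_size(0.5))   # → 9
print(second_area_size(-3.0))  # → 13
print(second_area_size(9.0))   # → 17
```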
The depth calculation unit 102 of this embodiment need not calculate the first positional shift amount by setting the target point 410 at every pixel position in the first image data. The depth calculation unit 102 may calculate the first positional shift amount while moving the target point 410 sequentially at a predetermined interval. For example, the first positional shift amount is calculated at intervals of ten pixels in the horizontal and vertical directions, and the two-dimensional distribution of the first positional shift amount is expanded by a known expansion method (e.g., bilinear interpolation, nearest neighbor interpolation) and is referred to in order to set the second base area. By decreasing the number of target points that are set for calculating the first positional shift amount, the computation amount required for calculating the first positional shift amount can be reduced.
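The expansion of a sparsely computed first positional shift amount can be sketched with nearest-neighbour interpolation, one of the known expansion methods mentioned above (1-D for brevity; the function name and values are illustrative).

```python
def expand_nearest(sparse, step, length):
    """Expand a shift map computed every `step` pixels back to full
    resolution by nearest-neighbour interpolation (1-D sketch)."""
    full = []
    for x in range(length):
        idx = min((x + step // 2) // step, len(sparse) - 1)
        full.append(sparse[idx])
    return full

# First shifts computed only at pixels 0, 10, and 20.
full = expand_nearest([1.0, 2.0, 3.0], step=10, length=25)
print(full[0], full[7], full[24])  # → 1.0 2.0 3.0
```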
In the present embodiment, the changes of the positional shift amount in the base area and the influence of noise generated upon acquiring the image signals are reduced by setting the size of the second base area in consideration of the optical characteristic of the imaging optical system 120. Therefore, it is only required that the ratio of the area that the base area occupies in the first image data can be changed in order to set the size of the base area. The influence of the noise generated upon acquiring the image signals can be reduced either by enlarging the base area to increase the number of pixels included in it, or by reducing (thinning out) the image data while keeping the number of pixels included in the base area constant.
In the case of reducing the image data while keeping the number of pixels included in the base area constant, the positional shift amount calculation unit 602 shown in
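Reducing (thinning out) the image data while keeping the number of pixels in the base area constant can be sketched as block averaging; note that a shift measured on the reduced image must then be scaled back by the reduction factor. The routine below is a 1-D illustration with illustrative names.

```python
def reduce_by_averaging(image, factor):
    """Reduce (thin out) a 1-D image by averaging each block of
    `factor` samples, so that a base area with a fixed pixel count
    covers a `factor`-times wider region of the original image."""
    return [sum(image[i:i + factor]) / factor
            for i in range(0, len(image) - factor + 1, factor)]

print(reduce_by_averaging([1, 3, 5, 7, 9, 11], 2))  # → [2.0, 6.0, 10.0]
```

The averaging also suppresses pixel-wise noise, which is the effect relied on in the paragraph above.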
To calculate the second positional shift amount, both the size of the base area and the size of the image data may be changed. In other words, the processing executed by the base area setting unit 603 is not limited to a specific manner, as long as the relative sizes of the base areas with respect to the first image data and the second image data are changed. The influence of noise can be reduced if the relative sizes of the base areas with respect to the first and second image data when calculating the second positional shift amount are larger than the relative sizes when calculating the first positional shift amount.
<Modification 1 of Depth Calculation Unit>
The configuration shown in
The general flow of the depth calculation procedure of this modification is the same as above, but, in the step of setting the size of the second base area in step S3-2 in
In the depth calculation unit 102 shown in
The blur size of the imaging optical system 120 can be expressed by 3σ (three times the standard deviation σ) of the PSF, for example. Therefore, the PSF size storage unit 804 outputs 3σ of the PSF of the imaging optical system 120 as the size of the PSF. It is sufficient if the PSF size storage unit 804 stores the PSF size only for the central angle of view, but it is preferable to store the PSF size for the peripheral angle of view as well if the aberration of the imaging optical system 120 at the peripheral angle of view is large. The PSF size may also be expressed as a function representing the relationship between the PSF size and the positional shift amount, so that only the coefficients are stored in the PSF size storage unit 804. For example, the PSF size may be calculated using a linear function in which the reciprocal of the diaphragm value (F value) of the imaging optical system is a coefficient, as shown in Expression 4, and the coefficients k1 and k2 may be stored in the PSF size storage unit 804.
Here, PSFsize is the PSF size, r is the first positional shift amount, F is the F value of the imaging optical system 120, and k1 and k2 are predetermined coefficients.
The PSF size may be determined as shown in Expression 5, considering that the ratio of the base line length (distance 513) described with reference to
Here, w is the base line length, D is the diameter of the exit pupil, and k1 and k2 are predetermined coefficients. The size of the PSF and the defocus amount have an approximately proportional relationship. Considering that the defocus amount and the positional shift amount also have an approximately proportional relationship, as shown in Expression 3, the coefficient k2 in Expression 4 and Expression 5 is not essential.
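A sketch of an Expression 4 style linear model, together with one possible mapping from the PSF size to the second base-area width; the use of the absolute value of the shift, the rounding rule, and all coefficient values are assumptions made for illustration only.

```python
def psf_size(r, f_number, k1, k2):
    """Linear PSF-size model in the spirit of Expression 4:
    PSFsize = k1 * (1 / F) * |r| + k2.
    Taking |r| is an assumption: blur grows with the magnitude
    of the positional shift on either side of focus."""
    return k1 * (1.0 / f_number) * abs(r) + k2

def window_from_psf(psf_px, minimum=5):
    """Second base-area width matched to the PSF size: at least
    `minimum` pixels, rounded up to an odd width (illustrative rule)."""
    w = max(minimum, int(round(psf_px)))
    return w if w % 2 == 1 else w + 1

size = psf_size(r=4.0, f_number=2.0, k1=3.0, k2=1.0)
print(size)                   # → 7.0
print(window_from_psf(size))  # → 7
```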
The base area setting unit 603 according to this modification sets the size of the second base area in accordance with the PSF size acquired from the PSF size storage unit 804.
In either case, the second base area is set more appropriately by setting it in accordance with the change of the PSF size due to the defocus of the imaging optical system 120. Thereby, the area size of the second base area is set neither too large nor too small, and an increase in computation amount and an error in the positional shift amount can be prevented.
<Modification 2 of Depth Calculation Unit>
As another modification of this embodiment, the depth calculation unit 102 may include an imaging performance value storage unit instead of the PSF size storage unit 804. The imaging performance value storage unit outputs a value representing the imaging performance of the object image formed by the imaging optical system 120. The imaging performance can be expressed, for example, by the absolute value of the optical transfer function (that is, the modulation transfer function, hereafter called “MTF”), which indicates the imaging performance of the imaging optical system 120. In
<Modification 3 of Depth Calculation Unit>
As another modification of this embodiment, the procedure shown in
In
In the procedure shown in
In the above description, it is assumed that the depth calculation unit 102 includes the base area setting determination unit in addition to the configuration shown in
<Other Examples of First Image Data and Second Image Data Acquisition Method>
In the first embodiment, two image data having different points of view are acquired by splitting the luminous flux of one imaging optical system, but two image data may be acquired using two imaging optical systems. For example, the stereo camera 1000 shown in
In the case of the stereo camera, it is assumed that the image data generated by the imaging element 1010 is the first image data, and the image data generated by the imaging element 1011 is the second image data. The optical characteristic of the imaging optical system 1020 and that of the imaging optical system 1021 are preferably similar. The depth calculation unit 102 can calculate the depth to the object according to the depth calculation procedure described with reference to
In this modification, as well, the changes of the positional shift amount in the base area and the influence of noise generated upon acquiring images can be reduced by setting the size of the second base area considering the optical characteristic of the imaging optical systems 1020 and 1021. Particularly, in the case of the stereo camera 1000, the F values of the imaging optical systems 1020 and 1021 must be small in order to acquire high resolution images. In this case, a drop in contrast due to defocus becomes conspicuous. Hence, the depth calculation apparatus that includes the depth calculation unit 102 according to this embodiment can ideally calculate the depth to the object.
The above mentioned depth calculation apparatus according to the first embodiment can be implemented by software (programs) or by hardware. For example, a computer program is stored in the memory of a computer (e.g., a microcomputer, a CPU, an MPU, an FPGA) included in the imaging apparatus or an image processing apparatus, and the computer executes the program to implement each processing step. It is also preferable to dispose a dedicated processor, such as an ASIC, to implement all or a portion of the processing of the present invention using logic circuits. The present invention is also applicable to a server in a cloud environment.
The present invention may be implemented by a method constituted by steps to be executed by a computer of a system or an apparatus, which implements the above mentioned functions of the embodiment by reading and executing a program recorded in a storage apparatus. For this purpose, this program is provided to the computer via a network, or via various types of recording media that can function as a storage apparatus (that is, computer-readable recording media that hold data non-transitorily), for example. Therefore, this computer (including such a device as a CPU and an MPU), this method, this program (including program codes and program products), and the computer-readable recording media that non-transitorily store this program are all included within the scope of the present invention.
Embodiment(s) of the present invention can also be realized by a computer of a system or an apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., an application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., a central processing unit (CPU), or a micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and to execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), a digital versatile disc (DVD), or a Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Number | Date | Country | Kind |
---|---|---|---|
2014-188472 | Sep 2014 | JP | national |
2015-155151 | Aug 2015 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2015/004474 | 9/3/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/042721 | 3/24/2016 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4812869 | Akashi et al. | Mar 1989 | A |
7411626 | Ueda | Aug 2008 | B2 |
7792420 | Kusaka | Sep 2010 | B2 |
8548226 | Sakano et al. | Oct 2013 | B2 |
8842936 | Kawamura | Sep 2014 | B2 |
9279677 | Fujiwara | Mar 2016 | B2 |
20140009577 | Wakabayashi et al. | Jan 2014 | A1 |
20140247344 | Fujiwara | Sep 2014 | A1 |
Number | Date | Country |
---|---|---|
H10-283474 | Oct 1998 | JP |
2007-233032 | Sep 2007 | JP |
2008-134641 | Jun 2008 | JP |
2010-117593 | May 2010 | JP |
2011-013706 | Jan 2011 | JP |
2014-038151 | Feb 2014 | JP |
Entry |
---|
Notification of and International Preliminary Report on Patentability dated Mar. 21, 2017, and dated Mar. 30, 2017, in corresponding International Patent Application No. PCT/JP2015/004474. |
Notification of and International Search Report and Written Opinion dated Dec. 8, 2015, in corresponding International Patent Application No. PCT/JP2015/004474. |
Kanade, Takeo, et al., A Stereo Matching Algorithm with an Adaptive Window: Theory and Experiment, Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Apr. 1991, pp. 1088-1095, Sacramento, California. |
Okutomi, Masatoshi, et al., A Locally Adaptive Window for Signal Matching, International Journal of Computer Vision, Jan. 1992, pp. 143-162, 7:2, Norwell, Massachusetts. |
Search Report dated Apr. 5, 2018, issued in corresponding European Patent Application No. 15842093.5. |
Number | Date | Country | |
---|---|---|---|
20170270688 A1 | Sep 2017 | US |