1. Field of the Invention
The present invention relates to an image processing technique for improving the image quality of captured images.
2. Description of the Related Art
In image capturing by image pickup apparatuses such as cameras, part of the light entering an image capturing optical system is often reflected by a lens surface or a lens holding member and reaches an image sensor surface as unnecessary light. The unnecessary light reaching the image sensor surface forms a high-density spot image or spreads over a wide area of an object image, thereby appearing in a captured image as a ghost or flare, that is, as an unnecessary image component.
Moreover, in a telephoto lens whose most-object-side lens is a diffractive optical element for correcting longitudinal chromatic aberration or chromatic aberration of magnification, light emitted from a high-luminance object such as the sun located outside the image capturing angle of view and entering the diffractive optical element sometimes generates dim unnecessary light. Such unnecessary light also appears as an unnecessary image component in the captured image.
Thus, methods of optically reducing the unnecessary light or of removing the unnecessary component by digital image processing have conventionally been proposed. As one of the methods of removing the unnecessary component by digital image processing, Japanese Patent Laid-Open No. 2008-054206 discloses a method of detecting a ghost from a difference image showing the difference between an in-focus image captured through an image capturing optical system in an in-focus state for an object and a defocused image captured through the same image capturing optical system in a defocused state for the object.
However, the method disclosed in Japanese Patent Laid-Open No. 2008-054206 requires image capturing multiple times including image capturing in the in-focus state and image capturing in the defocused state. Therefore, the method is not suitable for still image capturing of moving objects and for moving image capturing.
The present invention provides an image processing method, an image processing apparatus and an image pickup apparatus capable of accurately deciding an unnecessary image component included in a captured image without requiring image capturing multiple times.
The present invention provides as one aspect thereof an image processing method including the steps of acquiring parallax images having parallax and produced by image capturing of an object, performing position matching of the parallax images to calculate difference between the parallax images, and deciding, in the difference, an unnecessary image component different from an image component corresponding to the parallax.
The present invention provides as another aspect thereof an image processing apparatus including an image acquiring part configured to acquire parallax images having parallax and produced by image capturing of an object, a difference calculating part configured to perform position matching of the parallax images to calculate difference between the parallax images, and an unnecessary image component deciding part configured to decide, in the difference, an unnecessary image component different from an image component corresponding to the parallax.
The present invention provides as still another aspect thereof an image pickup apparatus including an image capturing system configured to perform image capturing of an object to produce parallax images having parallax, and the above image processing apparatus.
The present invention provides as yet still another aspect thereof a non-transitory computer-readable storage medium storing an image processing program for causing a computer to execute an image processing operation. The image processing operation includes acquiring parallax images having parallax and produced by image capturing of an object, performing position matching of the parallax images to calculate difference between the parallax images, and deciding, in the difference, an unnecessary image component different from an image component corresponding to the parallax.
Other aspects of the present invention will become apparent from the following description and the attached drawings.
Exemplary embodiments of the present invention will hereinafter be described with reference to the accompanying drawings.
An image pickup apparatus used in each embodiment of the present invention has an image capturing system that introduces light fluxes passing through mutually different areas of a pupil of an image capturing optical system to mutually different light-receiving portions (pixels) of an image sensor and causes the light-receiving portions to photoelectrically convert the light fluxes, which enables production of parallax images having parallax.
The image sensor is provided with a plurality of such paired G1 and G2 pixels (pixel pairs). The paired G1 and G2 pixels are placed in a conjugate relationship with the exit pupil EXP by one microlens ML provided for each pixel pair. In the following description, the plurality of the G1 pixels is referred to as “a G1 pixel group”, and the plurality of the G2 pixels is referred to as “a G2 pixel group”.
The passage of the light fluxes through the mutually different areas of the pupil corresponds to separation of the light fluxes entering from the object point OSP according to their angles (parallax). That is, an image produced by using the output signal from the G1 pixel provided for one microlens ML and an image produced by using the output signal from the G2 pixel provided for the same microlens ML form a plurality (a pair) of parallax images having the parallax. In the following description, receiving of the light fluxes passing through the mutually different areas of the pupil by the mutually different light-receiving portions (pixels) is also referred to as “pupil division”.
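As a purely illustrative sketch (not the apparatus's actual readout), the following assumes that the G1 and G2 outputs of each pixel pair are stored in adjacent columns of the A/D-converted raw data, so that splitting the columns yields the pair of parallax images. The interleaving pattern, array names and sizes are assumptions.

```python
import numpy as np

def split_parallax_images(raw: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split pupil-divided raw data into a pair of parallax images.

    Assumes (hypothetically) that the G1 and G2 pixels of each pixel pair
    occupy adjacent columns, so even columns belong to the G1 pixel group
    and odd columns to the G2 pixel group.
    """
    g1 = raw[:, 0::2].astype(np.float64)  # image seen through one pupil area
    g2 = raw[:, 1::2].astype(np.float64)  # image seen through the other pupil area
    return g1, g2

# Usage with dummy raw data standing in for the A/D-converted sensor output.
raw = np.random.randint(0, 4096, size=(480, 1280), dtype=np.uint16)
img_g1, img_g2 = split_parallax_images(raw)
print(img_g1.shape, img_g2.shape)  # (480, 640) (480, 640)
```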
Analog electrical signals produced by the photoelectric conversion of the light fluxes by the image sensor 202 are converted into digital signals by the A/D converter 203, and the digital signals are input to an image processor 204. The image processor 204 performs, on the digital signals, general image processes, an unnecessary light decision process and a correction process for reducing or removing the unnecessary light to produce an output image. The image processor 204 corresponds to an image processing apparatus provided in the image pickup apparatus. Moreover, the image processor 204 serves as an image acquiring part to acquire (produce) parallax images, a difference calculating part to calculate difference between the parallax images and an unnecessary image component deciding part to decide an unnecessary image component.
The output image produced by the image processor 204 is stored in an image recording medium 209 such as a semiconductor memory or an optical disk. The output image may also be displayed on a display device 205.
A system controller 210 controls operations of the image sensor 202, the image processor 204, and the aperture stop 201a and focus lens 201b in the image capturing optical system 201. Specifically, the system controller 210 outputs a control instruction to an image capturing optical system controller 206, and the image capturing optical system controller 206 controls mechanical drive of the aperture stop 201a and the focus lens 201b in response to the control instruction. Thus, the aperture diameter of the aperture stop 201a is controlled according to an aperture value (F-number) set by the system controller 210. The current aperture diameter of the aperture stop 201a and the current position of the focus lens 201b are detected by a status detector 207 through the image capturing optical system controller 206 or the system controller 210 and are input to the image processor 204. The position of the focus lens 201b is controlled by an autofocus (AF) system (not shown) and a manual focus mechanism (not shown) to perform focusing according to object distances. Although the image capturing optical system 201 shown in
Description will be made of a method of deciding an unnecessary image component that is an image component appearing, due to photoelectric conversion of the unnecessary light, in a captured image produced by image capturing by the image pickup apparatus thus configured, with reference to
Moreover, a correction process is performed on an image (or images) to be output, such as a reconstructed image produced by combining the paired G1 and G2 pixels or the paired parallax images themselves, to remove or reduce the unnecessary image component thus decided, which enables acquisition of an output image including almost no unnecessary image component, as shown in
The decision of the unnecessary image component requires, as described above, the process to produce the difference image and the process to isolate the unnecessary image component from the object parallax component, which decreases the image processing speed up to production of the output image and may deteriorate image quality through erroneous detection. Thus, each embodiment determines whether or not to perform the above-mentioned image process to decide and remove the unnecessary image component, by referring to image capturing condition information of the image pickup apparatus and to determination information, which are described later. The determination may be made only of whether or not to perform the process to decide the unnecessary image component, because, if that process does not decide any unnecessary image component, the process to remove it is not needed.
Therefore, in the case where the image capturing condition is included in the image capturing condition area P, this embodiment performs the image process to decide and remove the unnecessary image component; in the case where the image capturing condition is included in the image capturing condition area Q, it does not perform the process. This enables avoidance of the undesirable decrease of the image processing speed and of the deterioration of the image quality due to erroneous detection.
The boundary between the image capturing condition areas P and Q and the number of divisions of the image capturing condition areas are not limited to those shown in
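As a minimal sketch of such a determination, the following assumes, purely for illustration, that the condition areas P and Q are separated by an aperture value and a focal length boundary held in the determination information; the parameter names, the table format and the values are assumptions, not the apparatus's stored data.

```python
def should_run_ghost_removal(f_number: float, focal_length_mm: float,
                             determination_info: dict) -> bool:
    """Return True if the image capturing condition falls in area P.

    `determination_info` is a hypothetical table holding the boundary of the
    condition areas, e.g. {"max_f_number": 5.6, "min_focal_length": 100.0}:
    unnecessary light is assumed (for this sketch) to be likely only for
    bright apertures on long focal lengths, so only then is the
    decision/removal process run.
    """
    return (f_number <= determination_info["max_f_number"]
            and focal_length_mm >= determination_info["min_focal_length"])

# Example: condition area P -> run the process; area Q -> skip it.
info = {"max_f_number": 5.6, "min_focal_length": 100.0}
print(should_run_ghost_removal(4.0, 300.0, info))   # True  (area P)
print(should_run_ghost_removal(11.0, 50.0, info))   # False (area Q)
```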
At step S11, the system controller 210 controls the image capturing system including the image capturing optical system 201 and the image sensor 202 to perform image capturing of an object.
At step S12, the system controller 210 causes the status detector 207 to acquire the image capturing condition information.
At step S13, the system controller 210 determines, by using the image capturing condition and the determination information, whether or not to perform the image process to decide and remove the unnecessary image component.
If the system controller 210 determines at step S13 not to perform the image process to decide and remove the unnecessary image component, it produces an output image by performing predetermined processes such as a development process and an image correction process. The system controller 210 may still produce parallax images as needed, for purposes other than the decision and removal of the unnecessary image component. Description will hereinafter be made of the case where a determination to perform the image process to decide and remove the unnecessary image component is made.
At step S14, the system controller 210 causes the image processor 204 to produce a pair of parallax images as input images by using digital signals converted by the A/D converter 203 from analog signals output from the G1 and G2 pixel groups of the image sensor 202.
Next, at step S15, the image processor 204 performs the position matching of the paired parallax images. The position matching can be performed by relatively shifting one of the paired parallax images with respect to the other one thereof to decide a shift position where correlation between these images becomes maximum. The position matching can also be performed by deciding a shift position where a square sum of difference components between the paired parallax images becomes minimum. Moreover, the shift position decision for the position matching may be performed by using in-focus areas of the parallax images.
In addition, the shift position decision for the position matching may be performed by detecting edges in each of the parallax images and by using the detected edges. Since high-contrast edges are detected in an in-focus area while edges in an out-of-focus area such as a background area have low contrast and are difficult to detect, the shift position decision performed in this way naturally concentrates on the in-focus area.
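The shift search described above can be sketched as follows. This is a brute-force illustration, under the assumption that integer shifts within a small range suffice; it keeps the shift minimizing the sum of squared differences, which for normalized images corresponds to maximizing the correlation.

```python
import numpy as np

def find_shift(img_a: np.ndarray, img_b: np.ndarray, max_shift: int = 16) -> tuple[int, int]:
    """Find the integer (dy, dx) shift of img_b that best aligns it with img_a.

    The shift minimizing the sum of squared differences over the overlapping
    region is returned; only small shifts are searched, which is reasonable
    when the parallax between the paired images is small.
    """
    h, w = img_a.shape
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img_b, dy, axis=0), dx, axis=1)
            # Ignore wrapped-around borders by cropping a max_shift margin.
            a = img_a[max_shift:h - max_shift, max_shift:w - max_shift]
            b = shifted[max_shift:h - max_shift, max_shift:w - max_shift]
            score = np.sum((a - b) ** 2)
            if score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Dummy usage: img_b is img_a shifted right by 3 pixels, so the best
# aligning shift of img_b is 3 pixels back to the left.
img_a = np.random.rand(200, 200)
img_b = np.roll(img_a, 3, axis=1)
print(find_shift(img_a, img_b))  # -> (0, -3)
```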
Next, at step S16, the image processor 204 calculates difference between the paired parallax images to produce the above-mentioned difference image. In a case where light fluxes of the unnecessary light reaching the image pickup surface pass through mutually different pupil areas of the image capturing optical system, the unnecessary image components are generated at different positions in the parallax images as shown in
Thus, at step S17, the image processor 204 corrects the difference image such that only differences equal to or greater than (or, alternatively, greater than) a predetermined threshold, which is set greater than the absolute value of the object parallax component, remain; what remains is the unnecessary image component. This correction may be accompanied by an image processing technique such as smoothing for suppressing detection noise. Moreover, the correction can be performed by removing thin lines and isolated points, on the basis of the characteristic that the unnecessary image component has a larger area than the object parallax component, as shown in
Next, at step S18, the image processor 204 decides a remaining image component in the image acquired at step S17 as the unnecessary image component.
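Steps S16 to S18 might be sketched as follows; the threshold value, the structuring-element size and the use of a morphological opening to stand in for the removal of thin lines and isolated points are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def decide_unnecessary_component(img_a, img_b, threshold=0.05):
    """Decide the unnecessary image component from a pair of aligned parallax images.

    Differences no larger than the threshold are treated as the object parallax
    component and discarded; thin or isolated residues are then removed by a
    morphological opening, on the basis that ghosts occupy larger areas than
    parallax edges.
    """
    diff = np.abs(img_a - img_b)                      # step S16: difference image
    mask = diff >= threshold                          # step S17: keep large differences only
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))  # remove thin lines/isolated points
    unnecessary = np.where(mask, diff, 0.0)           # step S18: remaining component
    return unnecessary, mask

# Dummy example: a bright square present in only one of the two images.
a = np.zeros((100, 100))
a[30:60, 30:60] = 0.3                                 # simulated ghost in image A only
b = np.zeros((100, 100))
component, mask = decide_unnecessary_component(a, b)
print(mask.sum())                                     # number of pixels decided as unnecessary
```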
Next, at step S19, the image processor 204 performs a correction process to remove (or reduce) the unnecessary image component from an image to be output. In this embodiment, the image processor 204 produces, as the image to be output, a reconstructed image that is acquired by combining the G1 and G2 pixels shown in
The image processor 204 may produce such an output image in which the unnecessary image component is removed (reduced) by another method that combines the entire parallax image acquired by using the G1 pixel group with the entire parallax image acquired by using the G2 pixel group to produce a reconstructed image and subtracts therefrom the above-mentioned unnecessary component image area.
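As a sketch of the correction at step S19, the following assumes that the reconstructed image is simply the average of the two parallax images and that the decided component map can be subtracted from it directly; the scaling and clipping are illustrative choices, not the prescribed correction.

```python
import numpy as np

def remove_unnecessary_component(img_g1, img_g2, component):
    """Produce an output image with the decided unnecessary component subtracted.

    The reconstructed image is assumed to be the average of the paired parallax
    images; the decided component map is halved accordingly before subtraction.
    """
    reconstructed = 0.5 * (img_g1 + img_g2)
    return np.clip(reconstructed - 0.5 * component, 0.0, None)

# Dummy usage continuing the ghost-in-one-image example above.
a = np.zeros((100, 100))
a[30:60, 30:60] = 0.3
b = np.zeros((100, 100))
comp = a - b                       # pretend this is the decided component
out = remove_unnecessary_component(a, b, comp)
print(out.max())                   # ~0.0: the ghost region has been suppressed
```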
Finally, at step S20, the system controller 210 stores the output image in which the unnecessary image component is removed (reduced) in the image recording medium 209 and displays the output image on the display device 205.
As described above, this embodiment can decide the unnecessary image component, which is generated due to the unnecessary light (ghost), by using the parallax images produced by one image capturing. That is, this embodiment can decide the unnecessary image component included in a captured image without performing image capturing multiple times. Moreover, this embodiment determines, by using the image capturing condition, whether or not to perform the decision of the unnecessary image component; by not performing the unnecessary image process in the case where it is clear that no unnecessary image component is generated, it can avoid the decrease of the image processing speed and the deterioration of the image quality caused by erroneous detection. Thus, this embodiment can provide a high quality captured image in which the decided unnecessary image component is sufficiently removed or reduced.
Processes at steps S11 to S14 are same as those in Embodiment 1, so that description thereof is omitted.
At step S15, the system controller 210 causes the image processor 204 to produce information on distances to the objects (hereinafter referred to as “object distance information”) by using the parallax images. A method of acquiring the object distance information from parallax images is well known, so that description thereof is omitted.
Next, at step S16, the image processor 204 performs position matching of the paired parallax images by using the object distance information. The position matching can be performed by, as in Embodiment 1, relatively shifting one of the paired parallax images with respect to the other to decide the shift position where the correlation between the parallax images becomes maximum. The position matching can also be performed by deciding the shift position where the square sum of the difference components between the paired parallax images becomes minimum. Changing the shift amount according to the object distance information enables minimization of displacement due to parallax in the position matching even in a case where distances to multiple objects are mutually different.
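A sketch of such distance-dependent position matching follows; it assumes that a per-pixel disparity (shift) map has already been derived from the object distance information, and warps one parallax image by that map. The map-based warp, its parameters, and the dummy disparity values are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def align_with_disparity(img_b: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Warp img_b horizontally by a per-pixel disparity map (in pixels).

    The disparity map is assumed to have been derived from the object distance
    information; nearer objects get larger shifts, farther objects smaller ones.
    """
    h, w = img_b.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy, xx + disparity])              # sample positions in img_b
    return ndimage.map_coordinates(img_b, coords, order=1, mode="nearest")

# Dummy usage: a constant 2-pixel disparity simply shifts the whole image by two pixels.
img = np.random.rand(120, 160)
aligned = align_with_disparity(img, np.full((120, 160), 2.0))
```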
Processes at steps S17 to S21 correspond to those at steps S16 to S20 in Embodiment 1, so that description thereof is omitted.
As described above, this embodiment also can decide the unnecessary image component, which is generated due to the unnecessary light (ghost), by using the parallax images produced by one image capturing. That is, this embodiment also can decide the unnecessary image component included in a captured image without performing image capturing multiple times. Moreover, this embodiment determines, by using the image capturing condition, whether or not to perform the decision of the unnecessary image component; by not performing the unnecessary image process in the case where it is clear that no unnecessary image component is generated, it can avoid the decrease of the image processing speed and the deterioration of the image quality caused by erroneous detection. Thus, this embodiment also can provide a high quality captured image in which the decided unnecessary image component is sufficiently removed or reduced.
Processes at steps S11 and S12 are same as those in Embodiment 1, so that description thereof is omitted.
At step S13, the system controller 210 causes the image processor 204 to detect presence or absence of a high-luminance area from the input image (parallax images). The high-luminance area includes, for example, an area SUN shown in
Next, at step S14, the system controller 210 determines whether or not to perform the image process to decide and remove the unnecessary image component by using the above-mentioned image capturing condition information, the determination information and a detection result of the high-luminance area. As shown in
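The high-luminance detection at step S13 might be sketched as below; the luminance threshold and the minimum pixel count are assumptions, and combining the result with the condition-area check of the earlier sketch (for example, running the process only when both indicate a likely ghost) is one plausible, but assumed, form of the determination at step S14.

```python
import numpy as np

def has_high_luminance_area(image: np.ndarray, threshold: float = 0.98,
                            min_pixels: int = 50) -> bool:
    """Detect whether a normalized image contains a high-luminance (near-saturated) area.

    An area is reported only if at least `min_pixels` pixels exceed the threshold,
    so that isolated hot pixels are not mistaken for a bright light source.
    """
    return int(np.count_nonzero(image >= threshold)) >= min_pixels

# Dummy usage: a small saturated patch standing in for a bright light source.
frame = np.zeros((100, 100))
frame[10:20, 10:20] = 1.0
print(has_high_luminance_area(frame))  # True
```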
Processes at steps S15 to S21 correspond to those at steps S14 to S20 in Embodiment 1, so that description thereof is omitted.
Processes at steps S11 and S12 are same as those in Embodiment 1, so that description thereof is omitted.
At step S13, the system controller 210 causes, as in the above-described modified example, the image processor 204 to detect presence or absence of the high-luminance area from the input image (parallax images). Moreover, if the high-luminance area is present, the system controller 210 detects its position (coordinates in the image).
Next, at step S14, the system controller 210 determines whether or not to perform the image process to decide and remove the unnecessary image component by using the above-mentioned image capturing condition information, the determination information and a detection result of the high-luminance area.
Next, at step S15, the system controller 210 causes the image processor 204 to decide a target area on which decision of the unnecessary image component is made depending on the position of the high-luminance area. Description of this target area will be made with reference to
Processes at steps S16 to S19 correspond to those at steps S14 to S17 in Embodiment 1, so that description thereof is omitted.
Next, at step S20, the image processor 204 decides, for the image acquired at step S19, an image component remaining in the target area decided at step S15 as the unnecessary image component.
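As a sketch of step S20, the following restricts the decision to a rectangular target area; how the target area is derived at step S15 from the position of the high-luminance area is not reproduced here, so the rectangle passed in is a hypothetical placeholder.

```python
import numpy as np

def decide_in_target_area(corrected_diff: np.ndarray, target_rect) -> np.ndarray:
    """Keep only the difference components inside the target area.

    `target_rect` = (top, left, bottom, right) is the hypothetical target area
    decided at step S15 from the position of the high-luminance area.
    """
    top, left, bottom, right = target_rect
    mask = np.zeros_like(corrected_diff, dtype=bool)
    mask[top:bottom, left:right] = True
    return np.where(mask, corrected_diff, 0.0)

# Usage: restrict the decision to an 80x80 region around a hypothetical ghost position.
diff = np.random.rand(480, 640) * 0.02
diff[200:240, 300:340] += 0.3                       # simulated ghost inside the target area
unnecessary = decide_in_target_area(diff, (180, 280, 260, 360))
```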
Processes at steps S21 and S22 correspond to those at steps S19 and S20 in Embodiment 1, so that description thereof is omitted.
As described above, this embodiment also can decide the unnecessary image component, which is generated due to the unnecessary light (ghost), by using the parallax images produced by one image capturing. That is, this embodiment also can decide the unnecessary image component included in a captured image without performing image capturing multiple times. Moreover, this embodiment determines, by using the image capturing condition and the detection result of the high-luminance area, whether or not to perform the decision of the unnecessary image component; by not performing the unnecessary image process in the case where it is clear that no unnecessary image component is generated, it can avoid the decrease of the image processing speed and the deterioration of the image quality caused by erroneous detection. Thus, this embodiment also can provide a high quality captured image in which the decided unnecessary image component is sufficiently removed or reduced.
Description will be made of a method of deciding, in paired parallax images produced by image capturing, an unnecessary image component corresponding to the unnecessary diffracted light in this embodiment with reference to
Then, performing, on an image to be output, the correction process described in Embodiment 1 to remove or reduce the decided unnecessary image component enables acquisition of an output image in which the unnecessary image component is mostly removed, as shown in
As described above, this embodiment can decide the unnecessary image component, which is generated due to the unnecessary diffracted light, by using the parallax images produced by one image capturing. That is, this embodiment also can decide the unnecessary image component included in a captured image without performing image capturing multiple times. Moreover, this embodiment also determines, by using the image capturing condition, whether or not to perform the decision of the unnecessary image component; by not performing the unnecessary image process in the case where it is clear that no unnecessary image component is generated, it can avoid the decrease of the image processing speed and the deterioration of the image quality caused by erroneous detection. Thus, this embodiment also can provide a high quality captured image in which the decided unnecessary image component is sufficiently removed or reduced.
Next, description of a fifth embodiment (Embodiment 5) of the present invention will be made. “Light Field Photography with a Hand-held Plenoptic Camera” (Stanford Tech Report CTSR 2005-02) by Ren Ng et al. proposes the “Plenoptic Camera”. Using the technique called “Light Field Photography” in this “Plenoptic Camera” enables acquisition of information on positions and angles of light rays from an object side.
The microlens array 301c serves as a separator that prevents mixing of light rays passing through a certain point M in an object space and light rays passing through a point near the point M on the image sensor 302.
Moreover, “Full Resolution Light Field Rendering” (Adobe Technical Report, January 2008) by Todor Georgiev et al. proposes, as a method for acquiring information (Light Field) on positions and angles of light rays, a method shown in
As described in Embodiments 1 to 4, the unnecessary light such as the ghost and the unnecessary diffracted light passes through an off-centered area of the pupil. Thus, using the image processing method described in any one of Embodiments 1 to 4 in the image pickup apparatus of this embodiment, which performs image capturing with pupil division, enables decision of the unnecessary image component. In other words, Embodiments 1 to 4 described the case where the pupil of the image capturing optical system is divided into two pupil areas, whereas this embodiment describes the case where the pupil is divided into a larger number of pupil areas.
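One simple way to extend the two-image difference to many pupil areas, offered here only as an assumed illustration and not as the document's prescribed procedure, is to take, for each sub-aperture image, the per-pixel minimum of its positive differences from all other sub-aperture images: components shared by every viewpoint cancel, while a component present in only one viewpoint remains. More elaborate combinations would be needed when the unnecessary light reaches several viewpoints.

```python
import numpy as np

def unnecessary_component_multi(views: np.ndarray) -> np.ndarray:
    """Estimate per-view unnecessary components from N aligned sub-aperture images.

    views: array of shape (N, H, W). For each view, the positive difference from
    every other view is computed and the per-pixel minimum is kept, so components
    common to all views cancel out.
    """
    n = views.shape[0]
    out = np.empty_like(views)
    for i in range(n):
        diffs = np.clip(views[i][None] - np.delete(views, i, axis=0), 0.0, None)
        out[i] = diffs.min(axis=0)
    return out

# Dummy usage: 4 aligned views, with a ghost present only in view 0.
views = np.tile(np.random.rand(64, 64), (4, 1, 1))
views[0, 20:30, 20:30] += 0.4
comp = unnecessary_component_multi(views)
print(comp[0].max() > 0.3, comp[1].max() < 1e-6)   # True True
```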
Although the above embodiments described the cases of deciding and removing the unnecessary image component formed by the unnecessary light such as the ghost and the unnecessary diffracted light, there are other unnecessary image components, such as those formed by pixel defects in an image sensor or by dust adhering to an image capturing optical system.
Moreover, although each of the above embodiments described the cases of removing or reducing the unnecessary image component from the image, a correction process may be performed which adds another unnecessary image component to the image by using the decided unnecessary image component. For example, the parallax images shown in
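As a brief sketch of such an additive correction, the following adds a scaled copy of the decided component back to an output image, for example to deliberately retain or emphasize a ghost as an expressive effect; the gain value and the clipping range are illustrative assumptions.

```python
import numpy as np

def add_unnecessary_component(image: np.ndarray, component: np.ndarray,
                              gain: float = 1.5) -> np.ndarray:
    """Add the decided unnecessary component back to a normalized image with a gain."""
    return np.clip(image + gain * component, 0.0, 1.0)
```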
Although each of Embodiments 1 to 6 described the image pickup apparatus using the image processing method, that is, provided with an image processing apparatus, the image processing method may be implemented by an image processing program installed from a non-transitory computer-readable storage medium 250 (shown in
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications, equivalent structures and functions.
This application claims the benefit of Japanese Patent Application Nos. 2012-019436, filed on Feb. 1, 2012, and 2012-226445, filed on Oct. 11, 2012 which are hereby incorporated by reference herein in their entirety.