The present disclosure relates to an image processing method for improving the image quality of a captured image.
When an image capture apparatus such as a camera captures an image, a part of the light entering the optical system is sometimes reflected on the boundary face of a lens or on a member holding the lens and reaches the image pickup plane as an undesirable light. The undesirable light that has reached the image pickup plane appears in the captured image as an undesirable component such as a ghost or flare. Japanese Patent Laid-Open No. 2011-205531 discloses a method that detects a ghost by comparing a plurality of viewpoint images.
The method of Japanese Patent Laid-Open No. 2011-205531 detects a ghost by calculating the difference between the viewpoint images. When each viewpoint image has a large amount of noise, the accuracy of the detection of a ghost is decreased.
What is needed is an image processing apparatus capable of suppressing the noise component included in a captured image so as to reduce the undesirable component more accurately, and a control method for controlling such an image processing apparatus.
The present disclosure includes an image obtaining unit configured to obtain a plurality of viewpoint images, a detecting unit configured to detect a first undesirable component of the viewpoint images according to relative difference information, which is a difference between the viewpoint images, a noise information obtaining unit configured to obtain noise information of the viewpoint images, a calculation unit configured to calculate a second undesirable component by subtracting noise from the first undesirable component using the first undesirable component and the noise information, and a reducing unit configured to reduce the second undesirable component of an image formed based on the viewpoint images.
Further features of the present disclosure will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Preferred embodiments of the present disclosure will be described in detail with reference to the appended drawings.
An image capture apparatus used in the present exemplary embodiment is capable of generating a plurality of viewpoint images. The image capture apparatus includes an image capturing system that guides a plurality of luminous fluxes, which have passed through different regions of the pupil of the optical system, to different light-receiving units (pixels) in the image capture element so as to photoelectrically convert the luminous fluxes.
The image processing unit 104 performs all the image processing procedures on the input image data. The image processing unit 104 performs, for example, a demosaicing process, a process for correcting an inherent defect in the image capture element 102, a shading correction process, a black level correction process, white balance processing, a gamma correction process, and a color conversion process. In addition, the image processing unit 104 performs, for example, a noise reduction process and a compression and coding process. Furthermore, in order to suppress the effect of a ghost included in an image, the image processing unit 104 of the present exemplary embodiment processes the input image data in a correction process, described below, that detects the included ghost based on the difference between a plurality of viewpoint images and reduces the effect of the ghost.
The output image (image data) processed by the image processing unit 104 is stored in an image recording medium 107 such as a semiconductor memory or an optical disk. In addition, the image output from the image processing unit 104 may be displayed on a display unit 105. A storage unit 106 stores an image processing program necessary for the image processing by the image processing unit 104 and various types of information.
A CPU 108 (control unit) performs various controls including a drive control of the image capture element 102, a control of the process by the image processing unit 104, and a drive control of the optical system 101. Note that the optical system 101 of the present exemplary embodiment is included in (integrated into) the image capture apparatus 100 including the image capture element 102 as a part of the image capture apparatus 100. The structure of the image capture apparatus 100 is not limited to the present exemplary embodiment. The image capture apparatus 100 can be an image capturing system such as a single-lens reflection camera in which an interchangeable optical system (exchangeable lens) is detachably attached to the main body of the image capture apparatus.
The image processing unit 104 processes the digital data in general image processing, and also performs a process for determining the undesirable light and a correction process for reducing or removing the undesirable light. The image processing unit 104 includes an undesirable component detecting unit 104a, an undesirable component synthesizing unit 104b, a noise component removing unit 104c, a viewpoint image synthesizing unit 104d, and an undesirable component reducing unit 104e.
The undesirable component detecting unit 104a generates (obtains) a viewpoint image and detects (determines) an undesirable component (a first undesirable component) of the viewpoint image. The undesirable component synthesizing unit 104b calculates a first composite value of the undesirable components detected by the undesirable component detecting unit 104a. The noise component removing unit 104c calculates a second composite value of the undesirable components by removing a noise component from the first composite value of the undesirable components calculated by the undesirable component synthesizing unit 104b. The viewpoint image synthesizing unit 104d synthesizes the viewpoint images generated by the undesirable component detecting unit 104a. The undesirable component reducing unit 104e reduces an undesirable component (a second undesirable component) of the viewpoint image synthesized by the viewpoint image synthesizing unit 104d based on the second composite value of the undesirable components, that is, the first undesirable components from which the noise components have been removed in the calculation by the noise component removing unit 104c. In the present exemplary embodiment, the second undesirable component has a value equal to the first composite value of the first undesirable components, or a value obtained based on the first composite value of the first undesirable components.
A plurality of pairs of a pixel 206 and a pixel 207 is arranged in the image capture element 102. A pair of the pixel 206 and the pixel 207 has a conjugate relationship with the exit pupil 203 through a common micro lens 201 (in other words, a micro lens 201 provided for each pair of pixels). In each exemplary embodiment, the pixels 206 and the pixels 207 arranged in the image capture element are sometimes collectively referred to as a pixel group 206 and 207.
Next, a method for determining an undesirable component will be described with reference to
Similarly,
Here, a case in which a captured image as illustrated in
In light of the foregoing, when a composite image of viewpoint images is output as the final output image in the present exemplary embodiment, a synthesis process for combining (synthesizing) the undesirable components of the viewpoint images is performed, similarly to the synthesis process for combining the viewpoint images themselves.
Next, the procedures of a determination process (image processing) for determining an undesirable component (a ghost component) in the present exemplary embodiment will be described with reference to
First, in step S601, the CPU 108 controls an image capture unit including the optical system 101, the image capture element 102, and the A/D converter 103 (the image capture system) to capture an image of an object and obtains the input image (the captured image). Alternatively, the CPU 108 reads the image data previously captured and recorded in the image recording medium 107 into the temporary memory area of the image processing unit 104 to obtain an input image. The images obtained as the input images in the present exemplary embodiment include a composite image of a plurality of viewpoint images corresponding to the luminous fluxes passing through the different pupil regions of the optical system 101 in the image capture element 102, and a viewpoint image that is not synthesized yet and corresponds to some of the pupil regions. The input images are not limited to those of the present exemplary embodiment. Each of the viewpoint images may be obtained as an input image.
In step S602, the CPU 108 controls the image processing unit 104 to generate a pair of viewpoint images from the composite image and one of the viewpoint images. Specifically, the remaining viewpoint image can be calculated by taking the difference between the composite image and the obtained viewpoint image. Here, the image processing unit 104 can perform some of the various types of image processing described above while generating the viewpoint images. When a plurality of viewpoint images is obtained as input images in step S601, only some of the various types of image processing need to be performed in step S602.
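If the composite image is the pixel-wise sum of two viewpoint images (an assumption about the sensor model; actual sensors may combine the signals differently), the difference operation described above can be sketched as follows, with hypothetical names and made-up values:

```python
import numpy as np

def split_viewpoints(composite, viewpoint_a):
    """Recover the second viewpoint image from a two-viewpoint composite.

    Assumes the composite is the pixel-wise sum of the two viewpoint
    images (hypothetical model; names are illustrative only).
    """
    return composite - viewpoint_a

composite = np.array([[10.0, 12.0], [8.0, 6.0]])
viewpoint_a = np.array([[6.0, 5.0], [4.0, 3.0]])
viewpoint_b = split_viewpoints(composite, viewpoint_a)
```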
Next, in step S603, the undesirable component detecting unit 104a of the image processing unit 104 calculates the relative difference information between the viewpoint images of the pair. In other words, the undesirable component detecting unit 104a generates a relative difference image (the image of
At this point in the present exemplary embodiment, the undesirable component detecting unit 104a performs a process for replacing the negative values with zero in order to simplify the undesirable component reduction process to be described below. Thus, only the undesirable components included in
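The relative difference computation with negative values replaced by zero can be sketched as follows (a minimal NumPy illustration, not the patented implementation; the array values are made up):

```python
import numpy as np

def relative_difference(base, other):
    """Relative difference image: subtract `other` from `base` and
    replace negative values with zero, so that only components present
    in `base` (e.g. a ghost) plus residual noise remain."""
    diff = base.astype(np.float64) - other.astype(np.float64)
    return np.clip(diff, 0.0, None)

a = np.array([[5.0, 9.0], [3.0, 4.0]])  # ghost adds 3.0 at one pixel
b = np.array([[5.0, 6.0], [7.0, 4.0]])  # different component at another pixel
rel_ab = relative_difference(a, b)      # keeps only the component in `a`
```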
Alternatively, in order to remove the object parallax component when the relative difference information between images capturing a short-range object is calculated, the undesirable component detecting unit 104a can align the positions of the viewpoint images of the pair. Specifically, the undesirable component detecting unit 104a can align the positions of the viewpoint images of the pair by shifting a first viewpoint image of the pair relative to a second viewpoint image of the pair and determining the shift position of the first viewpoint image at which the correlation between the first and second viewpoint images is maximized. Alternatively, the undesirable component detecting unit 104a can align the positions of the first and second viewpoint images by determining the shift position at which the sum of the squared differences between the first and second viewpoint images is minimized. Alternatively, the undesirable component detecting unit 104a can shift an in-focus area in a first viewpoint image of the pair and determine the shift position of the in-focus area to align the first and second viewpoint images of the pair.
Alternatively, the undesirable component detecting unit 104a can detect the edges of each of the viewpoint images and determine the shift position used to align the viewpoint images according to the detected edges. This edge detection method detects the high-contrast edges of an in-focus area; an area that is not in focus, such as the background, has low contrast, which makes it difficult to detect as an edge. Thus, the undesirable component detecting unit 104a necessarily determines the shift position based on the in-focus area. Furthermore, the undesirable component detecting unit 104a can perform an additional procedure, for example, a threshold process for removing the effect of the noise, when generating a relative difference image.
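The correlation-based alignment described above can be sketched as an exhaustive search over integer shifts (a simplified illustration; the wrap-around shifting via `np.roll`, the `max_shift` bound, and the function name are all assumptions, not the patented method):

```python
import numpy as np

def best_shift(ref, moving, max_shift=2):
    """Try every integer (dy, dx) shift within +/- max_shift and return
    the one maximizing the correlation (sum of products) between the
    reference image and the shifted image.  Uses wrap-around shifts for
    brevity; a production version would crop to the overlapping region."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = np.sum(ref * shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

ref = np.zeros((5, 5)); ref[2, 2] = 1.0       # feature at row 2
moving = np.zeros((5, 5)); moving[1, 2] = 1.0  # same feature, one row up
shift = best_shift(ref, moving)                # shift that re-aligns them
```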
Next, in step S604, the undesirable component detecting unit 104a determines the components remaining in the relative difference images generated in step S603 as the undesirable components.
Next, in step S605, the undesirable component synthesizing unit 104b of the image processing unit 104 performs a process for combining the undesirable components of the viewpoint images determined in step S604 (calculates the composite value of the undesirable components). Specifically, the undesirable component synthesizing unit 104b performs a process for adding the relative difference image of
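The synthesis of the per-viewpoint undesirable components by pixel-wise addition might look like this (a sketch with illustrative values; each array stands for one relative difference image):

```python
import numpy as np

# Combine the undesirable components detected in each viewpoint image by
# pixel-wise addition, mirroring the synthesis of the viewpoint images
# themselves (illustrative values, not real data).
rel_images = [
    np.array([[0.0, 3.0], [0.0, 0.0]]),  # ghost component of viewpoint A
    np.array([[0.0, 0.0], [4.0, 0.0]]),  # ghost component of viewpoint B
]
composite_undesirable = np.sum(rel_images, axis=0)
```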
Next, in step S606, the noise component removing unit 104c of the image processing unit 104 performs a correction process for reducing or removing the noise component from the undesirable components. Specifically, the noise component removing unit 104c performs a process for subtracting the noise included in the undesirable components of the viewpoint images from the composite value of the undesirable components calculated in step S605.
Hereinafter, the procedures of the correction process for reducing or removing the noise component in the present exemplary embodiment will be described with reference to
In step S701, the noise component removing unit 104c calculates the noise component according to the standard deviation of the noise components (the noise information) previously measured in the image capture element 102 and stored in the storage unit 106. The predicted values of the noise components are measured from the results of previously capturing an object with a uniform luminance with the image capture element 102 and are tabulated according to the ISO sensitivity, which largely affects the noise. Actually measuring the noise components of each of the viewpoint images takes time and effort, and the shading of the viewpoint image affects the actual measurement. In light of the foregoing, the present exemplary embodiment determines the noise components from the measurement data of the composite image of the viewpoint images corresponding to the luminous fluxes from the different pupil regions of the optical system. The noise components are determined based on the measured values, and every pixel may have a uniform noise component according to the ISO sensitivity, or the noise component may vary according to the image height or according to the pixel. In step S702, the noise components calculated in step S701 are subtracted from the composite value of the undesirable components calculated in step S605. The noise components included in the undesirable components of each viewpoint image accumulate every time the undesirable components are combined in step S605. Thus, the process for subtracting the noise components needs to be performed as many times as the number of viewpoint images minus one. The method of subtracting the noise components is not limited to the method in step S702. For example, the standard deviation of the noise components of each viewpoint image can be calculated. In this case, specifically, the image is divided into local regions of 10 by 10 pixels, the standard deviation of the pixel values of each local region is calculated, and a process for subtracting the noise components is then performed in each local region.
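Under the assumption that the tabulated noise level accumulates once per combined relative difference (so it is scaled by the number of viewpoint images minus one), the subtraction in step S702 can be sketched as follows (hypothetical noise model and names; values are made up):

```python
import numpy as np

def subtract_noise(composite_undesirable, noise_sigma, num_viewpoints):
    """Subtract the tabulated noise level from the combined undesirable
    components.  The noise is assumed to accumulate once per combined
    relative difference, hence the (num_viewpoints - 1) scaling; the
    result is clipped at zero so it remains a valid component."""
    noise = noise_sigma * (num_viewpoints - 1)
    return np.clip(composite_undesirable - noise, 0.0, None)

combined = np.array([[0.0, 3.0], [4.0, 0.5]])
cleaned = subtract_noise(combined, noise_sigma=0.5, num_viewpoints=2)
```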
Next, in step S607, the undesirable component reducing unit 104e of the image processing unit 104 performs a correction process for reducing or removing the undesirable components from the image to be output. Specifically, the undesirable component reducing unit 104e subtracts the undesirable components calculated in step S605 and illustrated in
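The subtraction performed by the undesirable component reducing unit can be sketched as follows, clipping at zero in the same way as the earlier negative-value truncation (a sketch with made-up values, not the patented implementation):

```python
import numpy as np

def reduce_undesirable(image, undesirable):
    """Subtract the (noise-corrected) undesirable components from the
    output image, clipping at zero so no pixel value goes negative."""
    return np.clip(image - undesirable, 0.0, None)

image = np.array([[10.0, 12.0], [8.0, 6.0]])        # image to be output
undesirable = np.array([[0.0, 2.5], [3.5, 0.0]])    # ghost components
corrected = reduce_undesirable(image, undesirable)
```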
Finally, in step S609, the CPU 108 records the output image illustrated in
As described above, the present exemplary embodiment improves the reduction process for reducing the undesirable components of an image by reducing or removing a noise component from the undesirable components, using the image processing apparatus that reduces the undesirable components caused by undesirable light in an image formed based on a plurality of viewpoint images.
In the present exemplary embodiment, a composite image obtained by analog synthesis in an image capture sensor when the composite image is output from the sensor, or a composite image of a plurality of viewpoint images, has been described as an example of an image formed based on a plurality of viewpoint images, that is, an image to be processed in the ghost reduction process. However, the image to be processed is not limited to these examples. For example, the undesirable components of one of the viewpoint images may be calculated as the undesirable components calculated in the present exemplary embodiment, and the undesirable components of that viewpoint image may be used for the reduction process.
Next, a second exemplary embodiment of the present disclosure will be described. In the first exemplary embodiment, the noise component is subtracted from the undesirable components calculated from a plurality of viewpoint images. On the other hand, in the present exemplary embodiment, the noise component is subtracted from each of the viewpoint images, and then the undesirable components of the viewpoint images are calculated. Thus, the undesirable components from which the noise component has been reduced or removed are calculated.
The basic configuration of the image capture apparatus of the present exemplary embodiment is similar to the image capture apparatus 100 of the first exemplary embodiment described with reference to
The color interpolation processing unit 104f performs the demosaicing process included in the general image processing described above. In the present exemplary embodiment, the image capture element 102 is a sensor including color filters arranged in a Bayer array. Interpolating, for each pixel, the color mosaic image data of the two of the three primary colors absent in that pixel generates a demosaiced image that has all the R, G, and B color image data for every pixel.
The noise smoothing processing unit 104g obtains the demosaiced image generated in the color interpolation processing unit 104f to reduce the noise of the demosaiced image (smooth the demosaiced image).
Next, the procedures of a determination process (image processing) for determining an undesirable component (a ghost component) in the present exemplary embodiment will be described with reference to
First, in step S901, the CPU 108 controls an image capture unit including the optical system 101, the image capture element 102, and the A/D converter 103 (the image capture system) to capture an image of an object and obtains the input image (the captured image). Alternatively, the CPU 108 reads the image data previously captured and recorded in the image recording medium 107 into the temporary memory area of the image processing unit 104 to obtain an input image. The images obtained as the input images in the present exemplary embodiment include a composite image of a plurality of viewpoint images corresponding to the luminous fluxes passing through the different pupil regions of the optical system 101 in the image capture element 102, and a viewpoint image that is not synthesized yet and corresponds to some of the pupil regions. The input images are not limited to those of the present exemplary embodiment. Each of the viewpoint images may be obtained as an input image.
In step S902, the CPU 108 controls the image processing unit 104 to generate a pair of viewpoint images from the composite image and one of the viewpoint images. Specifically, the remaining viewpoint image can be calculated by taking the difference between the composite image and the obtained viewpoint image. Here, the image processing unit 104 can perform some of the various types of image processing described above while generating the viewpoint images. When a plurality of viewpoint images is obtained as input images in step S901, only some of the various types of image processing need to be performed in step S902. Furthermore, in step S902, the CPU 108 generates the demosaiced image by controlling the image processing unit 104. The color interpolation processing unit 104f of the image processing unit 104 generates the demosaiced image having all the R, G, and B color image data by interpolating the mosaic image data.
In step S903, the noise smoothing processing unit 104g performs a correction process for smoothing or reducing the noise components of the viewpoint images generated in step S902. Specifically, the noise smoothing processing unit 104g applies a filtering process to at least the pixels in the region to be processed by the ghost reduction process. For example, the noise smoothing processing unit 104g performs a process for replacing the value of each pixel with the median of the surrounding pixels using a 5 by 5 median filter.
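The 5 by 5 median filtering can be sketched in plain NumPy as follows (edge pixels use replicated borders here, which is one of several possible boundary choices and an assumption on our part):

```python
import numpy as np

def median_filter(image, size=5):
    """Replace each pixel with the median of the size-by-size window
    centered on it.  The image border is extended by edge replication
    so that every pixel has a full window."""
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image, dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

img = np.zeros((5, 5))
img[2, 2] = 100.0          # isolated impulse (noise-like outlier)
smoothed = median_filter(img)  # the impulse is suppressed
```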
Next, in step S904, the undesirable component detecting unit 104a of the image processing unit 104 calculates the relative difference information between a pair of viewpoint images generated after the noise reduction process in step S903. In other words, the undesirable component detecting unit 104a generates a relative difference image (the image of
Next, in step S905, the undesirable component detecting unit 104a determines the components remaining in the relative difference images generated in step S904 as the undesirable components.
In step S906, the undesirable component synthesizing unit 104b of the image processing unit 104 performs a process for combining the undesirable components of the viewpoint images determined in step S905 (calculates the composite value of the undesirable components). Specifically, the undesirable component synthesizing unit 104b performs a process for adding the relative difference image of
Next, in step S907, the undesirable component reducing unit 104e of the image processing unit 104 performs a correction process for reducing or removing the undesirable components from the image to be output. Specifically, the undesirable component reducing unit 104e subtracts the undesirable components calculated in step S905 and illustrated in
In step S908, the corrected image is processed by the processing that the image processing unit 104 normally performs, which generates an output image to be output to the image recording medium 107 or the display unit 105. In addition to a normal development process including white balance processing and gamma correction, the corrected image is processed with a publicly known noise reduction process, which reduces the noise of the corrected image.
Finally, in step S909, the CPU 108 records the output image illustrated in
The present exemplary embodiment improves the reduction process for reducing the undesirable components of an image by reducing or removing a noise component from the undesirable components, using the image processing apparatus that reduces the undesirable components caused by undesirable light in an image formed based on a plurality of viewpoint images.
The objective of the present disclosure may be achieved as described below. In other words, a storage medium in which a program code of software describing the procedures for implementing the functions described in each embodiment is recorded is provided to a system or an apparatus. Then, a computer (or, a CPU or an MPU) of the system or apparatus reads and executes the program code stored in the storage medium.
In such a case, the program code read from the storage medium implements a new function of the present disclosure, and the storage medium storing the program code and the program are included in the present disclosure.
The storage medium to provide the program code may, for example, be a flexible disk, a hard disk, an optical disk, or a magneto-optical disk. Alternatively, a CD-ROM, a CD-R, a CD-RW, a DVD-ROM, a DVD-RAM, a DVD-RW, a DVD-R, a magnetic tape, a non-volatile memory card, a ROM, or the like may be used as the storage medium.
Alternatively, making the program code read by the computer executable implements the functions described in each embodiment. Furthermore, for example, an operating system (OS) running on the computer performs some or all of the actual processes according to the commands in the program code, and the actual processes may implement the functions described in each embodiment.
In addition, the following case is included in the disclosure. First, the program code read from a storage medium is written into a memory included in a function extension board inserted in the computer or a function extension unit connected to the computer. After that, for example, a CPU included in the function extension board or the function extension unit performs some or all of the actual processes.
The present disclosure may be applied not only to a device mainly for image sensing such as a digital camera but also to an arbitrary apparatus including a built-in or externally-connected image capture apparatus, such as a mobile phone, a personal computer (for example, a laptop, a desktop, or a tablet), or a game console. Thus, the “image capture apparatus” described herein is intended to include an arbitrary electronic apparatus with an image capturing function.
The present disclosure can reduce the undesirable components of a captured image more accurately by suppressing the noise component included in the captured image.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present disclosure has been described with reference to exemplary embodiments, the scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2016-143696, filed Jul. 21, 2016, which is hereby incorporated by reference herein in its entirety.