Field of the Invention
The present invention relates to defective pixel detection of an imaging element.
Description of the Related Art
An imaging apparatus which performs defective pixel detection using information of a pixel adjacent to a target pixel so as to detect a defective pixel within an imaging element has been proposed. In Japanese Patent Laid-Open No. 2010-130236, technology for performing defective pixel detection using information of two or more adjacent pixels of the same color is disclosed. In Japanese Patent Laid-Open No. 2011-97542, technology for performing defective pixel detection using information of pixels of the same color and pixels of different colors is disclosed.
However, when an image signal passes through an imaging optical system to reach the imaging element and its light is received by a photosensor, the output value is unlikely to be uniform due to the influence of shading. That is, because luminance changes according to the light receiving area of the photosensor when shading occurs, it is difficult to appropriately perform defective pixel detection of the imaging element.
The present invention provides technology for precisely performing defective pixel detection even when shading has occurred.
A device according to an embodiment of the present invention is an image processing device for acquiring output values of a plurality of pixels and processing image signals, the image processing device including: an acquisition unit configured to acquire a first output value from a pixel and acquire a second output value determined from a pixel adjacent to the pixel; and a detection unit configured to perform defective pixel detection by calculating an evaluation value of the pixel from the first output value and the second output value and comparing the evaluation value with a threshold value. The detection unit calculates a second evaluation value using the second output value and a first evaluation value derived from the first output value and the second output value and detects the pixel as a defective pixel if the second evaluation value is greater than the threshold value.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. An example in which an image processing device according to the present invention is applied to an imaging apparatus such as a digital camera will be described in the embodiments, but the present invention can be widely applied to an information processing device, an electronic device, etc. which execute the following picture processing.
A zoom actuator 111 performs a magnification-change operation by rotating a cam barrel (not illustrated) to move the first lens group 101 and the second lens group 103 in the optical axis direction. An aperture-shutter actuator 112 controls the opening diameter of the aperture-shutter 102 to adjust the amount of light for photographing and controls the exposure time during still picture capturing. A focus actuator 114 moves the third lens group 105 in the optical axis direction to adjust the focus.
An electronic flash 115 for illuminating an object is used during photographing. A flash illumination device using a xenon tube or an illumination device having a continuous-flash light emitting diode (LED) is used. An auto focus (AF) auxiliary light source 116 projects an image of a mask having a predetermined opening pattern onto the object field through a projection lens. Thereby, focus detection capability for low-luminance objects or low-contrast objects is improved. A central processing unit (CPU) 121 constituting a control unit of a camera body unit has a control center function of controlling the camera main unit in various ways. The CPU 121 includes a calculation unit, a read only memory (ROM), a random access memory (RAM), an analog-to-digital (A/D) converter, a digital-to-analog (D/A) converter, a communication interface circuit, etc. According to predetermined programs stored in the ROM, the CPU 121 drives various types of circuits in the camera and executes a series of operations such as AF control, an imaging process, picture processing, and a recording process. The CPU 121 performs control of defective pixel detection, defective pixel correction, and shading correction of the present embodiment.
An electronic flash control circuit 122 controls the ON operation of the electronic flash 115 in synchronization with a photographing operation according to a control command of the CPU 121. An auxiliary light source driving circuit 123 controls the ON operation of the AF auxiliary light source 116 in synchronization with a focus detection operation according to a control command of the CPU 121. An imaging element driving circuit 124 controls the imaging operation of the imaging element 107, performs A/D conversion on an acquired imaging signal, and transmits the converted imaging signal to the CPU 121. A picture processing circuit 125 performs processes such as gamma conversion, color interpolation, and Joint Photographic Experts Group (JPEG) compression on the picture acquired by the imaging element 107 according to a control command of the CPU 121. The picture processing circuit 125 performs a process of generating a captured picture or a parallax picture acquired by the imaging element 107. A recording process or a display process is performed on an image signal of the captured picture. Also, the parallax picture is used in focus detection, a viewpoint change process, stereoscopic display, a refocus process, a ghost removing process, etc.
A focus driving circuit 126 drives the focus actuator 114 on the basis of a focus detection result according to a control command of the CPU 121 and moves the third lens group 105 in the optical axis direction, thereby adjusting the focus. An aperture-shutter driving circuit 128 drives the aperture-shutter actuator 112 to control the opening diameter of the aperture-shutter 102 according to a control command of the CPU 121. A zoom driving circuit 129 drives the zoom actuator 111 in response to a zoom operation instruction of the user according to a control command of the CPU 121.
A display unit 131 has a display device such as a liquid crystal display (LCD) and displays information about a photographing mode of the camera, a preview picture before photographing, a confirmation picture after photographing, a focus state display picture during focus detection, etc. As an operation switch, an operation unit 132 includes a power switch, a release (photographing trigger) switch, a zoom operation switch, a photographing mode selection switch, etc. and outputs an operation instruction signal to the CPU 121. A flash memory 133 is a recording medium detachable from the camera body unit and records captured picture data and the like.
Next, a pixel array of the imaging element in the present embodiment will be described with reference to
A pixel group 200 of 2 columns×2 rows includes pixels 200R, 200G, and 200B as one set. The pixel 200R (see an upper-left position) is a pixel having spectral sensitivity to red (R) and the pixel 200G (see an upper-right position and a lower-left position) is a pixel having spectral sensitivity to green (G). The pixel 200B (see a lower-right position) is a pixel having spectral sensitivity to blue (B). Further, each pixel is constituted of a first sub-pixel 201 and a second sub-pixel 202 arrayed in 2 columns×1 row. Each sub-pixel has a function of a focus detection pixel which outputs a focus detection signal. In the example illustrated in
A plan view of one pixel 200G in the imaging element illustrated in
The photoelectric conversion units 301 and 302 may be formed as, for example, photodiodes having a pin structure in which an intrinsic layer is sandwiched between a p-type layer and an n-type layer, or if necessary, may be formed as p-n junction photodiodes by omitting the intrinsic layer. In each pixel, a color filter 306 is formed between the microlens 305 and the photoelectric conversion units 301 and 302. If necessary, spectral transmittance of the color filter 306 may be changed for each sub-pixel and the color filter may be omitted.
After light incident on the pixel 200G is concentrated by the microlens 305 and further separated by the color filter 306, the light is received by each of the photoelectric conversion units 301 and 302. In the photoelectric conversion units 301 and 302, pairs of electrons and holes are generated according to an amount of light and electrons having negative charge are accumulated in an n-type layer (not illustrated) after the pairs of electrons and holes are separated by a depletion layer. On the other hand, the holes are discharged outside the imaging element through the p-type layer connected to a constant voltage source (not illustrated). Electrons accumulated in the n-type layer (not illustrated) of the photoelectric conversion units 301 and 302 are transferred to an electrostatic capacitance unit (FD) via a transfer gate and converted into a voltage signal.
A first pupil part area 501 corresponding to the first sub-pixel 201 is generally set to be in a conjugate relationship by the microlens 305 with respect to a light receiving surface of the photoelectric conversion unit 301 having a center of gravity biased in the −x-direction. That is, the first pupil part area 501 represents a pupil area capable of being received by the first sub-pixel 201 and has a center of gravity biased in the +X-direction on the pupil plane. In addition, a second pupil part area 502 corresponding to the second sub-pixel 202 is generally set to be in a conjugate relationship by the microlens 305 with respect to a light receiving surface of the photoelectric conversion unit 302 having a center of gravity biased in the +x-direction. The second pupil part area 502 represents a pupil area capable of being received by the second sub-pixel 202 and has a center of gravity biased in the −X-direction on the pupil plane. In addition, an area 500 illustrated in
The incident light is concentrated at a focus position by the microlens. However, because of an influence of diffraction due to the wave nature of light, the diameter of a light concentration spot cannot be less than a diffraction limit Δ and has a finite magnitude. While the light receiving surface size of the photoelectric conversion unit is about 1 to 2 μm, the light concentration spot size of the microlens is about 1 μm. Thus, the first and second pupil part areas 501 and 502 of
A correspondence relationship between the imaging element and the pupil division is illustrated in a schematic diagram of
As described above, the imaging element of the present embodiment has a structure in which a plurality of pixel units are arrayed, wherein each of the plurality of pixel units has a plurality of sub-pixels for receiving light beams passing through different pupil part areas of an image forming optical system. For example, signals of the sub-pixel 201 and the sub-pixel 202 are summed and read for each pixel of the imaging element, so that the CPU 121 and the picture processing circuit 125 generate a captured picture with resolution of the number of effective pixels. In this case, the captured picture is generated by combining received light signals of a plurality of sub-pixels for each pixel. Also, in another method, a first parallax picture is generated by collecting received light signals of the sub-pixels 201 of each pixel unit of the imaging element. A second parallax picture is generated by subtracting the first parallax picture from the captured picture. If necessary, the CPU 121 and the picture processing circuit 125 generate the first parallax picture by collecting received light signals of the sub-pixels 201 of each pixel unit of the imaging element and generate the second parallax picture by collecting received light signals of the sub-pixels 202 of each pixel unit. It is possible to generate one or more parallax pictures from the received light signals of the sub-pixels for each of the different pupil part areas.
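The summation and subtraction relationships described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the function name and the use of plain 2-D arrays for the sub-pixel signals are assumptions.

```python
import numpy as np

def generate_parallax_pictures(sub_a, sub_b):
    """Illustrative sketch: sub_a and sub_b are 2-D arrays holding the
    received light signals of the first and second sub-pixels (201, 202)."""
    captured = sub_a + sub_b                     # captured picture: sum of sub-pixel signals
    first_parallax = sub_a                       # collect sub-pixel 201 signals
    second_parallax = captured - first_parallax  # equals the sub-pixel 202 signals
    return captured, first_parallax, second_parallax
```

The subtraction method recovers the second parallax picture without a second full readout, which is why the text presents it as an alternative to collecting the sub-pixel 202 signals directly.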
A parallax picture is a picture having a viewpoint different from that of the captured picture. Shading correction, described below, is performed on it, and pictures at a plurality of viewpoints can be acquired simultaneously. In the present embodiment, each of the captured picture, the first parallax picture, and the second parallax picture is a picture of a Bayer array. If necessary, a demosaicing process may be performed on the captured picture, the first parallax picture, and the second parallax picture of the Bayer array.
Shading will be described with reference to
In
In the case of a lens exchange type imaging apparatus, shading correction corresponding to the lens device mounted on the main body unit of the imaging apparatus is performed. That is, it is necessary to pre-store a shading correction value according to the imaging optical system information of the lens device in the main body unit of the imaging apparatus so as to perform the shading correction during picture recording. This allows picture recording to be performed at high speed so that the continuous photographing performance of the imaging apparatus is not impaired. However, a method of storing all shading correction values according to the imaging optical system information of every lens device in a memory requires a huge data storage area and is not practical. Therefore, the data necessary for the shading correction is acquired during picture reproduction, after picture acquisition, when rapidity of the shading correction is not required. On the basis of information related to vignetting of incident light by the imaging optical system and the sensitivity characteristic of the pixel according to a change in the angle of the incident light, the correction value used in the shading correction can be calculated by combining the two.
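The text says only that the correction value is calculated "by combining" the vignetting information and the angle-dependent sensitivity; the specific combination below (reciprocal of their product) is an assumption for illustration, and both parameter names are hypothetical.

```python
def shading_correction_value(vignetting_ratio, angular_sensitivity):
    """Sketch under an assumed model: the overall light falloff at a pixel is
    the product of the optical vignetting ratio (0..1) and the pixel's
    angle-dependent sensitivity (0..1); the correction gain is its reciprocal."""
    falloff = vignetting_ratio * angular_sensitivity
    return 1.0 / falloff
```

Multiplying a pixel's output by this gain would flatten the shading profile under the assumed model; the actual combination used by the apparatus may differ.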
Next, defective pixel detection will be described with reference to
If an output value of the pixel is denoted by S, S includes a signal component Styp and a noise component N. Further, the noise component N includes a fixed noise component Nfixed and a random noise component Nrandom. Consequently, the output value S is represented by the following Formula (1).
S=Styp+Nfixed+Nrandom  (1)
The fixed noise component Nfixed is constantly output as an error of a fixed value. The random noise component Nrandom is output as an error which changes according to the magnitude of the signal component Styp. If the fixed noise component Nfixed is large, it is necessary to precisely detect the pixel having the large fixed noise component Nfixed in the defective pixel detection, because the error changes the color of the picture and appears at all times.
The fixed noise component Nfixed is a component affected by gain (denoted by α) with respect to the signal component Styp as shown in the following Formula (2), and the defective pixel detection is performed to mainly detect such a component.
Nfixed=Styp·α  (2)
α: pixel variation error
On the other hand, the random noise component Nrandom is a component which changes on the basis of a Poisson distribution in proportion to the square root of the signal component Styp as shown in the following Formula (3).
Nrandom=β·√Styp·f(t)  (3)
f(t): function which changes in a range of ±1 at photographing time t
β: sensor-specific value
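The noise model of Formulas (1)–(3) can be sketched as follows. This is an illustrative sketch only; `math.sin` merely stands in for the bounded function f(t), and all parameter values are assumptions.

```python
import math

def pixel_output(styp, alpha, beta, t):
    """Model of Formulas (1)-(3): output = signal + fixed noise + random noise."""
    n_fixed = styp * alpha                   # Formula (2): gain-type fixed error
    f_t = math.sin(t)                        # stand-in for f(t), bounded in [-1, +1]
    n_random = beta * math.sqrt(styp) * f_t  # Formula (3): term proportional to sqrt(Styp)
    return styp + n_fixed + n_random         # Formula (1)
```

Note that the fixed term scales linearly with the signal while the random term scales with its square root, which is what lets the derivation below separate the two contributions.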
To determine whether there is a defective pixel by mainly detecting the fixed noise component Nfixed, the detection is performed under a condition in which shading does not occur, and measurement is performed while reducing the random noise component Nrandom. However, it is difficult to remove all of the random noise component Nrandom. Thus, an allowed value is set for each of the fixed noise component Nfixed and the random noise component Nrandom, and a threshold value is determined on the basis of their sum.
As one general method of defective pixel detection, there is a method using the difference between the output value of the detection-target pixel and a representative value, where the representative value is either the output value of a selected peripheral pixel adjacent to the target pixel or a value calculated using a plurality of adjacent peripheral pixels. Because the signal component without the noise component is not actually known, the representative value is used in place of the signal component. A process of evaluating whether the difference value based on the representative value can be allowed is then performed.
A position indicated by a pixel position (i,j) in
An evaluation value of general defective pixel detection (a first evaluation value) is denoted by a function E(i,j,t) of a pixel position (i,j) and the photographing time t. An output value of the pixel is denoted by S(i,j,t). The first evaluation value is calculated by dividing the absolute value of the difference between the first output value and the second output value by the second output value. The following Formula (4) using a predetermined threshold value Eerror is used.
E(i,j,t)=|S(i,j,t)−Styp(i,j)|/Styp(i,j)≤Eerror  (4)
If the predetermined threshold value for a certain standard output value (denoted by Sstd) is denoted by Eerror0 and the allowed variation error is defined as α0, the predetermined threshold value Eerror0 from Formula (4) becomes the following Formula (5).
Eerror0=α0+β/√Sstd  (5)
In the defective pixel detection, it is determined that the target pixel is a defective pixel if the evaluation value E exceeds the predetermined threshold value Eerror0. That is, defective pixel detection is performed using the following Formula (6).
E(i,j,t)>Eerror0  (6)
Formula (6) is normalized in luminance. That is, the evaluation value E is a normalized luminance evaluation value. If a change in luminance is in a range of several %, it is possible to precisely perform defective pixel detection because a change of Styp(i,j) is considered to be very small. However, a difference in transmittances of color filters of R, G, and B pixels or a difference in shading illustrated in
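The luminance-normalized evaluation described above can be sketched as follows. The function and variable names are illustrative; `styp` is the representative (second) output value determined from the adjacent pixels.

```python
def is_defective_first(s, styp, e_error0):
    """First evaluation value: |S - Styp| / Styp, compared against the fixed
    threshold Eerror0 as described in the text (an illustrative sketch)."""
    e = abs(s - styp) / styp  # luminance-normalized first evaluation value
    return e > e_error0
```

Because the evaluation is normalized by the representative value, it works well while luminance is nearly uniform; the point of the passage is that this breaks down when shading or filter-transmittance differences make the local luminance level vary.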
If a lens exchange type camera or the like performs photographing at various exit pupil distances, the defective pixel detection should be performed in real time. In this case, it is necessary to maintain detection precision to the same extent for each picture area even when shading as in
A conditional formula of the defective pixel detection when the output value has changed becomes the following Formula (7).
E(i,j,t)>α0+β/√Styp(i,j)  (7)
If the whole of Formula (7) is multiplied by √Styp(i,j)/√Sstd, the following Formula (8) is obtained.
E(i,j,t)·√Styp(i,j)/√Sstd>α0·√Styp(i,j)/√Sstd+β/√Sstd  (8)
If Formula (5) is substituted into Formula (8) for rearrangement, the following Formula (9) is given.
E(i,j,t)·√(Styp(i,j)/Sstd)>Eerror0+α0(√(Styp(i,j)/Sstd)−1)  (9)
If Formula (9) and Formula (6) are compared, it can be seen that the first evaluation value E is corrected using Styp and Sstd and that the second term of the right side of Formula (9) is added in relation to the specific noise. That is, the second evaluation value is calculated by multiplying the first evaluation value by the term including the square root of the ratio between the second output value and the standard output value. The second term of the right side of Formula (9) is a term whose contribution rate increases as the change of Styp with respect to Sstd increases. Thereby, it is possible to change the determination threshold value according to Styp for the evaluation. Also, Sstd may be set so that √Styp(i,j)/√Sstd is necessarily less than 1 in the right side of Formula (9) in view of the balance between the required defective pixel detection precision and the calculation scale. The following Formula (10) is an inequality relating a minimum value Styp_min assumed for Styp to a determination threshold value Eerror0*. It is possible to perform the evaluation with a fixed determination threshold value by using Eerror0* derived by Formula (10) in the right side of Formula (9).
Eerror0*=Eerror0+α0(√(Styp_min/Sstd)−1)≤Eerror0+α0(√(Styp(i,j)/Sstd)−1)  (10)
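The second-evaluation-value test can be sketched as follows: the first evaluation value is scaled by the square root of the ratio between the representative value and the standard output value, and the threshold shifts with that same ratio, as the text describes for Formula (9). Names and constants are illustrative, not the patented implementation.

```python
import math

def is_defective_second(s, styp, s_std, alpha0, e_error0):
    """Sketch of the shading-aware test: first evaluation value scaled by
    sqrt(Styp / Sstd), compared with a threshold that shifts with Styp."""
    e_first = abs(s - styp) / styp                 # first evaluation value
    ratio = math.sqrt(styp / s_std)                # sqrt of luminance ratio
    e_second = e_first * ratio                     # second evaluation value
    threshold = e_error0 + alpha0 * (ratio - 1.0)  # shifted determination threshold
    return e_second > threshold
```

With Sstd chosen large enough that the ratio stays below 1, the same sketch supports the fixed-threshold variant: precomputing the shifted threshold once at the assumed minimum Styp and reusing it for every pixel.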
The defective pixel detection focused on one pixel has been described in this example, but a similar concept can also be applied to the case of the linear defective pixel detection illustrated in
In a defective pixel correction process, a pixel detected by the defective pixel detection is corrected by a bilinear method, a bicubic method, or the like using the pixel signals of its peripheral pixels. By appropriately detecting and correcting defective pixels, high-quality pictures can be provided. The defective pixel correction can be performed by a predetermined calculation method without using information of the imaging optical system. Further, by performing the processing in hardware within the image processing device, the defective pixel correction can be performed at a higher speed than software processing by an external device (a PC or the like). Therefore, after the extraction of the defective pixel, the defective pixel correction process is executed within the imaging apparatus.
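A bilinear-style replacement using peripheral pixels can be sketched as below. This is an illustrative sketch only; the choice of the four same-color neighbours at distance 2 (matching a Bayer array) and the equal weighting are assumptions.

```python
import numpy as np

def correct_defective_pixel(img, i, j, step=2):
    """Replace pixel (i, j) with the mean of its four same-color neighbours
    (step=2 for a Bayer array). Returns a corrected copy; a sketch, not the
    patented correction method."""
    out = img.astype(float)  # astype returns a new array, so img is untouched
    out[i, j] = (out[i - step, j] + out[i + step, j] +
                 out[i, j - step] + out[i, j + step]) / 4.0
    return out
```

A production implementation would also handle border pixels and clusters of adjacent defects, which this sketch ignores.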
A process of generating a parallax picture will be described with reference to
As described with reference to
In S801 of
The CPU 121 executes a process of reading pixel data from the memory in S805 of
In the present embodiment, it is possible to appropriately perform defective pixel detection on the basis of a luminance evaluation value normalized when shading has occurred. Consequently, it is possible to provide a high-quality picture on the basis of an image signal on which the defective pixel correction and the shading correction have been performed.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., CPU, micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a RAM, a ROM, a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2016-039156, filed Mar. 1, 2016, which is hereby incorporated by reference herein in its entirety.