The present disclosure relates to an image processing device, an image processing method, and a program, and particularly relates to an image processing device, an image processing method, and a program that execute remosaic processing for an image.
Imaging elements used in imaging devices such as digital cameras are configured such that a color filter made up of an RGB array for example is mounted thereon, and specific wavelength light is incident upon pixels.
Specifically, a color filter having a Bayer array for example is often used.
A Bayer-array captured image is what is known as a mosaic image, in which each pixel of the imaging element has a pixel value corresponding to only one of the RGB colors. A signal processing unit of a camera carries out demosaic processing and so forth, in which various signal processing such as pixel value interpolation is applied to this mosaic image so that all RGB pixel values are set for each pixel, and a color image is generated and output.
Many studies have already been carried out with regard to signal processing for images captured with a color filter having this Bayer array, and this signal processing can, to an extent, be said to be technically established. However, at present, sufficient studies have not yet been carried out with regard to signal processing for images having an array that is different from a Bayer array.
Signal processing in general cameras is often designed to be executed on Bayer-array images; therefore, even if the pixel array of the imaging element is different from a Bayer array, it is possible to apply known signal processing by converting the pixel array of the image input from the imaging element into a Bayer array and inputting the result to a camera signal processing unit.
Therefore, with regard to a captured image having a pixel array other than a Bayer array, it is preferable to execute processing for converting the input image from the imaging element into a Bayer array as preliminary processing prior to input to the camera signal processing unit. This kind of pixel array conversion processing is called “remosaic processing”. However, imaging elements come in various pixel arrays, and there is no conventional technology that sufficiently discloses optimal remosaic processing for these various pixel arrays.
It should be noted that configurations having an array that is different from a Bayer array are described in, for example, Patent Document 2 (JP 11-29880 A) and Patent Document 3 (JP 2000-69491 A).
In these Patent Documents 2 and 3, settings are implemented with which a plurality of same-color pixel blocks, such as 2×2 R pixels, 2×2 G pixels, and 2×2 B pixels, are arranged in an imaging element (image sensor), the constituent pixels of each same-color 2×2 block are set to different exposure times, and photographing is executed. Patent Documents 2 and 3 describe configurations in which same-color pixel values of different exposure times captured by means of this kind of image sensor are synthesized, and wide dynamic range images are obtained.
However, these documents describe the generation of wide dynamic range images based on the synthesis of pixel values of a plurality of different exposure times, and a clear explanation with respect to remosaic processing is not disclosed.
The present disclosure takes the abovementioned problem into consideration, and an objective thereof is to provide an image processing device, an image processing method, and a program that execute image processing, particularly optimal remosaic processing, for images captured by an imaging element which is provided with a color filter having an array that is different from a Bayer array.
A first aspect of the present disclosure is an image processing device including an image signal correction unit that executes correction processing for a pixel signal of an input image,
wherein the image signal correction unit inputs, as the input image, a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed,
from among constituent pixels of the input image, detects pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing,
decides an interpolated pixel value calculation mode for the conversion-target pixel on the basis of the pixel value gradients in the eight directions, and
calculates an interpolated pixel value for the conversion-target pixel location in accordance with the decided processing mode.
Further, in one embodiment of an image processing device of the present disclosure, the image signal correction unit executes remosaic processing for inputting, as the input image, a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed, and converting the pixel array of the input image into a different pixel array,
and the image signal correction unit, in the remosaic processing,
from among constituent pixels of the input image, detects pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing,
decides an interpolated pixel value calculation mode for the conversion-target pixel on the basis of the pixel value gradients in the eight directions, and
calculates an interpolated pixel value for the conversion-target pixel location in accordance with the decided processing mode.
In addition, in an embodiment of the image processing device of the present disclosure, the image signal correction unit executes remosaic processing for inputting, as the input image, an image having a quadripartite Bayer-type RGB array in which colors of a Bayer-type RGB array are implemented as an array of 2×2 pixel units, and converting the input image into a Bayer array.
In addition, in an embodiment of the image processing device of the present disclosure, the image signal correction unit executes processing for selecting, as reference pixels, pixels in a direction having a small gradient on the basis of the pixel value gradients in the eight directions, and calculating an interpolated pixel value for the conversion-target pixel by means of blend processing of the pixel values of the selected reference pixels.
In addition, in an embodiment of the image processing device of the present disclosure, if the largest value of the pixel value gradients in the eight directions is equal to or less than a predetermined threshold value, the image signal correction unit executes processing for calculating, as an interpolated pixel value for the conversion-target pixel, a smoothed signal based on the pixel values of the pixels surrounding the conversion-target pixel.
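The mode decision described in these two embodiments can be sketched as follows. This is an illustrative sketch only: the threshold value, the direction labels, and the function names are assumptions for illustration and are not taken from the disclosure.

```python
# Hypothetical sketch of the interpolation-mode decision: if even the
# largest of the eight directional gradients is at or below a threshold,
# the region is treated as flat and a smoothed value is used; otherwise
# the direction with the smallest gradient (least pixel-value change)
# is chosen for directional blend interpolation.

GRAD_THRESHOLD = 8.0  # assumed flat-region threshold, not from the source

DIRECTIONS = ["H", "V", "A", "D", "A2", "A3", "D2", "D3"]

def decide_mode(gradients):
    """Decide the interpolated pixel value calculation mode.

    gradients: dict mapping direction label -> gradient magnitude.
    Returns ("smooth", None) for flat regions, otherwise
    ("directional", best_direction).
    """
    if max(gradients.values()) <= GRAD_THRESHOLD:
        # Flat region: a smoothed signal from surrounding pixels suffices.
        return ("smooth", None)
    # Edge region: interpolate along the direction of least change.
    best = min(gradients, key=gradients.get)
    return ("directional", best)
```

In a flat patch every direction looks alike, so smoothing avoids needlessly amplifying noise; near an edge, the minimum-gradient direction runs along the edge, which is why its pixels are the safest references.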
In addition, in an embodiment of the image processing device of the present disclosure, the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when calculating a gradient in at least one direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied,
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and
different-color component gradient information in which pixels of different colors included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information.
In addition, in an embodiment of the image processing device of the present disclosure, the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when
calculating a gradient in at least one direction, calculates a low-frequency component gradient in which pixels of the same color included in the input image are applied,
a high-frequency component gradient in which pixels of the same color included in the input image are applied, and
a mid-frequency component gradient in which pixels of the same color included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information.
In addition, in an embodiment of the image processing device of the present disclosure, the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when calculating a gradient in at least one direction,
calculates a low-frequency component gradient in which pixels of the same color included in the input image are applied, and
a high-frequency component gradient in which pixels of the same color included in the input image are applied, and calculates a pixel value gradient by weighted addition of the two items of gradient information.
In addition, in an embodiment of the image processing device of the present disclosure, the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when calculating gradients in a vertical direction and a horizontal direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied,
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and
different-color component gradient information in which pixels of different colors included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information,
when calculating gradients in a top-right 45-degree direction and a bottom-right 45-degree direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied,
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and
mid-frequency component gradient information in which pixels of the same color included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information,
and when calculating gradients in a top-right 22.5-degree direction, a bottom-right 22.5-degree direction, a top-right 67.5-degree direction, and a bottom-right 67.5-degree direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied, and
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and calculates a pixel value gradient by weighted addition of the two items of gradient information.
In addition, in an embodiment of the image processing device of the present disclosure, the image signal correction unit executes interpolated pixel value calculation processing in which reference pixel locations are altered in accordance with constituent pixel locations of the pixel blocks configured of the plurality of same-color pixels.
In addition, a second aspect of the present disclosure is an image processing method which is executed in an image processing device,
wherein an image signal correction unit inputs a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed as an input image,
and executes processing for detecting, from among constituent pixels of the input image, pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing,
processing for deciding an interpolated pixel value calculation mode for the conversion-target pixel on the basis of the pixel value gradients in the eight directions, and
processing for calculating an interpolated pixel value for the conversion-target pixel location in accordance with the decided processing mode.
In addition, a third aspect of the present disclosure is a program which causes image processing to be executed in an image processing device,
wherein an image signal correction unit is made to input a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed as an input image,
and to execute processing for detecting, from among constituent pixels of the input image, pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing,
processing for deciding an interpolated pixel value calculation mode for the conversion-target pixel on the basis of the pixel value gradients in the eight directions, and
processing for calculating an interpolated pixel value for the conversion-target pixel location in accordance with the decided processing mode.
Moreover, the program of the present disclosure is, for example, a program that can be provided by means of a storage medium or a communication medium provided in a computer-readable format, with respect to an information processing device or a computer system capable of executing various program codes. By providing this kind of program in a computer-readable format, processing corresponding to the program is realized on the information processing device or the computer system.
Further objectives, features, and advantages of the present disclosure will become apparent by means of a more detailed explanation based on an embodiment of the present disclosure described hereafter and the appended drawings. It should be noted that a system in the present description is a logical set configuration of a plurality of devices, and is not restricted to the constituent devices being in the same enclosure.
According to the configuration of an embodiment of the present disclosure, a device and a method for executing remosaic processing for performing conversion into an image of a different pixel array are realized.
Specifically, remosaic processing for inputting, as an input image, a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed, and converting the pixel array of the input image into a different pixel array is executed. In this remosaic processing, from among constituent pixels of the input image, pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing are detected, an interpolated pixel value calculation mode for the conversion-target pixel is decided on the basis of the pixel value gradients in the eight directions, and an interpolated pixel value for the conversion-target pixel location is calculated in accordance with the decided processing mode. For example, an image having a quadripartite Bayer-type RGB array in which colors of a Bayer-type RGB array are implemented as an array of 2×2 pixel units is converted into a Bayer array.
Details of the image processing device, the image processing method, and the program of the present disclosure are described hereafter with reference to the drawings. It should be noted that the explanation is given in accordance with the following items.
1. Regarding an exemplary configuration of an imaging element
2. Regarding an exemplary configuration of an image processing device
3. Regarding a specific example of image processing
4. Regarding details of remosaic processing
5. Summary of the configuration of the present disclosure
[1. Regarding an Exemplary Configuration of an Imaging Element]
An exemplary configuration of an imaging element is described with reference to
(1) Bayer array
(2) Quadripartite Bayer-type RGB array
(3) RGBW-type array
The (1) Bayer array is an array that is employed in many cameras, and signal processing for images captured with a color filter having this Bayer array is more or less established.
However, with regard to (2) quadripartite Bayer-type RGB array and (3) RGBW-type array, at present it still cannot be said that sufficient studies have been carried out with regard to signal processing for images captured by an imaging element provided with these filters.
It should be noted that the (2) quadripartite Bayer-type RGB array is equivalent to an array in which single R, G, and B pixels of the Bayer array depicted in (1) are set as four pixels.
An image processing device that executes signal processing for images captured by an imaging element provided with a color filter having this (2) quadripartite Bayer-type RGB array is described hereafter.
[2. Regarding an Exemplary Configuration of an Image Processing Device]
As depicted in
It should be noted that the imaging device 100 depicted in
Hereafter, the imaging device 100 depicted in
The imaging element (image sensor) 110 of the imaging device 100 depicted in
red (R) that transmits near-red wavelengths,
green (G) that transmits near-green wavelengths, and
blue (B) that transmits near-blue wavelengths.
As previously described, a quadripartite Bayer-type RGB array is equivalent to an array in which single pixels of the Bayer array depicted in FIG. 1(1) are set as four pixels.
The imaging element 110 having this quadripartite Bayer-type RGB array 181 receives, in pixel units, light of any of RGB via the optical lens 105, and by means of photoelectric conversion, generates and outputs an electrical signal corresponding to light-reception signal intensity. A mosaic image formed from the three types of spectra of RGB is obtained by this imaging element 110.
The output signal of the imaging element (image sensor) 110 is input to an image signal correction unit 200 of the image processing unit 120.
The image signal correction unit 200 executes processing for converting an image having the quadripartite Bayer-type RGB array 181 into a Bayer array 182 that is often used in general cameras.
In other words, processing for converting a captured image having the quadripartite Bayer-type RGB array depicted in FIG. 1(2) into the Bayer array depicted in FIG. 1(1) is performed.
Hereafter, this kind of color array conversion processing is referred to as remosaic processing.
An image having the Bayer array 182 depicted in
Moreover, control signals from the control unit 140 are input to the optical lens 105, the imaging element 110, and the image processing unit 120, and photographing processing control and signal processing control are executed. In accordance with a program stored in the memory 130, for example, the control unit 140 executes various kinds of processing other than imaging in accordance with user input from an input unit that is not depicted.
[3. Regarding a Specific Example of Image Processing]
Next, processing that is executed in the image processing unit 120 of
a) is the entire signal processing sequence depicting the entirety of the processing executed in the image processing unit 120.
First, in step S101, a captured image is input from the imaging element 110.
This captured image is an image having the quadripartite Bayer-type RGB array 181.
Next, in step S102, remosaic processing is executed. This is processing that is executed by the image signal correction unit 200 depicted in
The details of this processing are depicted in the flow depicted in
When the remosaic processing has been completed in step S102, an image having the Bayer array 182 depicted in
Step S103 and thereafter is processing performed by the RGB signal processing unit 250 depicted in
In step S103, white balance (WB) adjustment processing is executed.
In step S104, demosaic processing for setting RGB pixel values to the pixels is executed.
In step S105, linear matrix (LMTX) processing for removing mixed colors and so forth is executed.
Finally, in step S106, the color image 183 depicted in
The remosaic processing of step S102 is executed in accordance with the flow depicted in
It should be noted that the details of the remosaic processing are described in further detail with reference to
First, a summary of the remosaic processing is described with reference to the flow depicted in
First, in step S151, direction determination for the pixel value gradients of the quadripartite Bayer-type RGB array image which is the input image is performed. This is processing that is equivalent to what is known as edge direction determination.
Next, in step S152, G interpolation processing for estimating and setting G pixel values for RB locations in the quadripartite Bayer-type RGB array image is performed. At the time of this interpolation processing, interpolation processing is performed in which the pixel value gradient information calculated in step S151 is applied, and a pixel value in a direction having a small pixel value gradient serves as a reference pixel value.
In addition, in step S153, estimation of RB pixel values in a G signal location is performed. This estimation processing is performed on the basis of the estimation that G pixel values and the RB pixel values have a fixed correlation in a predetermined local region.
By means of the processing of these steps S152 and S153, basic pixel value setting for converting the captured image having the quadripartite Bayer-type RGB array 181 into the Bayer array 182 is performed.
In addition, parallel with this processing, false color detection in step S154 and false color correction in step S155 are executed.
If a signal in which the Nyquist frequency (frequency of ½ of the sampling frequency) of the imaging element (image sensor) 110 is exceeded is included in the signals input to the imaging element (image sensor) 110, aliasing (folding) based on the sampling theorem occurs, which is a factor in the occurrence of image quality deterioration, specifically false colors.
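The folding described above can be illustrated with a short numeric example: a sinusoid above the Nyquist frequency becomes, after sampling, numerically indistinguishable from a lower-frequency sinusoid folded about fs/2. The sampling rate and frequencies below are arbitrary illustration values.

```python
import numpy as np

# Aliasing demonstration: a 7 Hz component sampled at fs = 8 Hz exceeds
# the Nyquist frequency (fs/2 = 4 Hz) and folds back to fs - 7 = 1 Hz.
fs = 8.0                       # sampling frequency (illustrative)
t = np.arange(8) / fs          # one second of sample instants
f_high = 7.0                   # above Nyquist
f_alias = fs - f_high          # folded (aliased) frequency: 1 Hz
high = np.cos(2 * np.pi * f_high * t)
alias = np.cos(2 * np.pi * f_alias * t)
# After sampling, the two sequences are identical sample for sample,
# which is exactly why under-sampled detail appears as false structure.
assert np.allclose(high, alias)
```

In an image sensor the same mechanism turns fine luminance detail into spurious low-frequency color patterns, i.e. the false colors addressed in steps S154 to S155.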
In steps S154 to S155, this kind of false color detection and false color correction are executed.
In step S156, a Bayer array image in which the results of the false color detection and the false color correction in steps S154 to S155 are reflected is generated and output.
[4. Regarding Details of Remosaic Processing]
Next, details of the remosaic processing are described with reference to the flowchart depicted in
The flowchart depicted in
The flow depicted in
The image signal correction unit 200 depicted in
First, in step S201, it is determined whether or not an input pixel value is a G pixel.
If the input pixel value is not a G pixel, in other words if the input pixel value is an RB pixel, processing advances to step S202, and direction determination is performed.
This direction determination is processing for determining the gradient (grad) of a pixel value, and is executed as processing that is the same as edge direction determination.
The direction determination processing of step S202 is described with reference to
As depicted in
the horizontal direction (H),
the vertical direction (V),
the top-right 45-degree direction (A), and
the bottom-right 45-degree direction (D),
and, in addition, also the four directions of
the top-right 22.5-degree direction (A2),
the top-right 67.5-degree direction (A3),
the bottom-right 22.5-degree direction (D2), and
the bottom-right 67.5-degree direction (D3).
With regard to the quadripartite Bayer-type RGB array previously described with reference to FIG. 1(2), the sampling intervals for same-color components are sparse compared to the Bayer array depicted in FIG. 1(1), and therefore folding occurs at the ½ Nyquist frequency.
In order to accurately detect this kind of frequency folding pattern, in the direction determination of the present disclosure, as described above, determination is executed for as many as eight directions.
Moreover, in the present embodiment, when gradient detection in this large number of directions is performed, for example, in addition to the gradient information calculated by selecting only same-color pixels from gradient directions, as depicted in
(Horizontal Direction (H) Gradient: gradH)
The example depicted in
The horizontal direction (H) gradient is calculated on the basis of the following data.
Low-frequency component horizontal direction (H) gradient: gradH_low
High-frequency component horizontal direction (H) gradient: gradH_high
Different-color component horizontal direction (H) gradient: gradH_color
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The horizontal direction (H) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aH, bH, and cH are predefined weighting coefficients.
Generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the abovementioned weighting coefficients to be aH>bH.
Furthermore, an intense false color is generated if the direction determination is erroneous in the ½ Nq vicinity.
In order to sufficiently suppress erroneous determination in this region, it is desirable to set cH>aH.
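The horizontal gradient computation above can be sketched as a weighted sum of three absolute-difference components. The exact tap positions are defined by the figures (not reproduced here), so the strides below, and the specific weight values, are assumptions chosen only to respect the stated orderings aH > bH and cH > aH.

```python
# Illustrative sketch of gradH = aH*gradH_low + bH*gradH_high + cH*gradH_color
# for one row of a quadripartite Bayer array, where each color occupies
# 2x2 blocks and same-color pixels repeat with period 4 horizontally.

def grad_h(row, a_h=1.0, b_h=0.5, c_h=2.0):
    """Horizontal (H) direction gradient from one row of pixel values.

    Default weights satisfy the recommended ordering c_h > a_h > b_h:
    the different-color term is weighted most heavily to suppress
    erroneous determination near 1/2 Nq, and the lens-attenuated
    high-frequency term least. Tap offsets are assumptions.
    """
    # Low-frequency component: same-color pixels far apart (assumed stride 4).
    grad_low = abs(row[0] - row[4])
    # High-frequency component: same-color neighbours in a 2x2 block (stride 1).
    grad_high = abs(row[0] - row[1])
    # Different-color component: pixels spanning the colour boundary (stride 2).
    grad_color = abs(row[1] - row[3])
    return a_h * grad_low + b_h * grad_high + c_h * grad_color
```

A flat row yields zero; a row whose variation only shows up across color boundaries is dominated by the heavily weighted different-color term, which is what makes near-Nyquist patterns detectable.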
(Vertical Direction (V) Gradient: gradV)
The example depicted in
The vertical direction (V) gradient is calculated on the basis of the following data.
Low-frequency component vertical direction (V) gradient: gradV_low
High-frequency component vertical direction (V) gradient: gradV_high
Different-color component vertical direction (V) gradient: gradV_color
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The vertical direction (V) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aV, bV, and cV are predefined weighting coefficients.
As previously described, generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the weighting coefficients to be aV>bV.
Furthermore, an intense false color is generated if the direction determination is erroneous in the ½ Nq vicinity.
In order to sufficiently suppress erroneous determination in this region, it is desirable to set cV>aV.
(Top-Right 45-Degree Direction (A) Gradient: gradA)
The example depicted in
The top-right 45-degree direction (A) gradient is calculated on the basis of the following data.
Mid-frequency component top-right 45-degree direction (A) gradient: gradA_mid
Low-frequency component top-right 45-degree direction (A) gradient: gradA_low
High-frequency component top-right 45-degree direction (A) gradient: gradA_high
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The top-right 45-degree direction (A) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aA, bA, and cA are predefined weighting coefficients.
As previously described, generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the weighting coefficients in the above formula to be aA, bA>cA.
(Bottom-Right 45-Degree Direction (D) Gradient: gradD)
The example depicted in
The bottom-right 45-degree direction (D) gradient is calculated on the basis of the following data.
Mid-frequency component bottom-right 45-degree direction (D) gradient: gradD_mid
Low-frequency component bottom-right 45-degree direction (D) gradient: gradD_low
High-frequency component bottom-right 45-degree direction (D) gradient: gradD_high
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The bottom-right 45-degree direction (D) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aD, bD, and cD are predefined weighting coefficients.
As previously described, generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the weighting coefficients in the above formula to be aD, bD>cD.
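For the two 45-degree diagonal directions (A and D), the gradient is likewise a weighted sum, but of three same-color components (mid, low, high) rather than including a different-color term. The sketch below uses assumed tap positions in a 6×6 patch (the real taps come from the figures) and weights that respect aA, bA > cA.

```python
import numpy as np

# Illustrative sketch of gradA = aA*gradA_mid + bA*gradA_low + cA*gradA_high
# for the top-right 45-degree direction. Index pairs are hypothetical
# same-colour positions lying along the anti-diagonal of a 6x6 patch.

def grad_a(patch, a_a=1.0, b_a=1.0, c_a=0.5):
    """Top-right 45-degree (A) direction gradient for a 6x6 patch I(x, y).

    Default weights satisfy a_a, b_a > c_a, so the lens-attenuated
    high-frequency term contributes least. Tap offsets are assumptions.
    """
    I = np.asarray(patch, dtype=float)
    # Mid-frequency: same-colour pixels one block apart on the diagonal.
    grad_mid = abs(I[3, 1] - I[1, 3])
    # Low-frequency: same-colour pixels two blocks apart on the diagonal.
    grad_low = abs(I[4, 0] - I[0, 4])
    # High-frequency: diagonal neighbours inside a same-colour 2x2 block.
    grad_high = abs(I[3, 2] - I[2, 3])
    return a_a * grad_mid + b_a * grad_low + c_a * grad_high
```

The bottom-right 45-degree (D) gradient would mirror this along the main diagonal with its own weights aD, bD, cD.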
(Top-Right 22.5-Degree Direction (A2) Gradient: gradA2)
The example depicted in
The top-right 22.5-degree direction (A2) gradient is calculated on the basis of the following data.
Center component top-right 22.5-degree direction (A2) gradient: gradA2_center
High-frequency component top-right 22.5-degree direction (A2) gradient: gradA2_high
It should be noted that, as depicted in
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The top-right 22.5-degree direction (A2) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aA2 and bA2 are predefined weighting coefficients.
As previously described, generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the weighting coefficients in the above formula to be aA2>bA2.
(Bottom-Right 22.5-Degree Direction (D2) Gradient: gradD2)
The example depicted in
The bottom-right 22.5-degree direction (D2) gradient is calculated on the basis of the following data.
Center component bottom-right 22.5-degree direction (D2) gradient: gradD2_center
Both-side component bottom-right 22.5-degree direction (D2) gradient: gradD2_wb
High-frequency component bottom-right 22.5-degree direction (D2) gradient: gradD2_high
It should be noted that, as depicted in
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The bottom-right 22.5-degree direction (D2) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aD2 and bD2 are predefined weighting coefficients.
As previously described, generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the weighting coefficients in the above formula to be aD2>bD2.
(Top-Right 67.5-Degree Direction (A3) Gradient: gradA3)
The example depicted in
The top-right 67.5-degree direction (A3) gradient is calculated on the basis of the following data.
Center component top-right 67.5-degree direction (A3) gradient: gradA3_center
High-frequency component top-right 67.5-degree direction (A3) gradient: gradA3_high
It should be noted that, as depicted in
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The top-right 67.5-degree direction (A3) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aA3 and bA3 are predefined weighting coefficients.
As previously described, generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the weighting coefficients in the above formula to be aA3>bA3.
(Bottom-Right 67.5-Degree Direction (D3) Gradient: gradD3)
The example depicted in
The bottom-right 67.5-degree direction (D3) gradient is calculated on the basis of the following data.
High-frequency component bottom-right 67.5-degree direction (D3) gradient: gradD3_high
It should be noted that, as depicted in
It should be noted that I(x,y) is the pixel value at the location of coordinate (x,y).
abs( ) represents an absolute value.
The bottom-right 67.5-degree direction (D3) gradient corresponding to the coordinate location (2, 2) in the 6×6 pixels depicted in
It should be noted that, in the above formula, aD3 and bD3 are predefined weighting coefficients.
As previously described, generally, when incident light passes through a lens, the signal level decreases at high-frequency components.
Therefore, it is desirable for the magnitude relationship of the weighting coefficients in the above formula to be aD3>bD3.
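The weighted-addition scheme shared by the A2, A3, D2, and D3 gradients above can be sketched as follows. The function name and the example weight values are assumptions for illustration only; the document fixes only the constraint that the center-component weight exceeds the high-frequency-component weight (a > b):

```python
# Hedged sketch of per-direction gradient combination.
# The helper name and the default coefficients are illustrative
# assumptions; the document specifies only that a > b, because lens
# transmission attenuates high-frequency components, making the
# high-band gradient less reliable.

def combine_gradients(grad_center, grad_high, a=0.75, b=0.25):
    """Weighted addition of a center-component gradient and a
    high-frequency-component gradient for one direction."""
    assert a > b, "the text requires the center-component weight to dominate"
    return a * grad_center + b * grad_high
```

The same pattern applies to each of the four 22.5/67.5-degree directions, with direction-specific coefficient pairs (aA2, bA2), (aA3, bA3), (aD2, bD2), and (aD3, bD3).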
The direction determination processing of step S202 of the flowchart of
Processing next advances to step S203, and G interpolation processing is executed. This is G interpolation processing for estimating and setting G pixel values for RB locations in the quadripartite Bayer-type RGB array image. At the time of this interpolation processing, interpolation processing is performed in which the pixel value gradient information calculated in step S151 is applied, and a pixel value in a direction having a small pixel value gradient serves as a reference pixel value.
A specific example of interpolation processing is described with reference to
(Calculation Processing Example of an Interpolated Pixel in the Case of a 2, 2 Phase)
First, with respect to all eight directions in which gradient detection has been executed, interpolated pixel values, for which pixels of each of the directions serve as reference pixel values, are calculated. In other words, first, the following plurality of interpolated values are calculated:
an interpolated pixel value itp_GH for which pixels in the horizontal direction (H) serve as reference pixel values,
an interpolated pixel value itp_GV for which pixels in the vertical direction (V) serve as reference pixel values,
an interpolated pixel value itp_GA for which pixels in the top-right 45-degree direction (A) serve as reference pixel values,
an interpolated pixel value itp_GD for which pixels in the bottom-right 45-degree direction (D) serve as reference pixel values,
an interpolated pixel value itp_GA2 for which pixels in the top-right 22.5-degree direction (A2) serve as reference pixel values,
an interpolated pixel value itp_GA3 for which pixels in the top-right 67.5-degree direction (A3) serve as reference pixel values,
an interpolated pixel value itp_GD2 for which pixels in the bottom-right 22.5-degree direction (D2) serve as reference pixel values, and
an interpolated pixel value itp_GD3 for which pixels in the bottom-right 67.5-degree direction (D3) serve as reference pixel values.
In addition, on the basis of these interpolated pixel values, processing for selecting, for example, a direction having a small gradient is performed, and blend processing or the like of the calculated interpolated values corresponding to the selected direction is performed to decide a final interpolated value.
Calculation of the eight interpolated pixel values is executed in accordance with the following formula.
itp_GH=I(1,2)×1.25−I(5,2)×0.25
itp_GV=I(2,1)×1.25−I(2,5)×0.25
itp_GA=I(1,3)×0.5+I(3,1)×0.5
itp_GD=I(1,2)×0.375+I(3,4)×0.125+I(2,1)×0.375+I(4,3)×0.125
itp_GA2=I(0,2)×0.1875+I(3,1)×0.3125+I(1,3)×0.3+I(4,2)×0.2
itp_GD2=I(5,2)×0.225+I(2,1)×0.025+I(1,2)×0.525+I(4,3)×0.225
itp_GA3=I(2,0)×0.2+I(1,3)×0.3+I(3,1)×0.1875+I(2,4)×0.3125
itp_GD3=I(2,1)×0.525+I(3,4)×0.225+I(1,2)×0.225+I(2,5)×0.025 [Mathematical Formula 9]
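As a sketch, Mathematical Formula 9 can be transcribed directly into code. The function name and the `I[x][y]` indexing convention are assumptions; the document writes I(x,y) for the pixel value at coordinate (x, y) within the 6×6 neighborhood:

```python
# Hedged transcription of Mathematical Formula 9: the eight directional
# G interpolations for the (2, 2) phase. I[x][y] is assumed to be the
# pixel value at coordinate (x, y) in the 6x6 neighborhood.

def interp_g_phase22(I):
    """Return the eight interpolated G values keyed by direction."""
    return {
        "H":  I[1][2] * 1.25 - I[5][2] * 0.25,
        "V":  I[2][1] * 1.25 - I[2][5] * 0.25,
        "A":  I[1][3] * 0.5 + I[3][1] * 0.5,
        "D":  I[1][2] * 0.375 + I[3][4] * 0.125 + I[2][1] * 0.375 + I[4][3] * 0.125,
        "A2": I[0][2] * 0.1875 + I[3][1] * 0.3125 + I[1][3] * 0.3 + I[4][2] * 0.2,
        "D2": I[5][2] * 0.225 + I[2][1] * 0.025 + I[1][2] * 0.525 + I[4][3] * 0.225,
        "A3": I[2][0] * 0.2 + I[1][3] * 0.3 + I[3][1] * 0.1875 + I[2][4] * 0.3125,
        "D3": I[2][1] * 0.525 + I[3][4] * 0.225 + I[1][2] * 0.225 + I[2][5] * 0.025,
    }
```

Note that each row of coefficients sums to 1, so a flat neighborhood interpolates to its own value, as expected for an unbiased interpolator.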
(Calculation Processing Example of an Interpolated Pixel in the Case of a 3, 2 Phase)
First, with respect to all eight directions in which gradient detection has been executed, interpolated pixel values, for which pixels of each of the directions serve as reference pixel values, are calculated. In other words, first, the following plurality of interpolated values are calculated:
an interpolated pixel value itp_GH for which pixels in the horizontal direction (H) serve as reference pixel values,
an interpolated pixel value itp_GV for which pixels in the vertical direction (V) serve as reference pixel values,
an interpolated pixel value itp_GA for which pixels in the top-right 45-degree direction (A) serve as reference pixel values,
an interpolated pixel value itp_GD for which pixels in the bottom-right 45-degree direction (D) serve as reference pixel values,
an interpolated pixel value itp_GA2 for which pixels in the top-right 22.5-degree direction (A2) serve as reference pixel values,
an interpolated pixel value itp_GA3 for which pixels in the top-right 67.5-degree direction (A3) serve as reference pixel values,
an interpolated pixel value itp_GD2 for which pixels in the bottom-right 22.5-degree direction (D2) serve as reference pixel values, and
an interpolated pixel value itp_GD3 for which pixels in the bottom-right 67.5-degree direction (D3) serve as reference pixel values.
In addition, on the basis of these interpolated pixel values, processing for selecting, for example, a direction having a small gradient is performed, and blend processing or the like of the calculated interpolated values corresponding to the selected direction is performed to decide a final interpolated value.
Calculation of the eight interpolated pixel values is executed in accordance with the following formula.
itp_GH=I(4,2)×1.25−I(0,2)×0.25
itp_GV=I(3,1)×1.25−I(3,5)×0.25
itp_GA=I(3,1)×0.375+I(1,3)×0.125+I(4,2)×0.375+I(2,4)×0.125
itp_GD=I(2,1)×0.5+I(4,3)×0.5
itp_GA2=I(0,2)×0.025+I(3,1)×0.225+I(1,3)×0.225+I(4,2)×0.525
itp_GD2=I(5,2)×0.3125+I(2,1)×0.1875+I(1,2)×0.2+I(4,3)×0.3
itp_GA3=I(3,1)×0.525+I(2,4)×0.225+I(4,2)×0.225+I(3,5)×0.025
itp_GD3=I(2,1)×0.3125+I(3,4)×0.1875+I(3,0)×0.2+I(4,3)×0.3 [Mathematical Formula 10]
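Mathematical Formula 10 admits the same direct transcription as the (2, 2) case. Again, the function name and the `I[x][y]` indexing convention are assumptions made for illustration:

```python
# Hedged transcription of Mathematical Formula 10: the eight directional
# G interpolations for the (3, 2) phase. I[x][y] is assumed to be the
# pixel value at coordinate (x, y) in the 6x6 neighborhood.

def interp_g_phase32(I):
    """Return the eight interpolated G values keyed by direction."""
    return {
        "H":  I[4][2] * 1.25 - I[0][2] * 0.25,
        "V":  I[3][1] * 1.25 - I[3][5] * 0.25,
        "A":  I[3][1] * 0.375 + I[1][3] * 0.125 + I[4][2] * 0.375 + I[2][4] * 0.125,
        "D":  I[2][1] * 0.5 + I[4][3] * 0.5,
        "A2": I[0][2] * 0.025 + I[3][1] * 0.225 + I[1][3] * 0.225 + I[4][2] * 0.525,
        "D2": I[5][2] * 0.3125 + I[2][1] * 0.1875 + I[1][2] * 0.2 + I[4][3] * 0.3,
        "A3": I[3][1] * 0.525 + I[2][4] * 0.225 + I[4][2] * 0.225 + I[3][5] * 0.025,
        "D3": I[2][1] * 0.3125 + I[3][4] * 0.1875 + I[3][0] * 0.2 + I[4][3] * 0.3,
    }
```

As in the (2, 2) case, each coefficient row sums to 1; only the reference pixel locations shift with the phase.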
(Calculation Processing Example of an Interpolated Pixel in the Case of a 2, 3 Phase)
First, with respect to all eight directions in which gradient detection has been executed, interpolated pixel values, for which pixels of each of the directions serve as reference pixel values, are calculated. In other words, first, the following plurality of interpolated values are calculated:
an interpolated pixel value itp_GH for which pixels in the horizontal direction (H) serve as reference pixel values,
an interpolated pixel value itp_GV for which pixels in the vertical direction (V) serve as reference pixel values,
an interpolated pixel value itp_GA for which pixels in the top-right 45-degree direction (A) serve as reference pixel values,
an interpolated pixel value itp_GD for which pixels in the bottom-right 45-degree direction (D) serve as reference pixel values,
an interpolated pixel value itp_GA2 for which pixels in the top-right 22.5-degree direction (A2) serve as reference pixel values,
an interpolated pixel value itp_GA3 for which pixels in the top-right 67.5-degree direction (A3) serve as reference pixel values,
an interpolated pixel value itp_GD2 for which pixels in the bottom-right 22.5-degree direction (D2) serve as reference pixel values, and
an interpolated pixel value itp_GD3 for which pixels in the bottom-right 67.5-degree direction (D3) serve as reference pixel values.
In addition, on the basis of these interpolated pixel values, processing for selecting, for example, a direction having a small gradient is performed, and blend processing or the like of the calculated interpolated values corresponding to the selected direction is performed to decide a final interpolated value.
(Calculation Processing Example of an Interpolated Pixel in the Case of a 3, 3 Phase)
First, with respect to all eight directions in which gradient detection has been executed, interpolated pixel values, for which pixels of each of the directions serve as reference pixel values, are calculated. In other words, first, the following plurality of interpolated values are calculated:
an interpolated pixel value itp_GH for which pixels in the horizontal direction (H) serve as reference pixel values,
an interpolated pixel value itp_GV for which pixels in the vertical direction (V) serve as reference pixel values,
an interpolated pixel value itp_GA for which pixels in the top-right 45-degree direction (A) serve as reference pixel values,
an interpolated pixel value itp_GD for which pixels in the bottom-right 45-degree direction (D) serve as reference pixel values,
an interpolated pixel value itp_GA2 for which pixels in the top-right 22.5-degree direction (A2) serve as reference pixel values,
an interpolated pixel value itp_GA3 for which pixels in the top-right 67.5-degree direction (A3) serve as reference pixel values,
an interpolated pixel value itp_GD2 for which pixels in the bottom-right 22.5-degree direction (D2) serve as reference pixel values, and
an interpolated pixel value itp_GD3 for which pixels in the bottom-right 67.5-degree direction (D3) serve as reference pixel values.
In addition, on the basis of these interpolated pixel values, processing for selecting, for example, a direction having a small gradient is performed, and blend processing or the like of the calculated interpolated values corresponding to the selected direction is performed to decide a final interpolated value.
(Selection of a Final Interpolated Pixel Value from the Interpolated Pixel Values Calculated on the Basis of the Reference Pixels in the Eight Directions)
As described with reference to
In addition, on the basis of these interpolated pixel values, processing for selecting, for example, a direction having a small gradient is performed, and blend processing or the like of the calculated interpolated values corresponding to the selected direction is performed to decide a final interpolated value.
Artifacts increase if the direction is erroneously determined to be the orthogonal direction, and therefore an algorithm for selecting the direction having the smaller gradient (grad) from each pair of orthogonal directions is applied.
For example, blend processing that uses interpolated pixel values corresponding to the direction selected in accordance with the abovementioned algorithm is executed to calculate a final interpolated pixel value.
Specifically, for example, if the largest value of gradHV, gradAD, and gradA2A3D2D3 is equal to or less than a threshold value, a smoothed signal of the surrounding pixels is output as the interpolated pixel value (a noise countermeasure for flat sections).
When gradA2A3D2D3 is the smallest value, interpolation is performed from those directions, and in other cases, interpolated values are blended in accordance with the values of gradHV and gradAD.
A final interpolated pixel value is calculated in accordance with this kind of processing.
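The decision logic just described (flat-region smoothing, preference for the 22.5/67.5-degree family when its gradient is smallest, and blending of the H/V and A/D results otherwise) might be sketched as follows. All names, the blend weighting, and the tie-breaking details are assumptions, since the document states the rules only qualitatively:

```python
# Hedged sketch of the final interpolated-value decision. The document
# gives three qualitative rules; everything else here (names, the
# inverse-gradient blend weight) is an illustrative assumption.

def decide_interpolated_value(grad_hv, grad_ad, grad_a2a3d2d3,
                              itp_hv, itp_ad, itp_a2a3d2d3,
                              smoothed, threshold):
    grads = (grad_hv, grad_ad, grad_a2a3d2d3)
    # Rule 1: flat section -> output the smoothed surrounding signal
    # (noise countermeasure).
    if max(grads) <= threshold:
        return smoothed
    # Rule 2: the 22.5/67.5-degree family wins when its gradient is smallest.
    if grad_a2a3d2d3 == min(grads):
        return itp_a2a3d2d3
    # Rule 3: otherwise blend the H/V and A/D interpolations, weighting
    # the lower-gradient (more reliable) direction more heavily.
    total = grad_hv + grad_ad
    w_hv = grad_ad / total  # small grad_hv -> large weight for itp_hv
    return w_hv * itp_hv + (1.0 - w_hv) * itp_ad
```

A usage example: with gradients (1.0, 3.0, 5.0), the H/V result receives weight 0.75 and the A/D result 0.25 under this assumed weighting.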
Returning to the flow of
In step S203, when the G interpolation processing, that is, the processing for setting a G pixel value for an RB pixel location, finishes, processing advances to step S204.
In step S204, it is determined whether or not the pixel location in question is a G estimation pixel location for which estimation of a G pixel value has been performed. In other words, it is determined whether or not the pixel location in question is a pixel for which a G pixel value, calculated by means of subjecting a pixel location that was originally an RB pixel to interpolation processing, has been set. If the pixel location in question is a G estimation pixel location, processing moves to the output selection processing of step S213.
If the pixel location in question is not a G estimation pixel location, processing moves to step S205, and it is determined whether or not a false color is detected.
The false color detection processing of step S205 is described.
The quadripartite Bayer-type RGB array image is the same as the Bayer array image in that the RB sampling rate is low compared to G. Consequently, with regard to RB, aliasing is likely to occur in low-frequency bands.
For example, whereas aliasing does not occur in G and a high-frequency component remains at ½ of the Nyquist frequency, in other words at ½ Nq, in RB the high-frequency component is lost due to aliasing. Aliasing is detected by exploiting this difference and comparing the Laplacians of appropriate bands of the G signal and the RB signal.
Specifically, for example, G_lpl and RB_lpl are calculated by means of gradients (or Laplacians (lpl)) capable of detecting ½ of the Nyquist frequency, in other words ½ Nq, and the difference between them is used for detection.
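A minimal one-dimensional sketch of this detection idea follows, assuming a simple second-difference Laplacian kernel and an illustrative threshold (neither is specified in the document):

```python
# Hedged 1-D sketch of the aliasing (false color) detector: compare
# Laplacian responses of the G and RB signals in a band around Nq/2.
# The kernel and threshold are illustrative assumptions.

def laplacian(signal, i):
    """Second-difference magnitude at sample i; responds strongly to
    energy near half the Nyquist frequency."""
    return abs(signal[i - 1] - 2 * signal[i] + signal[i + 1])

def false_color_detected(g, rb, i, threshold):
    """Flag aliasing where the G signal carries high-band energy that
    the more coarsely sampled RB signal has lost (or folded)."""
    return abs(laplacian(g, i) - laplacian(rb, i)) > threshold
```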
If a false color is detected in step S205, false color correction is executed in step S206. Known processing can be applied for the false color correction processing.
A specific example of false color correction is described. With regard to false color correction, for example, in an ISP (image signal processor), a linear matrix (LMTX) is applied weakly to the detection region. Alternatively, in processing within a CIS (CMOS image sensor), processing such as achromatization by means of inverse white balance (WB) can be applied.
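Achromatization, one of the correction options mentioned, can be illustrated as pulling R and B toward G at the detected locations, which desaturates the false color. The function shape and the strength parameter are assumptions for illustration:

```python
# Hedged sketch of achromatization as a false color correction.
# The strength parameter is an illustrative assumption: 0 leaves the
# pixel unchanged, 1 fully desaturates it to the G level.

def achromatize(r, g, b, strength=0.5):
    """Blend R and B toward G to suppress a detected false color."""
    return (r + (g - r) * strength, g, b + (g - b) * strength)
```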
After the false color correction in step S206, processing advances to step S207, and RB estimation processing is executed. This processing is processing for setting RB in RGB locations in the quadripartite Bayer-type RGB array required to convert the quadripartite Bayer-type RGB array image into a Bayer array image.
A specific example of this RB estimation processing is described with reference to
In this RB estimation processing, as depicted in
Next, processing is described for the case where, in step S201 in the flow of
In this case, processing advances to step S211.
In step S211, the processing is the same as step S204, in other words, it is determined whether or not the pixel location in question is a G estimation pixel location for which estimation of a G pixel value has been performed. In other words, it is determined whether or not the pixel location in question is a pixel for which a G pixel value, calculated by means of subjecting a pixel location that was originally an RB pixel to interpolation processing, has been set. If the pixel location in question is a G estimation pixel location, processing moves to the output selection processing of step S213.
If the determination is “no”, processing advances to step S212.
In step S212, RB estimation processing that is the same as step S207 is executed.
In this RB estimation processing, as depicted in
In step S213, selection processing for an output pixel is performed. In other words, from the results of:
“yes” in step S204,
the processing result of step S207,
“yes” in step S211, and
the processing result of step S212,
pixel values corresponding to a Bayer array are selected and serve as output pixels.
In step S214, it is determined whether or not the processing for all processing-target pixels has been completed. If there remain pixels for which processing is not complete, the processing of step S201 and thereafter is executed for the unprocessed pixels.
The processing is finished on the basis of the determination that all pixel processing has been completed.
[5. Summary of the Configuration of the Present Disclosure]
An embodiment of the present disclosure has been described above in detail with reference to a specific embodiment. However, it is obvious that a person skilled in the art could amend or substitute the embodiment without deviating from the gist of the present disclosure. In other words, the present invention has been disclosed in the form of an exemplification, and should not be interpreted in a restrictive manner. The claims section should be taken into consideration when determining the gist of the present disclosure.
It should be noted that the technology disclosed in the present description can also have a configuration such as the following.
(1) An image processing device including an image signal correction unit that executes correction processing for a pixel signal of an input image,
wherein the image signal correction unit inputs, as the input image, a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed,
from among constituent pixels of the input image, detects pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing,
decides an interpolated pixel value calculation mode for the conversion-target pixel on the basis of the pixel value gradients in the eight directions, and
calculates an interpolated pixel value for the conversion-target pixel location in accordance with the decided processing mode.
(2) The image processing device according to (1), wherein the image signal correction unit executes remosaic processing for inputting, as the input image, a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed, and converting the pixel array of the input image into a different pixel array, and the image signal correction unit, in the remosaic processing, from among constituent pixels of the input image, detects pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing, decides an interpolated pixel value calculation mode for the conversion-target pixel on the basis of the pixel value gradients in the eight directions, and calculates an interpolated pixel value for the conversion-target pixel location in accordance with the decided processing mode.
(3) The image processing device according to (1) or (2), wherein the image signal correction unit executes remosaic processing for inputting, as the input image, an image having a quadripartite Bayer-type RGB array in which colors of a Bayer-type RGB array are implemented as an array of 2×2 pixel units, and converting the input image into a Bayer array.
(4) The image processing device according to any of (1) to (3), wherein the image signal correction unit executes processing for selecting pixels in a direction having a small gradient as reference pixels on the basis of the pixel value gradients in the eight directions, and calculating an interpolated pixel value for the conversion-target pixel by means of blend processing of the pixel values of the selected reference pixels.
(5) The image processing device according to any of (1) to (4), wherein, in the case where the largest value of the pixel value gradients in the eight directions is equal to or less than a predetermined threshold value, the image signal correction unit executes processing for calculating, as an interpolated pixel value for the conversion-target pixel, a smoothed signal based on the pixel values of the pixels surrounding the conversion-target pixel.
(6) The image processing device according to any of (1) to (5), wherein the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when calculating a gradient in at least one direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied,
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and
different-color component gradient information in which pixels of different colors included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information.
(7) The image processing device according to any of (1) to (6), wherein the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when calculating a gradient in at least one direction,
calculates a low-frequency component gradient in which pixels of the same color included in the input image are applied,
a high-frequency component gradient in which pixels of the same color included in the input image are applied, and
a mid-frequency component gradient in which pixels of the same color included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information.
(8) The image processing device according to any of (1) to (7), wherein the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when calculating a gradient in at least one direction, calculates a low-frequency component gradient in which pixels of the same color included in the input image are applied, and a high-frequency component gradient in which pixels of the same color included in the input image are applied, and calculates a pixel value gradient by weighted addition of the two items of gradient information.
(9) The image processing device according to any of (1) to (8), wherein the image signal correction unit, in the calculation processing for the pixel value gradients in the eight directions, when calculating gradients in a vertical direction and a horizontal direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied,
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and
different-color component gradient information in which pixels of different colors included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information,
when calculating gradients in a top-right 45-degree direction and a bottom-right 45-degree direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied,
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and
mid-frequency component gradient information in which pixels of the same color included in the input image are applied, and calculates a pixel value gradient by weighted addition of the three items of gradient information,
and when calculating gradients in a top-right 22.5-degree direction, a bottom-right 22.5-degree direction, a top-right 67.5-degree direction, and a bottom-right 67.5-degree direction,
calculates low-frequency component gradient information in which pixels of the same color included in the input image are applied, and
high-frequency component gradient information in which pixels of the same color included in the input image are applied, and
calculates a pixel value gradient by weighted addition of the two items of gradient information.
(10) The image processing device according to any of (1) to (9), wherein the image signal correction unit executes interpolated pixel value calculation processing in which reference pixel locations are altered in accordance with constituent pixel locations of the pixel blocks configured of the plurality of same-color pixels.
In addition, a method for the processing executed in the abovementioned device and system, a program that causes the processing to be executed, and a recording medium having the program recorded thereon are also included in the configuration of the present disclosure.
Furthermore, the series of processing described in the description can be executed by hardware, or software, or alternatively a combined configuration of both. In the case where processing by means of software is executed, the program in which the processing sequence is recorded can be installed in and executed from a memory in a computer incorporated in dedicated hardware, or the program can be installed in and executed by a general-purpose computer on which the various kinds of processing can be executed. For example, the program can be recorded in advance on a recording medium. Other than being installed on a computer from a recording medium, the program can be received via a network such as a LAN (local area network) or the Internet, and installed on a recording medium such as a hard disk that is provided internally.
It should be noted that the various kinds of processing described in the description may not only be executed in a time-sequential manner in accordance with that described, but may also be executed in a parallel or discrete manner in accordance with the processing capabilities of the device that executes processing or as required. Furthermore, a system in the present description is a logical set configuration of a plurality of devices, and is not restricted to the constituent devices being in the same enclosure.
As described above, according to the configuration of an embodiment of the present disclosure, a device and a method for executing remosaic processing for performing conversion into an image of a different pixel array are realized.
Specifically, remosaic processing for inputting, as an input image, a mosaic image in which pixel blocks configured of a plurality of same-color pixels are arrayed, and converting the pixel array of the input image into a different pixel array is executed. In this remosaic processing, from among constituent pixels of the input image, pixel value gradients in eight directions in a conversion-target pixel location that is targeted for color conversion processing are detected, an interpolated pixel value calculation mode for the conversion-target pixel is decided on the basis of the pixel value gradients in the eight directions, and an interpolated pixel value for the conversion-target pixel location is calculated in accordance with the decided processing mode. For example, an image having a quadripartite Bayer-type RGB array in which colors of a Bayer-type RGB array are implemented as an array of 2×2 pixel units is converted into a Bayer array.
Number | Date | Country | Kind |
---|---|---|---|
2011-190052 | Aug 2011 | JP | national |
2011-290332 | Dec 2011 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2012/066670 | 6/29/2012 | WO | 00 | 2/11/2014 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2013/031367 | 3/7/2013 | WO | A |
Number | Date | Country | |
---|---|---|---|
20140253808 A1 | Sep 2014 | US |