This application claims priority to Japanese Patent Application No. 2007-340414 filed on Dec. 28, 2007, which is incorporated herein by reference in its entirety.
The present invention relates to an imaging device, and in particular, to pixel interpolation in an imaging device.
Conventionally, there are cases in which pixel addition is executed along a horizontal direction or along a vertical direction within a sensor in a digital camera having an imaging element such as a CCD, in order to improve sensitivity. When the pixel addition is executed, however, because the resolution differs in a direction in which the addition is executed and in a direction in which the addition is not executed, the image becomes unnatural. For example, when pixels are added along the vertical direction, although the resolution in the horizontal direction is maintained, the resolution in the vertical direction is reduced. In consideration of this, a method is proposed in which the resolution is recovered by executing a pixel interpolation process along the direction in which the pixels are added and the resolution is reduced.
JP 2004-96610A discloses a solid-state imaging device having a solid-state imaging element comprising a plurality of photoelectric conversion elements arranged in a two-dimensional matrix, wherein the solid-state imaging element comprises a CCD driving unit which adds signal charges of a plurality of pixels along a vertical direction and outputs, from the solid-state imaging element, the added signal charges sequentially for each line, and a storage unit which stores an output signal of the solid-state imaging element.
JP 2007-13275A discloses an imaging device having an imaging element which images an object, the imaging device comprising a pixel adding unit which applies a pixel addition process to add pixel values of a plurality of pixels of the same color to an image of the object imaged by the imaging element and comprising pixels having different color information depending on a position within the image, to amplify brightness of the image of the object, and a phase adjusting unit which adjusts a phase of a color component interpolated for each pixel by a color interpolation process based on the placement, in a pixel space, of the pixels of the image of the object which changes by the pixel addition process by the pixel adding unit.
Generally, the pixel interpolation process is executed using a plurality of pixels at around a pixel position to be interpolated. For example, 8 pixels including 2 pixels in the horizontal direction, 2 pixels in the vertical direction, and 4 pixels in the diagonal direction with respect to the pixel position to be interpolated are used for interpolation. However, simple interpolation with the 8 pixels at the periphery of the pixel position to be interpolated for image data in which the pixel addition is executed along the horizontal or vertical direction cannot achieve an accurate interpolation, because the resolution differs between the horizontal direction and the vertical direction, and thus the degree of influence or the degree of correlation to the pixel position to be interpolated differs between the horizontal and vertical directions.
One object of the present invention is to provide an imaging device in which the pixel interpolation can be executed with high precision for image data obtained by adding pixels along the horizontal direction or the vertical direction.
According to one aspect of the present invention, there is provided an imaging device comprising an imaging element, a reading unit which reads a pixel signal from the imaging element while adding a plurality of the pixel signals along a horizontal direction or a vertical direction, and outputs as an R pixel signal, a G pixel signal, and a B pixel signal, and an interpolation unit which interpolates the R pixel signal, the G pixel signal, and the B pixel signal, the interpolation unit interpolating the G pixel signal using an adjacent pixel in a direction which is not the direction of the addition among the horizontal direction and the vertical direction and interpolating the R pixel signal and the B pixel signal using an adjacent pixel in the direction which is not the direction of the addition among the horizontal direction and the vertical direction and the interpolated G pixel signal.
According to another aspect of the present invention, it is preferable that, when the direction of the addition among the horizontal direction and the vertical direction is defined as a first direction and the direction which is not the direction of the addition among the horizontal direction and the vertical direction is defined as a second direction, the interpolation unit interpolates the R pixel signal and the B pixel signal along the second direction using an adjacent pixel along the second direction, and interpolates the R pixel signal and the B pixel signal along the first direction using an adjacent pixel along the first direction and the interpolated G pixel signal.
According to another aspect of the present invention, it is preferable that the interpolation unit removes noise along the first direction of the interpolated G pixel signal when the interpolation unit interpolates the R pixel signal and the B pixel signal using the interpolated G pixel signal.
According to the present invention, pixel interpolation can be executed with a high precision for image data obtained by adding pixels along the horizontal direction or the vertical direction.
Preferred embodiments of the present invention will be described in detail by reference to the drawings, wherein:
A preferred embodiment of the present invention will now be described with reference to the drawings and exemplifying a digital camera as an imaging device.
A CCD 12 converts an object image formed by the optical system into an electric signal and outputs the electric signal as an image signal. The CCD 12 has a color filter array of a Bayer arrangement. Timing for reading of the image signal from the CCD 12 is set by a timing signal from a timing generator (TG). Alternatively, a CMOS may be used as the imaging element in place of the CCD.
A CDS 14 executes a correlated double sampling process on an image signal from the CCD 12 and outputs the processed signal.
An A/D 16 converts the image signal sampled by the CDS 14 into a digital image signal and outputs the digital image signal. The digital image signal comprises color signals, that is, an R pixel signal, a G pixel signal, and a B pixel signal.
When the median points of the image signals read from the CCD 12 do not match each other, a filter median point movement filter 18 converts the image signals so that the median points match each other for the later processes. When the median points of the image signals read from the CCD 12 match each other, the filter median point movement filter 18 allows the input image signal to pass through the filter. In other words, the filter median point movement filter 18 is switched between an operation state and a non-operation state according to the reading method from the CCD 12.
An image memory 20 stores image data.
A sigma (Σ) noise filter 22 removes noise in the image data.
A CFA interpolation unit 24 interpolates the R pixel, G pixel, and B pixel, and outputs as an r pixel signal, a g pixel signal, and a b pixel signal.
A brightness and color difference conversion unit 26 converts the r pixel signal, g pixel signal, and b pixel signal in which interpolation is applied to the pixels into a brightness signal Y and color difference signals CR and CB, and outputs the resulting signals.
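The conversion by the brightness and color difference conversion unit 26 can be sketched as follows. The text does not give the conversion coefficients, so the common ITU-R BT.601 definition below is an assumption for illustration only; the function name is likewise hypothetical.

```python
# Sketch of an RGB-to-brightness/color-difference conversion.
# The BT.601 coefficients are an assumption; the patent does not
# specify which conversion matrix unit 26 uses.

def rgb_to_ycbcr(r, g, b):
    """Convert interpolated r, g, b values to Y, CB, CR."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness signal Y
    cb = 0.564 * (b - y)                    # color difference CB
    cr = 0.713 * (r - y)                    # color difference CR
    return y, cb, cr
```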
A median noise filter 28 removes noise in the brightness signal Y.
An edge processing unit 30 executes a process to enhance an edge of the brightness signal Y from which the noise is removed.
A chroma noise filter 34 is a low-pass filter, and removes noise in the color difference signals CB and CR.
An RGB conversion unit 36 re-generates, from the brightness signal Y and the color difference signals from which the noise is removed, an R pixel signal, a G pixel signal, and a B pixel signal.
A WB (white balance)/color corrections/γ correction unit 38 applies a white balance correction, a color correction and a γ correction to the R pixel signal, G pixel signal, and B pixel signal. The white balance correction, color correction, and γ correction are known techniques, and will not be described here.
A brightness and color difference conversion unit 40 again converts the R pixel signal, G pixel signal, and B pixel signal to which various processes are applied into a brightness signal Y and color difference signals CB and CR, and outputs the resulting signals.
An adder 42 adds the brightness signal in which the edge is enhanced by the edge processing unit 30 and the brightness signal from the brightness and color difference conversion unit 40, and outputs the resulting signal as a brightness signal YH.
An image memory 46 stores the brightness signal YH from the adder 42 and the color difference signals CB and CR from the brightness and color difference conversion unit 40.
A compression and extension circuit 52 compresses the brightness and color difference signals stored in the image memory 46 and stores on a recording medium 54 or extends compressed data and stores in the image memory 46.
An LCD 44 displays the image data stored in the image memory 46. The LCD 44 displays a preview image or an imaged image.
An operation unit 56 includes a shutter button and various mode selection buttons. The operation unit 56 may be constructed with a touch panel.
A memory controller 50 controls writing and reading of the image memories 20 and 46.
A CPU 58 controls operations of various units. More specifically, the CPU 58 controls the timing generator TG 48 according to an operation signal from the operation unit 56 to start reading of signals from the CCD 12, and controls the memory controller 50 to control writing to and reading from the image memory 20. In addition, the CPU 58 controls the operations of the image memory 46 and the compression and extension circuit 52 to write an image that has been subjected to a compression process to the recording medium 54, or read data from the recording medium 54 so that the extension process is applied to the data and an image is displayed on the LCD 44. When the user selects a particular mode, the CPU 58 controls the white balance according to the selection.
The operation of the filter median point movement filter 18 will now be described. When the R pixels are R1 and R2 and the Gr pixels are Gr1 and Gr2, new pixels R1′ and Gr1′ are generated by:
R1′=(2R1+R2)/3
Gr1′=(2Gr1+Gr2)/3
In addition, when the B pixels are B1 and B2 and the Gb pixels are Gb1 and Gb2, new pixels B1′ and Gb1′ are generated by:
B1′=(B1+2B2)/3
Gb1′=(Gb1+2Gb2)/3
With these calculations, the median point of the Gr pixel and the R pixel and the median point of the Gb pixel and the B pixel are equidistantly arranged. Similar calculations are repeated for the other pixels. The positions of the median points are moved because, if the median points are not equidistant, the noise processing and the interpolation process at the later stages become complicated, as these processes take the peripheral pixels into consideration.
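The median point movement calculations above can be sketched as follows. The 2:1 weightings are taken directly from the equations in the text; the function name and the per-pixel-pair interface are assumptions for illustration.

```python
# Sketch of the filter median point movement filter 18.
# Implements the equations R1' = (2*R1 + R2)/3, Gr1' = (2*Gr1 + Gr2)/3,
# B1' = (B1 + 2*B2)/3, and Gb1' = (Gb1 + 2*Gb2)/3 from the text.

def move_median_points(r1, r2, gr1, gr2, b1, b2, gb1, gb2):
    """Shift the pixel median points so that they become equidistant."""
    r1_new = (2 * r1 + r2) / 3
    gr1_new = (2 * gr1 + gr2) / 3
    b1_new = (b1 + 2 * b2) / 3
    gb1_new = (gb1 + 2 * gb2) / 3
    return r1_new, gr1_new, b1_new, gb1_new
```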
Next, the operation of the sigma (Σ) noise filter 22 will be described. A pixel L22 to be processed is first compared with the peripheral pixels L20 and L24 along the horizontal direction, and it is determined whether or not the absolute value of the difference value is smaller than a predetermined noise level N. When the absolute value of the difference value is smaller than the noise level N, the peripheral pixel is added to an average value AVG, which is initialized with the value of the pixel L22, and a count value C, which is initialized to 1, is incremented by 1.
After the comparison with the peripheral pixels L20 and L24 along the horizontal direction, the pixel L22 to be processed is compared with peripheral pixels along directions other than the horizontal direction. More specifically, difference values (L22-Lxx) (Lxx=L00, L02, L04, L40, L42, L44) are sequentially calculated, and it is determined whether or not the absolute value of the difference value is smaller than a predetermined noise level N/2. Here, it should be noted that the value with which the absolute value of the difference value is to be compared is not N, but rather is N/2 which is smaller than N. Because the resolution in the vertical direction is lower compared to the resolution of the horizontal direction, the degree of influence or degree of correlation to the pixel L22 to be processed is small, and thus the noise level with which the absolute value of the difference value is to be compared is set at a smaller value. When the absolute value of the difference value is smaller than the noise level N/2, the peripheral pixel is added to AVG and the count value is incremented by 1, and AVG obtained through the addition is divided by the count value C at the end to calculate an ultimate average value.
With the above-described process, for example, when the difference value between the pixels L22 and L20 is smaller than the noise level N and the difference values between the pixels L22 and L40 and between the pixels L22 and L42 are smaller than the noise level N/2 among the pixels L00-L44, a new pixel value of the pixel L22 is calculated as AVG=(L22+L20+L40+L42)/4.
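The sigma filter procedure above can be sketched as follows. The pixel names are those used in the text; representing the neighborhood as a dictionary is an assumption for illustration. Note how horizontal neighbors are tested against N while the remaining neighbors are tested against the smaller threshold N/2, reflecting the reduced vertical resolution.

```python
# Sketch of the sigma noise filter 22. AVG starts with the pixel to
# be processed (L22) and the count starts at 1, so the final division
# yields the average of L22 and all accepted neighbors.

def sigma_filter(pixels, noise_level):
    """pixels: dict mapping names like 'L22' to values.
    Returns the new value of the pixel L22 to be processed."""
    center = pixels["L22"]
    avg = center
    count = 1
    # Horizontal neighbors: compared against the full noise level N.
    for name in ("L20", "L24"):
        if abs(center - pixels[name]) < noise_level:
            avg += pixels[name]
            count += 1
    # Other neighbors: compared against N/2, because the vertically
    # added image has a lower resolution in the vertical direction.
    for name in ("L00", "L02", "L04", "L40", "L42", "L44"):
        if abs(center - pixels[name]) < noise_level / 2:
            avg += pixels[name]
            count += 1
    return avg / count
```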
Next, an operation of the median noise filter 28 will be described.
The median noise filter 28 applies a process to replace the pixel to be processed by a median calculated using 4 peripheral pixels positioned along the diagonal directions.
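The median replacement can be sketched as follows. The text names the 4 diagonal peripheral pixels; including the pixel to be processed itself in the median is an assumption for illustration.

```python
# Minimal sketch of the median noise filter 28: the pixel to be
# processed is replaced by the median of itself and its 4 diagonal
# neighbors (the exact pixel set is an assumption).

import statistics

def median_replace(center, diagonals):
    """center: pixel to be processed; diagonals: its 4 diagonal
    peripheral pixels. Returns the replacement value."""
    return statistics.median([center] + list(diagonals))
```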
Next, an operation of the CFA interpolation unit 24 will be described. The R pixel is interpolated along the vertical direction by:
R=Gb1′+{(R1−Gr1′)+(R2−Gr2′)}/2
As is clear from this equation, the interpolation of the R pixel along the vertical direction uses not only R1 and R2, but also Gr1′, Gr2′, and Gb1′ of the G pixels.
B=Gr2′+{(B1−Gb1′)+(B2−Gb2′)}/2
Gr2′, Gb1′, and Gb2′ are pixel values of interpolated G pixels after passing through the median filter in the vertical direction.
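The two interpolation equations above can be sketched as follows. The primed arguments stand for the interpolated G pixel values after the vertical median filter, as stated in the text; the function names are assumptions for illustration.

```python
# Sketch of the vertical R and B interpolation in the CFA
# interpolation unit 24. Both equations transfer the local
# color-difference (R-G or B-G) onto an interpolated G value.

def interpolate_r(r1, r2, gr1p, gr2p, gb1p):
    """R = Gb1' + {(R1 - Gr1') + (R2 - Gr2')}/2"""
    return gb1p + ((r1 - gr1p) + (r2 - gr2p)) / 2

def interpolate_b(b1, b2, gb1p, gb2p, gr2p):
    """B = Gr2' + {(B1 - Gb1') + (B2 - Gb2')}/2"""
    return gr2p + ((b1 - gb1p) + (b2 - gb2p)) / 2
```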
As described above, in the present embodiment, when pixels are added along the vertical direction in order to improve the sensitivity of the CCD 12, the degree of influence of the peripheral pixels along the vertical direction is reduced, or only the pixels along the vertical direction which are closer to the pixel to be processed are employed, in the noise processing by the noise filters and in the pixel interpolation process. With this structure, the precision of the noise processing and the interpolation process can be improved.
In the present embodiment, an example configuration is described in which the pixels are added along the vertical direction, but a configuration in which the pixels are added along the horizontal direction can be treated in a similar manner. In this case, the horizontal direction in the present embodiment may be interpreted to be the vertical direction and the vertical direction in the present embodiment may be interpreted to be the horizontal direction.
In addition, the vertical direction of the present embodiment may be interpreted to be a perpendicular direction and the horizontal direction may be interpreted to be a direction orthogonal to the perpendicular direction. The horizontal and vertical directions when the digital camera is set at a vertical orientation for imaging and at a horizontal orientation for imaging are the perpendicular direction and the direction orthogonal to the perpendicular direction (horizontal direction), respectively.
Moreover, with regard to the operation of the chroma noise filter (low pass filter) 34 of the present embodiment also, by adjusting the weight in the vertical direction, it is possible to apply noise processing in consideration of the lower resolution in the vertical direction.
Similarly, with regard to the operation of the edge processing unit 30 of the present embodiment also, an edge enhancing process in consideration of the lower resolution in the vertical direction can be executed by adjusting the weight along the vertical direction.