1. Field of the Invention
The present invention relates to an image processing apparatus and an image processing method. More particularly, the present invention relates to an image processing apparatus, an image processing method, and a program for correcting chromatic aberration caused by optical factors and suppressing color blur.
2. Description of the Related Art
Recent digital camcorders and digital cameras employ image sensors having a large number of pixels to provide high-quality images. On the other hand, with pixel miniaturization and smaller lenses, color blur due to chromatic aberration is likely to occur because the imaging position fluctuates with wavelength. Various methods have been proposed as techniques for suppressing color blur in a captured image.
For example, Japanese Patent Application Laid-Open No. 2000-076428 discusses a technique for suppressing color blur where a lens used for image capture is identified, relevant chromatic aberration information is read to generate correction parameters, and coordinates of target color signals are shifted based on the correction parameters.
However, the color blur characteristics change intricately with the image height (the distance from the optical axis center to the pixel of interest), the zoom lens position, the diaphragm aperture diameter, and the focus lens position. In the configuration discussed in Japanese Patent Application Laid-Open No. 2000-076428, which reads chromatic aberration information, that information must be stored for every combination of image height, zoom lens position, diaphragm aperture diameter, focus lens position, and lens type.
Therefore, a large memory capacity is necessary to store these pieces of chromatic aberration information. Further, color blur may spread widely under certain photographing conditions, such as sunbeams streaming through leaves.
Another technique for suppressing color blur has been proposed. With this technique, instead of reading prestored lens chromatic aberration information, an area in which color blur occurs is extracted and color blur suppression is applied to that area. For example, Japanese Patent Application Laid-Open No. 2007-195122 discusses a technique for suppressing color blur of image data based on gradient values of edges of the image data.
However, when suppressing color blur of image data based on gradient values of edges of the image data, a change in the frequency characteristics of the image data input to the color blur suppression processor may cause insufficient or excessive color blur suppression, possibly degrading the image quality. For example, when an image sensor can be driven by a plurality of driving methods, such as addition readout and non-addition readout, the frequency characteristics of the image data generated by the image sensor change depending on the selected driving method.
When the method for driving the image sensor is changed from non-addition readout to addition readout, high-frequency components in the image data are reduced because pixel output signals are addition-averaged. More specifically, even for an identical subject, the addition readout method may provide smaller gradient values of the image data than the non-addition readout method.
Therefore, the amount of color blur suppression calculated based on gradient values of edges of the image data differs between the non-addition readout method and the addition readout method.
The present invention is directed to an image processing apparatus and an image processing method capable of suitable color blur suppression even when the method for driving the image sensor is changed.
According to an aspect of the present invention, an image processing apparatus includes a gradient detection unit configured to obtain a gradient value from image data, a suppression coefficient calculation unit configured to calculate a suppression coefficient based on the gradient value, and a suppression unit configured to perform image processing to suppress color blur in an area of the image data to be subjected to color blur suppression, based on the suppression coefficient, wherein, when the gradient value exceeds a threshold value, the suppression coefficient calculation unit calculates the suppression coefficient so as to suppress color blur in the area to be subjected to color blur suppression, and wherein the suppression coefficient calculation unit sets the threshold value so that the gradient value range subjected to color blur suppression when an image sensor is driven by an addition readout method to generate the image data is wider than the gradient value range subjected to color blur suppression when the image sensor is driven by a non-addition readout method.
Further features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the invention and, together with the description, serve to explain the principles of the invention.
Various exemplary embodiments, features, and aspects of the invention will be described in detail below with reference to the drawings.
Referring to
An analog front end (AFE) 12 is an analog front-end circuit that converts analog RGB signals output from the image sensor 11 into digital RGB signals. Hereinafter, the digital signals output from the AFE 12 are collectively referred to as image data.
A gradient detector 131 calculates a gradient value Sr denoting the gradient of the R digital signal output from the AFE 12. A gradient detector 132 calculates a gradient value Sg denoting the gradient of the G digital signal output from the AFE 12. A gradient detector 133 calculates a gradient value Sb denoting the gradient of the B digital signal output from the AFE 12.
A suppression coefficient calculation unit 14 calculates suppression coefficients (Kr, Kg, and Kb) for each RGB color of the image data based on gradient values (Sr, Sg, and Sb) for each RGB color output from the gradient detectors 131 to 133 and setting information output from a control unit 16 (described below). The suppression coefficient calculation unit 14 outputs the suppression coefficients (Kr, Kg, and Kb) to a suppression processor 15.
The suppression processor 15 calculates a luminance signal Y and color-difference signals Cr and Cb based on the image data output from the AFE 12. Then, based on the suppression coefficients (Kr, Kg, and Kb) for each RGB color output from the suppression coefficient calculation unit 14, the suppression processor 15 calculates a suppression coefficient for each of the color-difference signals Cr and Cb, and then applies color blur suppression processing to the color-difference signals Cr and Cb. Then, the suppression processor 15 outputs color-difference signals Cr′ and Cb′ after color blur suppression processing and the luminance signal Y as image data after color blur suppression.
The control unit 16 is a microcomputer, which selects the method for driving the image sensor 11. In the present exemplary embodiment, the control unit 16 instructs the image sensor 11 to change the driving method between vertical 2-pixel addition readout and non-addition readout. The control unit 16 outputs setting information to the suppression coefficient calculation unit 14 according to the driving method instruction given to the image sensor 11.
A memory 17 temporarily stores the image data composed of Y, Cr′, and Cb′ output from the suppression processor 15 as well as the image data read from a storage medium 21 and decoded by a coder/decoder 20.
An image converter 18 reads the image data stored in the memory 17 and then converts the image data into a format suitable for a display unit 19. The display unit 19, e.g., a liquid crystal display (LCD), displays images by using the image data converted by the image converter 18.
The coder/decoder 20 codes the image data stored in the memory 17, and stores the coded data in the storage medium 21. The coder/decoder 20 also decodes image data read from the storage medium 21 and stores the decoded data in the memory 17. The storage medium 21 is capable of storing not only image data generated by the imaging apparatus but also image data written by external apparatuses together with coded drive information on an image sensor used to capture the image data.
Each of the gradient detectors 131 to 133 performs first-order differentiation of the image data to calculate its gradient values. In the present exemplary embodiment, a Sobel filter is used to obtain variations between pixels of the image data.
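As an illustration of this gradient detection, the following sketch (Python with NumPy; the function name and the use of a plain 3×3 Sobel filter per color plane are assumptions for illustration, not an excerpt from the apparatus) computes a per-pixel gradient value for one color plane:

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical first-order differentiation.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(plane):
    """Return a per-pixel gradient value for one color plane (R, G, or B)."""
    plane = plane.astype(np.float32)
    padded = np.pad(plane, 1, mode="edge")
    gx = np.zeros_like(plane)
    gy = np.zeros_like(plane)
    h, w = plane.shape
    # Direct 3x3 windowed filtering; adequate for a sketch.
    for dy in range(3):
        for dx in range(3):
            window = padded[dy:dy + h, dx:dx + w]
            gx += SOBEL_X[dy, dx] * window
            gy += SOBEL_Y[dy, dx] * window
    return np.hypot(gx, gy)  # gradient magnitude per pixel
```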
The suppression coefficient calculation unit 14 determines a suppression coefficient for each color from the gradient values calculated by the gradient detectors 131 to 133.
As illustrated in
A threshold value 31 is set for the gradient value. When the gradient value is equal to or less than the threshold value 31, the suppression coefficient is set to 0.0. When the gradient value exceeds the threshold value 31, the suppression coefficient increases in proportion to the increase in gradient value. Once the suppression coefficient reaches 1.0, it is held at 1.0 regardless of any further increase in gradient value.
The suppression coefficient is obtained for each RGB color. The suppression coefficients Kr, Kg, and Kb are calculated from the gradient values Sr, Sg, and Sb, respectively, and then output to the suppression processor 15.
The suppression coefficient calculation unit 14 can change the gradient value range subjected to color blur suppression by changing the threshold value 31. Likewise, the suppression coefficient calculation unit 14 can change the strength of color blur suppression with respect to the gradient value by changing a variation 32.
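A minimal sketch of such a suppression function, with the threshold value 31 and the variation 32 represented as plain numeric parameters (the names threshold and slope are illustrative), is as follows:

```python
def suppression_coefficient(gradient, threshold, slope):
    """Map a gradient value to a suppression coefficient in [0.0, 1.0].

    'threshold' corresponds to the threshold value 31 and 'slope' to the
    variation 32; at or below the threshold the coefficient is 0.0, above it
    the coefficient rises linearly and is clamped at 1.0.
    """
    if gradient <= threshold:
        return 0.0
    return min(1.0, (gradient - threshold) * slope)
```

With this shape, lowering the threshold widens the gradient value range subjected to suppression, and increasing the slope strengthens suppression for a given gradient value.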
The control unit 16 outputs the threshold value 31 and the variation 32 to the suppression coefficient calculation unit 14 as the setting information. In other words, the control unit 16 can control the gradient value range subjected to color blur suppression as well as the strength of color blur suppression with respect to the gradient value.
A luminance converter 41 generates a luminance signal Y from the digital RGB signals output from the AFE 12. A Cr converter 42 and a Cb converter 43 generate color-difference signals Cr and Cb, respectively, from the digital RGB signals. Since the technique for generating the luminance signal Y and the color-difference signals Cr and Cb from the digital RGB signals is well known, detailed descriptions thereof are not included herein.
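For illustration only, one commonly used conversion is the ITU-R BT.601 form sketched below; the specification only states that a well-known technique is used, so these particular coefficients are an assumption:

```python
def rgb_to_ycrcb(r, g, b):
    """ITU-R BT.601-style conversion (an assumed, well-known example)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.5 * (r - y) / (1.0 - 0.299)   # approx. 0.713 * (R - Y)
    cb = 0.5 * (b - y) / (1.0 - 0.114)   # approx. 0.564 * (B - Y)
    return y, cr, cb
```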
From the suppression coefficients Kr and Kg calculated by the suppression coefficient calculation unit 14, the Cr converter 42 calculates a suppression coefficient Kcr for the color-difference signal Cr by using the following formula (1).
Kcr=1−(0.8×Kr+0.2×Kg) (1)
The Cr converter 42 outputs the product of the suppression coefficient Kcr and the color-difference signal Cr as a color-difference signal Cr′ after color blur suppression processing. When the gradient value Sr for the R digital signal and the gradient value Sg for the G digital signal are large enough, both the suppression coefficients Kr and Kg become 1.0 because of the suppression function described above; formula (1) then gives Kcr = 0.0, so the color-difference signal Cr is strongly suppressed.
On the other hand, when both the gradient values Sr and Sg are small enough, both the suppression coefficients Kr and Kg become 0.0 because of the suppression function described above; formula (1) then gives Kcr = 1.0, so the color-difference signal Cr is output as Cr′ without being suppressed.
Further, when the gradient value Sr is small, color blur is not likely to occur regardless of the gradient value Sg. In this case, since the suppression coefficient Kcr is comparatively close to 1, the value of the color-difference signal Cr is output as the color-difference signal Cr′ without significantly being reduced. In this way, the gradient values of an area are used not only to determine whether the area is subjected to color blur suppression but also to determine the degree of color blur suppression processing.
Likewise, from the suppression coefficients Kb and Kg calculated by the suppression coefficient calculation unit 14, the Cb converter 43 calculates a suppression coefficient Kcb for the color-difference signal Cb by using the following formula (2).
Kcb=1−(0.8×Kb+0.2×Kg) (2)
The Cb converter 43 outputs a product of the suppression coefficient Kcb and the color-difference signal Cb as the color-difference signal Cb′ after color blur suppression processing.
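The two formulas and the subsequent multiplications can be summarized in a short sketch (the function name and argument order are illustrative):

```python
def suppress_color_blur(cr, cb, kr, kg, kb):
    """Apply formulas (1) and (2), then multiply the color-difference signals."""
    kcr = 1.0 - (0.8 * kr + 0.2 * kg)   # formula (1)
    kcb = 1.0 - (0.8 * kb + 0.2 * kg)   # formula (2)
    return kcr * cr, kcb * cb           # Cr', Cb' after color blur suppression
```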
As illustrated in
As a result, the vertical 2-pixel addition readout method applies color blur suppression processing to a wider gradient value range and accordingly to smaller gradient values than the non-addition readout method. In other words, the vertical 2-pixel addition readout method sets up a wider gradient value range to be subjected to color blur suppression processing than the non-addition readout method.
As illustrated in
As described above, according to the present exemplary embodiment, the control unit 16 changes the gradient value range to be subjected to color blur suppression processing depending on whether the image sensor 11 is driven by the addition or non-addition readout method. This configuration makes suitable color blur suppression processing possible even when the method for driving the image sensor 11 is changed between addition readout and non-addition readout.
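A minimal sketch of this switching, assuming the setting information consists of a threshold and a slope; the numeric values below are placeholders, not values from the specification:

```python
def setting_info_for_readout(addition_readout):
    """Choose the threshold value 31 and the variation 32 per driving method.

    Addition readout lowers high-frequency content and hence gradient values,
    so a smaller threshold (a wider suppressed gradient range) is used.
    The numeric values are placeholders only.
    """
    if addition_readout:          # e.g. vertical 2-pixel addition readout
        return {"threshold": 16.0, "slope": 1.0 / 32.0}
    return {"threshold": 32.0, "slope": 1.0 / 64.0}   # non-addition readout
```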
A second exemplary embodiment will be described below. The configuration of an imaging apparatus as an exemplary image processing apparatus according to the second exemplary embodiment is similar to that of the first exemplary embodiment illustrated in
In the present exemplary embodiment, the control unit 16 sets the threshold value 31 in the setting information to a smaller value as the number of addition-averaged pixels in addition readout of the image sensor 11 increases. In other words, in the case of vertical 3-pixel addition readout, the control unit 16 changes the threshold value 31 to a smaller value than in the case of vertical 2-pixel addition readout. Further, the control unit 16 may change the variation 32 to a larger value as the number of addition-averaged pixels increases.
As described above, according to the present exemplary embodiment, the gradient value range to be subjected to color blur suppression processing is changed according to the number of pixels in the case of addition readout of the image sensor 11.
More specifically, when the number of addition-averaged pixels in addition readout of the image sensor 11 is changed from a first number to a second number larger than the first number, the control unit 16 extends the gradient value range to be subjected to color blur suppression processing. This configuration enables suitable color blur suppression processing even when the number of addition-averaged pixels is changed in addition readout of the image sensor 11.
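As a sketch of this behavior, the threshold could simply shrink, and the slope grow, in proportion to the number of addition-averaged pixels; this particular scaling rule and the base values are assumptions for illustration:

```python
def setting_info_for_addition(num_added_pixels, base_threshold=32.0, base_slope=1.0 / 64.0):
    """Smaller threshold / larger slope as the number of addition-averaged pixels grows.

    num_added_pixels = 1 corresponds to non-addition readout; 2 and 3 correspond
    to vertical 2-pixel and 3-pixel addition readout, respectively.
    """
    return {
        "threshold": base_threshold / num_added_pixels,
        "slope": base_slope * num_added_pixels,
    }
```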
A third exemplary embodiment will be described below.
The present exemplary embodiment differs from the first exemplary embodiment (the block diagram of
Each of the band limiters 221 to 223 changes a filter coefficient according to the instruction of the control unit 16 to change the pass band illustrated in
In the present exemplary embodiment, the image sensor 11 changes the driving method between vertical 2-pixel addition readout and non-addition readout. As illustrated in
As mentioned in the first exemplary embodiment,
Therefore, when the image sensor 11 performs vertical 2-pixel addition readout, the control unit 16 instructs each of the band limiters 221 to 223 to change its filter coefficient so as to provide a narrower pass band than that used for non-addition readout.
When the image data obtained with vertical 2-pixel addition readout illustrated in
Since the image data before passing the filter illustrated in
Then, as illustrated in
Likewise, as illustrated in
Similarly to the second exemplary embodiment, the filter coefficient of each of the band limiters 221 to 223 may be changed when the number of addition-averaged pixels in addition readout is changed. The gradient of the image data decreases as the number of addition-averaged pixels increases. Therefore, the larger the number of addition-averaged pixels, the narrower the pass band of the band limiters 221 to 223 needs to be set.
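One conceivable realization of such a band limiter is a vertical box (averaging) filter applied to each color plane before gradient detection, with more taps, and hence a narrower pass band, for a larger number of addition-averaged pixels; this specific filter and tap rule are assumptions, not taken from the specification:

```python
import numpy as np

def band_limit(plane, num_added_pixels):
    """Low-pass one color plane before gradient detection.

    num_added_pixels = 1 corresponds to non-addition readout (pass through);
    larger values use a longer vertical box filter, i.e. a narrower pass band.
    """
    plane = plane.astype(np.float32)
    taps = 2 * num_added_pixels - 1          # 1, 3, 5, ... taps
    if taps <= 1:
        return plane
    kernel = np.ones(taps, dtype=np.float32) / taps
    half = taps // 2
    padded = np.pad(plane, ((half, half), (0, 0)), mode="edge")
    out = np.zeros_like(plane)
    # Accumulate the weighted vertical neighborhood, one kernel tap at a time.
    for k in range(taps):
        out += kernel[k] * padded[k:k + plane.shape[0], :]
    return out
```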
A fourth exemplary embodiment will be described below.
The present exemplary embodiment differs from the first exemplary embodiment (the imaging apparatus illustrated in
The storage medium 21 stores coded image data together with coded drive information on the readout method of the image sensor used to capture the image data. The coder/decoder 20 reads data from the storage medium 21, decodes it into YCrCb-based image data, and stores the decoded image data in the memory 17 together with the drive information on the readout method of the image sensor.
The control unit 16 acquires the drive information stored in the memory 17 and, similarly to the first exemplary embodiment, outputs setting information to the suppression coefficient calculation unit 14 according to the readout method of the image sensor in the drive information.
The YCrCb-based image data stored in the memory 17 is output to the RGB converter 23 and the suppression processor 24. The RGB converter 23 converts the YCrCb-based image data into RGB-based image data by using a well-known method. Since the technique for generating digital RGB signals from the luminance signal Y and the color-difference signals Cr and Cb is well known, detailed description will not be included herein.
When the RGB image data is obtained, the gradient detectors 131 to 133 calculate gradient values (Sr, Sg, and Sb) for each RGB color, and the suppression coefficient calculation unit 14 calculates suppression coefficients (Kr, Kg, and Kb) for each RGB color of the image data based on these gradient values and on the setting information output from the control unit 16.
The method for calculating the suppression coefficients Kcr and Kcb for the color-difference signals Cr and Cb, and the color blur suppression processing therefor are similar to those described in the first exemplary embodiment. Then, the suppression processor 24 outputs the color-difference signals Cr′ and Cb′ that underwent color blur suppression processing and the luminance signal Y as image data after color blur suppression. The coder/decoder 20 codes the signals Y, Cr′, and Cb′ stored in the memory 17 and stores the coded data in the storage medium 21.
As described above, the present exemplary embodiment can obtain effects similar to those of each of the exemplary embodiments described above, not only as an imaging apparatus but also as an image reproducing apparatus.
A fifth exemplary embodiment will be described below.
The imaging apparatus illustrated in
The above-described exemplary embodiments can be attained not only as the imaging apparatus and image reproducing apparatus described above but also as software executed by a computer (or central processing unit (CPU), microprocessor unit (MPU), etc.) in a system or an apparatus. Color blur suppression processing may also be applied to image data stored in a storage medium.
Therefore, a computer program itself supplied to a computer to attain the above-mentioned exemplary embodiments also embodies the present invention. The computer program for attaining the above-mentioned exemplary embodiments may be stored in a computer-readable storage medium, for example, a floppy disk, hard disk, optical disk, magneto-optical disk (MO), CD-ROM, CD-R, CD-RW, magnetic tape, nonvolatile memory card, ROM, DVD (DVD-ROM or DVD-R), or the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures, and functions.
This application claims priority from Japanese Patent Application No. 2009-134186 filed Jun. 3, 2009, and Japanese Patent Application No. 2010-039068 filed Feb. 24, 2010, which are hereby incorporated by reference herein in their entirety.
Foreign Patent Documents Cited:
Japanese Patent Application Laid-Open No. 2000-076428, published Mar. 2000.
Japanese Patent Application Laid-Open No. 2007-195122, published Aug. 2007.