1. Field of the Invention
The present invention relates to an image processing method, and relates in particular to color suppression of photographed images.
2. Description of the Related Art
In recent years, digital cameras that use CCDs (Charge Coupled Devices) to shoot images have become widespread. The transmittance of the color filter of the imaging element differs by color, and the signal values at which the colors saturate differ by color as well. Thus, because the signal values of the colors reach saturation not simultaneously but sequentially, high luminance areas of a photographed image can exhibit the phenomenon of “false color,” whereby a color that is not inherently present in the area is applied to it, so that color is not reproduced accurately.
In order to address this problem, there has been disclosed a technique of using luminance information in a photographed image to suppress color signal gain in high luminance areas, in order to reduce chroma and suppress the occurrence of false color.
However, with the technique mentioned above, since gain is suppressed over a wide spectrum, chroma becomes reduced even for color signals that are not at saturation. Accordingly, in a high luminance area having a luminance signal above a certain level, chroma is suppressed completely so that tone gradations cannot be preserved, resulting in the problem of color loss.
With the foregoing in view, it is an object of the present invention to restrain the occurrence of false color, as well as preserve tone information of the original color signals.
In order to solve at least in part the problem described above, the first invention has the following structure. Specifically, it essentially resides in an image processing method of processing image data presented in a color space defined by color components of three colors, the method comprising:
According to the present invention, the occurrence of false color can be suppressed, while carrying out tone reproduction that preserves chroma gradations of the original signals.
The invention in a second arrangement thereof essentially resides in an image processing method implemented by an image processing device for processing, in a color coordinate system defined by color components of three colors, image data in which a bright achromatic color has been photographed, the method comprising:
According to the present invention, the image processing device is able to map not only saturated color signals but also the color signals surrounding them while preserving the continuity of chroma. Accordingly, the image processing device is able to restrain the occurrence of false color, whereby areas of high luminance become discolored when a bright achromatic color is photographed, and to reproduce the chroma gradations of the original color signals with high accuracy.
These and other objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments with the accompanying drawings.
FIGS. 3a and 3b schematically illustrate the color suppression process in the first embodiment;
FIGS. 6a and 6b schematically illustrate the mapping process in the first embodiment;
FIGS. 7a and 7b schematically illustrate the mapping process in the first embodiment;
FIGS. 8a and 8b schematically illustrate the mapping process in the first embodiment;
FIGS. 9a and 9b schematically illustrate a color space after the color suppression process in the first embodiment;
FIG. 11a is an illustration showing the environmental conditions in the second embodiment;
FIG. 11b is an illustration showing the structure of the image data in the second embodiment;
FIG. 11c is an illustration showing the Exif information in the second embodiment;
a is a functional block diagram showing the gain level setting module in the second embodiment;
b is an illustration showing a gain table in the second embodiment;
A1. System Arrangement:
The image processing device 100 is an ordinary computer. When image data 10 is input to the image processing device 100 from the digital camera 200, the image processing device 100 performs image processing, including color suppression processing, on the image data 10 and converts it to image data 20 of bitmap format.
The functional blocks of the image processing device 100 are also shown in the drawing. The image processing device 100 has a main control module 101, an image input module 102, an image processing module 103, and an image storage unit 104. The image storage unit 104 has a predetermined area of the hard disk drive 107 of the image processing device 100. The other functional blocks are implemented as software, and are controlled by the main control module 101. The functional blocks could instead be implemented as hardware.
The image input module 102 inputs the image data 10 targeted for processing and hands the image data over to the image processing module 103. The image processing module 103 performs image processing, including color suppression processing, on the image data handed over from the image input module 102, and converts it to image data 20. The specifics of the image processing will be described later. The main control module 101 receives the converted image data 20 from the image processing module 103 and records it in the image storage unit 104.
A2. Image Processing:
When input of image data 10 is detected, the defective pixel correcting module 150 performs processing on the image data 10 to correct any defective pixels. A defective pixel refers, in the case that a defective element is included in the pixels that make up the image sensor, to the pixel in the shot image data 10 that corresponds to this defective element; such a pixel holds inaccurate signal information. On the basis of the signal information of the pixels surrounding the defective pixel, the defective pixel correcting module 150 corrects the pixel value of the defective pixel.
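The correction rule itself is not specified here; the following is a minimal sketch, in Python, of one common approach, assuming the coordinates of the defective pixels are known in advance (for example, from a sensor defect map) and replacing each defective pixel with the average of its neighbors. The function name and the plain 3x3 neighborhood are illustrative, not part of the embodiment.

import numpy as np

def correct_defective_pixels(raw, defect_coords):
    """Replace each defective pixel with the mean of its 3x3 neighbors
    (the defective pixel itself is excluded from the average)."""
    corrected = raw.astype(np.float64).copy()
    h, w = corrected.shape
    for y, x in defect_coords:
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        patch = corrected[ys, xs]
        total = patch.sum() - corrected[y, x]
        count = patch.size - 1
        corrected[y, x] = total / count
    return corrected

In a real RAW pipeline the average would normally be taken over neighbors of the same filter color; the sketch omits that detail for brevity.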
The color complement processing module 151 executes color complement processing on the defect-corrected image data. Color complement processing refers to a process of calculating, for each pixel of the image data 10, the color components other than its filter color, filling in the missing data so that every pixel has pixel values for the R, G, and B color components. By means of the complement processing, RGB data in a color space dependent on the device characteristics of the digital camera 200 is created. In the first embodiment, since a CCD is employed as the image sensor, the color space so created shall be termed the CCDRGB color space.
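The interpolation used for color complement processing is not specified here; the following is a minimal Python sketch of simple bilinear interpolation over an assumed RGGB Bayer layout. The layout, kernels, and function name are assumptions made for illustration only.

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Fill in the two missing color components of each pixel by bilinear
    interpolation of same-color neighbors (RGGB layout assumed)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    # Masks marking where each color component was actually sampled.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    g_mask = np.zeros((h, w)); g_mask[0::2, 1::2] = 1; g_mask[1::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
    for ch, (mask, kernel) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        sampled = raw * mask
        # Normalizing by the interpolated mask keeps border pixels correct.
        rgb[..., ch] = convolve(sampled, kernel, mode="mirror") / convolve(mask, kernel, mode="mirror")
    return rgb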
The color suppression processing module 152 executes color suppression processing on the image data 10 having undergone color complement processing. Color suppression processing is a process that modifies the R, G, and B color components to suppress chroma. The color suppression process suppresses the occurrence of false color or color loss due to differences in filter transmittance by color component. The process constitutes a substantial part of the invention, and will be described in detail later.
The color conversion processing module 153 executes color conversion processing on the image data 10 having undergone color suppression processing. Color conversion processing is a process for converting to a color space appropriate for a particular purpose. In the first embodiment, since the CCDRGB color space is employed, conversion to the RGB color space is performed.
The inverse gamma correction processing module 154, using a gamma value that indicates the characteristics of the output device, executes inverse gamma correction on the image data 10 having undergone color conversion processing.
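As a minimal sketch, such a correction can be expressed as raising normalized values to the power 1/gamma so that the output device's gamma characteristic is pre-compensated; the gamma value of 2.2 and the sign convention are assumptions for illustration, since they are not given here.

import numpy as np

def inverse_gamma_correct(rgb, gamma=2.2):
    """Pre-compensate an output device whose response follows the given gamma
    by applying the exponent 1/gamma to normalized values in [0, 1]."""
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)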
The image processing module 103 performs the processes mentioned above, converting the RAW format image data 10 to image data 20 of bitmap format. The image data 20 is recorded in the image storage unit 104.
In addition to the processes mentioned above, the image processing module 103 may also execute typical picture quality adjustments such as color balance (white balance), contrast, hue, and sharpness corrections, and the like.
A3. Color Suppression Process:
FIGS. 3a and 3b are illustrations explaining, in model fashion, the color suppression process.
In the CCD which constitutes the image sensor of the first embodiment there is disposed a filter that transmits the RGB color components. Since transmittance through the filter differs by color component, the color components do not reach saturation at the same level, but instead reach saturation individually. In the color space 300 of the first embodiment, when a bright achromatic color such as white is photographed, saturation is reached in the order G component, B component, R component. As shown in the drawing, let the point at which the G component reaches saturation be designated as saturation point P1, the point at which the B component reaches saturation as saturation point P2, and the point at which the R component reaches saturation as saturation point P3. Let the saturation segment joining saturation points P1 and P2 be designated saturation segment 320, and similarly let the saturation segment joining saturation points P2 and P3 be designated saturation segment 330.
Signal values on the saturation segments 320, 330 are mapped to corresponding signal values on the achromatic axis 310. Specifically, this is achieved by making the color component ratios of signals on the saturation segments identical to the color component ratios of signals on the achromatic axis 310. Since the G component is at saturation on the saturation segment 320, the G component is varied so that the ratios of the three color components become identical to those of signals on the achromatic axis. On the saturation segment 330, since the G component and the B component are both at saturation, the G component and the B component are varied in the same way to set color component ratios identical to those of signals on the achromatic axis.
FIG. 3b is a diagonal view of the color space 300.
The color suppression processing module 152 of the first embodiment maps signal values on the saturation segments 320, 330, as well as color signals present around the saturation segments 320, 330. The displacement level entailed in mapping the color signals present around the saturation segments 320, 330 is determined so as to be inversely proportional to the distance from the saturation segments. That is, signals in areas in proximity to the saturation segments 320, 330 are mapped toward the achromatic axis 310 in a manner that preserves the continuity of the original signals. The chroma of signals on the saturation segments 320, 330 changes precipitously between before and after the mapping process, so by also mapping the nearby signals, the chroma gradations of the original signals on the saturation segments 320, 330 and of the signals around them can be preserved with a high degree of accuracy. The color suppression process of the first embodiment will be described in greater detail below.
A4. Mapping Process:
In order to reduce the volume of calculations, the color suppression processing module 152 initially performs normalization so that when signal values of the three colors in the color space 300 are equal quantities, achromatic color results (Step S10). The computational equation is given below.
ra=r/Rg;
ga=g/Gg;
ba=b/Bg;
Rsa=Rs/Rg;
Gsa=Gs/Gg;
Bsa=Bs/Bg; Eq. (1)
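The normalization of Eq. (1) translates directly into code. The following minimal Python sketch assumes that r, g, b are the color components of a signal, Rs, Gs, Bs the saturation values of the respective components, and Rg, Gg, Bg the color component ratios obtained when an achromatic subject is photographed; the function name is illustrative, not part of the embodiment.

def normalize(r, g, b, Rs, Gs, Bs, Rg, Gg, Bg):
    """Eq. (1) (Step S10): scale each component so that equal normalized
    values correspond to achromatic color."""
    ra, ga, ba = r / Rg, g / Gg, b / Bg
    Rsa, Gsa, Bsa = Rs / Rg, Gs / Gg, Bs / Bg
    return ra, ga, ba, Rsa, Gsa, Bsa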
The description now continues, referring back to the flowchart. The color suppression processing module 152 selects, as candidates for mapping, the color signals in the area that satisfies the following Eq. (2) (Step S11).
ra+ba>2*Bsa and
ra≦Rsa and
ba≦Bsa; Eq. (2)
Next, the color suppression processing module 152 expands the B component of the color signals in the area selected by Eq. (2) (Step S12). The color suppression processing module 152 executes the process by applying Eq. (3).
ra′=ra;
ga′=ga;
ba′=ra+2*ba−2*Bsa; Eq. (3)
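The selection of Eq. (2) and the expansion of Eq. (3) can be expressed as array operations. The following is a minimal Python sketch operating on the normalized signals of Eq. (1); it is an illustration rather than the embodiment's implementation, and the function name is assumed.

import numpy as np

def expand_b_component(ra, ga, ba, Rsa, Bsa):
    """Steps S11-S12: select the area of Eq. (2) and expand its B component
    according to Eq. (3); signals outside the area are left unchanged."""
    in_area = (ra + ba > 2 * Bsa) & (ra <= Rsa) & (ba <= Bsa)   # Eq. (2)
    ba_new = np.where(in_area, ra + 2 * ba - 2 * Bsa, ba)       # Eq. (3)
    return ra, ga, ba_new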
FIGS. 6a and 6b schematically show the mapping process performed by the color suppression processing module 152; the process is performed in Steps S11-S12. In the drawing, the boundary of the area selected by Eq. (2) is given by
ra+ba=2*Bsa;
That is, the area represented by Eq. (2) is the area 510 indicated by hatching in the drawing, containing signals whose R and B components have values included in P2-P3-Q2′. Eq. (3) is an equation for varying the B component, irrespective of the G component, to shift any signal D1(ra, ba) in the area 510 to D1′(ra, ba′).
FIG. 6b shows the color space 400 after Eq. (3) has been applied. As shown, signals within the candidate area for mapping 800, included in Q2-P3-P2-Q5-Q6-Q4′, are shifted in the direction of arrow A and mapped into the area included in Q2-P3-P2-Q5-Q6-Q4.
The description now continues, referring back to the flowchart. The color suppression processing module 152 next selects the color signals in the area that satisfies the following Eq. (4) (Step S13).
ra+ga>2*Gsa and
ra−ba≦0 and
ra<Rsa and
ga<Gsa; Eq. (4)
The color suppression processing module 152 now expands the G component of the color signals in the area selected by Eq. (4) (Step S17). The color suppression processing module 152 executes the process by applying Eq. (8).
ra′=ra;
ga′=ra+2*ga−2*Gsa;
ba′=ba; Eq. (8)
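Eqs. (4) and (8) have the same structure as Eqs. (2) and (3), with the G component expanded in place of the B component. A minimal Python sketch, again with an assumed function name, follows.

import numpy as np

def expand_g_component_rg(ra, ga, ba, Rsa, Gsa):
    """Steps S13 and S17: select the area of Eq. (4) and expand its
    G component according to Eq. (8)."""
    in_area = ((ra + ga > 2 * Gsa) & (ra - ba <= 0)
               & (ra < Rsa) & (ga < Gsa))                       # Eq. (4)
    ga_new = np.where(in_area, ra + 2 * ga - 2 * Gsa, ga)       # Eq. (8)
    return ra, ga_new, ba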
FIGS. 7a and 7b are illustrations showing the mapping process in the first embodiment; the process is performed in Steps S13 and S17. In the drawing, the boundary of the area selected by Eq. (4) is given by
ra+ga=2*Gsa;
That is, the area represented by Eq. (4) includes the areas 520 and 610 indicated by hatching in FIG. 7a.
The description now continues, referring back to the flowchart. The color suppression processing module 152 then selects the color signals in the area that satisfies the following Eq. (5) (Step S14).
ga+ba>2*Gsa and
ra−ba<0 and
ga<Gsa and
ba<Bsa; Eq. (5)
The color suppression processing module 152 now expands the G component of the color signals in the area selected by Eq. (5) (Step S18). The color suppression processing module 152 executes the process by applying Eq. (6).
ra′=ra;
ga′=ra+2*ga−2*Gsa;
ba′=ba; Eq. (6)
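Eqs. (5) and (6) repeat the same operation for the G-B plane. The sketch below reproduces the equations as given above (in particular, Eq. (6) expands the G component using the R component, as written); the function name is assumed.

import numpy as np

def expand_g_component_gb(ra, ga, ba, Gsa, Bsa):
    """Select the area of Eq. (5) and expand its G component
    according to Eq. (6); other signals are left unchanged."""
    in_area = ((ga + ba > 2 * Gsa) & (ra - ba < 0)
               & (ga < Gsa) & (ba < Bsa))                       # Eq. (5)
    ga_new = np.where(in_area, ra + 2 * ga - 2 * Gsa, ga)       # Eq. (6)
    return ra, ga_new, ba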
FIGS. 8a and 8b are illustrations explaining, in model fashion, the mapping process in the first embodiment; the process is performed in Steps S14-S15. In the drawing, the boundary of the area selected by Eq. (5) is given by
ga+ba=2*Gsa;
That is, the area represented by Eq. (5) includes the areas 530 and 710 indicated by hatching in FIG. 8a.
The description now continues, referring back to the flowchart. Color signals outside the areas selected above are left unchanged, as expressed by the following Eq. (7).
ra′=ra;
ga′=ga;
ba′=ba; Eq. (7)
The color suppression processing module 152 performs a process to return to the original color space 300 from the normalized color space 400 (Step S18). The color suppression processing module 152 executes the process by applying Eq. (9).
r′=ra′*Rg;
g′=ga′*Gg;
b′=ba′*Bg; Eq. (9)
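Eq. (9) simply undoes the normalization of Eq. (1). A minimal Python sketch follows; in practice it is applied after the expansion steps sketched above.

def denormalize(ra2, ga2, ba2, Rg, Gg, Bg):
    """Eq. (9) (Step S18): return the normalized, color-suppressed signals
    (ra', ga', ba') to the original color space 300."""
    return ra2 * Rg, ga2 * Gg, ba2 * Bg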
The color space 300 on which the above process has been performed is shown in FIGS. 9a and 9b.
The color suppression processing module 152 compresses and transfers the image data 10 subjected to the above processing, so that conversion processing can be performed in the color conversion processing module 153.
According to the image processing device of the first embodiment set forth hereinabove, it is possible to restrain the occurrence of false color while preserving chroma gradations, even when a bright achromatic color is photographed.
The mapping model 900 illustrates the result of suppressing chroma over a wide spectrum. As shown in the drawing, all signals in an area 902 of luminance k2 and above (the hatched area) and in a high luminance area 901 undergo a constant level of decline in chroma. Thus, all signals in the high luminance area 901 of luminance k1 and above (the gray area) have chroma of “0” and are mapped to the achromatic axis Y. Accordingly, chroma changes precipitously, and color loss sometimes occurs.
In the present invention, as shown by the mapping model 910, the displacement level entailed in mapping varies continuously. Since chroma gradations can be preserved with high accuracy before and after the mapping, the image processing device can perform natural-looking tone reproduction and can restrain the occurrence of false color.
In the present invention, since the candidate area for mapping is set on the basis of the achromatic axis and the saturation segments, the area can be set easily and the calculation load can be reduced.
In the first embodiment, the color suppression process is performed using Rg, Gg, Bg, which denote the R, G, B color component ratios when an achromatic color subject is photographed. In the second embodiment, the color suppression process is performed with consideration given to the gain levels used in the white balance process. The white balance process is a process for preventing a photographed image from assuming a color cast different from the actual color of the subject due to the illuminant, and is executed by multiplying the color component values by different gain levels depending on the illuminant.
In this second embodiment, the process indicated below is performed. Using the gain levels established in association with a particular specified illuminant, an adjustment is performed such that the R, G, B color component values of coordinate points on the achromatic axis become identical, so that the achromatic axis coincides with the achromatic axis defined on the basis of the specified illuminant. The color suppression process is then performed on the basis of the achromatic axis so established. Different gain levels are established depending on the illuminant.
In this second embodiment, the digital camera 200 is equipped with a function whereby, on the basis of user input, illuminant data that identifies the illuminant at the time of shooting is set in the image data 30. On the basis of the environmental condition set in the image data 30, the image processing device 100 carries out the white balance process. The following detailed description of the environmental conditions makes reference to FIGS. 11a to 11c.
B1. Illuminant Information:
FIGS. 11a, 11b, and 11c explain the environmental conditions in the second embodiment.
As shown in the drawing, the environmental conditions “sunny”, “cloudy” etc. are displayed, together with radio buttons 221, 222, 223, 224. The user operates the control buttons 201-205 to select the desired environmental condition. In the second embodiment, “sunny” has been selected as the environmental condition.
FIG. 11b is an illustration showing an exemplary data arrangement of the image data 30. The image data 30 has Exif information 31 and data 32. The environmental condition set at the time of shooting is recorded in the Exif information 31. The following description of an example of the Exif information content makes reference to FIG. 11c.
FIG. 11c is an illustration showing exemplary settings in the Exif information 31. As illustrated, the Exif information 31 records information relating to the image data 30 such as “image title” and “shooting date”, as well as the shooting environment at the time the image data 30 was shot, such as “illuminant information”, “color space information”, “ISO sensitivity”, “exposure time”, “subject distance”, “focal distance” and the like. As indicated by the broken lines in the drawing, the “illuminant information” of the image data 30 has been set to “sunny.” The Exif information 31 can be easily referenced and various values can be acquired from it, making it suitable as material for deciding illuminant information.
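As one possible illustration of reading such metadata, the short Python sketch below uses Pillow to look up the Exif “LightSource” tag and map it to an illuminant label. Pillow, the tag constants, and the label mapping are assumptions made for illustration; the embodiment itself does not prescribe any particular library.

from PIL import Image

EXIF_IFD_POINTER = 0x8769   # tag pointing to the Exif sub-IFD
LIGHT_SOURCE_TAG = 0x9208   # Exif "LightSource" tag
# Illustrative subset of Exif LightSource codes mapped to the labels used here.
LIGHT_SOURCE_LABELS = {1: "sunny", 2: "fluorescent", 3: "tungsten", 10: "cloudy"}

def read_illuminant(path):
    """Return an illuminant label read from an image file's Exif metadata."""
    exif = Image.open(path).getexif()
    exif_ifd = exif.get_ifd(EXIF_IFD_POINTER)
    return LIGHT_SOURCE_LABELS.get(exif_ifd.get(LIGHT_SOURCE_TAG), "unknown")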
B2. Functional Blocks:
The gain level specification information acquiring module 160 refers to the Exif information 31 of the image data 30 to acquire the “illuminant information.” In this second embodiment, “sunny” is recorded in the “illuminant information” of the Exif information 31.
The gain level specifying module 161 has a gain table 170. The specifics of the gain table 170 are shown in the drawing.
The gain level specifying module 161 acquires the “illuminant information” from the gain level specification information acquiring module 160, and identifies the gain levels corresponding to the acquired “illuminant information.” In this second embodiment, since the gain level specifying module 161 has acquired the information “sunny” as the illuminant information from the gain level specification information acquiring module 160, the gain levels for the illuminant type “sunny” are used. As shown in the drawing, for image data set with the illuminant type “sunny”, the gain levels applied when carrying out the white balance process are an “Rgain” of “1.90”, a “Ggain” of “1.00”, and a “Bgain” of “1.71.” The specifics of the color suppression process using the gain levels identified in this way are described below.
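A minimal Python sketch of the gain table 170 and the lookup performed by the gain level specifying module 161 is given below; only the “sunny” entry, whose values appear above, is filled in, and the data structure and function name are illustrative.

GAIN_TABLE = {
    # Values for "sunny" are those given above; entries for the other
    # illuminant types ("cloudy", etc.) would be added in the same form.
    "sunny": {"Rgain": 1.90, "Ggain": 1.00, "Bgain": 1.71},
}

def lookup_gain_levels(illuminant_info):
    """Return the white balance gain levels recorded for an illuminant type."""
    return GAIN_TABLE[illuminant_info]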
B3. Color Suppression Process:
The color suppression processing module 152 applies the following Eq. (10) to make identical the color component ratios of the three colors R, G, B of coordinate points on the achromatic axis (Step S20).
Rg=1/Rgain;
Gg=1/Ggain;
Bg=1/Bgain; Eq.(10)
Using Rg, Gg, Bg calculated with Eq. (10), the color suppression processing module 152 executes the color suppression process (Steps S21-S29). The process from Step S21 to Step S29 is similar to the process of Steps S10 to S17 in the first embodiment.
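Eq. (10) can be sketched as a one-line conversion from the white balance gains to the ratios Rg, Gg, Bg used by the color suppression process; the function name below is illustrative.

def achromatic_ratios_from_gains(Rgain, Ggain, Bgain):
    """Eq. (10) (Step S20): derive Rg, Gg, Bg from the white balance gain
    levels so that the achromatic axis matches the specified illuminant."""
    return 1.0 / Rgain, 1.0 / Ggain, 1.0 / Bgain

# For the "sunny" entry of the gain table:
# Rg, Gg, Bg = achromatic_ratios_from_gains(1.90, 1.00, 1.71)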
The color suppression processing module 152 applies the following Eq. (11) to calculate coordinate points of the color space after the color suppression process (Step S29).
r′=ra′;
g′=ga′;
b′=ba′; Eq. (11)
According to the image processing system of the second embodiment described hereinabove, using the gain levels employed in the white balance process, adjustments are made so that the color component values of the three colors of coordinate points on the achromatic axis become identical, and the color suppression process is then executed on the basis of the adjusted achromatic axis. Accordingly, for image data shot under various illuminants, the color suppression process can be performed on the basis of the appropriate achromatic axis, so that tone reproduction can be performed with high accuracy.
Also, in this second embodiment, since the illuminant information is acquired from the Exif information recorded in the image data, the illuminant information can be acquired easily, and the processing load on the image processing device can be reduced.
C. Variations:
(1):
In the first embodiment, saturation takes place in the order G, B, R, but the invention is not limited to this order. With the color suppression processing method of the present invention, it is possible to achieve natural-looking tone reproduction regardless of the order of saturation.
(2):
In the first embodiment, an RGB color coordinate system is employed as the color coordinate system based on color signals of three colors, but the invention is not limited to this, it being possible to employ other color coordinate systems. In the first embodiment, an area is set as a candidate for mapping on the basis of the saturation segments and the achromatic axis, but it would be acceptable instead for the user to specify an arbitrary area. An area of less than a predetermined distance from the saturation segments, or the like, would be acceptable as well.
(3):
In the second embodiment, the illuminant information set at the time of shooting the image data is recorded in the Exif information, and reference is made to it to acquire the illuminant information and set the gain levels; however, this arrangement is not limiting. For example, the illuminant may be set arbitrarily at the time of image processing by the image processing device 100.
According to this variation, even if illuminant information has not been set at the time of shooting, the illuminant information can be set easily after the fact to set the gain levels, so that convenience is improved.
(4):
In the second embodiment, making reference to illuminant information recorded in Exif information, gain levels are set on the basis of a gain table 170 provided in advance, but this arrangement is not limiting. For example, it would be acceptable to analyze pixel values of image data and calculate gain levels on the basis of the results of the analysis. In this case, this could be realized by means of a process such as the following.
The gain level setting module 155 samples pixels in the image data 30 in proximity to where an achromatic subject is photographed (Step S30), and calculates average values of the R, G, B color component values of the pixel group (Step S31). The gain level setting module 155 then calculates a gain level for each of the R, G, B color components such that the calculated average values become equal (Step S32).
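A minimal Python sketch of Steps S30-S32 follows. Using the G average as the reference (matching the Ggain of 1.00 in the gain table) is an assumption made for illustration; the text itself only requires that the averaged values become equal.

import numpy as np

def gains_from_achromatic_samples(samples):
    """Given an (N, 3) array of R, G, B values sampled near an achromatic
    subject, compute per-channel gains that make the channel averages equal."""
    avg_r, avg_g, avg_b = np.asarray(samples, dtype=np.float64).mean(axis=0)
    return avg_g / avg_r, 1.0, avg_g / avg_b  # Rgain, Ggain, Bgain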
Using the gain levels calculated by means of the above process, the color suppression processing module 152 executes the color suppression process.
According to this variation, since appropriate gain levels can be calculated flexibly, the accuracy of tone reproduction can be improved.
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
The Japanese patent applications that form the basis of the priority claim of this application are incorporated herein by reference:
Number | Date | Country | Kind
2004-101896 | Mar 2004 | JP | national
2004-306417 | Oct 2004 | JP | national