The present invention relates to an image processing apparatus and method, and a method of manufacturing the image processing apparatus. More particularly, the present invention relates to an image processing apparatus and method in which more faithful colors are reproduced and noise is reduced, and to a method of manufacturing the image processing apparatus.
In recent years, image processing apparatuses (digital cameras, color scanners, etc.) intended for consumers, and image processing software have come into wide use, and the number of users who edit, by themselves, images obtained by, for example, taking pictures has increased.
Along with this situation, there has also been a very strong demand for higher image quality (better color reproduction, reduced noise, etc.). At present, more than half of users cite good image quality as the first consideration when purchasing a digital camera or similar device.
In a digital camera, generally, a color filter 1 of the three primary colors RGB, shown in
An offset correction processing section 21 removes offset components contained in an image signal supplied from a front end 13 for performing a predetermined process on a signal obtained by the CCD imaging device, and outputs the obtained image signal to a white-balance correction processing section 22. The white-balance correction processing section 22 corrects the balance of each color on the basis of the color temperature of the image signal supplied from the offset correction processing section 21 and the difference in the sensitivity of each filter of the color filter 1. The color signal obtained as a result of a correction being made by the white-balance correction processing section 22 is output to a gamma correction processing section 23. The gamma correction processing section 23 makes gamma correction on the signal supplied from the white-balance correction processing section 22, and outputs the obtained signal to a vertical-direction time-coincidence processing section 24. The vertical-direction time-coincidence processing section 24 is provided with a delay device, so that signals having vertical deviations in time, which are supplied from the gamma correction processing section 23, are made time coincident.
An RGB signal generation processing section 25 performs an interpolation process for interpolating the color signal supplied from the vertical-direction time-coincidence processing section 24 in the phase of the same space, a noise removal process for removing noise components of the signal, a filtering process for limiting the signal band, and a high-frequency correction process for correcting high-frequency components of the signal band, and outputs the obtained RGB signals to a luminance signal generation processing section 26 and a color-difference signal generation processing section 27.
The luminance signal generation processing section 26 combines the RGB signals supplied from the RGB signal generation processing section 25 at a predetermined combination ratio in order to generate a luminance signal. The color-difference signal generation processing section 27 likewise combines the RGB signals supplied from the RGB signal generation processing section 25 at a predetermined combination ratio in order to generate color-difference signals (Cb, Cr). The luminance signal generated by the luminance signal generation processing section 26 and the color-difference signals generated by the color-difference signal generation processing section 27 are output to, for example, a monitor provided outside the signal processing section 11.
In this manner, it is common practice to perform image processing on the original signal by a linear transform after gamma processing has been performed thereon.
As a condition for determining a color filter, firstly, “color reproduction characteristics” for reproducing a color faithful to how the color appears to the eyes of a human being is one example. These “color reproduction characteristics” are formed of an “appearance of color” meaning that the color is brought closer to a color which is seen by the eyes of a human being, and “color discrimination characteristics” (metamerism matching) meaning that colors which are seen as different by the eyes of a human being are reproduced as different colors and colors which are seen as the same are reproduced as the same color. Secondly, the satisfaction of “physical limitations” when producing a filter, such as the spectral components having positive sensitivity and spectral sensitivity characteristics having one peak, is another example. Thirdly, a consideration of “noise reduction characteristics” is a further example.
In order to produce and evaluate a color filter with importance placed on “color reproduction characteristics”, hitherto, a filter evaluation coefficient such as a q factor, a μ factor, or an FOM (Figure of Merit) has been used. These coefficients take a value of 0 to 1; the closer the spectral sensitivity characteristics of the color filter are to a linear transform of the spectral sensitivity characteristics (color matching functions) of the eyes of a human being, the closer the value of the coefficients is to 1. In order to bring the values of these coefficients closer to 1, the spectral sensitivity is made to satisfy the Luther condition.
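As a minimal illustration of the idea behind the q factor, it can be viewed as the fraction of a filter's spectral sensitivity energy that lies in the subspace spanned by the color matching functions: a filter that is exactly a linear transform of the color matching functions scores 1. The sketch below uses hypothetical Gaussian stand-ins for the color matching functions and a 10 nm sampling grid; these specifics, and the function name `q_factor`, are assumptions for illustration only, not data from this document.

```python
import numpy as np

def q_factor(filter_sens, cmfs):
    """Neugebauer-style q factor: squared norm of the projection of the
    filter's spectral sensitivity onto the subspace spanned by the color
    matching functions, divided by its own squared norm."""
    # Orthonormal basis for the CMF subspace via QR decomposition
    q_basis, _ = np.linalg.qr(cmfs.T)           # shape: (n_wavelengths, 3)
    proj = q_basis @ (q_basis.T @ filter_sens)  # projection onto the subspace
    return float(proj @ proj) / float(filter_sens @ filter_sens)

# Toy setup: 31 wavelength samples (400-700 nm in 10 nm steps)
wl = np.linspace(400, 700, 31)
# Hypothetical Gaussian stand-ins for the x, y, z color matching functions
cmfs = np.vstack([np.exp(-((wl - c) / 40.0) ** 2) for c in (600, 550, 450)])

# A filter lying inside the CMF subspace scores 1 (up to rounding);
# any component outside the subspace lowers the score
ideal = cmfs[1].copy()  # exactly the y-bar stand-in curve
print(q_factor(ideal, cmfs))  # approximately 1.0
```

A filter with an oscillating component that the three smooth basis curves cannot represent would score strictly below 1, which is the sense in which the coefficient measures closeness to the Luther condition.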
However, if the color filter is designed so as to satisfy the Luther condition, the color filter has negative spectral components or becomes such that a plurality of peak values occur, as shown in
Therefore, when the color filter is designed by considering the above-described “physical limitations” in addition to the Luther condition, the spectral sensitivity characteristics usually become characteristics, shown in
However, in a filter having spectral sensitivity characteristics shown in
Therefore, in order that “noise reduction characteristics” be satisfied, it is considered that the portion where the spectral sensitivity characteristics of R overlap the spectral sensitivity characteristics of G is decreased even if the “color reproduction characteristics” is sacrificed somewhat, and, for example, the filter is made to have the spectral sensitivity characteristics shown in
However, in the case of a filter having such characteristics, there is a problem in that, for example, so-called “color discrimination characteristics” are degraded, such as objects which are seen to the eyes as having different colors being photographed as the same color by a digital camera.
The degradation of the “color discrimination characteristics” is further described as follows. That is,
In
Furthermore, in color filter evaluation using a q factor, a μ factor, or an FOM, “noise reduction characteristics” are not considered, so a filter may be undesirable from the viewpoint of “noise reduction characteristics”. Nevertheless, the highest evaluation (a coefficient value of 1) is given to a filter that satisfies both “color reproduction characteristics” and “physical limitations” (the filter shown in
The present invention has been made in view of such circumstances. The present invention aims to be capable of reproducing more faithful colors and reducing noise.
The image processing apparatus of the present invention includes: extraction means for extracting first to third light of the three primary colors, and fourth light having a high correlation with the second light among the first to third light of the three primary colors; conversion means for converting the first to fourth light extracted by the extraction means into corresponding first to fourth color signals; and signal generation means for generating fifth to seventh color signals corresponding to the three primary colors on the basis of the first to fourth color signals, wherein the signal generation means generates the fifth to seventh color signals on the basis of a conversion equation provided to minimize the difference, at a predetermined evaluation value, between a reference value computed in accordance with a predetermined color patch and an output value computed by spectral sensitivity characteristics of the extraction means in accordance with the color patch.
The extraction means for extracting the first to fourth light may have a unit composed of first to fourth extraction sections for extracting the first to fourth light, respectively, and the second extraction section and the fourth extraction section for extracting the second light and the fourth light, respectively, may be positioned diagonally at the unit.
The second extraction section and the fourth extraction section may have spectral sensitivity characteristics which closely resemble visible sensitivity characteristics of a luminance signal.
The first to third light of the three primary colors may be red, green, and blue light, respectively, and the fourth light may be green light.
The difference may be a difference in an XYZ color space.
The difference may be a difference in a uniform perceptual color space.
The difference may be propagation noise for color separation.
The image processing method for use with an image processing apparatus of the present invention includes: an extraction step of extracting first to third light of the three primary colors, and fourth light having a high correlation with the second light among the first to third light of the three primary colors; a conversion step of converting the first to fourth light extracted in the process of the extraction step into corresponding first to fourth color signals; and a signal generation step of generating fifth to seventh color signals corresponding to the three primary colors on the basis of the first to fourth color signals, wherein the signal generation step generates the fifth to seventh color signals on the basis of a conversion equation provided to minimize the difference, at a predetermined evaluation value, between a reference value computed in accordance with a predetermined color patch and an output value computed by spectral sensitivity characteristics of the extraction means in accordance with the color patch.
The method of manufacturing an image processing apparatus of the present invention includes: a first step of providing conversion means; and a second step of producing, in front of the conversion means provided in the process of the first step, extraction means for extracting first to third light of the three primary colors, and fourth light having a high correlation with the second light among the first to third light of the three primary colors by determining spectral sensitivity characteristics using a predetermined evaluation coefficient.
In the second step, as the extraction means, a unit composed of first to fourth extraction sections for extracting the first to fourth light, respectively, may be formed, and the second extraction section and the fourth extraction section for extracting the second light and the fourth light, respectively, may be positioned diagonally at the unit.
The evaluation coefficient may be an evaluation coefficient for approximating the spectral sensitivity characteristics of the second extraction section and the fourth extraction section to visible sensitivity characteristics of a luminance signal.
The evaluation coefficient may be an evaluation coefficient in which noise reduction characteristics as well as color reproduction characteristics are considered.
In the second step, the first to third light of the three primary colors may be red, green, and blue light, respectively, and the fourth light may be green light.
The manufacturing method may further include a third step of producing generation means for generating fifth to seventh color signals corresponding to the three primary colors on the basis of the first to fourth color signals generated by converting the first to fourth light by the conversion means.
In the digital camera shown in
As indicated by the short dashed line in
As will be described in detail later, setting the number of types of colors obtained by the image sensor 45 to four increases the color information obtained. When compared to the case in which only three types of colors (RGB) are obtained, colors can be represented more accurately, and the “color discrimination characteristics”, that is, the reproduction whereby colors which are seen as different by the eyes of a human being are reproduced as different colors and colors which are seen as the same are reproduced as the same color, can be improved.
As can be seen from the visible sensitivity curve shown in
As a filter evaluation coefficient used when the four-color color filter 61 is determined, for example, a UMG (Unified Measure of Goodness) in which both “color reproduction characteristics” and “noise reduction characteristics” are considered is used.
In the evaluation using UMG, the mere satisfaction of the Luther condition by the filter to be evaluated does not increase the evaluation value; the overlap of the spectral sensitivity distributions of the filters is also taken into consideration. Therefore, when compared to a color filter evaluated using a q factor, a μ factor, or an FOM, noise can be reduced even more. That is, as a result of the evaluation using a UMG, the spectral sensitivity characteristics have a certain degree of overlap. However, since a filter in which substantially all characteristics do not overlap like the R characteristics and the G characteristics of
The reason why noise is suppressed by the fourth filter (G2 filter) will now be mentioned. As the cell size of CCDs is reduced to increase the number of pixels, the spectral sensitivity curves of the primary-color filters are made broader in order to improve sensitivity efficiency, and the overlap of the filters tends to increase. Adding another filter under such circumstances has the effect of suppressing the original overlap of the three primary colors, with the result that noise is suppressed.
As shown in
In comparison, the UMG used when the four-color color filter 61 is determined can evaluate a plurality of filters at one time, takes the spectral reflectance of the object into consideration, and also takes the reduction of noise into consideration.
The details of the q factor are disclosed in H. E. J. Neugebauer, “Quality Factor for Filters Whose Spectral Transmittances are Different from Color Mixture Curves, and Its Application to Color Photography”, Journal of the Optical Society of America, Vol. 46, No. 10. The details of the μ factor are disclosed in P. L. Vora and H. J. Trussell, “Measure of Goodness of a Set of Color-Scanning Filters”, Journal of the Optical Society of America, Vol. 10, No. 7. The details of the FOM are disclosed in G. Sharma and H. J. Trussell, “Figures of Merit for Color Scanners”, IEEE Transactions on Image Processing, Vol. 6. The details of the UMG are disclosed in S. Quan, N. Ohta, and N. Katoh, “Optimal Design of Camera Spectral Sensitivity Functions Based on Practical Filter Components”, CIC, 2001.
Referring back to
The aperture stop 43 adjusts the passage (aperture) of light collected by the lens 42 so as to control the amount of light received by an image sensor 45. The shutter 44 controls the passage of light collected by the lens 42 in accordance with instructions from the microcomputer 41.
The image sensor 45 includes an imaging device composed of a CCD or a CMOS (Complementary Metal Oxide Semiconductor) device. The image sensor 45 converts light which is input via the four-color color filter 61 formed in front of the imaging device into electrical signals, and outputs four types of color signals (R signal, G1 signal, G2 signal, and B signal) to the front end 47. The image sensor 45 is provided with the four-color color filter 61 of
The front end 47 performs a correlation double sampling process for removing noise components, a gain control process, a digital conversion process, etc., on the color signal supplied from the image sensor 45. The image data obtained as a result of various processing being performed by the front end 47 is output to the camera system LSI 48.
As will be described in detail later, the camera system LSI 48 performs various processing on the image data supplied from the front end 47 in order to generate, for example, a luminance signal and color signals, outputs the color signals to an image monitor 50, whereby an image corresponding to the signals is displayed.
An image memory 49 is composed of, for example, DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), and the like, and is used as appropriate when the camera system LSI 48 performs various processing. An external storage medium 51 formed by a semiconductor memory, a disk, etc., is configured in such a manner as to be loadable into the digital camera of
The image monitor 50 is formed by, for example, an LCD (Liquid Crystal Display), and displays captured images, various menu screens, etc.
A signal processing section 71 performs various processing, such as an interpolation process, a filtering process, a matrix computation process, a luminance signal generation process, and a color-difference signal generation process, on four types of color information supplied from the front end 47, and, for example, outputs the generated image signals to the image monitor 50 via a monitor interface 77.
Based on the output from the front end 47, an image detection section 72 performs detection processing, such as autofocus, autoexposure, and auto white balance, and outputs the results to the microcomputer 41 as appropriate.
A memory controller 75 controls transmission and reception of data among the processing blocks or transmission and reception of data among predetermined processing blocks and the image memory 49, and, for example, outputs image data supplied from the signal processing section 71 via a memory interface 74 to the image memory 49, whereby the image data is stored.
An image compression/decompression section 76 compresses, for example, the image data supplied from the signal processing section 71 in the JPEG format, and outputs the obtained data via the microcomputer interface 73 to the external storage medium 51, whereby the image data is stored. The image compression/decompression section 76 further decompresses (expands) the compressed data read from the external storage medium 51 and outputs the data to the image monitor 50 via the monitor interface 77.
An offset correction processing section 91 removes noise components (offset components) contained in the image signal supplied from the front end 47, and outputs the obtained image signal to a white-balance correction processing section 92. The white-balance correction processing section 92 corrects the balance of each color on the basis of the color temperature of the image signal supplied from the offset correction processing section 91 and the difference in the sensitivity of each filter of the four-color color filter 61. The color signals obtained as a result of a correction being made by the white-balance correction processing section 92 are output to a vertical-direction time-coincidence processing section 93. The vertical-direction time-coincidence processing section 93 is provided with a delay device, so that signals having vertical deviations in time, which are supplied from the white-balance correction processing section 92, are made time coincident (corrected).
A signal generation processing section 94 performs an interpolation process for interpolating color signals of 2×2 pixels of the minimum unit of RG1G2B, which are supplied from the vertical-direction time-coincidence processing section 93, in the phase of the same space, a noise removal process for removing noise components of the signal, a filtering process for limiting the signal band, and a high-frequency correction process for correcting high-frequency components of the signal band, and outputs the obtained RG1G2B signals to the linear matrix processing section 95.
Based on predetermined matrix coefficients (a 3×4 matrix), the linear matrix processing section 95 performs a computation of the RG1G2B signals in accordance with the following equation (1), and generates the RGB signals of the three colors.
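The computation of equation (1) can be sketched as a 3×4 matrix product applied to the four color signals. The coefficients in `M` below are hypothetical placeholders for illustration; the actual matrix coefficients are obtained by the error minimization described later in the text, not by this sketch.

```python
import numpy as np

# Hypothetical 3x4 linear matrix coefficients; the real values are
# derived by minimizing the color error against a reference target.
M = np.array([
    [ 1.6, -0.2, -0.1, -0.3],   # R  row, applied to (R, G1, G2, B)
    [-0.4,  0.9,  0.6, -0.1],   # G  row
    [-0.1, -0.2, -0.1,  1.4],   # B  row
])

def linear_matrix(r, g1, g2, b):
    """Equation (1) in discrete form: convert the four color signals
    RG1G2B into three-color RGB signals with a 3x4 matrix."""
    return M @ np.array([r, g1, g2, b])

r, g, b = linear_matrix(0.5, 0.4, 0.4, 0.3)
```

Note that each row sums to 1.0, so a neutral input (equal signals) maps to an equal neutral output, a property a practical linear matrix would typically preserve for white balance.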
The R signal generated by the linear matrix processing section 95 is output to a gamma correction processing section 96-1, the G signal is output to a gamma correction processing section 96-2, and the B signal is output to a gamma correction processing section 96-3.
The gamma correction processing sections 96-1 to 96-3 make a gamma correction on each of the RGB signals output from the linear matrix processing section 95, and output the obtained RGB signals to a luminance (Y) signal generation processing section 97 and a color-difference (C) generation processing section 98.
The luminance signal generation processing section 97 combines the RGB signals supplied from the gamma correction processing sections 96-1 to 96-3 at a predetermined combination ratio in accordance with the following equation (2), generating a luminance signal.
Y = 0.2126R + 0.7152G + 0.0722B   (2)
The color-difference signal generation processing section 98 likewise combines the RGB signals supplied from the gamma correction processing sections 96-1 to 96-3 at a predetermined combination ratio, generating color-difference signals (Cb, Cr). The luminance signal generated by the luminance signal generation processing section 97 and the color-difference signals generated by the color-difference signal generation processing section 98 are, for example, output to the image monitor 50 via the monitor interface 77 of
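The luminance combination of equation (2) and the color-difference generation can be sketched together as follows. The luminance weights come from equation (2); the Cb/Cr scaling factors below are an assumption (the ITU-R BT.709 form, consistent with these luminance weights), since the text only speaks of “a predetermined combination ratio”.

```python
def luma(r, g, b):
    """Equation (2): luminance combination of the RGB signals."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def color_difference(r, g, b):
    """Color-difference signals (Cb, Cr); the scaling factors are the
    BT.709 values, assumed here -- the text does not specify them."""
    y = luma(r, g, b)
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return cb, cr

# A neutral white input yields Y = 1 and zero color difference
y = luma(1.0, 1.0, 1.0)
cb, cr = color_difference(1.0, 1.0, 1.0)
```

Because the three luminance weights sum to 1, neutral inputs produce Cb = Cr = 0, which is the defining property of a color-difference representation.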
In the digital camera having the above-described configuration, when the capturing of an image is instructed, the microcomputer 41 controls the TG 46 so that an image is captured by the image sensor 45. That is, the four-color color filter 61 formed in front of the imaging device such as a CCD making up the image sensor 45 allows light of four colors to be transmitted therethrough, and the transmitted light is captured by the CCD imaging device. The light captured by the CCD imaging device is converted into four-color color signals, and the signals are output to the front end 47.
The front end 47 performs a correlation double sampling process for removing noise components, a gain control process, a digital conversion process, etc., on the color signals supplied from the image sensor 45, and outputs the obtained image data to the camera system LSI 48.
In the signal processing section 71 of the camera system LSI 48, offset components of the color signals are removed by the offset correction processing section 91, and the balance of each color is corrected by the white-balance correction processing section 92 on the basis of the color temperature of the image signal and the difference in the sensitivity of each filter of the four-color color filter 61.
Signals having vertical deviations in time, which are corrected by the white-balance correction processing section 92, are made time coincident (corrected) by the vertical-direction time-coincidence processing section 93. The signal generation processing section 94 performs an interpolation process for interpolating color signals of 2×2 pixels of the minimum unit of RG1G2B, which are supplied from the vertical-direction time-coincidence processing section 93, in the phase of the same space, a noise removal process for removing noise components of the signal, a filtering process for limiting the signal band, a high-frequency correction process for correcting high-frequency components of the signal band, and the like.
Furthermore, in the linear matrix processing section 95, the signal (RG1G2B signal) generated by the signal generation processing section 94 is converted in accordance with predetermined matrix coefficients (a 3×4 matrix), generating three color RGB signals. The R signal generated by the linear matrix processing section 95 is output to the gamma correction processing section 96-1, the G signal is output to the gamma correction processing section 96-2, and the B signal is output to the gamma correction processing section 96-3.
The gamma correction processing sections 96-1 to 96-3 make gamma correction on each of the RGB signals obtained by the processing of the linear matrix processing section 95. The obtained RGB signals are output to the luminance signal generation processing section 97 and the color-difference signal generation processing section 98. In the luminance signal generation processing section 97 and the color-difference signal generation processing section 98, the R signal, the G signal, and the B signal, which are supplied from the gamma correction processing sections 96-1 to 96-3, are combined at a predetermined combination ratio, generating a luminance signal and color-difference signals. The luminance signal generated by the luminance signal generation processing section 97 and the color-difference signals generated by the color-difference signal generation processing section 98 are output to the image compression/decompression section 76 of
As described above, since one piece of image data is formed on the basis of four kinds of color signals, the color reproduction becomes closer to that which is seen by the eyes of a human being.
On the other hand, when the playback (display) of the image data stored in the external storage medium 51 is instructed, the image data stored in the external storage medium 51 is read by the microcomputer 41, and the image data is output to the image compression/decompression section 76 of the camera system LSI 48. In the image compression/decompression section 76, the compressed image data is expanded, and an image corresponding to the data obtained via the monitor interface 77 is displayed on the image monitor 50.
Next, referring to the flowchart in
In step S1, a four-color color filter determination process for determining the spectral sensitivity characteristics of the four-color color filter 61 provided in the image sensor 45 of
After the four-color color filter 61 is determined and the matrix coefficients are determined, in step S3, the signal processing section 71 of
Here, object colors which are referred to when “color reproduction characteristics”, “color discrimination characteristics”, etc., are evaluated will now be described. The object colors are computed by integrating the product of the “spectral reflectance of the object”, the “spectral energy distribution of standard illumination”, and the “spectral sensitivity distribution (characteristics) of a sensor (color filter) for sensing the object” over the visible light region (for example, 400 to 700 nm). That is, the object colors are computed by the following equation (3).
Object color = k ∫vis (spectral reflectance of the object) · (spectral energy distribution of illumination) · (spectral sensitivity distribution of the sensor) dλ   (3)
For example, when a predetermined object is observed by the eye, the “spectral sensitivity distribution of the sensor” of equation (3) is represented by a color matching function, and the object colors of the object are represented by tristimulus values of X, Y, and Z. More specifically, the X value is computed by equation (4-1), the Y value is computed by equation (4-2), and the Z value is computed by equation (4-3). The value of the constant k in equations (4-1) to (4-3) is computed by equation (4-4).
X = k ∫vis R(λ)·P(λ)·x̄(λ) dλ   (4-1)
Y = k ∫vis R(λ)·P(λ)·ȳ(λ) dλ   (4-2)
Z = k ∫vis R(λ)·P(λ)·z̄(λ) dλ   (4-3)
k = 100 / ∫vis P(λ)·ȳ(λ) dλ   (4-4)
When the image of a predetermined object is captured by an image processing apparatus such as a digital camera, the “spectral sensitivity distribution of the sensor” in equation (3) above is represented by the spectral sensitivity characteristics of the color filter, and for the object colors of the object, color values equal in number to the filters (for example, the RGB values (three values) in the case of RGB filters (three kinds)) are computed. When the image processing apparatus is provided with RGB filters for detecting three kinds of colors, specifically, the R value is computed by equation (5-1), the G value by equation (5-2), and the B value by equation (5-3). Furthermore, the value of the constant kr in equation (5-1) is computed by equation (5-4), the value of the constant kg in equation (5-2) by equation (5-5), and the value of the constant kb in equation (5-3) by equation (5-6).
R = kr ∫vis R(λ)·P(λ)·r̄(λ) dλ   (5-1)
G = kg ∫vis R(λ)·P(λ)·ḡ(λ) dλ   (5-2)
B = kb ∫vis R(λ)·P(λ)·b̄(λ) dλ   (5-3)
kr = 100 / ∫vis P(λ)·r̄(λ) dλ   (5-4)
kg = 100 / ∫vis P(λ)·ḡ(λ) dλ   (5-5)
kb = 100 / ∫vis P(λ)·b̄(λ) dλ   (5-6)
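In discrete form, these object-color integrals reduce to a weighted sum over sampled wavelengths. The sketch below assumes a 10 nm sampling grid, hypothetical flat spectra, and a normalization constant k chosen so that a perfect white reflector yields exactly 1; these specifics are illustrative assumptions, not values from this document.

```python
import numpy as np

def object_color(reflectance, illumination, sensor, wl):
    """Discrete form of equation (3): integrate the product of the
    object's spectral reflectance R(λ), the illumination's spectral
    energy distribution P(λ), and the sensor's spectral sensitivity
    over the visible range, normalizing a perfect white to 1."""
    dl = wl[1] - wl[0]                              # wavelength step
    k = 1.0 / np.sum(illumination * sensor * dl)    # normalization constant
    return k * np.sum(reflectance * illumination * sensor * dl)

wl = np.linspace(400, 700, 31)      # visible range, 10 nm steps
flat = np.ones_like(wl)             # hypothetical flat illumination spectrum
sensor = np.exp(-((wl - 530) / 50.0) ** 2)  # hypothetical G-filter curve

# A perfect white reflector yields 1.0; a 50% gray yields 0.5
white = object_color(flat, flat, sensor, wl)
gray = object_color(0.5 * flat, flat, sensor, wl)
```

Because k divides out the illumination-times-sensor integral, the result depends only on how the object's reflectance is weighted by the sensor curve, which is exactly the role the constants kr, kg, and kb play in equations (5-4) to (5-6).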
Next, referring to the flowchart in
For determining the four-color color filter, there are various methods. A description is given below of an example of a process in which RGB filters are used as a basis (one of the existing G filters (of
In step S21, a color target used for computing the UMG values is selected. For example, in step S21, a color target is selected which contains many color patches representing existing colors and many color patches with importance placed on the memory colors of a human being (skin color, the green of plants, the blue of the sky, etc.). Examples of the color target include IT8.7, a Macbeth color checker, a GretagMacbeth digital camera color checker, CIE, and a color bar.
Furthermore, depending on the purpose, a color patch that can be a standard may be created from the data, such as an SOCS (Standard Object Color Spectra Database), and it may be used. The details of the SOCS are disclosed in “Joji TAJIMA, “Statistical Color Reproduction Evaluation by Standard Object Color Spectra Database (SOCS)”, Color Forum JAPAN 99”. A description is given below of a case in which the Macbeth color checker is selected as a color target.
In step S22, the spectral sensitivity characteristics of the G2 filter are determined. Spectral sensitivity characteristics that can be created from existing materials may be used. Alternatively, assuming a virtual curve C(λ) given by a cubic spline curve (third-order spline function) shown in
In this example, only the filter G2 is added. Alternatively, only the R filter and the B filter of the filters (R, G, G, B) of
In step S23, a filter to be added (G2 filter) and the existing filters (R filter, G1 filter, and B filter) are combined to create a minimum unit (set) of a four-color color filter. In step S24, a UMG is used as the filter evaluation coefficient for the four-color color filter produced in step S23, and the UMG value is computed.
As described with reference to
In step S25, it is determined whether or not the UMG value computed in step S24 is greater than or equal to “0.95”, which is a predetermined threshold value. When it is determined that the UMG value is less than “0.95”, the process proceeds to step S26, where the produced four-color color filter is rejected (not used). When the four-color color filter is rejected in step S26, the processing is thereafter terminated (processing of step S2 and subsequent steps of
On the other hand, when it is determined in step S25 that the UMG value computed in step S24 is greater than or equal to “0.95”, in step S27, the four-color color filter is assumed as a candidate filter to be used in the digital camera.
In step S28, it is determined whether or not the four-color color filter assumed as a candidate filter in step S27 can be realized with existing materials and dyes. When materials, dyes, etc., are difficult to obtain, it is determined that the four-color color filter cannot be realized, and the process proceeds to step S26, where the four-color color filter is rejected.
On the other hand, when it is determined in step S28 that the materials, dyes, etc., can be obtained and the four-color color filter can be realized, the process proceeds to step S29, where the produced four-color color filter is determined as a filter to be used in the digital camera. Thereafter, the processing of step S2 and subsequent steps of
In
As a result of using the four-color color filter determined in the above-described manner, in particular, the “color discrimination characteristics” among the “color reproduction characteristics” can be improved.
As described above, from the viewpoint of light use efficiency, it is preferable that a filter having a high correlation with the G filter of the existing RGB filter be used as the filter to be added (the G2 filter). In this case, it is empirically preferable that the peak of the spectral sensitivity curve of the filter to be added lie in the range of 495 to 535 nm (in the vicinity of the peak of the spectral sensitivity curve of the existing G filter).
When a filter having a high correlation with the existing G filter is added, the four-color color filter can be produced by replacing only one of the two G filters which make up the minimum unit (R, G, G, B) of
When the four-color color filter is produced in the manner described above and it is provided in the digital camera, four types of color signals are supplied to the signal processing section 71 of
Next, referring to the flowchart in
For the color target to be used in the processing of
In step S41, for example, common daylight D65 (illumination light L(λ)), which is regarded as a standard light source by the CIE (Commission Internationale de l'Eclairage), is selected as illumination light. The illumination light may be changed to that of an environment where the image processing apparatus is expected to be frequently used. When there are a plurality of illumination environments to be assumed, a plurality of linear matrices may be provided. A description is given below of the case in which the daylight D65 is selected as illumination light.
In step S42, reference values Xr, Yr, and Zr are computed. More specifically, the reference value Xr is computed by equation (7-1), Yr is computed by equation (7-2), and Zr is computed by equation (7-3).
Xr = k ∫vis R(λ)·L(λ)·x̄(λ) dλ (7-1)
Yr = k ∫vis R(λ)·L(λ)·ȳ(λ) dλ (7-2)
Zr = k ∫vis R(λ)·L(λ)·z̄(λ) dλ (7-3)
The constant k is computed by equation (8).
k = 1/∫vis L(λ)·ȳ(λ) dλ (8)
For example, when the color target is a Macbeth color checker, reference values for 24 colors are computed.
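Equations (7-1) through (7-3) and (8) can be evaluated numerically, for example by trapezoidal integration over the visible range. The sketch below uses toy placeholder spectra (a flat illuminant, a triangular ȳ curve, a perfect white reflector), not real D65 or CIE color-matching data; it only illustrates that the normalization constant k of equation (8) makes Yr equal to 1 for a perfect reflector.

```python
# Numerical sketch of equations (7-2) and (8): the reference tristimulus value
# Yr by trapezoidal integration over the visible range. All spectra below are
# toy placeholders, not real D65 illuminant or CIE color-matching data.

def trapz(ys, dx):
    """Trapezoidal integral of uniformly spaced samples ys with spacing dx."""
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))


wavelengths = list(range(380, 781, 5))   # visible range sampled every 5 nm
dx = 5.0
L = [100.0] * len(wavelengths)           # toy flat illuminant L(lambda)
y_bar = [max(0.0, 1 - abs(w - 555) / 120) for w in wavelengths]  # toy y-bar
R = [1.0] * len(wavelengths)             # perfect white reflector R(lambda)

k = 1.0 / trapz([l * y for l, y in zip(L, y_bar)], dx)            # eq. (8)
Yr = k * trapz([r * l * y for r, l, y in zip(R, L, y_bar)], dx)   # eq. (7-2)
# For a perfect reflector, the normalization of equation (8) gives Yr == 1;
# Xr and Zr (eqs. (7-1), (7-3)) are computed the same way with x-bar and z-bar.
```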
Next, in step S43, the output values Rf, G1f, G2f, and Bf of the four-color color filter are computed. More specifically, Rf is computed by equation (9-1), G1f is computed by equation (9-2), G2f is computed by equation (9-3), and Bf is computed by equation (9-4).
Rf = kr ∫vis R(λ)·L(λ)·r̄(λ) dλ (9-1)
G1f = kg1 ∫vis R(λ)·L(λ)·ḡ1(λ) dλ (9-2)
G2f = kg2 ∫vis R(λ)·L(λ)·ḡ2(λ) dλ (9-3)
Bf = kb ∫vis R(λ)·L(λ)·b̄(λ) dλ (9-4)
The constant kr is computed by equation (10-1), the constant kg1 is computed by equation (10-2), the constant kg2 is computed by equation (10-3), and the constant kb is computed by equation (10-4).
kr = 1/∫vis L(λ)·r̄(λ) dλ (10-1)
kg1 = 1/∫vis L(λ)·ḡ1(λ) dλ (10-2)
kg2 = 1/∫vis L(λ)·ḡ2(λ) dλ (10-3)
kb = 1/∫vis L(λ)·b̄(λ) dλ (10-4)
For example, when the color target is a Macbeth color checker, output values Rf, G1f, G2f, and Bf for 24 colors are computed.
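Equations (9-1) through (9-4) and (10-1) through (10-4) follow the same pattern as the reference values: each filter output is the integral of reflectance × illuminant × filter sensitivity, normalized per channel so that a perfect white reflector yields 1. The sketch below uses toy Gaussian-like sensitivity curves, not measured filter data; the G2 peak is placed at 515 nm, within the 495 to 535 nm range described above.

```python
# Sketch of equations (9-1)-(9-4) and (10-1)-(10-4): four filter outputs,
# each normalized by its own constant (eqs. (10-1)-(10-4)). The Gaussian-like
# sensitivity curves are toy placeholders, not measured filter data.

import math


def trapz(ys, dx):
    """Trapezoidal integral of uniformly spaced samples ys with spacing dx."""
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))


wavelengths = list(range(380, 781, 5))
dx = 5.0
L = [100.0] * len(wavelengths)           # toy flat illuminant


def toy_sensitivity(peak, width=60.0):
    """Toy Gaussian-like spectral sensitivity curve centered at `peak` nm."""
    return [math.exp(-((w - peak) / width) ** 2) for w in wavelengths]


filters = {                              # toy peaks; G2 within 495-535 nm
    "Rf": toy_sensitivity(600),
    "G1f": toy_sensitivity(540),
    "G2f": toy_sensitivity(515),
    "Bf": toy_sensitivity(450),
}

R = [1.0] * len(wavelengths)             # perfect white reflector
outputs = {}
for name, s in filters.items():
    k_f = 1.0 / trapz([l * v for l, v in zip(L, s)], dx)              # (10-x)
    outputs[name] = k_f * trapz([r * l * v
                                 for r, l, v in zip(R, L, s)], dx)    # (9-x)
# The per-channel normalization makes every output exactly 1 for white.
```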
In step S44, a matrix used to perform a conversion that approximates the filter output values computed in step S43 to the reference values (XYZref) computed in step S42 is computed by, for example, the least squares method in the XYZ color space.
For example, when a 3×4 matrix to be computed is assumed to be A expressed by equation (11), the matrix transform (XYZexp) is expressed by the following equation (12).
The squared error (E²) of the matrix transform (equation (12)) with respect to the reference value is expressed by the following equation (13), and based on this equation, the matrix A that minimizes the error of the matrix transform with respect to the reference value is computed.
E² = |XYZref − XYZexp|² (13)
Furthermore, the color space used in the least squares method may be changed to one other than the XYZ color space. For example, by performing the same computations after conversion into the Lab, Luv, or Lch color space, which is uniform with respect to human perception (a uniform perceptual color space), a linear matrix that allows color reproduction with little perceptual error can be computed. Since the values of these color spaces are obtained by a non-linear transform of the XYZ values, a non-linear optimization algorithm is also used in the least squares method.
As a result of the above-described computation, for example, a matrix coefficient for the filter having the spectral sensitivity characteristics shown in
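The least squares fit of step S44 can be sketched as follows. Each of the three rows of the 3×4 matrix A of equation (11) is obtained independently by solving the normal equations over all target patches, minimizing the squared error of equation (13). The patch data used in any real fit would come from steps S42 and S43; here the function is generic and self-contained.

```python
# Minimal sketch of step S44: fit the 3x4 matrix A of equation (11) by least
# squares so that A applied to [Rf, G1f, G2f, Bf] approximates [Xr, Yr, Zr]
# over all color-target patches (minimizing E^2 of equation (13)).

def solve(M, b):
    """Solve M x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * c for a, c in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]


def fit_matrix(filter_outputs, references):
    """filter_outputs: list of 4-vectors [Rf, G1f, G2f, Bf] per patch;
    references: list of 3-vectors [Xr, Yr, Zr] per patch.
    Returns the 3x4 least squares matrix A via the normal equations."""
    FtF = [[sum(f[i] * f[j] for f in filter_outputs) for j in range(4)]
           for i in range(4)]
    A = []
    for comp in range(3):  # one row per X, Y, Z component
        Fty = [sum(f[i] * y[comp] for f, y in zip(filter_outputs, references))
               for i in range(4)]
        A.append(solve(FtF, Fty))
    return A
```

With exact (noise-free) patch data generated from a known matrix, the fit recovers that matrix, which is a convenient sanity check.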
In step S45, a linear matrix is determined. For example, when the final RGB image data to be produced is represented by the following equation (15), the linear matrix (LinearM) is computed as shown below.
RGBout = [Ro, Go, Bo]ᵗ (15)
That is, when the illumination light is D65, the conversion equation for converting the sRGB color space into the XYZ color space is represented by equation (16), which contains the ITU-R BT.709 matrix, and equation (17) is obtained from the inverse of the ITU-R BT.709 matrix.
Based on the matrix conversion of equation (12), equation (15), and the inverse ITU-R BT.709 matrix of equation (17), equation (18) is obtained. The right side of equation (18) contains the linear matrix, that is, the product of the inverse ITU-R BT.709 matrix and the above-described matrix A.
That is, the 3×4 linear matrix (LinearM) is represented by equation (19-1), and the linear matrix for the four-color color filter having the spectral sensitivity characteristics of
The linear matrix computed in the above-described manner is provided to the linear matrix processing section 95 of
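The composition of equations (18) and (19-1) can be sketched directly: the 3×4 linear matrix is the product of the inverse XYZ-to-sRGB matrix and the fitted matrix A. The sRGB-to-XYZ matrix below is the commonly cited one for BT.709 primaries with a D65 white point; the matrix A is a toy placeholder standing in for the result of step S44.

```python
# Sketch of equations (18)-(19-1): LinearM = inv(M709) @ A, a 3x4 matrix.
# M709 is the commonly cited sRGB-to-XYZ matrix (BT.709 primaries, D65);
# A below is a toy placeholder for the 3x4 matrix fitted in step S44.

M709 = [[0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505]]


def inverse_3x3(m):
    """Inverse of a 3x3 matrix via the adjugate and determinant."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]


def matmul(p, q):
    """Plain matrix product of p (m x k) and q (k x n)."""
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]


A = [[0.9, 0.2, 0.1, 0.0],   # toy 3x4 stand-in for step S44's matrix A
     [0.3, 0.8, 0.4, 0.1],
     [0.0, 0.1, 0.2, 0.9]]

linear_m = matmul(inverse_3x3(M709), A)   # the 3x4 LinearM of eq. (19-1)
```

Applying `linear_m` to a four-element vector [Rf, G1f, G2f, Bf] then yields the three-element linear RGB output of equation (15).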
Next, a description is given of an evaluation performed in step S6 of
When a comparison is made, for example, between the color reproduction characteristics of the image processing apparatus (the digital camera of
For example, the color difference in the Lab color space between the reference value and the output value obtained when a Macbeth chart is photographed by each of two types of image processing apparatus (a digital camera provided with a four-color color filter, and a digital camera provided with a three-color color filter) is computed by the following equation (20).
ΔE = √((L1 − L2)² + (a1 − a2)² + (b1 − b2)²) (20)
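Equation (20) is the Euclidean distance between two Lab triplets and can be computed directly; the sample values below are illustrative only.

```python
# Direct implementation of equation (20): the color difference in the Lab
# color space between a measured value and a reference value.

def delta_e(lab1, lab2):
    """Euclidean distance between two (L, a, b) triplets, per equation (20)."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5


# Toy example: a measured patch versus its reference value.
measured = (52.0, 18.5, -23.0)
reference = (50.0, 20.0, -21.0)
diff = delta_e(measured, reference)
```

A smaller ΔE means the camera's output is closer to the reference, so comparing the ΔE values of the four-color and three-color cameras over the 24 Macbeth patches quantifies the improvement in color reproduction.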
In
In the foregoing, as shown in
According to the present invention, captured colors can be reproduced faithfully.
Furthermore, according to the present invention, the “color discrimination characteristics” can be improved.
In addition, according to the present invention, the “color reproduction characteristics” and the “noise reduction characteristics” can be improved.
According to the present invention, the “appearance of the color” can be improved.
Number | Date | Country | Kind
---|---|---|---
2002-078854 | Mar 2002 | JP | national
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP03/02101 | 2/26/2003 | WO | | 8/15/2005