1. Field of the Invention
The invention relates generally to color processing in digital images and, more particularly, to the local control of color.
2. Description of Related Art
A number of color models have been developed that attempt to represent a gamut of colors, based on a set of primary colors, in a three-dimensional space. Each point in that space depicts a particular hue; some color models also incorporate brightness and saturation. One such model is referred to as the RGB (Red, Green, Blue) color model. A common representation of the prior art RGB color model is shown in the accompanying figure.
As illustrated, the first coordinate (r) represents the amount of red present in the hue; the second coordinate (g) represents the amount of green; and the third coordinate (b) represents the amount of blue. Since each coordinate must have a value between 0 and 1 for a point to be on or within the cube, pure red has the coordinate (1, 0, 0); pure green is located at (0, 1, 0); and pure blue is at (0, 0, 1). Likewise, yellow is at location (1, 1, 0), and since orange lies between red and yellow, its location on the cube is (1, ½, 0). It should be noted that the diagonal D, marked as a dashed line between black (0, 0, 0) and white (1, 1, 1), provides the various shades of gray.
In digital systems capable of accommodating 8-bit color per channel (for a total of 24-bit RGB color), the RGB model is capable of representing 256³, or more than sixteen million, colors, corresponding to the number of points within and on the cube 100. However, when using the RGB color space to represent a digital image, each pixel has associated with it three color components, one for each of the Red, Green, and Blue image planes. In order, therefore, to manage color in an image represented in the RGB color space by removing, for example, excess yellow due to tungsten-filament-based illumination, all three color components must be modified, since the three image planes are cross-related. It is therefore difficult, when removing excess yellow, to avoid affecting the relationship between all primary colors represented in the digital image. The net result is that important color properties in the image, such as flesh tones, typically do not appear natural when viewed on an RGB monitor.
It is realized, then, that the RGB color space may not be best for enhancing digital images, and that an alternative color space, such as a hue-based color space, may be better suited to this technical problem. Therefore, when enhancing a digital image by, for example, color correction, the digital image is typically converted from the RGB color space to a different color space more representative of the way humans perceive color. Such color spaces include those based upon hue, since hue is a color attribute that describes a pure color (pure yellow, orange, or red). By converting the RGB image to a hue-based color space, the color aspects of the digital image are de-coupled from such factors as lightness and saturation.
One such color model is referred to as the YUV color space, which defines a color in terms of one luminance component (Y), representing brightness, and two chrominance (color) components (U and V), all created from an original RGB source. The weighted values of R, G, and B are added together to produce a single Y signal, representing the overall brightness, or luminance, of that spot. The U signal is then created by subtracting Y from the blue signal of the original RGB and scaling; V is created by subtracting Y from the red signal and scaling by a different factor. This can be accomplished easily with analog circuitry.
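The conversion just described can be sketched as follows; the specific luma weights and chrominance scale factors (the commonly used BT.601 values) are an assumption for illustration, since the text does not fix particular coefficients.

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0..1) to Y, U, V as described above,
    using the common BT.601 weights as an illustrative assumption."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # weighted sum -> luminance
    u = 0.492 * (b - y)                    # scaled (B - Y) chrominance
    v = 0.877 * (r - y)                    # scaled (R - Y) chrominance
    return y, u, v

# Pure white has full luminance and (essentially) zero chrominance;
# pure black maps to all zeros.
print(rgb_to_yuv(0.0, 0.0, 0.0))  # -> (0.0, 0.0, 0.0)
```

Note that any pure gray (r = g = b) yields U = V = 0, consistent with the gray diagonal of the RGB cube carrying no color information.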
Unfortunately, however, since the UV space is partitioned using squares, interpolations occur that are not parallel to hue or saturation in most areas. This causes visible artifacts, since the control grids are coarse. Such artifacts include undesired hues at the grid boundaries, since the definable grids are not fine enough to prevent these effects. For example, flesh tone adjustments cause undesirable changes to hues near the flesh tone. If the intended adjustment occurs on a point surrounded by fine grids, then a reasonable adjustment can be made. However, when the color to be adjusted is bordered by coarse and fine grids, then either a coarse grid is adjusted, modifying colors not intended for manipulation, or edge effects can occur if the coarse grid is not modified, since no fading is done. Furthermore, the color adjustments occur in the UV plane irrespective of the luminance (Y) value of the input, and cannot affect the luminance value itself. This is not desirable in some cases: for example, flesh tone may be best modified in the middle luminance band, with reduced effects in the high and low luminance ranges, while the red axis control may be best modified in the low luminance range.
Therefore, what is desired is a method that acts directly upon hue, saturation, and luminance value of a pixel instead of its U and V value.
Broadly speaking, the invention describes a method, system, and apparatus that act directly upon the hue, saturation, and luminance value of a pixel instead of its U and V values. Additionally, instead of dividing the color space into uniform areas, one described embodiment uses multiple user-defined regions. Because the detection and correction region is defined explicitly, the user is assured that no colors other than those he chooses to affect will be changed. One described embodiment adds the ability to define a pixel's adjustment based on its input luminance value in addition to its color, and provides the ability to modify the pixel's luminance. The detection of a pixel is based on its hue, saturation, and luminance value, so a single set of values can define the correction for an entire hue. This simplifies the program compared to other systems, in which multiple correction values were needed to affect a single hue across all saturation values. Correction fading occurs in the hue, saturation, and luminance directions instead of the U and V directions as in other systems, so that smooth fading can be used without affecting hues other than those specified.
As a method, the invention is performed by converting the pixel's color space from Cartesian coordinates to polar coordinates, determining whether the pixel lies within a 3-dimensional region described by a set of region parameters, applying a correction factor based upon the pixel's location in the 3-dimensional region, and converting the pixel's polar coordinates to Cartesian coordinates.
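The four method steps above can be sketched as follows. The region representation (a plain dictionary of hypothetical bounds and offsets) and the simple membership test are illustrative assumptions only, not the full detection and fade logic of the described embodiments.

```python
import math

def in_region(theta, r, y, reg):
    """Illustrative membership test for a 3-D region given by a hue
    center/aperture and saturation/luminance bounds (all assumed names)."""
    # Signed hue difference to the region center, wrapped into [-180, 180)
    d = (theta - reg["theta_center"] + 180.0) % 360.0 - 180.0
    return (abs(d) <= reg["theta_aperture"]
            and reg["r1"] <= r <= reg["r2"]
            and reg["y1"] <= y <= reg["y2"])

def correct_pixel(u, v, y, reg):
    # Step 1: convert the pixel's chrominance from Cartesian to polar.
    theta = math.degrees(math.atan2(v, u)) % 360.0  # hue angle
    r = math.hypot(u, v)                            # saturation magnitude
    # Steps 2 and 3: if the pixel lies in the 3-D region, apply a correction
    # (here reduced to plain offsets for brevity).
    if in_region(theta, r, y, reg):
        theta = (theta + reg["hue_offset"]) % 360.0
        r = r + reg["sat_offset"]
        y = y + reg["lum_offset"]
    # Step 4: convert the polar coordinates back to Cartesian.
    u = r * math.cos(math.radians(theta))
    v = r * math.sin(math.radians(theta))
    return u, v, y

# A pixel at hue 90 degrees inside a region centered on 90 degrees is
# rotated by the region's 10-degree hue offset:
region = {"theta_center": 90.0, "theta_aperture": 30.0,
          "r1": 0.0, "r2": 100.0, "y1": 0.0, "y2": 255.0,
          "hue_offset": 10.0, "sat_offset": 0.0, "lum_offset": 0.0}
u_out, v_out, y_out = correct_pixel(0.0, 50.0, 128.0, region)
```

Pixels outside the region pass through the polar round trip unchanged (up to floating-point rounding), which is the property that confines the correction to the chosen colors.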
Other aspects and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the invention.
Reference will now be made in detail to a particular embodiment of the invention, an example of which is illustrated in the accompanying drawings. While the invention will be described in conjunction with the particular embodiment, it will be understood that it is not intended to limit the invention to the described embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention.
Broadly speaking, the invention describes a method, system, and apparatus that act directly upon the hue, saturation, and luminance value of a pixel instead of its U and V values only. Additionally, instead of dividing the color space into uniform areas, one described embodiment uses multiple user-defined regions. In this way, since detection and correction regions are defined explicitly, it is assured that no colors other than those chosen to be affected will be changed. One described embodiment adds the ability to define a pixel's adjustment based on its input luminance value in addition to its color, and therefore provides the ability to modify the pixel's luminance. Since the detection of a pixel is based on its hue, saturation, and luminance value, a single set of values can define the correction for an entire hue. This approach is a great improvement over systems in which multiple correction values were needed to affect a single hue across all saturation values. Furthermore, smooth fading can be used without affecting hues other than those specified, since correction fading occurs in the hue, saturation, and luminance directions instead of the U and V directions provided by other systems.
The invention will now be described in terms of a system based upon a video source and a display, such as a computer monitor or a television (either analog or digital).
In the digital format, each pixel is represented by a brightness, or luminance, component (also referred to as luma, “Y”) and color, or chrominance, components. Since the human visual system has much less acuity for spatial variation of color than for brightness, it is advantageous to convey the brightness component, or luma, in one channel, and color information that has had luma removed in the two other channels. In a digital system, each of the two color channels can have considerably lower data rate (or data capacity) than the luma channel. Since green dominates the luma channel (typically, about 59% of the luma signal comprises green information), it is sensible, and advantageous for signal-to-noise reasons, to base the two color channels on blue and red. In the digital domain, these two color channels are referred to as chroma blue (Cb) and chroma red (Cr).
In composite video, luminance and chrominance are combined along with the timing reference ‘sync’ information using one of the coding standards such as NTSC, PAL or SECAM. Since the human eye has far more luminance resolving power than color resolving power, the color sharpness (bandwidth) of a coded signal is reduced to far below that of the luminance.
In the case where the image source 402 provides an analog image signal, an analog-to-digital converter (A/D) 408 is connected to the analog image source 404. In the described embodiment, the A/D converter 408 converts an analog voltage or current signal into a discrete series of digitally encoded numbers (signal) forming in the process an appropriate digital image data word suitable for digital processing.
Accordingly, the A/D converter 408 uses what is referred to as 4:x:x sampling to generate a scan line data word 600 (formed of pixel data words 500) as shown in the accompanying drawings.
In order to preserve memory resources and bandwidth, the input pixel format detection and converter unit 702 detects the input pixel format and, if it is determined not to be in the YUV color space, converts the input pixel data word to the YUV color space using any well-known conversion protocol.
In addition to providing a single format, the described embodiment utilizes multiple region definitions plus their associated correction parameters as illustrated in FIGS. 9 and
In the described embodiment, each region has its own unique user-configurable values for all parameters: θcenter, θaperture, R1, R2, Y1, Y2, Hue_offset, Hue_gain, Sat_offset, Sat_gain, Lum_offset, Lum_gain, U_offset, and V_offset (see Table 1).
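One way to picture the per-region parameter set is as a simple record; the field names below mirror the parameter list above, but the default values and the example region (a hypothetical flesh-tone region) are illustrative assumptions, not values taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class RegionParams:
    """Per-region, user-configurable parameters (names follow the text)."""
    theta_center: float        # hue center angle of the region (degrees)
    theta_aperture: float      # hue half-width of the hard area (degrees)
    r1: float                  # lower saturation bound
    r2: float                  # upper saturation bound
    y1: float                  # lower luminance bound
    y2: float                  # upper luminance bound
    hue_offset: float = 0.0    # additive hue correction
    hue_gain: float = 0.0      # hue gain toward/away from theta_center
    sat_offset: float = 0.0
    sat_gain: float = 0.0
    lum_offset: float = 0.0
    lum_gain: float = 0.0
    u_offset: float = 0.0      # post-conversion U offset
    v_offset: float = 0.0      # post-conversion V offset

# Hypothetical flesh-tone region; the numbers are placeholders only.
flesh = RegionParams(theta_center=123.0, theta_aperture=20.0,
                     r1=10.0, r2=90.0, y1=64.0, y2=192.0)
```

Each region carrying its own complete record is what makes the detection and correction fully independent per region.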
At 1116 and 1118, respectively, the region selector 705 determines the primary (and secondary in an implementation that allows overlapping regions) detected region address of the pixel. The primary region is the detected region with the lowest address number, and the secondary region is that with the second-lowest number. For example, if a pixel is within the overlapping area of regions 3 and 6, the primary region is 3, and the secondary is 6. If the pixel is not within any defined region, both the primary and the secondary regions are equal to MAX_REGION at 1120 and 1122, respectively.
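The lowest-address selection rule can be sketched as follows; the value of MAX_REGION is a design choice and is assumed here for illustration.

```python
MAX_REGION = 8  # assumed sentinel meaning "no region"; the real value is a design choice

def select_regions(detected):
    """Given the list of region addresses a pixel was detected in, return
    (primary, secondary) per the lowest-address rule described above."""
    hits = sorted(detected)
    primary = hits[0] if len(hits) >= 1 else MAX_REGION
    secondary = hits[1] if len(hits) >= 2 else MAX_REGION
    return primary, secondary

# The example from the text: a pixel in the overlap of regions 3 and 6.
print(select_regions([6, 3]))  # -> (3, 6)
print(select_regions([]))      # -> (8, 8)
```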
To facilitate the linear fade from the edge of the full-correction (“hard”) area, through the fade region, to the nearby non-corrected pixels, the distance of a pixel in the fade area from the edge of the hard area must be calculated. Later, in the correction block, if a pixel is, for example, ⅓ of the way from the hard area to the outer edge of the fade area, then (1−⅓)=⅔ of the specified correction will be applied. A pixel within the hard area of a region will cause a distance of 0 to be generated, indicating full-strength correction throughout the hard region. Each pixel channel (hue angle, saturation magnitude, and luminance) has an associated distance calculation whose result is output separately from the distance calculation block 706. The hue θ (Th) path is calculated according to the process 1200 shown by the flowchart in the accompanying drawings.
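The per-channel linear fade described above reduces to a small function; the function and parameter names here are illustrative, not taken from the specification.

```python
def fade_strength(dist, fade_dist):
    """Correction strength for a pixel at `dist` from the hard-area edge,
    where `fade_dist` is the width of the fade region: a sketch of the
    linear fade described above."""
    if dist <= 0:          # inside the hard area: full-strength correction
        return 1.0
    if dist >= fade_dist:  # beyond the fade area: no correction at all
        return 0.0
    return 1.0 - dist / fade_dist

# A pixel one third of the way through the fade area gets roughly 2/3
# of the specified correction, matching the worked example in the text.
strength = fade_strength(1.0, 3.0)
```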
The correction blocks, one for each of the primary and secondary detected regions, encapsulate all the operations necessary to apply the appropriate region-based corrections to input pixels. Each block takes as input a hue angle, saturation value, and luminance value and outputs a corrected hue angle, saturation value, and luminance value. In addition, the primary correction block also outputs the calculated Fade_factor. The correction block handles pixels differently depending on whether they lie in the “hard” (non-fade) region or in the fade region around it. For a pixel inside the “hard” region, hue gain is applied to move the hue further from or closer to the region's theta-center. Saturation and luminance gains decrease or increase the saturation and luminance of pixels in the region. Once the respective gains are applied, region-specific hue, saturation, and luminance offsets are added.
The application of a fade factor to the regional corrections is now described. Throughout the region's hard area the full regional correction values are applied. However, from the outer edge of the hard area to the outer edge of the fade area, the strength of correction declines linearly from 1× correction (full strength) to 0× correction (uncorrected pixels outside the region). Conceptually, the fade factor is simply
Fade_factor = [1 − (Sdist_1 / Fade_dist_hue)] × [1 − (Sdist_2 / Fade_dist_sat)] × [1 − (Sdist_3 / Fade_dist_lum)],
where Sdist_x is the output of the region distance calculation block for each channel, and Fade_dist_x is the length of the fade region in the relevant direction. Dividers are avoided by allocating registers to hold the values 1/Fade_dist_x, which are calculated externally. One of the five registers simply contains the value 1/Th_fade. The other registers contain the inverses of the values Rsoftlower, Rsoftupper, Ysoftlower, and Ysoftupper. These values are themselves calculated as the fade distance, given clamping to 0 and 255. For example, Rsoftlower=min(R1, Rfade), and Rsoftupper=min(255−R2, Rfade).
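The divider-free evaluation of the fade-factor product can be sketched as follows: the reciprocals 1/Fade_dist are precomputed (modeling the externally loaded registers), so the datapath only multiplies. Fixed-point details of the hardware are omitted, and the per-term clamp at zero is an assumption to keep the product non-negative.

```python
def fade_factor(sdist, inv_fade):
    """sdist: (hue, sat, lum) distances from the hard-area edge;
    inv_fade: precomputed reciprocals 1/Fade_dist for each channel
    (modeling the register values that avoid hardware dividers)."""
    f = 1.0
    for d, inv in zip(sdist, inv_fade):
        f *= max(0.0, 1.0 - d * inv)  # each channel fades the product
    return f

# Inside the hard area all distances are 0, so the factor is 1 (full correction):
print(fade_factor((0, 0, 0), (1/8, 1/8, 1/8)))  # -> 1.0
```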
As with the other correction paths (saturation and luminance), the hue correction path applies a hue gain and offset to the input hue value. However, the different operation of the hue gain function necessitates a difference in the hue correction path. First, θ_diff is calculated as the signed difference between the region center angle θ_centre and the pixel hue angle θ. If the saturation is zero, the region center angle is used, and then a decision to use this value, or a value ±360 degrees, is taken based on the region border angles. θ_diff is then clamped to ±Theta_ap. This clamped value is multiplied by θ_gain and right-shifted three bits. This has the effect of moving the pixel's hue either towards or away from the center of the region, depending on the sign of θ_gain. Adding θ_add to this value gives the total correction θ_totoffset to be applied to the pixel within the hard area of the region. θ_totoffset is multiplied by Fade_factor to reduce the correction strength if the pixel lies within the fade area, and the faded correction amount is added to the original hue angle θ. Finally, the corrected output is reduced modulo 360 before being output from the correction block as θ_corr.
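The hue path above can be sketched in integer degrees; the zero-saturation special case and the border-angle decision are omitted, and the simple wrap into [−180, 180) used for θ_diff is an assumption standing in for that logic.

```python
def hue_correct(theta, theta_center, theta_ap, theta_gain, theta_add, fade):
    """Sketch of the hue correction path: signed difference to the region
    center, clamp to the aperture, gain with >>3 (modeling the three-bit
    hardware right shift), offset, fade, and wrap modulo 360."""
    # Signed difference to the region center, wrapped into [-180, 180)
    diff = (theta_center - theta + 180) % 360 - 180
    diff = max(-theta_ap, min(theta_ap, diff))    # clamp to +/- Theta_ap
    tot = ((diff * theta_gain) >> 3) + theta_add  # gain, shift, then offset
    return (theta + int(tot * fade)) % 360        # fade, apply, wrap mod 360

# With gain 8 (i.e. 8/8 = 1.0 after the shift), a hue at 100 degrees is
# pulled all the way to a region center at 120 degrees:
print(hue_correct(100, 120, 30, 8, 0, 1.0))  # -> 120
```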
First, the input saturation value R is multiplied by Rgain and right-shifted seven bits to give X=R*Rgain/128. This value is subtracted from R to isolate the amount of correction introduced by the gain. The saturation offset Radd is then added to give the total saturation correction value, R_totoffset. The correction is then faded by multiplication with Fade_factor, added to the pixel saturation R, and clamped and rounded to the correct output bit width before being output as Rcorr. The luminance correction path is identical to the saturation correction path.
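A sketch of the saturation path follows; the sign convention in the subtraction step is an assumption, chosen so that Rgain=128 (a gain of 1.0 after the /128 scaling) introduces no gain correction and full-strength correction reproduces R·Rgain/128 + Radd.

```python
def sat_correct(r, r_gain, r_add, fade):
    """Sketch of the saturation correction path; the luminance path is
    identical in form. Sign of the subtraction is an assumption (see text)."""
    x = (r * r_gain) >> 7              # X = R * Rgain / 128
    r_totoffset = (x - r) + r_add      # gain-introduced correction plus offset
    out = r + int(r_totoffset * fade)  # fade the correction, apply to input
    return max(0, min(255, out))       # clamp to the 8-bit output width

# A gain of 1.0 (128/128) leaves only the offset:
print(sat_correct(100, 128, 10, 1.0))  # -> 110
```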
The U and V offsets for a region are registered parameters that give the amount of offset in the U or V direction. This offset is applied after the polar-to-Cartesian conversion, allowing chroma adjustments that hue and saturation adjustments alone may not be sufficient to handle. For example, for a blue shift it is desirable to have all pixels near grey (i.e., in a low-saturation circle centered on the origin) shift towards high-saturation blue. Given the arbitrary hue angles of pixels in this region, neither pure hue nor pure saturation adjustments can achieve this; therefore, a U and V offset is needed.
If a pixel lies within the overlapping area of two regions, then hue, saturation, and luminance corrections will be applied first in the Primary Correction Block, then in the Secondary Correction Block. If the pixel lies only within one region though, only the correction from the Primary Correction Block should be applied. The Secondary Correction Block should be bypassed to maintain the best possible precision of the pixel data. The Overlap Enable block uses the Overlap_Detected signal generated from the Region Selector to choose the output of either the Primary or Secondary Correction Block. It also calculates the total U and V offset to apply: either the sum of the U/V offsets from both Correction Blocks, or the Primary Correction Block U/V offsets only. To facilitate the ability to fade the U/V correction in the fade area of the region, the U/V offset is passed into the correction block to be multiplied by the Fade_factor. The results, Ucorr and Vcorr, are output from the correction block to be processed and applied to the corrected pixel later.
The final operation before pixels are output from the block involves adding the U and V offsets. These offsets are register parameters that were faded in the Correction Blocks and added together in the Overlap Enable block. They are now added into the output U and V channels, respectively, of the output pixel. The corrected YUV values are lastly clamped to a range of 0 to 255 to obtain Yfinal, Ufinal, and Vfinal. The last step is to mux between the corrected final values and the original input values. If the pixel was detected as being in at least one region, the corrected YUV values Yfinal, Ufinal, and Vfinal are output from the block as Yout, Uout, and Vout. If not, the original input pixel values Yin, Uin, and Vin are output.
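The output stage above can be sketched as a small function; the argument names are illustrative, and the faded U/V offsets are taken here as already-computed inputs.

```python
def clamp8(x):
    """Clamp a value to the 8-bit output range 0..255."""
    return max(0, min(255, int(x)))

def output_stage(y, u, v, u_off, v_off, detected, y_in, u_in, v_in):
    """Sketch of the output stage: add the (already faded) U/V offsets,
    clamp to 0..255, then mux between corrected and original values
    based on whether the pixel was detected in at least one region."""
    y_final = clamp8(y)
    u_final = clamp8(u + u_off)
    v_final = clamp8(v + v_off)
    if detected:
        return y_final, u_final, v_final  # corrected path
    return y_in, u_in, v_in               # undetected pixels pass through

# An undetected pixel passes through completely unchanged:
print(output_stage(300, 10, 10, 5, 5, False, 128, 128, 128))  # -> (128, 128, 128)
```

Bypassing the correction entirely for undetected pixels, rather than applying a zero correction, preserves the original pixel data at full precision.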
Although only a few embodiments of the present invention have been described, it should be understood that the present invention may be embodied in many other specific forms without departing from the spirit or the scope of the present invention. The present examples are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
While this invention has been described in terms of a preferred embodiment, there are alterations, permutations, and equivalents that fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing both the process and apparatus of the present invention. It is therefore intended that the invention be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
This patent application claims priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application No. 60/678,299 (Attorney Docket No. GENSP188P), filed on May 5, 2005, entitled “DETECTION, CORRECTION FADING AND PROCESSING IN HUE, SATURATION AND LUMINANCE DIRECTIONS” by Neal et al., which is incorporated by reference in its entirety.
Number | Date | Country
---|---|---
60678299 | May 2005 | US