Image sensors in digital still cameras (DSCs) and the like produce a mesh of pixels. The color of each pixel can be represented by a vector known as a tristimulus vector. The tristimulus vector consists of three elements, or coordinates, which define a point in a given color space. The mapping between the tristimulus vector and a particular color is related to the physical properties of the input (or source) device (e.g., a DSC).
Conventionally, color correction is performed to map the tristimulus vector of a pixel of the input device to a tristimulus vector that describes the same color using the primaries (light sources, such as red, green and blue (RGB)) of a target or output device. Color correction may be performed by an image signal processor (ISP) within the input or output device. Example output devices include display devices such as cathode-ray tube (CRT) displays, liquid crystal displays (LCDs), plasma displays, organic light emitting diode (OLED) displays, etc.
In one example, conventional color correction is performed by applying a single linear transformation on the entire input color space of the input device. More advanced methods of color correction partition the color space into sectors and apply a linear transformation on each sector.
When applying color correction to the input color space, some of the output colors will be out of range of the output gamut, which refers to the range of colors capable of display by the output device. Gamut mapping is performed to enable display of colors outside the bounds of the output gamut. There are several conventional approaches to gamut mapping.
In one example, linear color correction transformation is performed, and then all values outside of the output gamut are clipped. However, this approach creates distortion in the hue of the colors. In addition, there is information loss because a range of tristimulus vectors that are outside the output gamut are mapped to a single tristimulus vector on the boundary of the output gamut.
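The hue distortion introduced by this clip-after-correction approach can be sketched as follows. This is an illustrative example only (the function name and values are hypothetical, not part of any embodiment); hue is tied to the ratios between color channels, and clipping only the maximal channel changes those ratios.

```python
def clip_to_gamut(rgb):
    """Naively clip each channel to the valid [0.0, 1.0] output range."""
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

# An out-of-gamut color: the red channel exceeds the gamut boundary.
out_of_gamut = (1.4, 0.7, 0.2)
clipped = clip_to_gamut(out_of_gamut)  # (1.0, 0.7, 0.2)

# Clipping only the maximal channel changes the channel ratios,
# which shifts the hue of the displayed color.
original_ratio = out_of_gamut[1] / out_of_gamut[0]  # 0.5
clipped_ratio = clipped[1] / clipped[0]             # 0.7
```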
Example embodiments provide methods and apparatuses for gamut compression, which preserves the original hue of color while distorting luminance and/or saturation.
According to at least some example embodiments, white balance (WB) and/or gray balance is assumed to be performed prior to color correction. Example embodiments are compatible with, for example, standard linear color correction, which preserves gray colors, hue-partitioned linear color correction, etc.
At least some example embodiments allow color correction with control as to how to represent colors that are outside of the output gamut, but without distorting the hue of the original color. Example embodiments do not require an implementation of three dimensional look up table (LUT) and are relatively hardware efficient.
Gamut mapping methods according to at least some example embodiments may be used together with linear or linear-like color correction methods in order to handle colors that fall outside the output gamut.
At least one example embodiment provides a method for out-of-gamut color correction of an image for display by an output device having a corresponding output color gamut. The image includes a plurality of pixels and each of the plurality of pixels has a corresponding pixel vector. According to at least this example embodiment, the image is color corrected by compressing pixel vectors having a maximal component located outside of the output color gamut to within the output color gamut while retaining a hue of the image.
According to at least some example embodiments, at least a first maximal component of at least a first of the pixel vectors is compared with a gamut threshold value, and the first pixel vector is compressed if the first maximal component exceeds the gamut threshold value. The first pixel vector is not compressed if the first maximal component does not exceed the gamut threshold value. The input pixel vectors may be white balanced prior to the color correcting.
According to at least some example embodiments, the first pixel vector corresponds to a first pixel among the plurality of pixels.
According to one or more example embodiments, a target luminance for the first pixel is calculated based on a weighted luminance metric for the first pixel and the gamut threshold value. The first pixel vector is then compressed at least partially based on the calculated target luminance. The target luminance is equal to a minimum value from among the weighted luminance metric and the gamut threshold value.
According to one or more other example embodiments, a target luminance for the first pixel is calculated based on a weighting constant, a luminance of the first pixel and the gamut threshold value. The first pixel vector is then compressed based on the calculated target luminance.
According to one or more other example embodiments, the target luminance for the first pixel is set equal to zero, and the first pixel vector is compressed based on the set target luminance.
According to at least some example embodiments, an input saturation pixel vector associated with the first pixel is calculated. At least one component of the input saturation pixel vector represents a maximum value for a color in an input color space, which is a color space associated with an image acquisition device having acquired the image. An output saturation pixel vector is then calculated by applying a color correction matrix to the input saturation pixel vector. The first pixel vector is compressed based on the target luminance and the output saturation pixel vector.
According to at least some example embodiments, the pixel vectors are compressed by applying a compression factor to the pixel vectors. In one example, the pixel vectors are compressed by: shifting tristimulus vectors associated with the first pixel such that a target luminance for the first pixel is located at an origin of an output color space; calculating a compression factor based on the shifted tristimulus vectors; and compressing the pixel vectors based on the compression factor, the shifted input pixel vector and the target luminance. According to at least this example embodiment, the tristimulus vectors include at least an input pixel vector representing a color of the first pixel.
According to at least some example embodiments, the pixel vectors are generated by applying color correction (e.g., linear or piece-wise linear color correction) to a plurality of input pixel vectors. Each of the plurality of input pixel vectors represents a color of a corresponding one of the plurality of pixels.
At least one other example embodiment provides an electronic imaging system configured to perform out-of-gamut color correction of an image for display by an output device having a corresponding output color gamut. The image includes a plurality of pixels and each of the plurality of pixels has a corresponding pixel vector. According to at least this example embodiment, the electronic imaging system includes: an image signal processor configured to color correct an image for display by the output device by compressing pixel vectors having a maximal component located outside of the output color gamut to within the output color gamut while retaining a hue of the image.
According to at least some example embodiments, the electronic imaging system further includes: an image sensor configured to acquire the image by converting incident light into a digital output code; a display device configured to display the color corrected image; and/or a memory configured to store the color corrected image.
Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustrative purposes only of selected example embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
Detailed example embodiments are disclosed herein. However, specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. Example embodiments may be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein. Moreover, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of this disclosure. Like numbers refer to like elements throughout the description of the figures.
Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. The terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of this disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
When an element is referred to as being “connected” or “coupled” to another element, it may be directly connected or coupled to the other element or intervening elements may be present. By contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two operations shown in succession may in fact be executed substantially concurrently or simultaneously, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In some cases, portions of example embodiments and corresponding detailed description are described in terms of software or algorithms and symbolic representations of operations performed by, for example, an image signal processor (ISP). These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
In the following description, at least some example embodiments will be described with reference to acts and symbolic representations of operations (e.g., in the form of flowcharts) that may be implemented as program modules or functional processes including routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types and may be implemented in hardware such as ISPs in digital still cameras (DSCs) or the like.
Unless specifically stated otherwise, or as is apparent from the discussion, terms such as “processing,” “computing,” “calculating,” “determining,” “displaying” or the like, refer to the action and processes of a computer system, ISP or similar electronic computing device, which manipulates and transforms data represented as physical, electronic quantities within registers and/or memories into other data similarly represented as physical quantities within the memories, registers or other such information storage or display devices.
Note also that software implemented aspects of example embodiments are typically encoded on some form of computer readable storage medium. The computer readable storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or “CD ROM”), and may be read only or random access. Example embodiments are not limited by these aspects of any given implementation.
Example embodiments of methods for color correction will be discussed in more detail below. As an example, methods for color correction will be described with reference to a color correction matrix (CCM) unit of an ISP.
Although example embodiments are discussed herein as “units,” these components may also be referred to as “circuits” or the like. For example, the CCM unit may be referred to as a CCM circuit.
Referring to
The pixel array 100 includes a plurality of pixels P arranged in an array of rows ROW_1 through ROW_N and columns COL_1 through COL_N. Each of the plurality of select lines RRL corresponds to a row of pixels in the pixel array 100. In
In more detail with reference to example operation of the image sensor in
The pixel array 100 may also include a color filter array (CFA) following, for example, a Bayer pattern.
The analog to digital converter (ADC) 104 converts the output voltages from the ith row of readout pixels into a digital signal (or digital code) DOUT. The ADC 104 outputs the digital signal DOUT to an image signal processor (not shown in
Referring to
The image sensor 300 may be an image sensor as described above with regard to
The ISP 302 processes the captured image data for storage in the memory 308 and/or display by the display 304. In more detail, the ISP 302 is configured to: receive digital image data from the image sensor 300; perform image processing operations on the digital image data; and output a processed image. An example embodiment of the ISP 302 will be discussed in greater detail below with regard to
The ISP 302 is also configured to execute a program and control the electronic imaging system. The program code to be executed by the ISP 302 may be stored in the memory 308. The memory 308 may also store digital image data acquired by the image sensor and processed by the ISP 302. The memory 308 may be any suitable volatile or non-volatile memory.
The electronic imaging system shown in
The electronic imaging system shown in
Referring to
The CCM unit 220 performs color correction on the white balanced digital image data and outputs color corrected digital image data to a gamma correction unit 230. In one example, during or after color correction, the CCM unit 220 compresses pixel values outside of a color gamut of an output device to within the output color gamut while preserving a hue of the acquired image. The color correction applied by the CCM unit 220 may be linear in the entire gamut space or piece-wise linear in sub-spaces of the gamut space. For example, the color correction may be linear in two-dimensional sub-spaces including the main diagonal of the gamut and the pixel being corrected. Example compression methods will be discussed in more detail below.
Still referring to
The chromatic aberrations unit 240 reduces or eliminates chromatic aberration in the gamma corrected digital image data and outputs the resultant digital image data for storage in the memory 308 and/or display by the display 304.
Still referring to
Example embodiments provide methods and apparatuses (e.g., image processing apparatuses, digital still cameras, digital imaging systems, electronic devices, etc.) capable of selectively compressing one or more pixel component values to within a valid range supported by the output system/device. For convenience, a range of [0.0 to 1.0] is used as an example valid range. In this case, the maximum value of 1.0 corresponds to 255 in the case of a 3×8-bit pixel.
To reduce information loss due to clipping, example embodiments compress pixel component values outside the output gamut to within the bounds of the output gamut. As mentioned above, the output gamut refers to the gamut of the output or target device (e.g., a display device). When compressed, the colors in this predefined, given or desired region may be distorted, but the information is retained, rather than lost as in the conventional art.
Methods described herein may be performed at the color correction matrix (CCM) unit 220 shown in
Referring to
As discussed above, the mapping between an input pixel vector and a particular color of a pixel is related to the physical properties of the image sensor or other input device. In one example, the physical properties of the input device provide the input device with a particular color gamut. A particular color gamut includes a finite number of possible colors, and the available gamut of the input device has boundaries or limits on the available colors within the input color space. In
Still referring to
A property of a linear transformation is that a straight line is transformed to another straight line. According to example embodiments, the target luminance yt is preserved in both the input color space and the tristimulus color space for the output device (referred to as the output color space). This is seen when comparing
In more detail with regard to
l2 = b2 − (yt, yt, yt)    (1)
The output gamut also has boundaries or limits within the output color space. In
In
Referring to
max(P2) = max(P2,x, P2,y, P2,z)    (2)
The boundary of the output gamut is 1.0 in the output space. The gamut threshold value TH_VAL is a parameter, which is indicative of the impact of the algorithm on input pixel information. For example, the smaller the gamut threshold value TH_VAL, the larger the impact of the compression algorithm on the input color space. The gamut threshold value TH_VAL may be set by a user as desired.
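The threshold test of Equation (2) can be sketched as follows. This is an illustrative sketch only; the function name and the particular TH_VAL value are hypothetical, and in practice the threshold is a user-tunable parameter as described above.

```python
TH_VAL = 0.9  # illustrative gamut threshold; a user-tunable parameter

def needs_compression(p2, th_val=TH_VAL):
    """Return True when the maximal component of the color-corrected
    pixel vector P2 exceeds the gamut threshold (Equation (2))."""
    return max(p2) > th_val

needs_compression((0.8, 0.5, 0.3))   # False: pixel passes through unchanged
needs_compression((1.2, 0.4, 0.1))   # True: pixel will be compressed
```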
Still referring to
Returning to S400, if max(P2) is greater than the gamut threshold value TH_VAL, then at S402 the CCM unit 220 calculates the target luminance yt for the input pixel PIX1 from the AWB unit 210. The target luminance yt serves as the target vector for the compression algorithms described herein. And, as mentioned above, the target luminance yt comprises three components (yt, yt, yt).
According to at least some example embodiments, there is a tradeoff with regard to whether to preserve luminance or saturation of pixel color. The CCM unit 220 calculates the target luminance yt to balance this tradeoff. The target luminance yt is located on the gray axis of the input gamut, an example of which is shown in
According to at least some example embodiments, the CCM unit 220 calculates the target luminance yt in the YUV plane. Accordingly, the CCM unit 220 initially calculates YUV values for the input pixel vector P1.
In one example, the CCM unit 220 calculates the target luminance yt based on the luminance y1 of the input pixel PIX1, a weighting constant α assigned to preserve luminance or brightness, and the gamut threshold value TH_VAL.
In a more specific example, the CCM unit 220 calculates the target luminance yt according to Equation (3) shown below.
yt = min(α·y1, TH_VAL)    (3)
In Equation (3), α is a weighting constant between about 0 and about 1.0, which represents the weight given to preserve brightness or luminance. The weighting constant α may be set by a user as desired. Luminance y1 refers to the original luminance of the input pixel PIX1. Thus, αy1 represents a weighted luminance metric or weighted luminance value for the input pixel PIX1. In Equation (3), the target luminance yt is equal to the minimum value among αy1 and the gamut threshold value TH_VAL.
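As a sketch, Equation (3) may be computed as follows. The function name and the particular values of α and TH_VAL are illustrative assumptions only; both are user-settable parameters as described above.

```python
ALPHA = 0.7    # illustrative weighting constant in [0, 1.0]
TH_VAL = 0.9   # illustrative gamut threshold

def target_luminance_eq3(y1, alpha=ALPHA, th_val=TH_VAL):
    """Equation (3): yt = min(alpha * y1, TH_VAL).

    y1 is the original luminance of the input pixel; alpha * y1 is the
    weighted luminance metric."""
    return min(alpha * y1, th_val)

target_luminance_eq3(1.0)   # 0.7: weighted luminance below the threshold
target_luminance_eq3(2.0)   # 0.9: clamped to TH_VAL
```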
In another example, the target luminance yt is calculated based on the weighting constant α, the gamut threshold value TH_VAL, and a distance dy.
Referring again to
In a more specific example, the target luminance yt is calculated according to Equation (4) shown below.
yt = min(max(y − α·dy, 0), TH_VAL)    (4)
In Equation (4), α is the above-described weighting constant and the distance dy is calculated according to Equation (5) shown below.
dy = √(u^2 + ν^2)    (5)
In Equation (5), u and ν are the chrominance components (coordinates) of the YUV pixel vector P3 in the YUV color space.
As shown by Equation (4), the CCM unit 220 calculates the target luminance yt by taking the maximum value from among (y−α·dy) and 0, and then taking the minimum value from among the gamut threshold value TH_VAL and the above-mentioned maximum value.
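Equations (4) and (5) may be sketched as follows. The function name and default parameter values are illustrative assumptions; u and ν are the chrominance coordinates of the YUV pixel vector as defined above.

```python
import math

def target_luminance_eq4(y, u, v, alpha=0.7, th_val=0.9):
    """Equations (4) and (5):
    dy = sqrt(u^2 + v^2);  yt = min(max(y - alpha*dy, 0), TH_VAL)."""
    dy = math.sqrt(u * u + v * v)   # chroma distance from the gray axis
    return min(max(y - alpha * dy, 0.0), th_val)

target_luminance_eq4(0.5, 0.0, 0.0)   # 0.5: gray pixel, luminance preserved
target_luminance_eq4(0.5, 3.0, 4.0)   # 0.0: highly chromatic, clamped at zero
```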
In yet another example, yt is set to 0. In this example, more weight is given to preserving the saturation of the pixel color at the expense of the luminance.
Referring back to
As shown in
Because the saturation pixel vector b1, the input pixel vector P1 and the target luminance yt lie on the same line l1, the saturation pixel vector b1 is given by Equation (6) shown below.
b1 = A·(P1 − yt) + yt    (6)
Moreover, max(b1)=1.0, and thus, simple substitution obtains Equation (7) shown below.
1.0 = max(b1) = max(A·(P1 − yt) + yt) = A·(max(P1) − yt) + yt    (7)
Given Equation (7), the slope A of the line l1 can be calculated according to Equation (8) shown below because max(P1) and yt are known.

A = (1.0 − yt)/(max(P1) − yt)    (8)
Once having calculated the slope A, the saturation pixel vector b1 for the input pixel PIX1 can be calculated according to Equation (6) shown above.
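Equations (6) through (8) may be sketched as follows. The function name and input values are illustrative assumptions; yt acts componentwise as the vector (yt, yt, yt), consistent with Equation (6).

```python
def input_saturation_vector(p1, yt):
    """Equations (6)-(8): extend the line through (yt, yt, yt) and P1
    until its maximal component reaches the input-gamut boundary 1.0."""
    a = (1.0 - yt) / (max(p1) - yt)          # slope A from Equation (8)
    return tuple(a * (c - yt) + yt for c in p1)

b1 = input_saturation_vector((0.8, 0.5, 0.3), 0.2)
# max(b1) == 1.0: b1 lies on the boundary of the input gamut
```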
Returning to
b2 = M·b1    (9)
In Equation (9), M is a color correction matrix for the desired output device. The color correction matrix M and the input saturation pixel vector b1 are combined using matrix-vector multiplication.
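Equation (9) may be sketched as a plain 3×3 matrix-vector product. The function name and the matrix entries below are illustrative assumptions only; a matrix whose rows each sum to 1.0 preserves gray vectors, which is consistent with the white balancing assumed above.

```python
def apply_ccm(m, b1):
    """Equation (9): b2 = M * b1, a 3x3 matrix-vector product."""
    return tuple(sum(m[i][j] * b1[j] for j in range(3)) for i in range(3))

# Illustrative color correction matrix; each row sums to 1.0 so that
# gray vectors (equal components) are preserved.
M = [[ 1.5, -0.3, -0.2],
     [-0.2,  1.4, -0.2],
     [-0.1, -0.4,  1.5]]

b2 = apply_ccm(M, (1.0, 0.6, 0.33))
gray = apply_ccm(M, (0.5, 0.5, 0.5))  # (0.5, 0.5, 0.5): gray is preserved
```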
Still referring to
As shown in
As shown in
In a more specific example, vector m2 may be given by Equation (10) shown below.
m2 = B·(b2 − yt) + yt    (10)
Because the maximal component of m2 (denoted max(m2)) is 1.0, simple substitution provides Equation (11) shown below.
1.0 = max(m2) = max(B·(b2 − yt) + yt) = B·(max(b2) − yt) + yt    (11)
Given Equation (11), the slope B of the line l2 can be calculated according to Equation (12) shown below because max(b2) and yt are known.

B = (1.0 − yt)/(max(b2) − yt)    (12)
Once having calculated the slope B, vector m2 can be calculated according to Equation (10) shown above.
As shown in
Although the CCM unit 220 may calculate tristimulus vector t2, this vector need not be calculated. To perform the compression methods discussed herein, only the maximal component of vector t2 need be known. And, this maximal value is the same as the gamut threshold value TH_VAL, which is compared with max(P2) at S400.
Still referring to
The compressing of the output pixel vector P2 will be discussed in more detail with regard to the flow chart shown in
The graph shown in
Referring to
The compression factor ƒ(x) is calculated by the CCM unit 220 at S704.
In this example, with regard to
Referring to
Also in
Referring to
Said another way, line L52 represents the output of the CCM unit 220 after applying a gamut compression method described herein (e.g., with regard to
In the example shown in
In this example, h and w are given by Equations (16) and (17), respectively.
h = max(m′2) − (t2 − yt) = 1.0 − t2 = 1.0 − TH_VAL    (16)

w = max(b′2) − (t2 − yt) = max(b2) − t2 = max(b2) − TH_VAL    (17)
Still referring to
Line L53 in
According to at least this example embodiment, the CCM unit 220 generates a family of factor functions for compressing the output pixel vector x into an output signal y with a lower dynamic range, depending on the required compression ratio given by the particular w and h values shown in
Referring to
According to at least one example embodiment, a factor function, such as the factor function corresponding to line L53, is generated by decreasing the slope by a factor of 2 at each sample point and distributing the y-axis sample points according to the logarithmic distribution given by Equations (18) and (19) shown below.
y0 = 0    (18)

yi = 1 − 2^(−i)    (19)
Equivalently, yi may be expressed recursively by Equation (20) shown below.
yi = yi−1 + Δyi−1    (20)
And, Δyi is given by Equation (21) shown below.
Δyi = 2^(−(i+1))    (21)
Further, the change Δxi in the x-value of the sample points is 1, as shown below in Equation (22).

Δxi = 1    (22)
In the example shown in
Because all slopes in this example are powers of 2, hardware computation may be more efficient without using a divider.
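The factor function described by Equations (18) through (22) may be sketched as follows for the intervals with power-of-two slopes. This is an illustrative sketch only (the function name is hypothetical, and the final divider-based interval discussed below is omitted); on interval [i, i+1), the curve starts at yi = 1 − 2^(−i) and rises with slope 2^(−(i+1)).

```python
def compress_pow2(x):
    """Piecewise-linear compression with power-of-two slopes:
    sample points y_i = 1 - 2**-i (Equation (19)), slope halving at
    each integer sample point (Equations (20)-(22)).  Because every
    slope is a power of two, hardware can evaluate this with shifts
    instead of a divider.  Assumes x >= 0."""
    i = int(x)                      # interval index: i = floor(x)
    y_i = 1.0 - 2.0 ** -i           # y-value at the left sample point
    slope = 2.0 ** -(i + 1)         # slope on interval [i, i+1)
    return y_i + (x - i) * slope

compress_pow2(0.0)   # 0.0
compress_pow2(1.0)   # 0.5
compress_pow2(2.0)   # 0.75
```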
Still referring to
is an integer, the Nth interval ΔxN is less than or equal to 1 (e.g., N=3 in
the slope
is calculated using a divider. However, the average number of divisions required by this algorithm is still less than the number of compressed signals because not all signals fall into the interval
According to at least some example embodiments, the sample points may be distributed equally or in logarithmic inverse order along the y-axis. However, the logarithmic distribution may better fit gamut mapping applications.
The compression function g′(x) for a specific ratio may be given by Equation (23) shown below.
In Equation (23), i is determined as the integer part of x: i = ⌊x⌋.
The functions shown in
Returning to
PMAP = f(P′2)·P′2 + yt    (24)

According to Equation (24), the compressed output pixel vector PMAP is calculated based on the target luminance yt, the shifted output pixel vector P′2 and a compression factor f(P′2), which is calculated as a function of the shifted output pixel vector P′2. In this example, the compressed output pixel vector is determined according to the system with a target luminance yt located at the origin (e.g., as shown in
Although example embodiments of compression algorithms are described herein with regard to color correction and/or gamut mapping, it will be understood that compression algorithms described herein may be implemented in connection with other applications. For example, methods and apparatuses described herein may be applicable to any signal compression application in which a family of compression curves must be applied to signals with a minimum amount of calculation.
The foregoing description of example embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular example embodiment are generally not limited to that particular example embodiment, but where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.