1. Technical Field
This disclosure relates generally to additive color systems, and, more specifically, to improving the gamut representable by an additive color system.
2. Description of the Related Art
The human eye perceives color through three types of cone cells within the eye's retina. The first type (referred to as S type) is stimulated by light having a wavelength of 420-440 nm corresponding to the color blue. The second type (referred to as M type) is stimulated by light having a wavelength of 534-545 nm corresponding to the color green. The third type (referred to as L type) is stimulated by light having a wavelength of 564-580 nm corresponding to the color red. When a particular color of light enters a person's eye, the color stimulates each cone-cell type differently depending upon each type's sensitivity to that color's wavelength. The brain then interprets the different reactions as the particular color. For example, if the color yellow is being viewed, the cone cells favoring green and red will be stimulated more than the cone cells favoring blue. The stronger reaction of the cone cells favoring green and red and the weaker reaction of the cells favoring blue will cause the brain to conclude that the color is, in fact, yellow.
Modern computing devices attempt to create the perception of different colors by using additive color systems in which different primary colors (e.g., red, green, and blue; cyan, magenta, and yellow; etc.) are combined to stimulate the cone-cell types in the same manner as if the actual color were viewed. Computing devices typically vary the intensities of each primary to create the appropriate reactions for a particular color. These intensities are often encoded as a set of values referred to as a pixel. An image can be represented as a combination of multiple pixels.
The present disclosure relates to devices that employ additive color systems. In one embodiment, a device (such as a camera) that supports a color gamut that is larger than the gamut of a color space may be configured to represent colors outside the color space's gamut by encoding pixel data using an extended range format (rather than using only colors within the color space's gamut). In one embodiment, the device may represent colors that fall outside of the gamut by using color component values that are less than 0.0 or greater than 1.0, and may represent colors that fall within the color space's gamut by using color component values within the range of 0.0 to 1.0.
In one embodiment, a device (such as a display) that implements the color space but has a larger gamut may receive pixel data using this extended range format. Instead of producing colors limited to the color space's gamut, the device may produce colors within the larger gamut of the device. In some embodiments, the device may also be configured to receive pixel data that does not use the extended range format, and still produce colors for the color space.
In some embodiments, devices may also be configured to apply a gamma correction function on pixel data represented in the extended range format even if the pixel data includes negative color component values.
This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
“Comprising.” This term is open-ended. As used in the appended claims, this term does not foreclose additional structure or steps. Consider a claim that recites: “An apparatus comprising one or more processor units . . . .” Such a claim does not foreclose the apparatus from including additional components (e.g., a network interface unit, graphics circuitry, etc.).
“Configured To.” Various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks. In such contexts, “configured to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs that task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. §112, sixth paragraph, for that unit/circuit/component. Additionally, “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
“First,” “Second,” etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). For example, in a block of data having multiple portions, the terms “first” and “second” portions can be used to refer to any two portions. In other words, the “first” and “second” portions are not limited to an initial two portions.
The present disclosure begins by describing embodiments of an extended range color space with respect to
Turning now to
In general, a color space is a model used by an additive color system for representing colors numerically in terms of a set of primary colors. One common set of primary colors is the set red, green, and blue (represented by the axes in diagram 100). As noted above, by varying the intensities of these colors, a range of different colors can be produced. These intensities are typically defined numerically as color component values within a range of 0.0 to 1.0, where a color component value of 0.0 represents no intensity for (or the lack of) a particular primary color and 1.0 represents the maximum intensity for the color. For example, black is producible when none of the primaries has any intensity (shown as the coordinate (0,0,0)), and white is producible when each of the primary colors has a maximum intensity (shown as the coordinate (1,1,1)). The range of colors that can be produced by varying the color component values is referred to as the gamut of a color space and may be considered as a multidimensional shape (e.g., shown in diagram 100 as a cube). In other words, a color space can represent any color within the shape, but no colors outside of the shape.
The particular gamut of a color space is a function of the selected primary colors, the “purity” of those colors (i.e., whether a primary color is a composite of a narrower band of light frequencies (thus being purer) or a wider frequency band (being less pure)), and the number of primaries. Accordingly, color space 110 is limited to producing colors within its cube based on these properties of its primaries.
To ensure that colors are consistent from one device to the next, various standardized color spaces have been developed that specify predefined primary colors. Pixel data is then conveyed from one device to another in terms of these predefined primaries. For example, the color space sRGB specifies the primaries red, green, and blue, and further defines red as having x, y, and z chromaticity values of 0.6400, 0.3300, and 0.0300 (as defined in terms of the CIE 1931 XYZ color space); green as having x, y, and z chromaticity values of 0.3000, 0.6000, and 0.1000; and blue as having x, y, and z chromaticity values of 0.1500, 0.0600, and 0.7900. If a device (such as a cathode ray tube (CRT) display) supports sRGB and receives pixel data encoded according to sRGB, the device may map the pixel values to voltages to produce colors consistent with sRGB. For example, if a pixel has a color component value of 1.0 for green and 0.0 for the other colors, the display may be configured to produce a color of green corresponding to the sRGB green primary.
Color space 110, in one embodiment, is a standardized color space, which may be supported by various devices. Color space 110 may be any of various color spaces such as sRGB, Adobe RGB (ARGB), cyan magenta yellow key (CMYK), YCbCr, CIE 1931 XYZ, etc. In many instances, a device that supports color space 110 may be capable of having a gamut that is greater than the gamut of color space 110 (e.g., due to being able to produce purer primary colors). However, without the benefit of the present disclosure, the extra gamut of the device may go unused because of the limitations imposed by color space 110.
In various embodiments, a device may be configured to express colors that fall outside of the gamut of color space 110 by using color component values outside the range of 0.0 to 1.0 (i.e., values that fall within an extended range beyond 0.0-1.0). Representing component values within this extended range creates the effect of another color space 120 that has a larger gamut (i.e., an “extended range color space”). In some embodiments, devices that support such an extended range color space 120 may still produce the same colors producible by color space 110 for values within the range 0.0-1.0, but may also produce colors outside of the gamut of color space 110 for values outside of that range. For example, in one embodiment, if color space 110 is sRGB and a pixel specifies values of 1.0 for green and 0.0 for blue and red, a device may produce the sRGB primary color for green by polluting a purer form of green with red and blue components. However, if the pixel instead has negative values for blue and red, the device may produce the purer form of green (which falls outside of the gamut for sRGB) by not polluting it with red and blue components.
In the illustrated embodiment, color component values of extended range color space 120 vary within the range of −0.75 to 1.25. In other embodiments, different boundaries may be used. These boundaries may span the same interval of 2.00 (e.g., −1.00 to 1.00) or different intervals (e.g., −2.00 to 2.00). In some embodiments, different respective ranges may be used for different color component values, and not all ranges may be extended ranges (in other words, one or more color component values may vary only within the range of 0.0-1.0). Although diagram 100 has axes corresponding to an RGB-type color space, extended range color space 120 may be applicable to any suitable color space; accordingly, color component values may also be expressed in terms of chromaticity, luminance, etc.
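By way of illustration, the following minimal sketch shows how component values outside 0.0-1.0 can denote out-of-gamut colors (the helper names and the example purer-green triple are hypothetical; the −0.75 to 1.25 boundaries follow the illustrated embodiment):

```python
# Sketch: component values outside 0.0-1.0 denote colors outside a base gamut.
EXT_MIN, EXT_MAX = -0.75, 1.25  # extended range boundaries (illustrated embodiment)

def in_base_gamut(pixel):
    """True if every component lies within the base 0.0-1.0 range."""
    return all(0.0 <= c <= 1.0 for c in pixel)

def in_extended_range(pixel):
    """True if every component lies within the extended range."""
    return all(EXT_MIN <= c <= EXT_MAX for c in pixel)

srgb_green = (0.0, 1.0, 0.0)        # the base color space's green primary
purer_green = (-0.05, 1.1, -0.02)   # hypothetical wider-gamut green

assert in_base_gamut(srgb_green) and not in_base_gamut(purer_green)
assert in_extended_range(purer_green)
```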
Turning now to
As shown, color space 110 occupies only a subset of block 152 as only a subset of the visible color spectrum may be representable using color space 110. The corners of the triangle represent the particular primary colors of color space 110. The area within the triangle represents the possible colors producible by combining the primaries. The corners of the triangle do not touch the outline of block 152 as they are not the purest possible colors.
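By way of illustration, whether a chromaticity point lies within such a gamut triangle can be checked with a standard sign-based point-in-triangle test; the sketch below (illustrative only, using the sRGB primary chromaticities recited above, with hypothetical function names) is one such check:

```python
# Sketch: test whether an (x, y) chromaticity lies inside the triangle
# spanned by the sRGB primaries.
RED   = (0.6400, 0.3300)
GREEN = (0.3000, 0.6000)
BLUE  = (0.1500, 0.0600)

def _cross(o, a, b):
    """Signed cross product of vectors (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_srgb_triangle(p):
    """True if chromaticity point p falls within the sRGB gamut triangle."""
    d1 = _cross(RED, GREEN, p)
    d2 = _cross(GREEN, BLUE, p)
    d3 = _cross(BLUE, RED, p)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)

print(in_srgb_triangle((0.3127, 0.3290)))  # D65 white point: True
print(in_srgb_triangle((0.08, 0.85)))      # green purer than sRGB's: False
```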
In the illustrated embodiment, color space 120 has a larger gamut that encompasses the gamut of space 110, as indicated by the triangle for space 120 enclosing the triangle for space 110. As shown, in some embodiments, color space 120 may permit a sufficient range of color component values to represent nonexistent colors (i.e., points outside the visible spectrum of block 152).
Turning now to
In the illustrated embodiment, data for a given pixel is arranged according to format 210 into a 64-bit block with the bits being labeled from 0-63. The 64-bit block includes a first portion of 16 bits (corresponding to the bits labeled 0-15) for the blue color component value, a second portion of 16 bits (corresponding to the bits labeled 16-31) for the green color component value, a third portion of 16 bits (corresponding to the bits labeled 32-47) for the red color component value, and a final fourth portion of 16 bits (corresponding to the bits labeled 48-63) for an alpha value (as used herein, the term “alpha” refers to an amount of transparency (or opacity) for a given pixel; in one embodiment, an alpha value may vary only within the range of 0.0-1.0). Each portion includes six initial bits of unused padding (which may be all zero bits, in one embodiment) and ten bits indicative of the color component value.
In various embodiments, pixel data may be arranged differently than shown in format 210 (on this note, pixel data may also be arranged differently than shown in FIGS. 2B-2F depicting other formats). Accordingly, each color's portion may be arranged in a different order—e.g., in one embodiment, the alpha-value portion may be the initial portion rather than the last portion. More or fewer padding bits may be present in each portion. Portions may also be larger or smaller than 16 bits. Bits may also be arranged according to little endianness or big endianness.
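By way of illustration, the 64-bit layout of format 210 might be packed and unpacked as sketched below (reading the “six initial bits of unused padding” as the six lowest-numbered bits of each 16-bit portion; the function names are hypothetical):

```python
# Sketch: four 16-bit portions (blue, green, red, alpha from bit 0 upward),
# each with six zero padding bits followed by a 10-bit value.
def pack_format_210(blue10, green10, red10, alpha10):
    """Pack four 10-bit values into the 64-bit block of format 210."""
    block = 0
    for i, value in enumerate((blue10, green10, red10, alpha10)):
        assert 0 <= value < 1024, "each component is a 10-bit value"
        block |= (value << 6) << (16 * i)  # bits 0-5 of each portion: padding
    return block

def unpack_format_210(block64):
    """Unpack a 64-bit block into (blue, green, red, alpha) 10-bit values."""
    return tuple((block64 >> (16 * i + 6)) & 0x3FF for i in range(4))

packed = pack_format_210(384, 1023, 0, 640)
assert unpack_format_210(packed) == (384, 1023, 0, 640)
```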
Bits for a given color component (e.g., the 10-bit portions in format 210) may be mapped to values in an extended range in any suitable manner such as described next with
Turning now to
In various embodiments, bit mappings other than mapping 212 may be used. In some embodiments, a device may be configured to support a programmable mapping that can be adjusted by varying one or more parameters associated with the mapping. Accordingly, in one embodiment, the particular boundaries corresponding to the maximum and minimum decimal values may be programmable (e.g., the values −0.75 and 1.25 may be changeable by a user). In another embodiment, the interval may be fixed, but a particular offset value corresponding to, for example, 0.0 may be changeable. For example, selecting the offset value 512 (instead of 384) may cause the mapping to represent the range of −1.0 to 1.0 if the interval is fixed at 2.0. In some embodiments, the interval may also be adjustable (e.g., changed from 2.0 to 4.0).
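By way of illustration, such a programmable mapping might be sketched as follows (the default offset 384 and interval 2.0 reproduce the illustrated −0.75 to 1.25 range; the function name and parameterization are hypothetical):

```python
# Sketch: a 10-bit unsigned code mapped linearly onto an extended range.
def code_to_value(code10, offset=384, interval=2.0, num_codes=1024):
    """Map a 10-bit unsigned code to a decimal color component value."""
    assert 0 <= code10 < num_codes
    step = interval / num_codes
    return (code10 - offset) * step

print(code_to_value(0))                 # -0.75 (minimum of the illustrated range)
print(code_to_value(384))               #  0.0 (the programmable offset)
print(code_to_value(1023))              #  1.248..., just under the 1.25 boundary
print(code_to_value(0, offset=512))     # -1.0 (the alternative offset above)
```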
Such a mapping may also be applicable to other pixel formats such as described next.
Turning now to
Turning now to
Turning now to
Turning now to
Various ones of formats 210-250 may be usable in an image pipeline such as described next.
Turning now to
Input stages 310, in one embodiment, process source data 302 into pixel data usable by other stages in pipeline 300. In some embodiments, input stages 310 may be performed by devices such as cameras, scanners, or other image-capturing devices. In some embodiments, input stages 310 may be performed by a graphics processing unit (GPU) to render pixel data for display. Accordingly, source data 302 may correspond to voltages produced by an image sensor responsive to captured light, instructions for a rendering engine, etc. In some embodiments, stages 310 may produce and/or operate on pixel data encoded in the extended range color space 120 described above. Various input stages 310 are described in further detail with respect to
Processing stages 320, in one embodiment, are intermediary stages that operate on image information once it is in pixel data form. In various embodiments, stages 320 may be implemented by hardware dedicated to performing various pixel manipulation operations and/or software executing on a processor. In some embodiments, stages 320 may be performed by hardware or software that also implements ones of stages 310 and/or stages 330. In various embodiments, stages 320 may receive and/or operate on pixel data encoded in the extended range color space 120 described above. Various processing stages 320 are described in further detail with respect to
Output stages 330, in one embodiment, process pixel data into an output 304. In some embodiments, output stages 330 may be performed by a display (such as a television set, a computer screen, or a cinema screen), a printer, etc. In some embodiments, output stages 330 may receive pixel data encoded in the extended range color space 120 and produce a corresponding output based on the pixel data. Various output stages 330 are described in further detail with respect to
Turning now to
ADC stage 410, in one embodiment, represents operations that may be performed to produce a digital form of data 402. In one embodiment, stage 410 may include capturing voltages produced by an image sensor responsive to received light. In some embodiments, this digital form may undergo further processing in additional stages until it is in a pixel form (shown as device formatted pixel data 412) corresponding to the color space of the device. In one embodiment, this color space may not be a standardized color space, but rather one that is dictated by properties of the device.
Color space conversion 420, in one embodiment, converts color component values 412 into color component values 422 encoded in a color space, which may be supported by subsequent stages 310-330. In one embodiment, this conversion may be performed using a transfer function that includes one or more matrix multiplications. Two non-limiting examples of such transfer functions are depicted below.
The first transfer function converts color component values encoded in sRGB color space to color component values encoded in YCbCr601 color space (a standard used in standard definition televisions).
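For reference, the standard full-range BT.601 RGB-to-YCbCr matrix, which such a transfer function may resemble (the coefficients depicted in the disclosure may differ), is:

$$
\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} =
\begin{bmatrix}
0.299 & 0.587 & 0.114 \\
-0.169 & -0.331 & 0.500 \\
0.500 & -0.419 & -0.081
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
$$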
The second transfer function converts color component values encoded in sRGB color space to color component values encoded in YCbCr709 color space (a standard used in HDTV).
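Likewise, the standard full-range BT.709 matrix, which the second transfer function may resemble, is:

$$
\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} =
\begin{bmatrix}
0.2126 & 0.7152 & 0.0722 \\
-0.1146 & -0.3854 & 0.5000 \\
0.5000 & -0.4542 & -0.0458
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
$$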
In some embodiments, the converted pixel data 422 produced in stage 420 may be encoded in the extended range color space 120 described above.
Gamma correction 430, in one embodiment, corrects nonlinearity in pixel data 422. In some instances, an input device capturing light (such as the one producing data 402) may produce a non-linear change in output in response to a linear increase in the intensity of the light. Still further, the device may have different sensitivities for particular frequencies of light. These issues can cause nonlinearity to be present in pixel data 422. In many instances, gamma correction may account for these issues. In various embodiments, performance of stage 430 may include applying a gamma correction function such as described below with respect to
Turning now to
Graphics pipeline stages 450, in one embodiment, interpret commands 452 (which, in the illustrated embodiment, are 3D API commands such as OPENGL, DIRECT 3D, etc.—in other embodiments, a different form of input may be used) to perform various operations such as primitive generation, scaling, rotating, translating, clipping, texturing, lighting, shading, etc.
Rasterization stage 460, in one embodiment, is a latter stage in a graphics pipeline in which data generated by preceding stages is processed into pixel data 462 corresponding to a two-dimensional image space. Pixel data 462 produced during stage 460 may then be stored in a frame buffer until pulled for subsequent usage (e.g., display). In some embodiments, the pixel data 462 produced during stage 460 may be encoded using extended range color space 120.
Turning now to
Turning now to
Gamma correction stage 610, in one embodiment, corrects for nonlinearity that may be subsequently introduced into pixel data 602 once it becomes output 304. Similar to stage 430, in many instances, devices that produce an output (such as various displays) may not produce a linear increase in light intensity of a primary color in response to a linear increase in the color component value. Gamma correction may be performed beforehand to account for this non-linearity. In various embodiments, performance of stage 610 may include applying a gamma correction function such as described below with respect to
Color space conversion stage 620, in one embodiment, converts corrected pixel data 612 into pixel data 622 encoded in the color space of the device producing output 304. In various embodiments, this conversion may include applying a transfer function similar to the ones described above. In some embodiments, pixel data 612 encoded in extended range color space 120 may be converted in stage 620 into a non-extended range color space (i.e., one having the range of 0.0-1.0) as pixel data 622. In some embodiments, pixel data 622 may be processed in additional stages 330 before proceeding to DAC stage 630.
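By way of illustration, the simplest such conversion clamps each component into the 0.0-1.0 range, as sketched below (illustrative only; an actual stage 620 might instead apply a gamut-mapping transfer function):

```python
# Sketch: reduce extended range pixel data to a non-extended range space
# by clamping each component to the base 0.0-1.0 range.
def clamp_to_base_range(pixel):
    """Clamp extended range components into the 0.0-1.0 base range."""
    return tuple(min(1.0, max(0.0, c)) for c in pixel)

print(clamp_to_base_range((-0.05, 1.1, 0.3)))  # (0.0, 1.0, 0.3)
```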
DAC stage 630, in one embodiment, generates analog signals (for output 304) corresponding to the color component values of pixel data 622. Accordingly, in some embodiments, stage 630 may include mapping the color component values to a corresponding range of voltages producible by the device. For example, in the case of CRT displays, the voltages produced in stage 630, in one embodiment, may be those applied to the phosphor in the display's screen to produce colors for an image.
Turning now to
In step 710, a device receives a first set of color component values corresponding to a first color space. In some embodiments, step 710 may include the device receiving the first set of color component values via a transmission from another device, reading the first set of color component values from memory, creating the first set of color component values from source data, etc. In one embodiment, the first color space may be a non-extended range color space. In another embodiment, the first color space may be an extended range color space that permits a color component value to vary within a range having a first portion less than 0.0, a second portion between 0.0 and 1.0, and a third portion greater than 1.0. As discussed above, in some embodiments, this first portion (e.g., from −0.75 to 0.0, in one embodiment) is larger than the third portion (e.g., from 1.0 to 1.25, in one embodiment).
In some embodiments, step 710 may include receiving the set of color component values as a set of bits representing an unsigned value such as discussed with respect to
In step 720, the device converts the first set of color component values to a second set of color component values corresponding to a second color space. In various embodiments, step 720 may include applying a transfer function that includes one or more matrix multiplications such as described above. As with step 710, in one embodiment, the second color space is an extended range color space; in another embodiment, the second color space is a non-extended range color space. In some embodiments, performance of step 720 may correspond with any one of color space conversion stages 420, 510, or 620 described above.
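By way of illustration, steps 710 and 720 might be sketched as follows (the matrix shown is the standard full-range BT.709 conversion noted above; the function names are hypothetical, and extended range inputs pass through the same arithmetic unchanged):

```python
# Sketch of method 700: receive a first set of color component values
# (step 710) and convert them with a 3x3 transfer matrix (step 720).
BT709 = ((0.2126,  0.7152,  0.0722),
         (-0.1146, -0.3854,  0.5000),
         (0.5000, -0.4542, -0.0458))

def convert(pixel, matrix=BT709):
    """Apply a 3x3 transfer matrix to an (R, G, B) component triple."""
    return tuple(sum(m * c for m, c in zip(row, pixel)) for row in matrix)

print(convert((0.0, 1.0, 0.0)))    # in-gamut green
print(convert((-0.05, 1.1, 0.0)))  # extended range (wider-gamut) green
```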
Turning now to
As shown, the above function is a piecewise function that specifies a linear function if PixelIn is within the range of −z to +z and exponential functions if PixelIn is outside of that range. In various embodiments, the values M and z may be determined based on the non-linearity characteristics of a given device and may be any suitable values; the value n is an offset value and may be determined based on M and z. In various embodiments, the value y (specified in the exponential functions) is the inverse of a gamma value γ, which, in some embodiments, is 1.8, 2.2, etc. Accordingly, function 800 may be applied to both negative color component values and positive color component values, including those greater than 1.0.
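Assembling the pieces recited here and in method 900 below, function 800 may take a piecewise form such as the following (a reconstruction, with $y = 1/\gamma$; the figure's exact form is not reproduced here):

$$
f(x) =
\begin{cases}
-\left[(-x)^{y} + n\right] & x < -z \\
M\,x & -z \le x \le z \\
x^{y} + n & x > z
\end{cases}
$$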
Turning now to
Method 900 begins in step 910 with a device receiving pixel data including one or more color component values. As noted above, these values may be positive or negative values. In step 920, the device determines whether the color component values fall outside of the range from −z to +z. If a given value is within the range (e.g., a value between −z and 0.0), method 900 proceeds to step 930. Otherwise, method 900 proceeds to step 940. In step 930, the device applies a linear gamma correction function such as the function o = M*x described above. In step 940, the device applies an exponential function such as the functions o = −[(−x)^y + n] or o = x^y + n described above.
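By way of illustration, method 900 might be sketched as follows (the default values for M, z, and γ are illustrative, loosely modeled on sRGB's constants; n is derived from M and z so that the linear and exponential pieces meet at ±z, consistent with the description of function 800 above):

```python
# Sketch of method 900 (steps 910-940): apply gamma correction to each
# component value, including negative values and values above 1.0.
def gamma_correct(x, M=12.92, z=0.0031308, gamma=2.4):
    """Piecewise gamma correction for one extended range component value."""
    y = 1.0 / gamma
    n = M * z - z ** y          # offset making the pieces continuous at x = z
    if -z <= x <= z:
        return M * x            # step 930: linear segment near zero
    if x > z:
        return x ** y + n       # step 940: exponential segment
    return -((-x) ** y + n)     # step 940: mirrored for negative values

print(gamma_correct(0.5))    # positive, in-range value
print(gamma_correct(-0.2))   # negative (extended range) value
print(gamma_correct(1.1))    # value greater than 1.0
```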
Turning now to
CPU 1010 may implement any instruction set architecture, and may be configured to execute instructions defined in that instruction set architecture. CPU 1010 may employ any microarchitecture, including scalar, superscalar, pipelined, superpipelined, out of order, in order, speculative, non-speculative, etc., or combinations thereof. CPU 1010 may include circuitry to implement microcoding techniques. CPU 1010 may include one or more processing cores each configured to execute instructions. CPU 1010 may include one or more levels of caches, which may employ any size and any configuration (set associative, direct mapped, etc.). In some embodiments, CPU 1010 may execute instructions that facilitate performance of various ones of stages in pipeline 300.
Graphics processing unit (GPU) 1020 may include any suitable graphics processing circuitry. Generally, GPU 1020 may be configured to render objects to be displayed into a frame buffer. GPU 1020 may include one or more graphics processors that may execute graphics software to perform a part or all of the graphics operation, and/or hardware acceleration of certain graphics operations. The amount of hardware acceleration and software implementation may vary from embodiment to embodiment. As discussed above, in some embodiments, GPU 1020 may perform various ones of stages in pipeline 300 such as ones of input stages 310 and/or stages 320.
Peripherals 1030 may include any desired circuitry, depending on the type of system 1000. For example, in one embodiment, system 1000 may be a mobile device (e.g. personal digital assistant (PDA), smart phone, etc.) and the peripherals 1030 may include devices for various types of wireless communication, such as WiFi, Bluetooth, cellular, global positioning system, etc. Peripherals 1030 may also include additional storage, including RAM storage, solid state storage, or disk storage. Peripherals 1030 may include user interface devices such as a display screen, including touch display screens or multitouch display screens, keyboard or other input devices, microphones, speakers, cameras, scanners, printing devices, etc. In some embodiments, peripherals 1030 may perform various ones of stages in pipeline 300 such as input stages 310 and output stages 330.
Image sensor pipeline (ISP) unit 1040 and memory scaler rotater (MSR) unit 1050 are embodiments of various dedicated hardware that may facilitate the performance of various stages in pipeline 300. In one embodiment, ISP unit 1040 is configured to receive image data from a peripheral device (e.g., a camera device), and to process the data into a form that is usable by system 1000. In one embodiment, MSR unit 1050 is configured to perform various image-manipulation operations such as horizontal and vertical scaling, image rotating, color space conversion, dithering, etc. Accordingly, ISP unit 1040 and MSR unit 1050 may perform operations associated with stages 310 and 320.
Interconnect fabric 1060, in one embodiment, is configured to facilitate communications between units 1010-1070. Interconnect fabric 1060 may include any suitable interconnect circuitry such as meshes, network on a chip fabrics, shared buses, point-to-point interconnects, etc. In some embodiments, fabric 1060 may facilitate communication of pixel data having a format such as formats 210-250.
Memory 1070 may be any type of memory, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM (including mobile versions of the SDRAMs such as mDDR3, etc., and/or low power versions of the SDRAMs such as LPDDR2, etc.), RAMBUS DRAM (RDRAM), static RAM (SRAM), etc. One or more memory devices may be coupled onto a circuit board to form memory modules such as single inline memory modules (SIMMs), dual inline memory modules (DIMMs), etc. Alternatively, the devices may be mounted with an integrated circuit implementing system 1000 in a chip-on-chip configuration, a package-on-package configuration, or a multi-chip module configuration. In some embodiments, memory 1070 may store pixel data having a format such as formats 210-250.
Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of this disclosure.
The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.