The present disclosure relates generally to techniques for dithering images using a luminance approach.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Electronic displays are typically configured to output a set number of colors within a color range. In certain cases, a graphical image to be displayed may have a number of colors greater than the number of colors that are capable of being shown by the electronic display. For example, a graphical image may be encoded with a 24-bit color depth (e.g., 8 bits for each of red, green, and blue components of the image), while an electronic display may be configured to provide output images at an 18-bit color depth (e.g., 6 bits for each of red, green, and blue components of the image). Rather than simply discarding least-significant bits, dithering techniques may be used to output a graphical image that appears to be a closer approximation of the original color image. However, the dithering techniques may not approximate the original image as closely as desired.
A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
The present disclosure generally relates to dithering techniques that may be used to display color images on an electronic display. The electronic display may include one or more electronic components, including a power source, pixelation hardware (e.g., light emitting diodes, a liquid crystal display), and circuitry for receiving signals representative of image data to be displayed. The image data may be processed by a processor; in certain embodiments, the processor may be internal to the display, while in other embodiments the processor may be external to the display and included as part of an electronic device, such as a computer workstation or a cell phone.
The processor may use dithering techniques, including luminance-based dithering techniques disclosed herein, to output color images on the electronic display. In luminance-based dithering, the relationship between a luminance and the color of pixels in a source image is determined. The color components of the source image (e.g., red, green, and blue components) are approximated to their nearest hardware color level. The hardware color level is then varied to more closely approximate the luminance of the source image. Color errors that may be introduced by approximating the luminance of the source image are then diffused to adjacent pixels.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
With the foregoing in mind, it may be beneficial to first discuss embodiments of certain display systems that may incorporate the dithering techniques described herein. Turning now to the figures,
Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 10 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, which may include luminance analysis and error diffusion dithering techniques, among others. In some embodiments, the electronic device 10 may apply such image processing techniques to image data stored in a memory of the electronic device 10. Embodiments showing both portable and non-portable embodiments of electronic device 10 will be further discussed below with respect to
As shown in
Before continuing, it should be understood that the system block diagram of the device 10 shown in
In addition to processing various input signals received via the input structure(s) 14, the processor(s) 16 may control the general operation of the device 10. For instance, the processor(s) 16 may provide the processing capability to execute an operating system, programs, user and application interfaces, and any other functions of the electronic device 10. The processor(s) 16 may include one or more microprocessors, such as one or more “general-purpose” microprocessors, one or more special-purpose microprocessors and/or application-specific microprocessors (ASICs), or a combination of such processing components. For example, the processor(s) 16 may include one or more reduced instruction set (e.g., RISC) processors, as well as graphics processors (GPU), video processors, audio processors and/or related chip sets. In certain embodiments, the processor(s) 16 may provide the processing capability to execute source code embodiments capable of employing the image processing techniques described herein.
The instructions or data to be processed by the processor(s) 16 may be stored in a computer-readable medium, such as a memory device 18. The memory device 18 may be provided as a volatile memory, such as random access memory (RAM) or as a non-volatile memory, such as read-only memory (ROM), or as a combination of one or more RAM and ROM devices. The memory 18 may store a variety of information and may be used for various purposes. For example, the memory 18 may store firmware for the electronic device 10, such as a basic input/output system (BIOS), an operating system, various programs, applications, or any other routines that may be executed on the electronic device 10, including user interface functions, processor functions, and so forth. In addition, the memory 18 may be used for buffering or caching during operation of the electronic device 10. For instance, in one embodiment, the memory 18 includes one or more frame buffers for buffering video data as it is being output to the display 28.
In addition to the memory device 18, the electronic device 10 may further include a non-volatile storage 20 for persistent storage of data and/or instructions. The non-volatile storage 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media, or some combination thereof. In accordance with aspects of the present disclosure, image data stored in the non-volatile storage 20 and/or the memory device 18 may be processed by the image processing circuitry 32 prior to being output on a display.
The embodiment illustrated in
The power source 26 of the device 10 may include the capability to power the device 10 in both non-portable and portable settings. The display 28 may be used to display various images generated by device 10, such as a GUI for an operating system, or image data (including still images and video data) processed by the image processing circuitry 32, as will be discussed further below. As mentioned above, the image data may include image data acquired using the imaging device 30 or image data retrieved from the memory 18 and/or non-volatile storage 20. The display 28 may be any suitable type of display, such as a liquid crystal display (LCD), plasma display, or an organic light emitting diode (OLED) display, for example. Additionally, as discussed above, the display 28 may be provided in conjunction with the above-discussed touch-sensitive mechanism (e.g., a touch screen) that may function as part of a control interface for the electronic device 10. The illustrated imaging device(s) 30 may be provided as a digital camera configured to acquire both still images and moving images (e.g., video).
The image processing circuitry 32 may provide for various image processing steps, such as spatial dithering, error diffusion, pixel color-space conversion, luminance determination, luminance optimization, image scaling, and so forth. In some embodiments, the image processing circuitry 32 may include various subcomponents and/or discrete units of logic that collectively form an image processing “pipeline” for performing each of the various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs) or software, or via a combination of hardware and software components. The various image processing operations that may be provided by the image processing circuitry 32 and, particularly those processing operations relating to spatial dithering, error diffusion, pixel color-space conversion, luminance determination, and luminance optimization, will be discussed in greater detail below.
Referring again to the electronic device 10,
Turning to
Having provided some context with regard to various forms that the electronic device 10 may take and now turning to
A pixel group 50 is depicted in greater detail and includes four adjacent pixels 52, 54, 56, and 58. In the depicted embodiment, each pixel of the display device 28 may include three sub-pixels capable of displaying a red (R), a green (G), and a blue (B) color. The human eye is capable of perceiving a particular RGB color combination and translating the combination into a certain color. By varying the individual RGB intensity levels, a number of colors may be displayed by each individual pixel. For example, a pixel having a level of 50% R, 50% G, and 50% B may be perceived as gray, while a pixel having a level of 100% R, 100% G, and 0% B may be perceived as yellow.
The number of colors that a pixel is capable of displaying is dependent on the hardware capabilities of the display 28. For example, a display 28 with a 6-bit color depth for each sub-pixel is capable of producing 64 (2^6) intensity levels for each of the R, G, and B color components. The number of bits per sub-pixel, e.g., 6 bits, is referred to as the pixel depth. At a pixel depth of 6 bits, 262,144 (2^6×2^6×2^6) color combinations are possible, while at a pixel depth of 8 bits, 16,777,216 (2^8×2^8×2^8) color combinations are possible. Although the visual quality of images produced by an 8-bit pixel depth display 28 may be superior to the visual quality of images produced by a display 28 using a 6-bit pixel depth, the cost of the 8-bit display 28 is also higher. Accordingly, it would be beneficial to apply image processing techniques, such as the techniques described herein, that are capable of displaying a source image with improved visual reproduction even when utilizing lower pixel depth displays 28. Further, a source image may contain more colors than those supported by the display 28, even displays 28 having higher pixel depths. Accordingly, it would also be beneficial to apply image processing techniques that are capable of improved visual representation of any number of colors. Indeed, the image processing techniques described herein, such as those described in more detail with respect to
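The pixel depth arithmetic above may be sketched briefly as follows; the helper name is illustrative only:

```python
# Illustrative helper: number of displayable colors for a given
# per-sub-pixel bit depth (three sub-pixels: R, G, and B).
def color_combinations(bits_per_subpixel):
    levels = 2 ** bits_per_subpixel  # intensity levels per sub-pixel
    return levels ** 3               # independent R, G, and B choices

print(color_combinations(6))  # 262144 at a 6-bit pixel depth
print(color_combinations(8))  # 16777216 at an 8-bit pixel depth
```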
Turning to
The source image 62 may first undergo color decomposition (block 64). The color decomposition of block 64 is capable of decomposing the color of each pixel of the source image 62 into the three RGB color levels. That is, the RGB intensity levels for each pixel may be determined by the color decomposition. Such a decomposition may be referred to as a three-channel decomposition, because the colors may be decomposed into a red channel, a green channel, and a blue channel, for example.
In the depicted embodiment, the source image 62 may also undergo luminance analysis (block 66). A luminance is related to the perceived brightness of an image or an image component (such as a pixel) to the human eye. Further, humans typically perceive colors as having different luminance even if each color has equal radiance. For example, at equal radiances, humans typically perceive the color green as having higher luminance than the color red. Additionally, the color red is perceived as having a higher luminance than the color blue. In one example, a luminance formula Y may be arrived at by incorporating observations based on the perception of luminance by humans as defined below.
Y=0.30R+0.60G+0.10B
Indeed, the luminance equation Y above is an additive formula based on 30% red, 60% green, and 10% blue chromaticities (e.g., color values). The luminance formula Y can thus use the RGB color levels of a pixel to determine an approximate human perception of the luminance of the pixel. It is to be understood that because of the variability of human perception, among other factors, the values used in the luminance equation are approximate. Indeed, in other embodiments, the percentage values for R, G, and B may be different. For example, in another embodiment, the values may be approximately 29.9% red, 58.7% green, and 11.4% blue.
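As a brief sketch, the luminance formula Y above may be computed from the RGB color levels as follows; the function name and the normalization of color levels to the range [0.0, 1.0] are illustrative assumptions:

```python
# Sketch of the additive luminance formula Y = 0.30R + 0.60G + 0.10B.
# Color levels are assumed to be normalized to [0.0, 1.0].
def luminance(r, g, b):
    return 0.30 * r + 0.60 * g + 0.10 * b

# A mid-gray pixel (50% of each component) has luminance 0.5, while a
# pure green pixel alone reaches 0.6, reflecting green's larger weight.
print(luminance(0.5, 0.5, 0.5))  # ≈ 0.5
print(luminance(0.0, 1.0, 0.0))  # ≈ 0.6
```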
In another example, a formula Y′ may be arrived at by applying a gamma transformation to each color value R, G, B. More specifically, the color values R, G, and B may be gamma transformed into a linear space by raising each respective color value to a power coefficient such as 2.2, resulting in R′=R^2.2, G′=G^2.2, and B′=B^2.2. Accordingly, the gamma transformation into linear space may result in the formula defined below.
Y′=0.30R′+0.60G′+0.10B′
It is to be understood that, in other embodiments, the power coefficient 2.2 may have other values, such as 1.5, 1.8, 1.9, 2.0, 2.1, 2.3, 2.4, 2.5, or 2.8. Additionally, the percentage values for R′, G′, and B′ may also be different. For example, in another embodiment, the values may be approximately 29.9% R′, 58.7% G′, and 11.4% B′. In certain embodiments, the gamma-transformed luminance Y′ may be derived by using the source image 62 RGB values and used, for example, during a luminance-based dithering (block 68) to arrive at a gamma-approximated luminance, as described in more detail below with respect to
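The gamma-transformed luminance Y′ may be sketched as follows; the power coefficient of 2.2 and the weights are illustrative values from the discussion above, and the [0.0, 1.0] normalization is an assumption:

```python
# Sketch of the gamma-transformed luminance Y' = 0.30R' + 0.60G' + 0.10B',
# where each color value is first raised to a power coefficient (2.2 here).
GAMMA = 2.2

def gamma_luminance(r, g, b):
    # Gamma transform each color value into linear space, then apply
    # the additive luminance weights.
    r_lin, g_lin, b_lin = r ** GAMMA, g ** GAMMA, b ** GAMMA
    return 0.30 * r_lin + 0.60 * g_lin + 0.10 * b_lin
```

Note that for mid-range color levels the gamma transform lowers the computed luminance relative to the non-transformed formula, e.g., gamma_luminance(0.5, 0.5, 0.5) is roughly 0.22 rather than 0.5.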
The Y or Y′ luminance value of each pixel may then be utilized for luminance-based dithering (block 68) of the source image 62. In one embodiment of luminance-based dithering, the image may be manipulated by first approximating the color of an area of the source image to be represented by a display pixel to the nearest hardware-supported color. The hardware-supported color area may then be manipulated to more closely approximate the luminance (e.g., Y or Y′) of the original source image area. Such a manipulation may result in deviations from the original image (i.e., “quantization errors”). Accordingly, techniques such as error diffusion may be employed that diffuse the “errors” into adjacent pixels in the image. The image manipulations described with respect to logic 60 are capable of utilizing a single frame of the image and thus may be employed in a wide variety of devices, including devices having limited computational resources. Accordingly, the dithering techniques described herein, such as the luminance-based dithering technique described in more detail with respect to
The source image 62 may then be divided into image areas, and one of the image areas may then be selected (block 80). In one embodiment, the selected image area 82 may be composed of a single pixel. In other embodiments, the selected image area 82 may be composed of multiple adjacent pixels. A hardware color approximation process, e.g., color quantization (block 84), may then be applied to the selected image area 82. In color quantization (block 84), the image area 82 may have its original RGB color components approximated to the nearest RGB color components that are supported by the hardware. As mentioned above, the original RGB color components may be stored at a higher level pixel depth, such as an 8-bit pixel depth, while the hardware may support a lower level pixel depth, such as a 6-bit pixel depth. Accordingly, a suitable algorithm may be used to find the nearest RGB color levels supported by the hardware.
In some instances, for an image area 82 having a color component value between two adjacent hardware levels, the color component value may be approximated based on its most significant bits. For example, an 8-bit source image color level may be converted to a 6-bit hardware supported color level by using the first six bits of the eight bits of the source image color level. Suppose that the 8-bit red color level is the decimal value “213”, which is equivalent to the binary value “11010101.” The 8-bit red color level could be converted to a 6-bit red color level by using the first six bits, i.e., the binary value “110101” which is equivalent to the decimal value “53.” Accordingly, the decimal value “53” may then be assigned as the red level of a color quantized image area 88. The green and blue color levels may be similarly converted from a higher pixel depth (e.g., 8-bits) to a lower pixel depth (e.g., 6-bits), resulting in the color quantized image area 88.
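The most-significant-bit truncation described above may be sketched as a right shift by two bits; the helper name is illustrative:

```python
# Sketch of color quantization by most-significant-bit truncation: an
# 8-bit color level is reduced to a 6-bit hardware-supported level by
# keeping only the top six bits (equivalently, a right shift by two).
def quantize_8_to_6(level):
    return level >> 2

# Decimal 213 is binary 11010101; its top six bits, 110101, are decimal 53.
print(quantize_8_to_6(213))  # 53
```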
The logic 74 may then apply luminance approximation (block 96) to the color quantized image area 88. In the luminance approximation (block 96), the color quantized image area 88 may have its RGB color components modified to more closely approximate the luminance (e.g., Y or Y′) of the original colors of the image area 82. As mentioned above, an equation such as the luminance equation Y or Y′ may be used to first calculate the hardware luminance Yhw of the color quantized image area 88. In embodiments where the image area 82 is composed of a single pixel, the luminance equation Y or Y′ would result in a single Yhw value. In embodiments where the image area 82 includes multiple pixels, the luminance for the multi-pixel image area 82 may be further derived by averaging the luminance values of each pixel, finding a median of the luminance values, or selecting one of the multiple luminance values. The hardware luminance Yhw may then be adjusted so as to more closely approximate the luminance of the original image (e.g., original image luminance Y or Y′).
In certain embodiments, the hardware luminance Yhw may be adjusted by adding and/or subtracting from one or more RGB color levels, as described in more detail below with respect to
As mentioned above, humans perceive luminance based on an additive contribution of colors. Some colors, such as green, contribute to luminance more than other colors, such as red or blue. Accordingly, the luminance of the source image may be more closely approximated by taking into consideration the contribution made by each color to the luminance. Green, for example, may contribute approximately 60% to luminance, while red may contribute approximately 30%, and blue may contribute approximately 10%. Thus, increasing the green color component by a factor of “+1” (i.e., one color level), for example, will increase the luminance approximately six times as much (i.e., 500% more) than increasing the blue color component by the same “+1” factor.
In one example, if the source luminance 108 is relatively close in value to the hardware luminance 106, then only the blue color component may be chosen to be modified (as it has the smallest contribution to luminance). However, if the value of the source luminance 108 is further away from the value of the hardware luminance 106, then the red color component may be modified because the color red contributes a larger percentage to the overall luminance than the color blue. If the value of the source luminance 108 is even further away from the value of the hardware luminance 106, then the values of the red color component and the blue color component may both be raised (or lowered). If the value of the source luminance 108 is yet further away from the value of the hardware luminance 106, then the green color component may be modified because modification of the green component may account for a greater shift in the perceived luminance than modification of the red and blue color components. Likewise, if the value of the source luminance 108 is yet even further away from the value of the hardware luminance 106, then the values of the green color component and the blue color component may both be raised (or lowered). If the value of the source luminance 108 is still further away from the value of the hardware luminance 106, then the values of the green color component and the red color component may both be raised (or lowered). Accordingly, the luminance approximation may take into account the contribution of each individual color to the overall luminance when adding or subtracting color levels so as to more closely approximate the luminance of the source image 62.
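The channel-selection order described above may be sketched as follows. The thresholds are hypothetical: each one-level change in a channel is assumed to shift luminance by roughly that channel's weight (B=0.10, R=0.30, G=0.60), so the combination whose total weight best matches the luminance gap is chosen.

```python
# Illustrative channel combinations, ordered by their approximate
# contribution to a luminance shift (weights are the assumed B/R/G
# contributions from the text).
COMBINATIONS = [
    (("b",), 0.10),
    (("r",), 0.30),
    (("r", "b"), 0.40),
    (("g",), 0.60),
    (("g", "b"), 0.70),
    (("g", "r"), 0.90),
]

def channels_to_adjust(luminance_gap):
    """Pick the channel combination whose shift is closest to the gap."""
    return min(COMBINATIONS, key=lambda c: abs(c[1] - abs(luminance_gap)))[0]

print(channels_to_adjust(0.08))  # ('b',)  -- small gap: adjust blue only
print(channels_to_adjust(0.55))  # ('g',)  -- large gap: adjust green
```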
Returning again to
Once the error diffusion (block 118) is applied to the luminance approximated image area 98 using the deviations 116, then the logic 74 may determine at decision block 120 if all areas of the original source image 62 have been processed. If there are image areas still left unprocessed, then the logic 74 may iterate back to block 80 and continue with the image manipulation of the remaining image areas 82, as described above. Indeed, the logic 74 may iterate, for example, from left to right, then from top to bottom, selecting the next image area to manipulate until the entire source image 62 has been transformed from a high pixel depth image 62 to a low pixel depth image 70. The resulting low pixel depth image 70 is capable of being displayed in hardware having a lower pixel depth while presenting a visually pleasing image representative of the original source image 62. Once the entirety of the source image 62 has been processed, the logic 74 may conclude (block 122).
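As a simplified, hypothetical sketch of the raster-order iteration described above, the following operates on a single row of 8-bit RGB pixels; luminance approximation is omitted for brevity, so only quantization and forward error carrying are shown, and the helper name is illustrative:

```python
# Minimal sketch: quantize each pixel of a row from 8-bit to 6-bit
# levels, carrying each channel's quantization error into the next
# pixel in the row (a one-dimensional form of error diffusion).
def dither_row(row):
    """row: list of [r, g, b] 8-bit levels; returns 6-bit levels."""
    out = []
    carry = [0.0, 0.0, 0.0]  # error diffused from the previous pixel
    for pixel in row:
        quantized = []
        for i in range(3):
            value = pixel[i] + carry[i]
            q = max(0, min(63, int(value) >> 2))  # 8-bit -> 6-bit level
            carry[i] = value - (q << 2)           # residual error carried on
            quantized.append(q)
        out.append(quantized)
    return out

# Decimal 213 quantizes to 53 with a residual error of 1, which is
# added to the next pixel before it is quantized in turn.
print(dither_row([[213, 0, 0], [3, 0, 0]]))  # [[53, 0, 0], [1, 0, 0]]
```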
Turning to
In another embodiment, the error E1 may be divided so that one or more neighboring pixels 126, 128, and 130 receive different proportions of the error. For example, half the error (i.e., E1/2) may be added to the pixel 126, and one quarter of the error (i.e., E1/4) may be added to each of the neighboring pixels 128 and 130. Assuming raster-order processing, such a disproportionate subdivision passes a larger proportion of the error to the neighboring pixel next in line to undergo luminance approximation (block 96). The luminance approximation 96 may thus process the larger error and may result in a more visually pleasing display image 70. Once the error E1 is diffused, the next image area 82 (e.g., pixel) may be processed, as described in more detail with respect to
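The disproportionate error split described above may be sketched as follows; the neighbor offsets (next pixel in the row, and two pixels in the row below) are assumed from the raster-order description and are illustrative:

```python
# Sketch of weighted error diffusion: half of the error E1 goes to the
# next pixel in the row, and one quarter each to two neighbors below.
def diffuse_error(accumulator, x, y, error):
    """accumulator: 2D list of per-pixel error accumulators (floats)."""
    height, width = len(accumulator), len(accumulator[0])
    for dx, dy, weight in [(1, 0, 0.5), (0, 1, 0.25), (1, 1, 0.25)]:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:  # skip off-image neighbors
            accumulator[ny][nx] += error * weight
```

Checking edge conditions before adding each share keeps the diffusion well-defined at the right and bottom borders of the image, where some neighbors do not exist.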
Turning to
The resulting error diffusion may thus allow for a wider spread of the error which may result in a display image 70 that is of superior visual reproduction even when using lower pixel depths. Indeed, the techniques disclosed herein, including luminance-based dithering and error diffusion, may allow for approximating any number of source images into a lower pixel depth image with improved visual quality.
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.