1. The Field of the Invention
The present invention relates to rendering images on display devices having pixels with separately controllable pixel sub-components. More specifically, the present invention relates to filtering and subsequent displaced sampling of image data to obtain a desired degree of luminance accuracy and color accuracy.
2. The Prior State of the Art
As computers become ever more ubiquitous in modern society, computer users spend increasing amounts of time viewing images on display devices. Flat panel display devices, such as liquid crystal display (LCD) devices, and cathode ray tube (CRT) display devices are two of the most common types of display devices used to render text and graphics. CRT display devices use a scanning electron beam to activate phosphors arranged on a screen. Each pixel of a CRT display device consists of a triad of phosphors, each of a different color. The phosphors included in a pixel are controlled together to generate what is perceived by the user as a point or region of light having a selected color defined by a particular hue, saturation, and intensity. The phosphors in a pixel of a CRT display device are not separately controllable. CRT display devices have been widely used in combination with desktop personal computers, workstations, and in other computing environments in which portability is not an important consideration.
LCD display devices, in contrast, have pixels consisting of multiple separately controllable pixel sub-components. Typical LCD devices have pixels with three pixel sub-components, which usually have the colors red, green, and blue. LCD devices have become widely used in portable or laptop computers due to their size, weight, and relatively low power requirements. Over the years, however, LCD devices have become more common in other computing environments and are now more widely used with non-portable personal computers.
Conventional image data and image rendering processes were developed and optimized to display images on CRT display devices. The smallest unit on a CRT display device that is separately controllable is a pixel; the three phosphors included in each pixel are controlled together to generate the desired color. Conventional image processing techniques map samples of image data to entire pixels, with the three phosphors together representing a single portion of the image. In other words, each pixel of a CRT display device corresponds to or represents a single region of the image data.
The image data and image rendering processes used with LCD devices are those that were originally developed in view of the CRT, three-phosphor pixel model. Thus, conventional image rendering processes used with LCD devices do not take advantage of the separately controllable nature of pixel sub-components of LCD pixels, but instead generate together the luminous intensity values to be applied to the three pixel sub-components in order to yield the desired color. Using these conventional processes, each three-part pixel represents a single region of the image data.
It has been observed that the eyestrain and other reading difficulties frequently experienced by computer users diminish as the resolution of display devices and the characters displayed thereon improves. The problem of poor resolution is particularly evident in flat panel display devices, such as LCDs, which may have resolutions of 72 or 96 dots (i.e., pixels) per inch (dpi), which is lower than most CRT display devices. Such display resolutions are far lower than the 600 dpi resolution supported by most printers. Even higher resolutions are found in most commercially printed text such as books and magazines. The relatively few pixels in LCD devices are not enough to draw smooth character shapes, especially at common text sizes of 10, 12, and 14 point type. At such common text rendering sizes, portions of the text appear more prominent and coarse on the display device than when displayed on CRT display devices or printed.
In view of the foregoing problems experienced in the art, there is a need for techniques of improving the resolution of images displayed on LCD display devices. While improving resolution, it would also be desirable to accurately render the color of the images to a desired degree so as to generate displayed images that closely reproduce the image encoded in the image data.
The present invention relates to image data processing and image rendering techniques whereby images are displayed on display devices having pixels with separately controllable pixel sub-components. Spatially different regions of image data are mapped to individual pixel sub-components rather than to full pixels. It has been found that mapping point samples or samples generated from a simple box filter directly to pixel sub-components results in either color errors or lowered resolution. Moreover, it has been found that there is an inherent tradeoff between improving color accuracy and improving luminance accuracy. The methods and systems of the invention use filters that have been selected to optimize or to approximate an optimization of a desired balance between color accuracy and luminance accuracy.
The invention is particularly suited for use with LCD display devices or other display devices having pixels with a plurality of pixel sub-components of different colors. For example, the LCD display device may have pixels with red, green, and blue pixel sub-components arranged on the display device to form either vertical or horizontal stripes of same-colored pixel sub-components.
The image processing methods of the invention can include a scaling operation, whereby the image data is scaled in preparation for subsequent oversampling, and a hinting operation, which can be used to adapt the details of an image to the particular pixel sub-component positions of a display device. The image data signal, which can have three channels, each representing a different color component of the image, is passed through a low-pass filter to eliminate frequencies above a cutoff frequency that has been selected to reduce color aliasing that would otherwise be experienced. Although the pixel Nyquist frequency can be used as the cutoff frequency, it has been found that a higher cutoff frequency can be used. The higher cutoff frequency yields greater sharpness at the cost of some additional color aliasing.
The low-pass filters are selected to optimize or to approximately optimize the tradeoff between color accuracy and luminance accuracy. The coefficients of the low-pass filters are applied to the image data. In one implementation, the low-pass filters are an optimized set of nine filters that includes one filter for each combination of color channel and pixel sub-component. In other implementations, the low-pass filters can be selected to approximate the filtering functionality of the general set of nine filters.
The filtered data represents samples that are mapped to individual pixel sub-components of the pixels, rather than to the entire pixels. The samples are used to select the luminous intensity values to be applied to the pixel sub-components. In this way, a bitmap representation of the image or a scanline of an image to be displayed on the display device can be assembled. The processing and filtering can be done on the fly during the rasterization and rendering of an image. Alternatively, the processing and filtering can be done for particular images, such as text characters, that are to be repeatedly included in displayed images. In this case, text characters can be prepared for display in an optimized manner and stored in a buffer or cache for later use in a document.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order that the manner in which the above-recited and other advantages and features of the invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The present invention relates to image data processing and image rendering techniques whereby image data is rendered on patterned flat panel display devices that include pixels each having multiple separately controllable pixel sub-components of different colors. When applied to display devices, such as conventional liquid crystal display (LCD) devices, the image data processing operations include filtering a three-channel continuous signal representing the image data through filters that obtain samples that are mapped to the red, green, and blue pixel sub-components. The filters are selected to establish a desired tradeoff between color accuracy and luminance accuracy. Generally, an increase in color accuracy results in a corresponding decrease in luminance accuracy and vice versa. The samples mapped to the pixel sub-components are used to generate luminous intensity values for the pixel sub-components.
The image rendering processes are adapted for use with LCD devices or other display devices that have pixels with multiple separately controllable pixel sub-components. Although the invention is described herein primarily in reference to LCD devices, the invention can also be practiced with other display devices having pixels with multiple separately controllable pixel sub-components.
I. Exemplary Computing Environments
Prior to describing the filtering and sampling operations of the invention in detail, exemplary computing environments in which the invention can be practiced are presented. The embodiments of the present invention may comprise a special purpose or general purpose computer including various computer hardware, as discussed in greater detail below. Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media which can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
With reference to
The computer 20 may also include a magnetic hard disk drive 27 for reading from and writing to a magnetic hard disk 39, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD-ROM or other optical media. The magnetic hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive-interface 33, and an optical drive interface 34, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-executable instructions, data structures, program modules and other data for the computer 20. Although the exemplary environment described herein employs a magnetic hard disk 39, a removable magnetic disk 29 and a removable optical disk 31, other types of computer readable media for storing data can be used including magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, RAMs, ROMs, and the like.
Program code means comprising one or more program modules may be stored on the hard disk 39, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may enter commands and information into the computer 20 through keyboard 40, pointing device 42, or other input devices (not shown), such as a microphone, joy stick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 21 through a serial port interface 46 coupled to system bus 23. Alternatively, the input devices may be connected by other interfaces, such as a parallel port, a game port or a universal serial bus (USB). An LCD device 47 is also connected to system bus 23 via an interface, such as video adapter 48. In addition to the LCD device, personal computers typically include other peripheral output devices (not shown), such as speakers and printers.
The computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as remote computers 49a and 49b. Remote computers 49a and 49b may each be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 20, although only memory storage devices 50a and 50b and their associated application programs 36a and 36b have been illustrated in
When used in a LAN networking environment, the computer 20 is connected to the local network 51 through a network interface or adapter 53. When used in a WAN networking environment, the computer 20 may include a modem 54, a wireless link, or other means for establishing communications over the wide area network 52, such as the Internet. The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46. In a networked environment, program modules depicted relative to the computer 20, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing communications over wide area network 52 may be used.
As explained above, the present invention may be practiced in computing environments that include many types of computer system configurations, such as personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. One such exemplary computer system configuration is illustrated in
Portable personal computers, such as portable computer 60, tend to use flat panel display devices for displaying image data, as illustrated in
Each pixel includes three pixel sub-components, illustrated, respectively, as red (R) sub-component 72, green (G) sub-component 74 and blue (B) sub-component 76. The pixel sub-components are non-square and are arranged on LCD 70 to form vertical stripes of same-colored pixel sub-components. The RGB stripes normally run the entire width or height of the display in one direction. Common LCD display devices currently used with most portable computers are wider than they are tall, and tend to have RGB stripes running in the vertical direction, as illustrated by LCD 70. Examples of such devices that are wider than they are tall have column-to-row ratios such as 640×480, 800×600, or 1024×768. LCD display devices are also manufactured with pixel sub-components arranged in other patterns, including horizontal stripes of same-colored pixel sub-components, zigzag patterns or delta patterns. Moreover, some LCD display devices have pixels with a plurality of pixel sub-components other than three pixel sub-components. The present invention can be used with any such LCD display device or flat panel display device so long as the pixels of the display device have separately controllable pixel sub-components.
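As a concrete illustration of this striped arrangement, the following minimal sketch (assuming a row-major R,G,B framebuffer layout, which is an assumption rather than anything specified here) shows how a single sub-component of a vertically striped panel can be addressed. Conventional rendering writes all three offsets of a pixel from one image sample, whereas the techniques described below drive each offset from its own spatially displaced sample.

```python
# Minimal sketch (assumed row-major R,G,B framebuffer; not from the patent) of
# addressing one sub-component on a vertically striped panel.
def subcomponent_index(x: int, y: int, color: str, width: int) -> int:
    """Flat index of the R, G, or B sub-component of pixel (x, y)."""
    return (y * width + x) * 3 + {"R": 0, "G": 1, "B": 2}[color]

# Conventional rendering writes all three offsets of a pixel from one image
# sample; the techniques described below instead drive each offset from its
# own spatially displaced sample.
```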
A set of RGB pixel sub-components constitutes a pixel. Thus, as used herein, the term “pixel sub-component” refers to one of the plurality of separately controllable elements that are included in a pixel. Referring to
II. Filter Selection, Properties, and Use
The image rendering processes of the invention result in spatially different sets of one or more samples of image data being mapped to individual, separately controllable pixel sub-components of pixels included in an LCD display device or another type of display device. At least some of the samples are “displaced” from the center of the full pixel. For example, a typical LCD display device has full pixels centered about the green pixel sub-component. According to the invention, the set of samples mapped to the red pixel sub-component is displaced from the point in the image data that corresponds to the center of the full pixel.
The image data processing and image rendering processes in which the filtering techniques of the invention can be used can include scaling and hinting operations. Thus, image data 200 can be data that has been scaled and/or hinted. The scaling operations are useful for preparing the image data to be oversampled in combination with the linear filtering operations of the invention. Further information relating to exemplary scaling operations is found in U.S. patent application Ser. No. 09/168,013, filed Oct. 7, 1998, entitled “Methods and Apparatus for Resolving Edges within a Display Pixel,” which is incorporated herein by reference.
The hinting operations can be used to adjust the position and size of images, such as text, in accordance with the particular display characteristics of the display device. Hinting can also be performed to align image boundaries, such as text character stems, with selected boundaries between pixel sub-components of particular colors to optimize contrast and enhance readability. Further information relating to exemplary hinting operations is found in U.S. patent application Ser. No. 09/168,015, entitled “Methods and Apparatus for Performing Grid Fitting and Hinting Operations” filed Oct. 7, 1998, which is incorporated herein by reference.
Image data 200 is passed through low-pass filters 208 as shown in
Low-pass filters 208 operate to obtain samples of the image data that are mapped to individual pixel sub-components in scan conversion module 214 to create a bitmap representation 216 or another data structure that indicates luminous intensity values to be applied to the individual pixel sub-components to generate the displayed image. The operation of the low-pass filters can be expressed mathematically as linear filtering followed by displaced sampling at the locations of the pixel sub-components. As is known in the art, filtering followed by sampling can be combined into one step, in which the filters are applied only to the regions of the image that yield samples at the desired sampling locations. As used herein, low-pass filters 208 perform such a combined filtering and displaced sampling operation.
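The combined operation can be sketched as follows for a single scanline. This is a simplified illustration rather than the implementation described here: it assumes the image data has already been scaled to three samples per output pixel, and it uses a plain box filter as a placeholder for the optimized filters discussed below.

```python
# A simplified sketch of low-pass filtering followed by displaced sampling on
# one scanline.  Assumes three input samples per output pixel and uses a box
# filter as a stand-in for the optimized filters described later.
import numpy as np

def filter_and_sample(channel: np.ndarray, taps: np.ndarray, phase: int) -> np.ndarray:
    """Filter one color channel, then keep every third sample starting at
    `phase` (0 = red, 1 = green, 2 = blue on an R,G,B striped display)."""
    filtered = np.convolve(channel, taps, mode="same")
    return filtered[phase::3]

taps = np.ones(3) / 3.0                                   # placeholder box filter, unity gain
red_in = green_in = blue_in = np.linspace(0.0, 1.0, 30)   # placeholder channel data

red_sub   = filter_and_sample(red_in,   taps, phase=0)
green_sub = filter_and_sample(green_in, taps, phase=1)
blue_sub  = filter_and_sample(blue_in,  taps, phase=2)
# red_sub[i], green_sub[i], blue_sub[i] drive the three sub-components of pixel i.
```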
The linear filtering operations disclosed herein relate to the scan conversion of image data that has been scaled and optionally hinted. General principles of scan conversion operations that can be adapted for use with the sampling filters and the linear filtering operations of the invention are disclosed in U.S. patent application Ser. No. 09/168,014, filed Oct. 7, 1998, entitled “Methods and Apparatus for Performing Image Rendering and Rasterization Operations,” which is incorporated herein by reference.
Low-pass filters 208 are selected in order to obtain a desired degree of color accuracy while maintaining a desired degree of luminance accuracy, which is perceived as sharpness or spatial resolution. As will be further described hereinafter, there is an inherent tradeoff between enhancing luminance accuracy and enhancing color accuracy on LCD displays when mapping samples to individual pixel sub-components rather than to full pixels.
Sample 230a is subjected to a gamma correction operation 240, and is mapped to red pixel sub-component 250a as shown in
Similarly, filter 220b is applied to channel 204 representing the green component of the image to obtain a sample represented by element 230b of
The foregoing sampling and filtering operation described in referenced
Prior to discussing the specific details of the generalized set of filters in
Exploiting the higher horizontal resolution of a LCD pixel sub-component array can be expressed as an optimization problem. The image data defines a desired array of luminance values having pixel sub-component resolution and color values having full pixel resolution. Based on the image data, the filters can be chosen according to the invention to generate pixel sub-component values that yield an image as close as possible to the desired luminances and colors. To mathematically define the optimization problem, one can mathematically define an error model that measures the error between the perceived output of an LCD pixel sub-component array and the desired output, which as stated above, is defined by the image data. As will be described below, the error model will be used to construct an optimal filter that strikes a desired balance between luminance and color accuracy. One example of a presently preferred approach for defining an error metric and selecting filters that optimize or approximately optimize the error metric is disclosed in U.S. Provisional Patent Application Ser. No. 60/175,811, which is entitled “Optimal Filtering for Patterned Displays,” filed on the same day as the present application, and incorporated herein by reference.
In order to further illustrate how suitable filters can be selected, the following example of defining and solving an optimization problem relating to the perception of luminance and color in a Y,U,V color space is presented. In preparation for identifying the properties of an optimal filter constructed according to the invention, an error metric is defined, which specifies how close an image displayed on a scanline of pixel sub-components appears, to the human eye, to a desired array of luminances and colors. While an LCD device includes pixels with pixel sub-components that are displaced one from another, the foundation for constructing the error metric can be understood by first examining how luminances and colors are defined when the pixels are assumed to be made of three colors [R,G,B] that are co-located.
The luminance, Y, of a co-located pixel is defined as
Y=0.3R+0.6G+0.1B
There are two dimensions of color separate from the brightness. One convenient and conventional way of defining these two color dimensions is
U=R−Y=0.7R−0.6G−0.1B
V=B−Y=−0.3R−0.6G+0.9B
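For reference, these co-located definitions can be restated directly in code; nothing in this small sketch is specific to the invention's filters.

```python
# Direct restatement of the co-located Y, U, V definitions above.
def yuv_from_rgb(r: float, g: float, b: float):
    y = 0.3 * r + 0.6 * g + 0.1 * b      # luminance
    u = r - y                            # = 0.7R - 0.6G - 0.1B
    v = b - y                            # = -0.3R - 0.6G + 0.9B
    return y, u, v

y, u, v = yuv_from_rgb(0.5, 0.5, 0.5)    # a monochromatic (grey) pixel
print(abs(u) < 1e-12, abs(v) < 1e-12)    # True True: no color content
```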
When U=V=0, the pixel is monochromatic (R=G=B). Expanding on the foregoing definition of Y, U, and V, for co-located color sources, one can define a reasonable Y, U, and V for LCD devices, in which the pixel sub-components are displaced one from another. Regarding the definition of color (U, V) for an LCD, it has been observed that an edge of a displayed object appears reddish when the red pixel sub-component is brighter than the green and blue pixel sub-components adjacent to it. Moreover, it is well known that the eye computes a function termed “center/surround”, in that it compares a signal at a location to a related signal integrated over the region surrounding the location. Based on these observations, a reasonable model for U with respect to LCDs is to compare a red pixel sub-component to the luminance of the pixel sub-components surrounding it.
$$U_i = -0.1\,B_{i-1} + 0.7\,R_i - 0.6\,G_i$$
As shown in
Analogously, an edge of an object displayed on an LCD appears blue when the blue pixel sub-component is brighter than the pixel sub-components adjacent to it. As shown in
$$V_i = -0.6\,G_i + 0.9\,B_i - 0.3\,R_{i+1}$$
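A short sketch of these displaced color terms for a scanline of sub-component values follows. It is illustrative only, and it simply skips the boundary pixels that the discussion below handles with discardable padding pixels.

```python
# Sketch of the displaced color terms U_i and V_i for one scanline.  R, G, B
# are arrays of sub-component values indexed by pixel; boundary pixels are
# skipped here (the text later handles ends with discardable padding pixels).
import numpy as np

def displaced_uv(R, G, B):
    R, G, B = (np.asarray(a, dtype=float) for a in (R, G, B))
    # U_i: red of pixel i against the neighboring blue (i-1) and green (i).
    U = -0.1 * B[:-2] + 0.7 * R[1:-1] - 0.6 * G[1:-1]
    # V_i: blue of pixel i against the neighboring green (i) and red (i+1).
    V = -0.6 * G[1:-1] + 0.9 * B[1:-1] - 0.3 * R[2:]
    return U, V

grey = np.full(8, 0.5)
U, V = displaced_uv(grey, grey, grey)
print(np.allclose(U, 0), np.allclose(V, 0))   # True True: uniform grey shows no color
```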
Again, due to the relatively low color resolution perceived by the eye, V is computed in this color model only for every third pixel sub-component, centered on the blue pixel sub-component. As shown in
Using these definitions of Ui and Vi, a color error metric can be defined. The color error metric expresses how much the color of an image displayed on an LCD scanline deviates from an ideal color, which is determined by examining the image data. Given an array of pixel sub-component values designated as Ri, Gi, and Bi, and desired color values of Ui* and Vi*, the color error metric sums the squared errors of the individual color terms.
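One form consistent with this description, with the weights α and β entering linearly, is

$$E_{color} = \sum_i \Bigl[ \alpha \bigl(U_i - U_i^{*}\bigr)^2 + \beta \bigl(V_i - V_i^{*}\bigr)^2 \Bigr],$$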
where α and β are parameters whose values can be selected as desired to indicate the relative importance of U and V and, more generally, of the color components, as will be further described below.
The rest of the error relates to the luminance error. When an LCD displays a constant color (e.g., red), only the red pixel sub-components are turned on, while the green and blue are off. Therefore, at the pixel sub-component level, there is an uneven pattern of luminance across the screen. However, the eye does not perceive an uneven pattern of luminance, but instead sees a constant brightness of 0.3 across the screen. Thus, a reasonable luminance model should model this observation, while taking into account the fact that the eye can perceive sub-pixel luminance edges.
One approach for defining the luminance model according to the foregoing constraints is to compute a luminance value at every pixel sub-component by applying the standard luminance formula to every triple of pixel sub-components. Y_j* is defined as the desired luminance of the jth pixel sub-component. For the ith pixel, Y_{3i-2}* is the desired luminance at the red pixel sub-component, Y_{3i-1}* is the desired luminance at the green pixel sub-component, and Y_{3i}* is the desired luminance at the blue pixel sub-component. As graphically depicted in
$$Y_{3i-2} = 0.1\,B_{i-1} + 0.3\,R_i + 0.6\,G_i$$

$$Y_{3i-1} = 0.3\,R_i + 0.6\,G_i + 0.1\,B_i$$

$$Y_{3i} = 0.6\,G_i + 0.1\,B_i + 0.3\,R_{i+1}$$
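A sketch of this luminance model for a scanline of sub-component values (interior pixels only, as in the earlier color sketch) is:

```python
# Sketch of the sub-component luminance model above for one scanline.
import numpy as np

def subcomponent_luminance(R, G, B):
    R, G, B = (np.asarray(a, dtype=float) for a in (R, G, B))
    y_red   = 0.1 * B[:-2]  + 0.3 * R[1:-1] + 0.6 * G[1:-1]   # Y at red sub-components
    y_green = 0.3 * R[1:-1] + 0.6 * G[1:-1] + 0.1 * B[1:-1]   # Y at green sub-components
    y_blue  = 0.6 * G[1:-1] + 0.1 * B[1:-1] + 0.3 * R[2:]     # Y at blue sub-components
    return np.stack([y_red, y_green, y_blue], axis=1).ravel() # three values per pixel

# A solid red scanline gives a constant perceived luminance of 0.3 at every
# sub-component, matching the observation in the text.
R, G, B = np.ones(8), np.zeros(8), np.zeros(8)
print(np.allclose(subcomponent_luminance(R, G, B), 0.3))      # True
```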
This model for luminance fulfills both constraints. If a constant color is applied to the scanline, then the luminance is constant across the scanline. However, if there is a sharp edge in the pixel sub-component values, there will be a correspondingly less sharp perceived edge centered at the same sub-pixel location. Based on the foregoing, the squared error metric for luminance as perceived by the eye for an image displayed on an LCD scanline is
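a sum of the squared differences between the perceived and desired sub-component luminances:

$$E_{luminance} = \sum_j \bigl( Y_j - Y_j^{*} \bigr)^2$$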
The total error metric for an LCD scanline is
$$E_{total} = E_{luminance} + E_{color}$$
For every three pixel sub-components there are five constraints, namely, three luminances and two colors. Thus, the task of displaying an image on an LCD scanline by mapping samples to individual pixel sub-components is over-constrained. The pixel sub-component array cannot perfectly display the high-frequency luminance with no color error. However, the parameters α and β inside the expression E_color control the tradeoff between color accuracy and sharpness. When α and β are large, color errors are considered more serious than luminance errors. Conversely, if α and β are small, then representing the high-resolution luminance is considered more important than color errors. Thus, α and β are parameters that can be adjusted as desired to alter the balance between color accuracy and luminance accuracy. Depending on the implementation of the invention, the values of α and β can be set by the manufacturer, or can be selected by a user to adjust the LCD display device to individual tastes.
The total error metric can be used to solve for optimal values of Ri, Gi, and Bi. The values of Yj*, Ui*, and Vi* can be computed by, for example, examining image data that has been oversampled by a factor of three to generate point samples corresponding to (Rj*, Gj*, Bj*). The simplest case is when the desired image is black and white, which is often the case for text. For black and white images, Ui*=Vi*=0 for all pixels, i. The values of Yj* can be calculated using the conventional definition of Y, namely,
$$Y_j^{*} = 0.3\,R_j^{*} + 0.6\,G_j^{*} + 0.1\,B_j^{*}$$
Using no filtering to calculate Yj* forces the optimal result with respect to Yj to have as little luminance error as possible, and consequently, to be as sharp as possible.
For full color images, the values of Ui* and Vi* can be calculated by applying a box filter having a width of three samples, or three pixel sub-components, to the image data and using the conventional U and V definitions with respect to the identified (Rj*,Gj*,Bj*) values. While it has been found that a box filter suitably approximates the desired Ui* and Vi* values, other filters can be used. The value of Yj* is calculated in the same way as described in reference to the black and white case.
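Putting the black-and-white and full-color cases together, the desired values can be computed along the following lines. This is a sketch only; the reshape assumes the oversampled arrays hold exactly three samples per pixel.

```python
# Sketch of computing the desired targets from three-times oversampled image
# data.  Rs, Gs, Bs hold one point sample per pixel sub-component, so their
# length is assumed to be exactly three times the number of pixels.
import numpy as np

def desired_targets(Rs, Gs, Bs):
    Rs, Gs, Bs = (np.asarray(a, dtype=float) for a in (Rs, Gs, Bs))
    # Desired luminance at every sub-component, with no extra filtering, which
    # keeps the optimal result as sharp as possible.
    Y_star = 0.3 * Rs + 0.6 * Gs + 0.1 * Bs
    # Desired color at full-pixel resolution: a width-three box filter applied
    # to the oversampled data, then the conventional U and V definitions.
    R_avg, G_avg, B_avg = (a.reshape(-1, 3).mean(axis=1) for a in (Rs, Gs, Bs))
    Y_avg = 0.3 * R_avg + 0.6 * G_avg + 0.1 * B_avg
    U_star, V_star = R_avg - Y_avg, B_avg - Y_avg
    # For black-and-white input such as text, Rs = Gs = Bs, so U* = V* = 0.
    return Y_star, U_star, V_star
```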
The optimal pixel sub-component values (Ri,Gi,Bi) can be calculated by minimizing the total error metric with respect to each of the pixel sub-component variables or, in other words, setting the partial derivative of the error function to zero with respect to Ri, Gi, and Bi:
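Written out, these conditions are

$$\frac{\partial E_{total}}{\partial R_i} = 0, \qquad \frac{\partial E_{total}}{\partial G_i} = 0, \qquad \frac{\partial E_{total}}{\partial B_i} = 0 \quad \text{for all pixels } i.$$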
Since the variables Ri, Gi, and Bi only appear in the error metric quadratically, their derivatives are linear. Accordingly, the equations above can be combined into a linear system:
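a sketch of its form (the exact entries of M and b follow from expanding the error metric) is

$$M\,v = b, \qquad v = \bigl(\ldots,\; R_i,\; G_i,\; B_i,\; \ldots\bigr)^{\mathsf T},$$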
where the matrix M is constant and pentadiagonal; it has non-zero entries only on its main diagonal and on the two diagonals on either side of the main diagonal. The end effects can be handled by adding two extra pixels, (R_0, G_0, B_0) and (R_{N+1}, G_{N+1}, B_{N+1}), which are computed along with the rest of the pixels and then discarded.
There are several ways to use the linear system to compute the values of the left-hand vector. First, the right-hand vector can be computed using the desired values of Yj*, Ui*, and Vi*. The linear system can then be solved for the left-hand vector using any suitable numerical technique, one example of which is a banded matrix solver.
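As an illustration of this first approach, the following sketch (not the implementation described here) assembles the normal equations directly from the error metric and solves them with a general dense solver; a banded solver could instead exploit the structure of M. For brevity it treats the scanline as bordered by black (B_0 = R_{N+1} = 0) rather than carrying the discardable padding pixels, and it takes α and β as plain linear weights on the color terms.

```python
# Sketch of solving for the sub-component values by minimizing the total error
# metric.  The error is written as a weighted least-squares problem over
# v = (R_1, G_1, B_1, ..., R_N, G_N, B_N); the normal equations M v = b are
# then solved directly.  Assumptions: black border (B_0 = R_{N+1} = 0) instead
# of padding pixels, and alpha/beta entering as linear weights.
import numpy as np

def solve_scanline(Y_star, U_star, V_star, alpha=1.0, beta=1.0):
    N = len(U_star)                              # number of pixels
    rows, targets, weights = [], [], []
    R, G, B = 0, 1, 2

    def row(entries):                            # entries: {(pixel, channel): coeff}
        r = np.zeros(3 * N)
        for (i, c), coeff in entries.items():
            if 0 <= i < N:                       # outside the scanline is black
                r[3 * i + c] = coeff
        return r

    for i in range(N):
        # Three luminance constraints per pixel (red, green, blue sub-component).
        rows += [row({(i - 1, B): 0.1, (i, R): 0.3, (i, G): 0.6}),
                 row({(i, R): 0.3, (i, G): 0.6, (i, B): 0.1}),
                 row({(i, G): 0.6, (i, B): 0.1, (i + 1, R): 0.3})]
        targets += list(Y_star[3 * i: 3 * i + 3]); weights += [1.0, 1.0, 1.0]
        # Two color constraints per pixel.
        rows += [row({(i - 1, B): -0.1, (i, R): 0.7, (i, G): -0.6}),
                 row({(i, G): -0.6, (i, B): 0.9, (i + 1, R): -0.3})]
        targets += [U_star[i], V_star[i]]; weights += [alpha, beta]

    A, d, w = np.array(rows), np.array(targets), np.array(weights)
    M = A.T @ (w[:, None] * A)                   # constant, banded matrix
    b = A.T @ (w * d)                            # depends on the desired values
    v = np.linalg.solve(M, b)                    # a banded solver also works here
    return v.reshape(N, 3)                       # one (R, G, B) triple per pixel
```

Applied to desired values computed as in the earlier sketch, larger values of alpha and beta push the solution toward the pixel-averaged color at the expense of luminance sharpness.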
Another way of solving the linear system for the left-hand vector is to find a direct filter that, when applied to the right-hand vector, will approximately solve the system. This technique involves computing the right-hand vector using the desired values of Yj*, Ui*, and Vi*, then convolving the right-hand vector with the direct filter. This approach for approximating the solution is valid based on the observation that the matrix inverse of M approximately repeats every three rows, except that the three rows are shifted by one pixel. This repeating pattern represents a direct filter that can be used with the invention to approximate the filtering that would strike a precise balance between color accuracy and sharpness.
This approximation would be exact for a scanline having an infinite length. The direct filter can be derived numerically by inverting the matrix M for a large scanline, then taking three rows at or near the center of the inverted matrix. In general, larger values of α and β enable the direct filters to be truncated after fewer terms.
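This second approach can be sketched by building the same constant matrix M used in the previous sketch (the construction is repeated here so the snippet stands alone), inverting it for a long scanline, and reading off three consecutive rows near the center.

```python
# Sketch of deriving a direct filter: build the constant matrix M for a long
# scanline (same construction as the previous sketch, without the targets),
# invert it, and take three consecutive rows near the center.
import numpy as np

def build_M(N, alpha=1.0, beta=1.0):
    rows, weights = [], []
    R, G, B = 0, 1, 2
    def row(entries):
        r = np.zeros(3 * N)
        for (i, c), coeff in entries.items():
            if 0 <= i < N:
                r[3 * i + c] = coeff
        return r
    for i in range(N):
        rows += [row({(i - 1, B): 0.1, (i, R): 0.3, (i, G): 0.6}),
                 row({(i, R): 0.3, (i, G): 0.6, (i, B): 0.1}),
                 row({(i, G): 0.6, (i, B): 0.1, (i + 1, R): 0.3}),
                 row({(i - 1, B): -0.1, (i, R): 0.7, (i, G): -0.6}),
                 row({(i, G): -0.6, (i, B): 0.9, (i + 1, R): -0.3})]
        weights += [1.0, 1.0, 1.0, alpha, beta]
    A, w = np.array(rows), np.array(weights)
    return A.T @ (w[:, None] * A)

N = 64                                            # a "large" scanline
M_inv = np.linalg.inv(build_M(N, alpha=1.0, beta=1.0))
center = 3 * (N // 2)
direct_filter = M_inv[center:center + 3]          # three rows near the center
# Away from the center these rows decay toward zero; truncating them yields a
# short filter that, convolved with the right-hand vector b (with a shift of
# one pixel per row), approximates the exact solution.  Larger alpha and beta
# make the decay faster, so fewer terms need to be kept.
```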
A third approach involves combining the computation of the right-hand vector with the direct filtering to create nine filters that map three-times oversampled image data (i.e., Rj*,Gj*,Bj*) directly into pixel sub-component values. The generalized set of nine filters selected according to this third approach is further described in reference to
A more detailed presentation of mathematical techniques for selecting filters for processing image data in accordance to the foregoing example can be found in U.S. Provisional Patent Application Ser. No. 60/115,573 and U.S. Provisional Patent Application Ser. No. 60/115,731, which have been incorporated herein by reference.
Any of the foregoing computational techniques can be used to generate the filters that establish or approximately establish the desired tradeoff between color accuracy and sharpness. It should be understood that the preceding discussion of a mathematical approach for selecting the filters has been presented for purposes of illustration, and not limitation. Indeed, the invention extends to image processing and filtering techniques that utilize filters that conform with the general principles disclosed herein, regardless of the way in which the filters are selected. In addition to encompassing such techniques for processing and filtering image data, the invention also extends to processes of selecting the filters using analytical approaches, such as those disclosed herein.
The invention has been described in reference to an LCD display device having stripes of same-colored pixel sub-components. For LCD devices of this type, the color and luminance analysis presented herein considers only one dimension, namely, the linear direction that coincides with the orientation of the scanlines. In other words, the foregoing model for representing Y, U, and V on the striped LCD display device takes into consideration only the effects generated by the juxtaposition of pixel sub-components in the direction parallel to the orientation of the scanlines. Those skilled in the art, upon learning of the disclosure made herein, will recognize how the model can be defined in two dimensions, taking into consideration the position and effect of pixel sub-components above, below, and to the side of other pixel sub-components. While the one-dimensional model suitably describes the color perception of striped LCD devices, other pixel sub-component patterns, such as delta patterns, lend themselves more readily to a two-dimensional analysis. In any case, the invention extends to filters that have been selected in view of an optimization of an error metric or that conform to or approximate such an optimization, regardless of the number of dimensions associated with the color model or other such details of the model.
The foregoing color modeling has been described in reference to R,G,B and Y,U,V measurements of color in the color space. Modeling the perception of color and luminance of the image on a display device having separately controllable pixel sub-components can also be performed with respect to other color dimensions in the color space. Because rotating colors in the color space is simply a linear operation, the “error metric” is accurately and appropriately considered to represent a color error and luminance error, regardless of the color dimensions used in any particular model. Moreover, regardless of the color dimensions used, the optimization problem is appropriately described in terms of striking a balance between color accuracy and luminance accuracy.
A generalized set of optimized filters is illustrated in
One example of the filter coefficients that have been found to generate or approximately generate a desired balance between color accuracy and luminance accuracy is presented in
As described above, the exemplary optimal filters of
It is also noted that the optimal filters whose input and output are the same color are rounded box filters with slight negative lobes, which give a more rapid roll-off than a standard box filter. The R→R, G→G, and B→B filters also have a unity gain DC response. However, the filters that connect different colors from input to output are non-zero. Their purpose is to cancel color errors. The different-color input/output filters have a zero DC response according to this embodiment of the invention.
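For illustration only, the structure of such a nine-filter bank can be sketched as follows. The coefficients shown are placeholders rather than the optimized values referred to above; they merely respect the properties just described (unity DC gain on same-color paths, zero DC gain on cross-color paths).

```python
# Structural sketch of a nine-filter bank mapping three-times oversampled
# R*, G*, B* data directly to sub-component values.  Coefficients are
# placeholders only; a missing (input, output) entry is treated as a zero
# filter in this sketch.
import numpy as np

filters = {
    ("R", "R"): np.array([0.1, 0.3, 0.3, 0.3, 0.1]) / 1.1,  # same color: DC gain 1
    ("G", "R"): np.array([-0.05, 0.0, 0.1, 0.0, -0.05]),    # cross color: DC gain 0
    # ... the remaining seven filters are chosen the same way ...
}

def apply_filter_bank(oversampled, filters, phases={"R": 0, "G": 1, "B": 2}):
    """oversampled: dict of input channel name -> 1-D array, 3 samples/pixel."""
    length = len(next(iter(oversampled.values())))
    out = {}
    for color, phase in phases.items():                  # output sub-component
        total = np.zeros(length)
        for channel, data in oversampled.items():        # input channel
            taps = filters.get((channel, color))
            if taps is not None:
                total += np.convolve(data, taps, mode="same")
        out[color] = total[phase::3]                     # displaced sampling
    return out

oversampled = {c: np.linspace(0.0, 1.0, 30) for c in ("R", "G", "B")}  # placeholder data
subpixels = apply_filter_bank(oversampled, filters)
```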
While the filters illustrated in
As the image data is processed as disclosed herein, including the filtering operations in which the image data is sampled and mapped to obtain a desired balance between color accuracy and luminance accuracy, the image data is prepared for display on the LCD device or any other display device that has separately controllable pixel sub-components of different colors. The filtered data represents samples that are mapped to individual pixel sub-components of the pixels, rather than to the entire pixels. The samples are used to select the luminous intensity values to be applied to the pixel sub-components. In this way, a bitmap representation of the image or a scanline of an image to be displayed on the display device can be assembled.
The processing and filtering can be done on the fly during the rasterization and rendering of an image. Alternatively, the processing and filtering can be done for particular images, such as text characters, that are to be repeatedly included in displayed images. In this case, text characters can be prepared for display in an optimized manner and stored in a font glyph cache for later use in a document.
The image as displayed on the display device has the desired color accuracy and luminance accuracy, and also has improved resolution compared to images displayed using conventional techniques, which map samples to full pixels rather than to individual pixel sub-components.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application claims the benefit and priority of U.S. patent application Ser. No. 09/481,163, entitled “Filtering Image Data to Obtain Samples Mapped to Pixel Sub-Components of a Display Device,” filed Jan. 12, 2000, which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/115,573, entitled “Resolution and Image Enhancement for Patterned Displays,” filed Jan. 12, 1999, and U.S. Provisional Patent Application Ser. No. 60/115,731, entitled “Resolution Enhancement for Patterned Displays,” filed Jan. 12, 1999. U.S. patent application Ser. No. 09/481,163 is also a continuation-in-part of U.S. patent application Ser. No. 09/364,365, entitled “Methods, Apparatus and Data Structures for Enhancing the Resolution of Images to be Rendered on Patterned Display Devices,” filed Jul. 30, 1999. The pending application incorporates by reference and claims the benefit and priority of all of the foregoing applications.
Provisional Applications:

| Number | Date | Country |
| --- | --- | --- |
| 60/115,573 | Jan. 1999 | US |
| 60/115,731 | Jan. 1999 | US |

Parent Case Data:

| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09/481,163 | Jan. 2000 | US |
| Child | 11/166,658 | Jun. 2005 | US |
| Parent | 09/364,365 | Jul. 1999 | US |
| Child | 09/481,163 | Jan. 2000 | US |