This relates generally to electronic devices with displays, and, more particularly, to electronic devices with calibrated displays.
Electronic devices such as portable computers, media players, cellular telephones, set-top boxes, and other electronic equipment are often provided with displays for displaying visual information.
Due to various factors, colors which are meant to appear uniform on a display may not actually appear uniform to a user. For example, a white background on a display may have portions which appear slightly yellow or slightly blue. Displays are sometimes calibrated to reduce color non-uniformity.
Conventional calibration methods for correcting color non-uniformity in a display typically require an excessive number of measurements and a large amount of memory. For example, a three-dimensional look-up table (“3D LUT”), which is used to generate adapted pixel values based on input pixel values, typically requires well over 4,000 measurements for a display with 8-bit resolution per color. The amount of memory required to store a table of this size may be undesirably large and may increase the costs associated with manufacturing a display.
It would therefore be desirable to be able to provide improved calibration systems for calibrating electronic devices with color displays.
An electronic device may include a display and display control circuitry. The display may be calibrated during manufacturing using a calibration system. The calibration system may include calibration computing equipment coupled to a light sensor and may be used to gather display performance information from the display. The light sensor may be used to capture images of a display while the display is operated in different modes of operation.
Display performance information may include color information and intensity information measured at different locations on the display. Calibration computing equipment may use the color information and intensity information to calculate color-specific, intensity-specific, location-specific correction factors for each different location on the display.
Correction factors may be determined by comparing measured color data at a given location on the display with reference color data. The measured color data may include tristimulus values that are based on measured intensities of light at the given location. The reference color data may include color data measured at a reference location on the display or may include predetermined color data such as predetermined tristimulus values.
The color-specific, intensity-specific, location-specific correction factors may be stored in the electronic device. Display control circuitry in the electronic device may use the stored correction factors to perform pixel adaptation during operation of the display.
The display control circuitry may be configured to provide display data to the display. The display data may include color information and intensity information for each pixel. The display control circuitry may be configured to determine correction factor information for each pixel based on the color information, the intensity information, and the location of each pixel in the display.
The correction factor information may include correction factor values that correspond to different colors, intensity levels, and pixel locations on the display. The control circuitry may determine which correction factor values correspond to the color information for each pixel, the intensity information for each pixel, and the location of each pixel on the display. The display control circuitry may use interpolation to determine correction factor information for at least some of the pixels.
Display data for each pixel may include first and second digital display control values. The display control circuitry may determine correction factor information based on a ratio between the first digital display control value and the second digital display control value.
Each pixel may include a red subpixel, a green subpixel, and a blue subpixel. Correction factor information may include a red correction factor for each red subpixel, a green correction factor for each green subpixel, and a blue correction factor for each blue subpixel.
Further features of the invention, its nature and various advantages will be more apparent from the accompanying drawings and the following detailed description of the preferred embodiments.
Electronic devices such as cellular telephones, media players, computers, set-top boxes, wireless access points, and other electronic equipment may include calibrated displays. Displays may be used to present visual information and status data and/or may be used to gather user input data.
Display performance data may be gathered during calibration operations during manufacturing. The display color performance data may be used to calculate color-specific, intensity-specific, location-specific correction factors for a display in an electronic device. The correction factors may be stored in the electronic device and may be used to calibrate the display during operation of the display.
An illustrative electronic device of the type that may be provided with a display is shown in
As shown in
Device 10 may have a housing such as housing 12. Housing 12, which may sometimes be referred to as a case, may be formed of plastic, glass, ceramics, fiber composites, metal (e.g., stainless steel, aluminum, etc.), other suitable materials, or a combination of any two or more of these materials.
Housing 12 may be formed using a unibody configuration in which some or all of housing 12 is machined or molded as a single structure or may be formed using multiple structures (e.g., an internal frame structure, one or more structures that form exterior housing surfaces, etc.).
As shown in
In the example of
A diagram of electronic device 10 is shown in
Display driver circuitry 28 may be implemented using one or more integrated circuits (ICs) and may sometimes be referred to as a driver IC, display driver integrated circuit, or display driver. Display driver circuitry 28 may include, for example, timing controller (TCON) circuitry such as a TCON integrated circuit. Display driver circuitry 28 may, for example, be mounted on an edge of a thin-film-transistor substrate layer in display 14.
Graphics controller 52 may receive video data to be displayed on display 14 from storage and processing circuitry 30 over a path such as path 58. Storage and processing circuitry 30 may include one or more processors such as microprocessors, microcontrollers, digital signal processors, application-specific integrated circuits, or other processing circuits. Storage and processing circuitry may also include storage such as random-access memory, read-only memory, solid state memory in a solid state hard drive, magnetic storage, and other volatile and/or nonvolatile memory.
Circuitry 30 may use input-output circuitry 32 to allow data and user input to be supplied to device 10 and to allow data to be supplied from device 10 to external devices and/or to a user. Input-output circuitry 32 may include input-output devices such as touch screens, buttons, joysticks, click wheels, scrolling wheels, touch pads, key pads, keyboards, microphones, speakers, tone generators, vibrators, cameras, sensors, light-emitting diodes and other status indicators, data ports, etc. A user may control the operation of device 10 by supplying commands through input-output devices and may receive status information and other output from device 10 using the output resources of input-output devices.
Input-output circuitry 32 may include wireless communications circuitry. Wireless communications circuitry may include wireless local area network transceiver circuitry, cellular telephone network transceiver circuitry, and other components for wireless communication.
Display 14 may include a pixel array such as pixel array 56. Pixel array 56 may be controlled using control signals produced by display driver circuitry 28. During operation of device 10, storage and processing circuitry 30 may provide data to display driver circuitry 28 via graphics controller 52. Communications path 60 may be used to convey information between graphics controller 52 and display 14. Display driver circuitry 28 may convert the data that is received on path 60 into signals for controlling the pixels of pixel array 56.
Pixels 35 in pixel array 56 may contain thin-film transistor circuitry (e.g., polysilicon transistor circuitry or amorphous silicon transistor circuitry) and associated structures for producing electric fields across liquid crystal material in display 14. The thin-film transistor structures that are used in forming pixels 35 may be located on a substrate (sometimes referred to as a thin-film transistor layer or thin-film transistor substrate). The thin-film transistor (TFT) layer may be formed from a planar glass substrate, a plastic substrate, or a sheet of other suitable substrate materials.
As shown in
To provide display 14 with the ability to display color images, display 14 may include display pixels having color filter elements. Each color filter element may be used to impart color to the light associated with a respective display pixel in the pixel array of display 14. Display 14 may, for example, include a layer of liquid crystal material interposed between a thin-film-transistor layer and a color filter layer (as an example).
Display 14 may include touch circuitry such as capacitive touch electrodes (e.g., indium tin oxide electrodes or other suitable transparent electrodes) or other touch sensor components (e.g., resistive touch technologies, acoustic touch technologies, touch sensor arrangements using light sensors, force sensors, etc.). Display 14 may be a touch screen that incorporates display touch circuitry or may be a display that is not touch sensitive.
Display calibration information such as color-specific, intensity-specific, location-specific correction factors may be loaded onto device 10 during manufacturing. The stored correction factors may be accessed during operation of display 14 to produce calibrated images for a user. Correction factors may be stored in any suitable location in electronic device 10. For example, correction factors may be stored in storage and processing circuitry 30, in graphics controller 52, or in display driver circuitry 28. In one suitable embodiment, a display timing controller (TCON) integrated circuit in circuitry 28 may receive incoming subpixel values from graphics controller 52 and may, based on the received incoming subpixel values, calculate and apply appropriate correction factors to the incoming subpixel values to obtain adapted subpixel values. This is, however, merely illustrative. If desired, display calibration may be performed by graphics controller 52, by storage and processing circuitry 30, and/or by other components in device 10.
A portion of an illustrative array of display pixels is shown in
Subpixels 35 may include subpixels of any suitable color. For example, subpixels 35 may include a pattern of cyan, magenta, and yellow subpixels, or may include any other suitable pattern of colors. The illustrative example in which subpixels 35 include a pattern of red, green, and blue subpixels is sometimes described herein as an example.
Display driver circuitry such as a display driver integrated circuit and, if desired, associated thin-film transistor circuitry formed on a display substrate layer may be used to produce signals such as data signals and gate line signals (e.g., on data lines and gate lines respectively in display 14) for operating pixels 34 (e.g., turning pixels 34 on and/or off and/or adjusting the intensity of pixels 34). During operation of display 14, display driver circuitry 28 may be used to control the intensity of light displayed by pixels 34 by controlling the values of data signals and gate line signals that are supplied to pixels 34.
Control circuitry included in storage and processing circuitry 30 may be used to provide digital display control values to display driver circuitry in device 10. Digital display control values may be a set of integers (commonly integers with values ranging from 0 to 255) that may be used to control the brightness of pixels 34. Each digital display control value may correspond to an associated intensity level. Display driver circuitry 28 may be used to convert the digital display control values into analog display signals. The analog display signals may be supplied to pixels 34 and may therefore be used to control the brightness of pixels 34. For example, a digital display control value of 0 may result in an “off” pixel while a digital display control value of 255 may result in a pixel operating at a maximum available power.
Digital display control values may include any suitable range of values. For example, digital display control values may be a set of integers ranging from 0 to 64. The arrangement in which digital display control values include integer values ranging from 0 to 255 is sometimes described herein as an example.
Display driver circuitry 28 may be used to concurrently operate pixels 34 of different colors in order to generate light having a color that is a mixture of, for example, the primary colors red, green, and blue. As examples, operating red pixels R and blue pixels B at equal intensities may produce light that appears violet, operating red pixels R and green pixels G at equal intensities may generate light that appears yellow, operating red pixels R and green pixels G at maximum intensity while operating blue pixels B at half of maximum intensity may generate light that appears “yellowish,” and operating red pixels R, green pixels G, and blue pixels B simultaneously at maximum intensity may generate light that appears white.
In some cases, however, a given color may appear differently in some portions of the display than in other portions of the display. A white background, for example, which is meant to appear uniformly white across the display, may appear slightly yellow in some portions of the display and/or may appear slightly blue in some portions of the display. Other colors may also exhibit non-uniformity across the display.
There are various factors that can contribute to color non-uniformity in a display. For example, backlight non-uniformity (e.g., manufacturing variations in the light-emitting diodes of a backlight), cell gap variation (e.g., gaps between adjacent pixel cells), color filter variation, display panel temperature variation, and other factors may contribute to color non-uniformity in a display. As an example, portions of a display such as edge portions 36 of display 14 shown in
Color non-uniformity in a display may be corrected by applying correction factors to incoming pixel values to generate corresponding adapted pixel values. The adapted pixel values may be used to generate images with increased color uniformity. The adapted pixel values may be generated during operation of the display.
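Purely as an illustration of this type of pixel adaptation (a minimal Python sketch with hypothetical names, not a description of any particular embodiment), the correction may be thought of as a per-subpixel scaling followed by clamping to the valid code range:

    # Minimal sketch of pixel adaptation: scale incoming subpixel values by
    # correction factors and clamp the results to the 8-bit code range.
    # The function name and the flat (r, g, b) interface are assumptions.
    def adapt_pixel(r, g, b, f_r, f_g, f_b, max_code=255):
        """Apply color correction factors to one pixel's subpixel values."""
        def clamp(value):
            return max(0, min(max_code, int(round(value))))
        return clamp(r * f_r), clamp(g * f_g), clamp(b * f_b)

    # Example: slightly attenuate blue at a location that appears too blue.
    adapted = adapt_pixel(200, 200, 200, 1.00, 1.00, 0.96)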
In order to produce electronic devices having display pixel adaptation capabilities, the display in each electronic device may undergo a first calibration process during manufacturing. The first calibration process may include gathering display performance information, processing the display performance information, and using the display performance information to calculate color-specific, intensity-specific, location-specific correction factors. The correction factors may be stored in the electronic device and may be used during operation of the display to produce calibrated images for a user.
Calibration computing equipment 46 may be coupled to test chamber 38 using a wired or wireless communications path such as path 44.
Test chamber 38 may include a light sensor such as light sensor 40. Light sensor 40 may include one or more light-sensitive components 45 for gathering display light 42 emitted by display 14 during calibration operations. Light-sensitive components 45 in light sensor 40 may include colorimetric light-sensitive components and spectrophotometric light-sensitive components configured to gather colored light.
Light sensor 40 may, for example, be a colorimeter having one or more light-sensitive components 45 each corresponding to an associated set of colored pixels in display 14. For example, a display having red, green and blue display pixels may be calibrated using a light sensor having corresponding red, green, and blue light-sensitive components 45. This is, however, merely illustrative. A display may include display pixels for emitting colors other than red, green, and blue, and light sensor 40 may include light-sensitive components 45 sensitive to colors other than red, green, and blue, may include white light sensors, or may include spectroscopic sensors.
Light sensor 40 may be used by system 48 to convert display light 42 into display performance data for calibrating the performance of displays such as display 14. For example, light sensor 40 may be used to capture images of display 14 while display 14 is operated in different modes of operation. Images captured by light sensor 40 may be provided to calibration computing equipment 46. Each captured image may contain display performance information. Display performance information may include, for example, data corresponding to display light intensities as a function of digital display control values (e.g., measured intensity of red light as a function of the digital display control values supplied to red pixels).
Display performance information may include color data such as X, Y, and Z tristimulus values. Tristimulus values may be calculated based on measured intensities of light at a particular location on display 14. Color data associated with a particular location on display 14 may be compared with color data associated with a reference location on display 14. The comparison of color data may be used to calculate correction factors for that particular location on display 14.
Test chamber 38 may, if desired, be a light-tight chamber that prevents outside light (e.g., ambient light in a testing facility) from reaching light sensor 40 during calibration operations.
During calibration operations, device 10 may be placed into test chamber 38 (e.g., by a technician or by a robotic member). Calibration computing equipment 46 may be used to operate device 10 and light sensor 40 during calibration operations. For example, calibration computing equipment 46 may issue a command (e.g., by transmitting a signal over path 44) to device 10 to operate some or all pixels of display 14. While device 10 is operating the pixels of display 14, calibration computing equipment 46 may operate light sensor 40 to gather display performance data. Display performance data may include, for example, measured intensities of red light emitted from display 14, measured intensities of green light emitted from display 14, and measured intensities of blue light emitted from display 14.
Display 14 may be operated in one or more calibration sequences during calibration operations. Each calibration sequence may correspond to a different color. In a first data collection phase of calibration operations, display 14 may, for example, be operated in a first series of calibration sequences such as a red calibration sequence, a green calibration sequence, and a blue calibration sequence. A red calibration sequence may include operating red pixels at different power levels while green and blue pixels are turned off, a green calibration sequence may include operating green pixels at different power levels while red and blue pixels are turned off, and a blue calibration sequence may include operating blue pixels at different power levels while red and green pixels are turned off. A given calibration sequence may include measurements at, for example, five different power levels, ten different power levels, fifteen different power levels, more than fifteen different power levels, less than fifteen different power levels, etc.
During a second data collection phase of calibration operations, display 14 may be operated in a series of calibration sequences corresponding to additional colors. The additional colors may include any suitable color. A “color” may be defined by the relative intensity ratios of the colored pixels that make up the color (e.g., “brightness” ratios). For example, in a display having red, green, and blue pixels, a color may be defined by the brightness ratio of red pixels to green pixels, the brightness ratio of green pixels to blue pixels, and the brightness ratio of red pixels to blue pixels (or any other suitable set of brightness ratios from which the brightnesses of red, green, and blue pixels relative to each other may be determined).
As an example, a “greenish blue” color may be defined by a 1:2 brightness ratio of red pixels to green pixels, a 2:3 brightness ratio of green pixels to blue pixels, and a 1:3 brightness ratio of red pixels to blue pixels. As another example, a “yellowish” color may be defined by a 1:2 ratio of blue pixels to green pixels, a 1:2 ratio of blue pixels to red pixels, and a 1:1 ratio of red pixels to green pixels. As yet another example, a “neutral” color may be defined by equal brightness levels of red, green, and blue pixels (e.g., a 1:1 brightness ratio of red pixels to green pixels, a 1:1 brightness ratio of green pixels to blue pixels, and a 1:1 ratio of blue pixels to red pixels).
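For illustration, a short sketch (a hypothetical Python helper, not part of any embodiment) showing how the pairwise brightness ratios that define a color could be derived from relative red, green, and blue intensity levels:

    from fractions import Fraction

    # Sketch: derive the pairwise brightness ratios that define a "color"
    # from relative red, green, and blue intensity levels.
    def brightness_ratios(r, g, b):
        return {
            "red:green": Fraction(r, g),
            "green:blue": Fraction(g, b),
            "red:blue": Fraction(r, b),
        }

    # "Greenish blue" example from the text: R=85, G=170, B=255 yields
    # ratios of 1:2, 2:3, and 1:3, respectively.
    ratios = brightness_ratios(85, 170, 255)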
Each calibration sequence in the second data collection phase may correspond to a different color. For example, a “greenish blue” calibration sequence may include operating red, green, and blue pixels at different power levels while maintaining the relative brightness ratios corresponding to the greenish blue color. Each calibration sequence may include measurements at, for example, five different power levels, ten different power levels, fifteen different power levels, more than fifteen different power levels, less than fifteen different power levels, etc. The series of calibration sequences in the second data collection phase may include any suitable number of calibration sequences. For example, the second data collection phase may include five calibration sequences (e.g., five calibration sequences corresponding respectively to five different colors), may include ten calibration sequences (e.g., ten calibration sequences corresponding respectively to ten different colors), more than ten calibration sequences, less than ten calibration sequences, etc. If desired, the second data collection phase may be omitted and display 14 may only be operated in a first series of calibration sequences (e.g., red, green, and blue calibration sequences).
The display performance data collected during each calibration sequence may be used to calculate correction factors for each shade of the particular color associated with that calibration sequence. For example, a “magenta” calibration sequence may be used to calculate a set of correction factors for each measured shade (e.g., each power level at which a measurement is taken) of magenta, a “bluish green” calibration sequence may be used to calculate a set of correction factors for each measured shade of bluish green, etc. Thus, the set of colors for which data is collected during calibration operations may dictate the set of colors for which correction factors will later be calculated. Similarly, the power levels at which measurements are taken in each calibration sequence may determine the intensity levels for which correction factors will later be calculated.
With this type of configuration, the correction factors which are applied to incoming subpixel values during operation of the display may be color-specific and intensity-specific. For example, if the combination of brightness ratios associated with incoming subpixel values corresponds to a “magenta” color at maximum intensity, then the correction factors which have been calculated specifically for magenta at maximum intensity may be applied to the incoming subpixel values to produce an adapted set of subpixel values.
If desired, a set of correction factors may be calculated for every color (e.g., every different combination of relative brightness ratios) and/or for every shade of a given color. Display 14 may be operated in a corresponding calibration sequence for every color and intensity value for which correction factors are to be calculated.
If desired, correction factors may be calculated for a set of representative colors and for a selected number of shades of each representative color. The set of representative colors may include any suitable color and the selected number of shades may include any suitable shade. Storing correction factors for a set of representative colors and a selected number of shades of each color may require less storage space in an electronic device than storing correction factors for all possible colors and all possible shades of each color. This is, however, merely illustrative. If desired, correction factors may be calculated for all possible colors, for substantially all possible colors, for primary colors only (e.g., red, green, and blue), for a representative set of colors, for every shade of each color, for a selected number of shades of each color, for only the maximum brightness of each color, etc.
A representative set of colors may, for example, include neutral colors (e.g., colors having equal intensities of red, green, and blue light), may include saturated colors (e.g., saturated primary colors and/or saturated secondary colors), may include mid-tone colors (e.g., colors between neutral colors and saturated colors), and/or may include other suitable colors.
Calibration computing equipment 46 may receive display performance data (e.g., display performance data corresponding to captured images of display 14) from light sensor 40 over path 44. Calibration computing equipment 46 may be used to process the gathered data and to calculate color-specific, intensity-specific, location-specific correction factors based on the display performance data. Processing steps may include, for example, applying one or more filters to the images of display 14 gathered by light sensor 40. Filtering techniques that may be applied to images gathered by light sensor 40 include, for example, median filtering (e.g., 2D median filtering) and low-pass filtering (e.g., low-pass/average filtering techniques).
Processing steps performed by calibration computing equipment 46 may include, for example, applying filters (e.g., 2D median filters and low-pass/average filters) to optimize the radiometric resolution of images captured by light sensor 40. The radiometric resolution of each image captured by light sensor 40 may be optimized (e.g., reduced) such that each image is composed of a number of reduced resolution pixels. A reduced resolution pixel may be a location on the display for which display performance information is used to calculate a set of correction factors. Correction factors may be calculated for each reduced resolution pixel on display 14.
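A non-authoritative sketch of this type of processing is shown below, assuming NumPy and SciPy are available; the block size and filter size are illustrative choices rather than values taken from any embodiment.

    import numpy as np
    from scipy.ndimage import median_filter

    # Sketch: denoise one channel of a captured image with a 2D median filter,
    # then reduce it to a coarse grid of "reduced resolution pixels" by block
    # averaging (a simple form of low-pass/average filtering).
    def reduce_resolution(channel, block=64, median_size=5):
        filtered = median_filter(channel, size=median_size)
        h, w = filtered.shape
        h_trim, w_trim = h - h % block, w - w % block
        blocks = filtered[:h_trim, :w_trim].reshape(
            h_trim // block, block, w_trim // block, block)
        return blocks.mean(axis=(1, 3))

    # Example: a 1920x1080 measurement becomes a 16x30 grid of locations.
    coarse = reduce_resolution(np.random.rand(1080, 1920), block=64)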
With this type of configuration, the correction factors which are applied to incoming subpixel values during operation of the display may be location-specific (e.g., may be location-specific in addition to being color-specific and intensity-specific). For example, the correction factors applied to incoming subpixel values associated with a given pixel may be determined based on the location of the given pixel.
The resolution of each image captured by light sensor 40 may be optimized depending on the areas of display 14 that tend to exhibit greater color non-uniformity. For example, if it is determined that display 14 exhibits greater color non-uniformity at the edges, then the resolution of each image captured by light sensor 40 may be optimized to have greater resolution at the edges than at other portions of the display. This may in turn allow for a greater concentration of locations at the edges of a display for which correction factors will be calculated. This is, however, merely illustrative. If desired, correction factors may be calculated for any suitable number of locations on a display. The locations on a display for which correction factors are calculated may be uniformly distributed across the display or may be distributed non-uniformly in any suitable manner. In any case, the filtering techniques employed by calibration computing equipment 46 may be used to reduce the radiometric resolution of each image captured by light sensor 40 based on the desired locations for which correction factors are to be calculated.
Display performance data gathered by calibration computing equipment 46 may be used to calculate color-specific, intensity-specific, location-specific correction factors for display 14. In one suitable embodiment, the correction factors may be calculated using device 10 during operation of display 14. With this type of configuration, a set of correction factors may be calculated on-the-fly for each set of incoming subpixel values associated with a given pixel on the display. The correction factors may be based on the color to be displayed by that pixel, the intensity of light to be displayed by that pixel, and the location of that pixel on the display. The calculated correction factors may be applied to the incoming subpixel values to obtain adapted subpixel values during operation of the display. To achieve on-the-fly calculation of correction factors, display performance data gathered during calibration operations may be stored on device 10 using storage and processing circuitry 30 and/or using display control circuitry 68 (
In another suitable embodiment, color-specific, intensity-specific, location-specific correction factors may be calculated using external equipment (e.g., using calibration computing equipment 46 and/or other computing equipment external to device 10). The calculated correction factors may be stored in device 10 and may be used during operation of display 14 to display calibrated images for a user. With this type of configuration, the set of correction factors applied to each set of incoming subpixel values associated with a given pixel may be determined based on the correction factors already stored in device 10. Correction factors may be applied to the incoming subpixel values to obtain adapted subpixel values during operation of the display. Correction factors calculated during calibration operations may be stored in device 10 using storage and processing circuitry 30 and/or using display control circuitry 68 (
Any color generated by a display such as display 14 may therefore be represented by a point (e.g., by chromaticity values x and y) on a chromaticity diagram such as the diagram shown in
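For reference, chromaticity coordinates are conventionally obtained from tristimulus values by normalization (a standard CIE relationship, shown here as a small sketch rather than as part of any embodiment):

    # Standard CIE relationship: chromaticity coordinates (x, y) from
    # tristimulus values (X, Y, Z).
    def chromaticity(X, Y, Z):
        total = X + Y + Z
        return X / total, Y / total

    # Example: the D65 white point (X=95.047, Y=100.0, Z=108.883) maps to
    # approximately (0.3127, 0.3290).
    x, y = chromaticity(95.047, 100.0, 108.883)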
Saturated colors may be included in a subregion such as subregion 50S of bounded region 50. Subregion 50S may include saturated primary colors (e.g., saturated red, saturated green, and saturated blue) and saturated secondary colors (e.g., saturated cyan, saturated magenta, and saturated yellow). Subregion 50N may include neutral colors. Neutral colors may include, for example, colors having equal intensities of red, blue, and green such as white and gray (e.g., different shades of gray).
A third subregion such as subregion 50M may include mid-tone colors. Mid-tone colors in subregion 50M may lie between the saturated colors of region 50S and the neutral colors of region 50N. The human eye may be more sensitive to non-uniformity in mid-tone colors in a display than to non-uniformity in saturated colors. If desired, correction factors may be calculated for a set of representative colors that lie in regions 50M and 50N to reduce color non-uniformity in a display. In general, correction factors may be calculated for any suitable color or set of colors. Choosing a set of colors that lie in regions 50M and 50N is merely illustrative.
As shown in
The example of
As shown in
A similar series of measurements may be taken during each calibration sequence in the second data collection phase (e.g., for each color for which correction factors are to be calculated). In the illustrative graphs shown in
The data collected during calibration operations may be used to calculate color-specific, intensity-specific, location-specific correction factors. In order to illustrate how the correction factors may be calculated, an example will be described in which correction factors are calculated for a particular color, at a particular intensity level, and at a particular location on a display.
The first measurement includes captured image 14A of display 14. Captured image 14A may correspond to measurement a14 of
The second, third, and fourth measurements shown in
Measurement R4 includes captured image 14B of display 14 and may be taken while digital display control values of R=85, G=0, and B=0 are supplied to subpixels 35 of display 14. In general, captured image 14B may be a measurement taken during the red calibration sequence that corresponds to the intensity level of red in the color for which correction factors are being calculated. In the current example, greenish blue at maximum intensity includes a red intensity level of 33% of maximum intensity (e.g., corresponding to a digital display control value of R=85).
Measurement G9 includes captured image 14C of display 14 and may be taken while digital display control values of R=0, G=170, and B=0 are supplied to subpixels 35 of display 14. In general, captured image 14C may be a measurement taken during the green calibration sequence that corresponds to the intensity level of green in the color for which correction factors are being calculated. In the current example, greenish blue at maximum intensity includes a green intensity level of 66% of maximum intensity (e.g., corresponding to a digital display control value of G=170).
Measurement B14 includes captured image 14D of display 14 and may be taken while digital display control values of R=0, G=0, and B=255 are supplied to subpixels 35 of display 14. In general, captured image 14D may be a measurement taken during the blue calibration sequence that corresponds to the intensity level of blue in the color for which correction factors are being calculated. In the current example, greenish blue at maximum intensity includes a blue intensity level of 100% of maximum intensity (e.g., corresponding to a digital display control value of B=255).
Each measurement may yield an associated set of output data. The output data may include, for example, measured intensities of red light (R), measured intensities of green light (G), and measured intensities of blue light (B). A set of R, G, and B intensity values may be measured at any suitable location on display 14. For example, a set of R, G, and B intensity values may be measured at each predetermined location on display 14 for which correction factors are to be calculated. The measured R, G, and B intensity values associated with each predetermined location may be transformed into X, Y, and Z tristimulus values using a known transformation matrix (e.g., as described above in connection with
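A minimal sketch of such a transformation is shown below; the matrix entries are the standard sRGB-to-XYZ coefficients used only as placeholders, since an actual transformation matrix would be characterized for light sensor 40 and display 14.

    import numpy as np

    # Sketch: convert measured R, G, B intensity values at one display
    # location into X, Y, Z tristimulus values using a known 3x3
    # transformation matrix. The entries below are placeholders.
    RGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    def rgb_to_xyz(rgb_intensities):
        return RGB_TO_XYZ @ np.asarray(rgb_intensities, dtype=float)

    X, Y, Z = rgb_to_xyz([0.25, 0.60, 0.90])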
To calculate correction factors for a location such as location “P” on display 14, measured color data associated with location P may be compared with reference color data. Reference color data may, for example, be a measured set of X, Y, and Z tristimulus values or may be a predetermined set of X, Y, and Z tristimulus values. In the case where the reference color data is predetermined, a set of known X, Y, and Z tristimulus values associated with the color for which correction factors are being calculated may be used as reference color data. For example, if correction factors are calculated for white, reference color data may include X, Y, and Z tristimulus values corresponding to the standard illuminant D65 defined by the International Commission on Illumination (CIE).
In the case where reference color data includes a measured set of X, Y, and Z tristimulus values, the reference color data may be measured at a reference location on display 14 (e.g., a location on the display for which display performance information is known, a location on the display at which colors exhibit little to no non-uniformity, a location at the center of the display, etc.). In the current example, reference color data for greenish blue at maximum intensity may include measured X, Y, and Z tristimulus values at a predetermined reference location on display 14.
Measured color data at location P may be compared with measured color data at the reference location. As shown in
To compare measured color data at location P with reference color data, the X, Y, and Z components from captured images 14B, 14C, and 14D may be respectively added together. For example, the X component associated with location P on captured image 14B, the X component associated with location P on captured image 14C, and the X component associated with location P on captured image 14D may be added together to obtain XP. More specifically, the following summations may be made:
XR4(P) + XG9(P) + XB14(P) = XP   (1)
YR4(P) + YG9(P) + YB14(P) = YP   (2)
ZR4(P) + ZG9(P) + ZB14(P) = ZP   (3)
XR4(REF) + XG9(REF) + XB14(REF) = XREF   (4)
YR4(REF) + YG9(REF) + YB14(REF) = YREF   (5)
ZR4(REF) + ZG9(REF) + ZB14(REF) = ZREF   (6)
The measured color data at location P may therefore be represented by the components XP, YP, and ZP, and the reference color data at the reference location may be represented by the components XREF, YREF, and ZREF. Illustrative graphs of color data at location P and reference color data are shown in
Color data at location P may be compared with reference color data. In particular, the relative ratios of components XREF, YREF, and ZREF may be compared with the relative ratios of components XP, YP, and ZP. In the illustrative example shown in
A set of factors fx, fy, and fz may be calculated based on the comparison between measured color data at location P and reference color data (e.g., measured color data at the reference location). The set of factors may be a set of numbers that are each less than or equal to one and may be calculated such that, when XP, YP, and ZP are multiplied respectively by factors fx, fy, and fz, the resulting ratios of X, Y, and Z components at location P are equivalent or substantially equivalent to the ratios of XREF, YREF, and ZREF. This method is based on the fact that if the ratios of the X, Y, and Z components of one color are equivalent to those of another color, then the colors must be the same.
In the illustrative example of
In the illustrative example of
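One possible way to compute such factors is sketched below (a hypothetical helper that assumes all components are positive; it is not necessarily the computation used in any particular embodiment): divide each reference component by the corresponding measured component and normalize so that the largest factor equals one.

    # Sketch: compute factors fx, fy, fz (each <= 1) so that
    # (XP*fx) : (YP*fy) : (ZP*fz) matches XREF : YREF : ZREF.
    def ratio_matching_factors(xp, yp, zp, xref, yref, zref):
        raw = (xref / xp, yref / yp, zref / zp)
        largest = max(raw)
        return tuple(value / largest for value in raw)

    # Example: location P is slightly "bluer" than the reference, so the Z
    # component receives the smallest factor.
    fx, fy, fz = ratio_matching_factors(40.0, 42.0, 95.0, 40.0, 42.0, 90.0)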
The calculated set of factors fx, fy, and fz may be used to calculate target X, Y, and Z values. Target X, Y, and Z values may be calculated by multiplying factors fx, fy, and fz respectively with the measured X, Y, and Z values at location P on captured image 14A of
XTARGET = Xa14(P) * fx   (7)
YTARGET = Ya14(P) * fy   (8)
ZTARGET = Za14(P) * fz   (9)
The target components XTARGET, YTARGET, and ZTARGET may be used to calculate color-specific, intensity-specific, location-specific correction factors fR, fG, and fB. To calculate the correction factors, the following equation may be used:

(fR, fG, fB)^T = M3×3^-1 × (XTARGET, YTARGET, ZTARGET)^T   (10)
where M3×3^-1 is the inverse of the three-by-three matrix M3×3. Matrix M3×3 may be composed of measured X, Y, and Z tristimulus values from the red, green, and blue calibration sequences. In particular, M3×3 may be of the following form:

M3×3 = | Xred   Xgreen   Xblue |
       | Yred   Ygreen   Yblue |
       | Zred   Zgreen   Zblue |
where the first column includes values Xred, Yred, and Zred associated with a given measurement in the red calibration sequence, the second column includes values Xgreen, Ygreen, and Zgreen associated with a given measurement in the green calibration sequence, and the third column includes values Xblue, Yblue, and Zblue associated with a given measurement in the blue calibration sequence. The first, second, and third columns may therefore respectively correspond to a measured intensity level R of red light, a measured intensity level G of green light, and a measured intensity level B of blue light.
The X, Y, and Z values that populate matrix M3×3 may be chosen in any suitable manner. In one suitable embodiment, the values may be chosen such that the columns all correspond to the same intensity level (e.g., such that R=G=B). The intensity level may be based on the color and intensity for which correction factors are being calculated. For example, the R, G, and B intensity values represented by the columns of matrix M3×3 may be chosen such that they are close in value respectively to the R, G, and B intensity values of the color and intensity level for which correction factors are being calculated (while still maintaining color neutrality such that R=G=B). The intensity of light represented by the columns of matrix M3×3 may, for example, be an average of the R, G, and B values associated with the color and intensity level for which correction factors are being calculated. In the current example, correction factors are being calculated for greenish blue at maximum intensity having R, G, and B intensity values of 85, 170, and 255, respectively. The intensity of light represented by the columns of matrix M3×3 may, for example, be the average of values 85, 170, and 255. The resulting average intensity value of 170 corresponds to measurement R9 in the red calibration sequence, G9 in the green calibration sequence, and B9 in the blue calibration sequence. The corresponding matrix M3×3 would then be the following:

M3×3 = | XR9   XG9   XB9 |
       | YR9   YG9   YB9 |
       | ZR9   ZG9   ZB9 |
where the first column includes values XR9, YR9, and ZR9 associated with the R=170 measurement in the red calibration sequence, the second column includes values XG9, YG9, and ZG9 associated with the G=170 measurement in the green calibration sequence, and the third column includes values XB9, YB9, and ZB9 associated with the B=170 measurement in the blue calibration sequence.
In general, matrix M3×3 may be populated in any suitable manner. The example in which the R, G, and B intensity values represented by the columns of matrix M3×3 are chosen such that they are close in value respectively to the R, G, and B intensity values of the color and intensity level for which correction factors are being calculated is merely illustrative.
The inverse matrix M3×3^-1 may be computed from matrix M3×3 and may be used in equation (10) to calculate color-specific, intensity-specific, location-specific correction factors fR, fG, and fB. In the current example, the correction factors calculated using equation (10) would correspond to greenish blue, at maximum intensity, at location P.
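A brief numerical sketch of this step is shown below using NumPy; the matrix entries and target values are placeholders standing in for the measured XR9 through ZB9 values and the computed XTARGET, YTARGET, and ZTARGET components.

    import numpy as np

    # Sketch of equation (10): correction factors from the target tristimulus
    # values and the inverse of matrix M3x3. All numbers are placeholders.
    M = np.array([
        [20.0, 35.0, 18.0],   # XR9, XG9, XB9
        [10.0, 70.0,  7.0],   # YR9, YG9, YB9
        [ 1.0, 12.0, 95.0],   # ZR9, ZG9, ZB9
    ])
    target = np.array([55.0, 75.0, 100.0])   # XTARGET, YTARGET, ZTARGET

    f_r, f_g, f_b = np.linalg.inv(M) @ target
    # (np.linalg.solve(M, target) is numerically preferable in practice.)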
Correction factors may be calculated for any suitable color, may be calculated for any suitable intensity level, and may be calculated for any suitable location on display 14. The calculation described above in which correction factors are computed for greenish blue, at maximum intensity, at location P is merely illustrative and may be applied similarly to any suitable combination of color, intensity level, and location.
If desired, the correction factors calculated during calibration operations may be stored in a look-up table. An illustrative table of correction factors for greenish blue at location P is shown in
A table such as table 102 of
Tables such as table 102 of
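As one illustration of how such tables might be organized in software (hypothetical names and placeholder values, not a required data layout), correction factors could be keyed by representative color, then by display location, then by intensity level:

    # Sketch: nested mapping of correction factors, keyed by representative
    # color, then display location, then intensity level. All names and
    # numbers below are illustrative placeholders.
    correction_tables = {
        "greenish_blue": {
            "P": {
                255: (0.98, 1.00, 0.95),   # (fR, fG, fB) at maximum intensity
                170: (0.99, 1.00, 0.96),
                 85: (0.99, 1.00, 0.97),
            },
        },
    }

    f_r, f_g, f_b = correction_tables["greenish_blue"]["P"][255]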
The correction factors which are applied to a given pixel may depend on the color to be displayed by that pixel, the intensity level of light to be displayed by that pixel, and the location of that pixel on the display. Based on this information, a set of correction factors fR, fG, and fB may be determined.
Consider, for example, a pixel such as pixel 104 of
It may be the case that correction factors were not calculated for an exact color, intensity level, and location associated with pixel 104. Techniques such as the least squares method, linear interpolation, bilinear interpolation, and other suitable approximation techniques may be used to determine an optimal set of correction factors for each particular color, intensity level, and location associated with a given pixel.
For example, circuitry 68 may compare the color to be displayed by pixel 104 with the representative colors for which correction factors have been calculated. If desired, the least squares method or any other suitable approximation method may be used to determine which representative color most closely matches the color to be displayed by pixel 104. This may include, for example, comparing the set of brightness ratios associated with incoming R, G, and B values with the sets of brightness ratios associated with the representative colors for which correction factors have been calculated.
After determining the representative color that most closely matches the color to be displayed by pixel 104 (sometimes referred to as the “best match” color), the location of pixel 104 may be taken into account. If location X of pixel 104 is not one of the locations for which correction factors have been calculated, circuitry 68 may determine the nearest locations for which correction factors have been calculated. In the example of
For each neighboring pixel location, circuitry 68 may obtain a set of correction factors which most closely corresponds to the color and intensity level to be displayed by pixel 104. The sets of correction factors may be obtained from the look-up tables associated with the best match color and the neighboring pixel locations. For example, if greenish blue is determined to be the best match color for pixel 104, then correction factors may be obtained from the greenish blue look-up tables associated respectively with locations P, Q, R, and S.
Obtaining a set of correction factors from each look-up table may include, for example, choosing the set of correction factors corresponding to an intensity level that most closely matches the intensity to be displayed by pixel 104. As another example, if the intensity to be displayed by pixel 104 falls between two intensity levels for which correction factors have been calculated, linear interpolation may be used to calculate a set of correction factors. This is, however, merely illustrative. In general, any suitable approximation method may be used to determine an optimal set of correction factors when correction factors corresponding to the exact intensity of light to be displayed by pixel 104 have not been stored in device 10.
Once circuitry 68 has obtained a set of correction factors for each neighboring pixel location, circuitry 68 may then determine a final set of correction factors fR, fG, and fB to apply to the incoming subpixel values for pixel 104. This may include, for example, using bilinear interpolation to calculate a set of correction factors for pixel 104 based on the correction factors obtained from neighboring pixel locations P, Q, R, and S. This is, however, merely illustrative. If desired, other approximation methods may be used to determine a set of correction factors for pixel 104 based on the correction factors obtained from neighboring pixel locations.
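A compact sketch of the interpolation just described is shown below; the helper names, the unit-square parameterization, and the placeholder factor values are assumptions made for illustration.

    # Sketch: interpolate correction factors for a pixel at fractional
    # position (u, v) within the rectangle formed by stored locations
    # P, Q, R, and S.
    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    def factors_at_intensity(low_level, low_factors, high_level, high_factors, level):
        # Linear interpolation between two stored intensity levels.
        t = (level - low_level) / (high_level - low_level)
        return lerp(low_factors, high_factors, t)

    def bilinear(f_p, f_q, f_r, f_s, u, v):
        # P---Q on one edge and R---S on the opposite edge (illustrative layout).
        top = lerp(f_p, f_q, u)
        bottom = lerp(f_r, f_s, u)
        return lerp(top, bottom, v)

    # Example: factors for an intensity of 200 at a pixel 30% of the way from
    # P toward Q and 60% of the way toward the R-S edge.
    f_p = factors_at_intensity(170, (0.99, 1.00, 0.96), 255, (0.98, 1.00, 0.95), 200)
    f_q = (0.99, 1.00, 0.97)
    f_r = (0.98, 0.99, 0.96)
    f_s = (0.99, 1.00, 0.96)
    final = bilinear(f_p, f_q, f_r, f_s, 0.3, 0.6)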
Circuitry 68 may apply the final set of correction factors fR, fG, and fB to incoming subpixel values to obtain adapted subpixel values R′, G′, and B′. Circuitry 68 may supply the adapted pixel values to pixel 104 on display 14 (e.g., via path 66 of
The procedure of pixel adaptation just described may take place for each pixel in display 14 or may, if desired, take place for a selected group of pixels in display 14. Pixel adaptation may take place continuously during operation of display 14 or may, if desired, take place at intervals during operation of display 14.
At step 108, a calibration system such as calibration system 48 of
At step 110, the display performance data gathered by calibration computing equipment 46 may be processed and analyzed. Processing may include, for example, reducing the resolution (e.g., radiometric resolution) of each captured image of display 14 based on the areas of display 14 that exhibit greater color non-uniformity. Once processed, the display performance data may be used to compute color-specific, intensity-specific, location-specific correction factors. Computation of correction factors may be performed during manufacturing operations or may be performed during operation of display 14. For example, display performance data may be stored on electronic device 10 and correction factors may be calculated locally on device 10 during operation of display 14. If correction factors are computed during manufacturing operations, such computations may be performed by calibration computing equipment 46 or may be performed by computing equipment that is separate from calibration system 48.
At step 112, the correction factors calculated during step 110 may be stored in device 10. This may include, for example, storing look-up tables such as look-up table 102 of
At step 114, correction factors may be applied to incoming subpixel values during operation of display 14. For example, display control circuitry 68 may receive incoming subpixel values from storage and processing circuitry 30 in device 10 and may, based on the received incoming subpixel values, calculate and apply correction factors to the incoming subpixel values to obtain adapted subpixel values. The adapted incoming subpixel values may subsequently be supplied to subpixels 35 of display 14 to produce calibrated images for a user.
At step 116, calibration code may be launched on electronic device 10. This may include, for example, launching calibration code on device 10 after placing device 10 in a test chamber such as test chamber 38 of
At step 118, light sensor 40 may capture images of display 14 while display 14 is operated in a calibration sequence. For example, a red calibration sequence may include capturing a series of images of display 14 while the red pixels of display 14 are operated at different power levels (e.g., with green and blue pixels turned off). Each captured image may include information about the performance of display 14. For example, display performance information such as X, Y, and Z tristimulus values may be obtained from each captured image in a given calibration sequence.
If more calibration sequences are to be captured, calibration operations may repeat step 118, as indicated by line 124. Device 10 may be operated in any suitable number of calibration sequences. In a first data collection phase, for example, device 10 may be operated in a red calibration sequence, a blue calibration sequence, and a green calibration sequence. In a second data collection phase, device 10 may be operated in a series of calibration sequences corresponding to the colors for which correction factors are calculated (e.g., a series of nine calibration sequences corresponding to nine representative colors).
After all calibration sequences have been captured, calibration operations may proceed to step 120, as indicated by line 122. At step 120, the images captured during step 118 and/or the display performance data corresponding to such images may be provided to an analysis system for analysis. If correction factors are calculated during manufacturing, the analysis system may include calibration computing equipment 46. With this type of configuration, calibration computing equipment 46 may process and analyze the captured images and corresponding display performance data to calculate color-specific, intensity-specific, location-specific correction factors. If correction factors are calculated during operation of display 14, the analysis system may be formed as part of electronic device 10. With this type of configuration, display performance data gathered during step 118 may be stored on device 10 and may be used to calculate color-specific, intensity-specific, location-specific correction factors during operation of display 14.
At step 134, reference color data may be defined for the color and intensity level for which correction factors are being calculated. In one suitable embodiment, reference color data may include predetermined X, Y, and Z tristimulus values (e.g., a set of X, Y, and Z values defined by the International Commission on Illumination or other predetermined set of tristimulus values). In another suitable embodiment, reference color data may include measured color data at a reference location on display 14 (e.g., a set of X, Y, and Z tristimulus values measured at a reference location on display 14). Calibration computing equipment 46 may obtain reference color data from the images captured during calibration operations (
At step 136, calibration computing equipment 46 may obtain measured color data for a location on the display for which correction factors are being calculated. Measured color data may include, for example, measured X, Y, and Z tristimulus values associated with the color, intensity level, and location for which correction factors are being calculated and may be obtained from the images captured during calibration operations (
At step 138, calibration computing equipment 46 may compare measured color data with reference color data (
At step 140, calibration computing equipment 46 may calculate correction factors based on the comparison of step 138. For example, calibration computing equipment may use equations (7) through (9) to calculate a set of target X, Y, and Z components based on the comparison of measured X, Y, and Z components with reference X, Y, and Z components. The target X, Y, and Z components may then be transformed into corresponding correction factors fR, fG, and fB using equation (10). The correction factors fR, fG, and fB may be color-specific, intensity-specific, and location-specific.
At step 142, computing equipment 46 may determine whether or not correction factors have been calculated for all desired locations associated with a given color and intensity level. If correction factors are to be calculated for more locations, processing may return to step 136, as indicated by line 150. If correction factors have been calculated for all locations for a given color and intensity level, processing may proceed to step 144, as indicated by line 148.
At step 144, computing equipment 46 may determine whether or not correction factors have been calculated for all desired colors and intensity levels. If correction factors are to be calculated for more colors and/or more intensity levels of a given color, processing may return to step 134, as indicated by line 154. If correction factors have been calculated for all desired colors and intensity levels, processing may proceed to step 146, as indicated by line 152.
At step 146, analysis is complete and the correction factors calculated by computing equipment 46 may be stored in device 10 (e.g., in the form of look-up tables such as look-up table 102 of
At step 156, display control circuitry 68 may receive incoming R, G, and B subpixel values (sometimes referred to as data, display data, digital display control values, or display control signals) for a selected pixel from storage and processing circuitry 30. If desired, display control circuitry 68 may optionally linearize the incoming subpixel values to remove display gamma non-linearity (e.g., if the display gamma is not equal to one). If the display gamma is equal to one, the step of linearizing the incoming subpixel values may be omitted. Based on the linearized incoming subpixel values, display control circuitry 68 may determine the color and intensity level to be displayed by the selected pixel.
At step 158, display control circuitry 68 may determine the location of the selected pixel for which incoming subpixel values have been received (e.g., a location such as location X of
At step 160, display control circuitry 68 may identify neighboring pixel locations for which correction factors have been stored (e.g., locations P, Q, R, and S of
At step 162, display control circuitry 68 may obtain a set of correction factors from each of the neighboring pixel locations. Display control circuitry 68 may determine which set of correction factors most closely corresponds to the color and intensity level to be displayed by the selected pixel. For example, if the color to be displayed by the selected pixel most closely matches greenish blue, then a set of correction factors may be obtained from the greenish blue look-up table at each neighboring pixel location. If desired, display control circuitry 68 may use the method of least squares, linear interpolation, or other approximation methods to obtain a set of correction factors that most closely corresponds to the color and intensity to be displayed by the selected pixel. Display control circuitry 68 may obtain a set of correction factors from each neighboring pixel location.
At step 164, display control circuitry 68 may determine a final set of correction factors to be applied to the linearized incoming subpixel values based on the sets of correction factors obtained from the neighboring pixel locations. This may include, for example, using bilinear interpolation to obtain a final set of correction factors fR, fG, and fB based on the sets of correction factors obtained from the neighboring pixel locations.
At step 166, display control circuitry 68 may apply the appropriate correction factor to each linearized incoming subpixel value (e.g., the linearized incoming subpixel value for red may be multiplied by fR, the linearized incoming subpixel value for green may be multiplied by fG, and the linearized incoming subpixel value for blue may be multiplied by fB). The resulting adapted linearized subpixel values may then optionally be de-linearized (e.g., to restore the non-linear display gamma) to obtain adapted subpixel values R′, G′, and B′. The adapted subpixel values may then be supplied to the selected pixel (e.g., via path 66 of
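Putting steps 156 and 166 together, a short sketch is shown below; the power-law gamma value, the helper names, and the correction factors are illustrative assumptions, and the final set of correction factors is assumed to have been determined as in steps 158 through 164.

    # Sketch of steps 156 and 166: linearize incoming 8-bit subpixel values,
    # apply correction factors, then de-linearize back to 8-bit codes.
    # A simple power-law gamma of 2.2 is assumed for illustration.
    GAMMA = 2.2

    def adapt_subpixels(r, g, b, f_r, f_g, f_b, gamma=GAMMA, max_code=255):
        def linearize(code):
            return (code / max_code) ** gamma
        def delinearize(value):
            value = min(max(value, 0.0), 1.0)
            return int(round((value ** (1.0 / gamma)) * max_code))
        lin = (linearize(r) * f_r, linearize(g) * f_g, linearize(b) * f_b)
        return tuple(delinearize(v) for v in lin)

    # Example: adapted values R', G', B' for one incoming pixel.
    r_adapted, g_adapted, b_adapted = adapt_subpixels(85, 170, 255, 0.99, 1.00, 0.96)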
The pixel adaptation process described in connection with
The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention. The foregoing embodiments may be implemented individually or in any combination.