The present disclosure relates to systems and methods for compensating for non-uniformity in the luminance or color of a pixel with respect to other pixels in an electronic display device.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
As electronic displays are employed in a variety of electronic devices, such as mobile phones, televisions, tablet computing devices, and the like, manufacturers of the electronic displays continuously seek ways to improve the consistency of colors depicted on the electronic display devices. For example, given variations in manufacturing, various noise sources present within a display device, or various ambient conditions in which each display device operates, different pixels within a display device might emit a different color value or gray level even when provided with the same electrical input. It is desirable, however, for the pixels to uniformly depict the same color or gray level when the pixels are programmed to do so, to avoid visual display artifacts, color mixing between sub-pixels, frame mura, and the like.
A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
In certain electronic display devices, light-emitting diodes such as organic light-emitting diodes (OLEDs), micro-LEDs (μLEDs), or active matrix organic light-emitting diodes (AMOLEDs) may be employed as pixels to depict a range of gray levels for display. However, due to various properties associated with the manufacturing of the display, the driving scheme of these pixels within the display device, and other characteristics related to the display panel, a particular gray level output by one pixel in a display device may be different from a gray level output by another pixel in the same display device upon receiving the same electrical input. As such, the digital values used to generate these gray levels for various pixels may be compensated to account for these differences based on certain characteristics of the display panel. For instance, a digital compensation value for a gray level to be output by a pixel may be determined based on optical wave or electrical wave testing performed on the display during the manufacturing phase of the display. In addition, the digital compensation value for the gray level may be determined based on real time color sensing circuitry, predictive modeling algorithms based on sensor data (e.g., thermal, ambient light) acquired by circuitry disposed in the display, and the like. Based on the results of the testing, sensing, or modeling, a correction spatial map that provides gain and offset compensation values for the pixels of the display may be determined. In addition, a display brightness value (DBV) adaptation lookup table (LUT) and a gray conversion LUT may be generated for various gray levels at one or more luminance settings for each pixel of the display based on the results of the testing, sensing, or modeling.
In any case, display driving circuitry may use the correction spatial map, the DBV adaptation LUT, and the gray conversion LUT to adjust the voltages and/or currents provided to each pixel of the display to achieve improved pixel uniformity properties across the display.
With the foregoing in mind, in certain embodiments, the correction spatial map may define a gain correction factor (e.g., multiplier) and an offset correction factor (e.g., voltage offset) for gray levels used in different portions (e.g., 3×3 pixel grid, 4×4 pixel grid) of the display to correct for non-uniform properties (e.g., process/manufacturing artifacts, mask misalignment) of the display. The DBV or brightness adaptation LUT may provide a scaling factor (β) that may be applied to each respective gain correction factor (α) and each respective offset correction factor (σ) based on the gray level and brightness level specified for each pixel. In addition, the gray conversion LUT may include replacement gray level values for gray levels provided in image data to be depicted by pixels in the display based on the input gray level provided to the display driver circuitry. In one embodiment, the gray conversion LUT may include global replacement values for each gray level value that may be used as an input value.
By way of operation, a compensation system of the display driver circuitry may receive a gray level value and a brightness value for each pixel of the display. Based on the location of each respective pixel, the compensation system may determine a gain correction factor (α) and offset correction factor (σ) according to the correction spatial map. The compensation system may then apply a respective brightness adaptation factor (β1, β2) to the gain correction factor (α) and the offset correction factor (σ), respectively, based on the received gray level and the received brightness value for the respective pixel.
After determining the resulting brightness-adapted gain factor (A) and the brightness-adapted offset factor (δ), the compensation system may apply these factors to the input gray level value to generate a compensated gray level value to be provided to the respective pixel driving circuit that causes the respective pixel to illuminate to the desired gray level. By employing the gain and offset correction factors based on the correction spatial map and the brightness adaptation factors, the image data received by the pixels of the display may be compensated for various non-uniformity properties in depicting gray levels across the display.
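As a rough illustration only (the disclosure does not define a software API; the function names, table layouts, and the simplified linear application of the factors below are hypothetical), the per-pixel compensation flow described above might be sketched in Python as:

```python
def compensate_pixel(gray, dbv, x, y, c, spatial_map, brightness_lut):
    """Sketch of the compensation flow: look up the spatial gain/offset
    correction factors, scale them by the brightness adaptation factors,
    and apply the results to the input gray level.

    spatial_map[(x, y, c)]      -> (alpha, sigma) for a pixel location/color
    brightness_lut[(gray, dbv)] -> (beta1, beta2) scaling factors
    """
    alpha, sigma = spatial_map[(x, y, c)]       # correction spatial map
    beta1, beta2 = brightness_lut[(gray, dbv)]  # brightness adaptation LUT
    A = alpha * beta1        # brightness-adapted gain factor
    delta = sigma * beta2    # brightness-adapted offset factor
    # Simplified linear application; the actual pipeline applies the gain
    # in the current domain and the offset in the voltage domain.
    return (1 + A) * gray - delta
```

A caller would populate the two tables from the testing, sensing, or modeling results discussed above and invoke the function once per sub-pixel.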
Various refinements of the features noted above may exist in relation to various aspects of the present disclosure. Further features may also be incorporated in these various aspects as well. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. The brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments of the present disclosure will be described below. These described embodiments are only examples of the presently disclosed techniques. Additionally, in an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but may nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
Organic light-emitting diode (e.g., OLED, AMOLED) display panels provide opportunities to make thin, flexible, high-contrast, and color-rich electronic displays. Generally, OLED display devices are current driven devices and use thin film transistors (TFTs) as current sources to provide a certain amount of current to generate a certain level of luminance at a respective pixel electrode. The OLED luminance-to-current ratio is generally expressed as OLED efficiency in units of cd/A (luminance/current density, or (cd/m²)/(A/m²)). Each respective TFT, which provides current to a respective pixel, may be controlled by a gate-to-source voltage (Vgs), which is stored on a capacitor (Cst) electrically coupled to the LED of the pixel.
Generally, the application of the gate-to-source voltage Vgs on the capacitor Cst is performed by programming a voltage on a corresponding data line to be provided to a respective pixel. However, when providing the voltage on a data line, several sources of noise or variation in the OLED-TFT system can result in either localized (e.g., in-panel) or global (e.g., panel-to-panel) non-uniformity in luminance or color. Variations in the TFT system may be addressed in a number of ways. For instance, in one embodiment, pixel performance (e.g., output luminance/brightness) may be tested at certain times (e.g., during manufacturing of the display) to determine how each pixel of a display responds to different electrical inputs (e.g., emission current provided to the pixel). By way of example, the pixels of a display may be provided electrical inputs to cause the pixels to depict various gray levels. The luminance value output by each pixel, when each pixel is provided with the same electrical input, may be captured via optical testing and used to determine the luminance value differences across the pixels of the panel. These differences may then be used to generate a multi-dimensional lookup table (LUT) that includes gain and offset values for the pixels of the display. The gain and offset values may be applied (e.g., added) to the pixel data of input image data, such that each pixel of the display responds similarly to the same electrical input (e.g., current or voltage).
In addition to the gain and offset values, a lookup table may also be generated to provide scaling factors for each gain and offset value for each pixel of the display to compensate for various brightness properties (e.g., display brightness value) of various pixels in the display based on the desired gray level value and brightness level for each pixel. The scaling factor may be applied (e.g., multiplied) to the gain and offset values, such that the resulting scaled gain and offset values may compensate for various sources related to the non-uniform performances of pixels across the display panel. After determining the scaled gain and offset values for each pixel of the input image data, the scaled gain and digital offset values for each pixel may be incorporated into the original input gray level data for each pixel, thereby generating compensated gray level data. The compensated gray level data may then be provided to pixel driving circuitry or to the respective pixels of the display to depict the image of the image data. By compensating for the non-uniform luminance properties of the pixels across the panel, the pixels used to display the resulting image may provide more uniform color and luminance properties, thereby improving the quality of the images depicted on the display. Additional details with regard to compensating pixel data for uniformity using gain and offset values will be discussed below with reference to
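To make the LUT-generation step above concrete, the following is a hypothetical two-point fit (the function name and the linear model are illustrative assumptions; an actual implementation would fit measured optical data across many test gray levels):

```python
def derive_gain_offset(meas_lo, meas_hi, ref_lo, ref_hi):
    """Estimate a gain g and offset o such that g * measured + o matches
    the panel-average (reference) luminance at two test gray levels,
    yielding one (gain, offset) LUT entry for a pixel."""
    g = (ref_hi - ref_lo) / (meas_hi - meas_lo)
    o = ref_lo - g * meas_lo
    return g, o
```

Running this fit per pixel (or per pixel grid) over the captured optical-test luminances would populate the gain/offset LUT described above.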
By way of introduction,
As shown in
Before continuing further, it should be noted that the system block diagram of the device 10 shown in
Considering each of the components of
The processor(s) 16 may control the general operation of the device 10. For instance, the processor(s) 16 may execute an operating system, programs, user and application interfaces, and other functions of the electronic device 10. The processor(s) 16 may include one or more microprocessors and/or application-specific microprocessors (ASICs), or a combination of such processing components. For example, the processor(s) 16 may include one or more instruction set (e.g., RISC) processors, as well as graphics processors (GPU), video processors, audio processors, and/or related chip sets. As may be appreciated, the processor(s) 16 may be coupled to one or more data buses for transferring data and instructions between various components of the device 10. In certain embodiments, the processor(s) 16 may provide the processing capability to execute imaging applications on the electronic device 10, such as Photo Booth®, Aperture®, iPhoto®, Preview®, iMovie®, or Final Cut Pro® available from Apple Inc., or the “Camera” and/or “Photo” applications provided by Apple Inc. and available on some models of the iPhone®, iPod®, and iPad®.
A computer-readable medium, such as the memory 18 or the nonvolatile storage 20, may store the instructions or data to be processed by the processor(s) 16. The memory 18 may include any suitable memory device, such as random-access memory (RAM) or read only memory (ROM). The nonvolatile storage 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media. The memory 18 and/or the nonvolatile storage 20 may store firmware, data files, image data, software programs and applications, and so forth.
The network device 22 may be a network controller or a network interface card (NIC), and may enable network communication over a local area network (LAN) (e.g., Wi-Fi), a personal area network (e.g., Bluetooth), and/or a wide area network (WAN) (e.g., a 3G or 4G data network). The power source 24 of the device 10 may include a Li-ion battery and/or a power supply unit (PSU) to draw power from an electrical outlet or an alternating-current (AC) power supply.
The display 26 may display various images generated by device 10, such as a GUI for an operating system or image data (including still images and video data). The display 26 may be any suitable type of display, such as a liquid crystal display (LCD), plasma display, or an organic light emitting diode (OLED) display, for example. In one embodiment, the display 26 may include self-emissive pixels such as organic light emitting diodes (OLEDs) or micro-light-emitting-diodes (μ-LEDs).
Additionally, as mentioned above, the display 26 may include a touch-sensitive element that may represent an input structure 14 of the electronic device 10. The imaging device(s) 28 of the electronic device 10 may represent a digital camera that may acquire both still images and video. Each imaging device 28 may include a lens and an image sensor to capture and convert light into electrical signals.
In certain embodiments, the electronic device 10 may include a compensation system 30, which may include a chip, such as a processor or an ASIC, that may control various aspects of the display 26. It should be noted that the compensation system 30 may be implemented in the CPU, the GPU, the image signal processing pipeline, the display pipeline, driving silicon, or any suitable processing device that is capable of processing image data in the digital domain before the image data is provided to the pixel circuitry.
In certain embodiments, the compensation system 30 may compensate for non-uniform gray levels and luminance properties for each pixel of the display 26. Generally, when the same electrical signal (e.g., voltage or current) is provided to each pixel of the display 26, each pixel should depict the same gray level. However, due to various sources of noise, frame mura effects, color mixing due to mask misalignment, and the like, the same voltage being applied to a number of pixels may result in a variety of different gray levels or luminance values depicted across the number of pixels. As such, the compensation system 30 may determine one or more compensation factors to adjust a digital value provided to each pixel to compensate for these differences. The compensation system 30 may then adjust the data signals provided to each pixel based on the compensation factors.
As mentioned above, the electronic device 10 may take any number of suitable forms. Some examples of these possible forms appear in
The notebook computer 40 may include an integrated imaging device 28 (e.g., a camera). In other embodiments, the notebook computer 40 may use an external camera (e.g., an external USB camera or a “webcam”) connected to one or more of the I/O ports 12 instead of or in addition to the integrated imaging device 28. In certain embodiments, the depicted notebook computer 40 may be a model of a MacBook®, MacBook® Pro, MacBook Air®, or PowerBook® available from Apple Inc. In other embodiments, the computer 40 may be a portable tablet computing device, such as a model of an iPad® from Apple Inc.
The electronic device 10 may also take the form of portable handheld device 60 or 70, as shown in
The display 26 may display images generated by the handheld device 60 or 70. For example, the display 26 may display system indicators that may indicate device power status, signal strength, external device connections, and so forth. The display 26 may also display a GUI 52 that allows a user to interact with the device 60 or 70, as discussed above with reference to
Having provided some context with regard to possible forms that the electronic device 10 may take, the present discussion will now focus on the compensation system 30 of
The self-emissive pixel array 80 is shown having a controller 84, a power driver 86A, an image driver 86B, and the array of self-emissive pixels 82. The self-emissive pixels 82 are driven by the power driver 86A and image driver 86B. Each power driver 86A and image driver 86B may drive one or more self-emissive pixels 82. In some embodiments, the power driver 86A and the image driver 86B may include multiple channels for independently driving multiple self-emissive pixels 82. The self-emissive pixels may include any suitable light-emitting elements, such as organic light emitting diodes (OLEDs), micro-light-emitting-diodes (μ-LEDs), and the like.
The power driver 86A may be connected to the self-emissive pixels 82 by way of scan lines S0, S1, . . . Sm-1, and Sm and driving lines D0, D1, . . . Dm-1, and Dm. The self-emissive pixels 82 receive on/off instructions through the scan lines S0, S1, . . . Sm-1, and Sm and generate driving currents corresponding to data voltages transmitted from the driving lines D0, D1, . . . Dm-1, and Dm. The driving currents are applied to each self-emissive pixel 82 to emit light according to instructions from the image driver 86B through driving lines M0, M1, . . . Mn-1, and Mn. Both the power driver 86A and the image driver 86B transmit voltage signals through respective driving lines to operate each self-emissive pixel 82 at a state determined by the controller 84 to emit light. Each driver may supply voltage signals at a duty cycle and/or amplitude sufficient to operate each self-emissive pixel 82.
The controller 84 may control the color of the self-emissive pixels 82 using image data generated by the processor(s) 16 and stored into the memory 18 or provided directly from the processor(s) 16 to the controller 84. The compensation system 30 may receive the image data generated by the processor and adjust the image data to generate compensated image data that improves the uniformity in color and brightness properties of the pixels 82. The compensation system 30 may provide the compensated image data to the controller 84, which may then transmit corresponding data signals to the self-emissive pixels 82, such that the self-emissive pixels 82 may depict substantially uniform color and luminance provided the same current input in accordance with the techniques that will be described in detail below.
With the foregoing in mind,
Referring now to
In addition to the gray level data 102, the compensation system 30 may receive a display brightness value (DBV) 104 that indicates a brightness value for the display 26. The DBV 104 may correspond to a brightness setting or parameter applied to each of the pixels 82 of the display 26. In some cases, the DBV 104 may cause different pixels 82 that depict different gray level data and/or are located at different positions along the display 26 to illuminate differently. To ensure that each pixel 82 depicts a uniform color and luminance across the display 26 provided the same gray level data 102 and the DBV 104, the compensation system 30 may determine a gain compensation factor (e.g., A(i, j, c)) and an offset compensation factor (e.g., δ(k, l, c)) for different regions (e.g., grid of pixels) of the display 26 based on the gray level data 102 and the DBV 104. Using the gain compensation factor (e.g., A(i, j, c)) and the offset compensation factor (e.g., δ(k, l, c)), the compensation system 30 may determine compensated gray level data 106, which may be provided to the respective pixel 82 via respective pixel driving circuitry.
Generally, the compensation system 30 may determine a gain compensation factor (e.g., A(i, j, c)) and an offset compensation factor (e.g., δ(k, l, c)) for different regions of the display 26 based on the following relationship:
ITFT = (1 + A) * f(Vdata − δ) (1)
where ITFT corresponds to a current provided to a respective pixel 82, A corresponds to the gain compensation factor applied to the current value that corresponds to a gray level of the respective pixel 82, Vdata corresponds to the data voltage representing a gray level of the respective pixel 82, and δ corresponds to the offset compensation factor applied to the voltage associated with the current value. The gain compensation factor (A) and the offset compensation factor (δ) may be determined according to the following equations:
A = α(x, y, c) * β1(GIN, DBV) (2)
δ = σ(x, y, c) * β2(GIN, DBV) (3)
where α(x, y, c) corresponds to a gain value determined according to a correction spatial map based on a pixel's x- and y-coordinates and a color component (c) (e.g., red, green, or blue sub-pixel), σ(x, y, c) corresponds to an offset value determined according to the correction spatial map in the same manner, β1 is a scaling factor applied to the gain value α(x, y, c) and determined according to a brightness lookup table (LUT), and β2 is a scaling factor applied to the offset value σ(x, y, c) and also determined according to the brightness LUT. Additional details with regard to the correction spatial map and the brightness LUT will be discussed below with reference to
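Equations (1)–(3) can be read as a direct computation. The sketch below evaluates them for a generic voltage-to-current transfer function f (the function name and parameters are illustrative placeholders, not part of the disclosure):

```python
def pixel_current(v_data, alpha, sigma, beta1, beta2, f):
    """Evaluate Equations (1)-(3):
    A = alpha * beta1, delta = sigma * beta2,
    I_TFT = (1 + A) * f(V_data - delta),
    where f is the pixel's voltage-to-current transfer function."""
    A = alpha * beta1        # Equation (2)
    delta = sigma * beta2    # Equation (3)
    return (1 + A) * f(v_data - delta)  # Equation (1)
```

For instance, with a toy linear transfer function f(v) = 2v, the gain inflates the current by (1 + A) while the offset shifts the effective data voltage before f is applied.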
As shown in Equation (1) above, the current ITFT provided to the respective pixel 82 may be adjusted using the gain compensation factor (A) and the offset compensation factor (δ). As such, the compensation system 30 may determine the gain compensation factor (A) and the offset compensation factor (δ) based on the performance of the pixels 82 throughout the display 26 with respect to the location (x, y) of each pixel 82, the gray level value 102 to be depicted by each pixel 82, the DBV 104, and the like. Keeping the foregoing in mind and referring to
By way of example, the external memory 108 may include a correction spatial map that provides gain (α(i, j, c)) and offset (σ(k, l, c)) compensation values for each pixel 82 of the display 26. The correction spatial map may be a lookup table (LUT) that provides the gain (α(i, j, c)) and offset (σ(k, l, c)) compensation values to adjust the input gray level data 102 for a respective pixel 82, such that each pixel 82 of the display 26 performs uniformly. In certain embodiments, the correction spatial map may be determined during manufacturing of the display 26 via optical testing used to determine the color differences across the pixels 82 of the panel for the display 26. In addition, the correction spatial map may be determined based on real-time sensing via sensors that measure the illumination features of the pixel 82, the electrical properties (e.g., voltage/current) within the pixel 82, and the like. In yet another embodiment, the correction spatial map may be determined based on predictive models that use sensor data (e.g., thermal data, ambient light) acquired via sensors disposed on the display 26. In any case, the correction spatial map may specify the gain (α(i, j, c)) and offset (σ(k, l, c)) compensation values for a respective pixel 82 based on the location (e.g., x, y) of the pixel 82 and the color component (e.g., c) of the pixel 82.
In addition, it should be noted that in some embodiments, the correction spatial map may be stored in the external memory 108 as a collection of compressed and uncompressed data. Additional details with regard to the storage of the correction spatial map will be discussed below with reference to
The correction spatial map may be organized according to different portions or grids (e.g., 3×3 pixels, 4×4 pixels) of the display 26 and a color component (c) of the pixel 82. As such, based on the location and color component of the pixel 82 associated with the gray level data 102, the correction spatial map may provide gain (α(i, j, c)) and offset (σ(k, l, c)) compensation values that may be applied to a region or grid of pixels 82 in the display 26. With this in mind, it should be noted that the (i, j) coordinates of the gain (α(i, j, c)) and offset (σ(k, l, c)) compensation values correspond to a region or grid of pixels 82 in the display 26.
In some embodiments, the correction spatial map may be pre-loaded into a static random-access memory (SRAM) 110 of the compensation system 30, such that the compensation system 30 may perform its respective operations more quickly. The pre-loaded correction spatial map may be compressed to preserve memory in the SRAM 110. As such, when the compensation system 30 retrieves the pre-loaded correction spatial map from the SRAM 110, the compensation system 30 may decompress the pre-loaded correction spatial map using a de-compression component 112. After decompressing the pre-loaded correction spatial map, the compensation system 30 may receive gain (α(i, j, c)) and offset (σ(k, l, c)) compensation values for different regions or grids of the display 26. The compression of the correction spatial map is described in greater detail below with reference to
The gain (α(i, j, c)) and offset (σ(k, l, c)) compensation values for different regions or grids of the display 26 may then be up-sampled by the up sample component 114 to provide gain (α(x, y, c)) and offset (σ(x, y, c)) compensation values for each sub-pixel of each pixel 82 of the display 26. The gain (α(x, y, c)) and offset (σ(x, y, c)) compensation values may then be provided to a brightness adaptation lookup table (LUT) 116. In certain embodiments, the brightness adaptation LUT 116 may include a respective scaling factor (e.g., β1, β2) for the gain (α(x, y, c)) and offset (σ(x, y, c)) compensation values. The brightness adaptation LUT 116 may be retrieved by the compensation system 30 from the external memory 108 and may be organized with respect to a gray level value provided via the gray level data 102 and a brightness value provided via the DBV 104 for each pixel 82. As such, the compensation system 30 may use the gray level data 102 and the DBV 104 to determine scaling factors (β1 and β2) to apply to the gain (α(x, y, c)) and offset (σ(x, y, c)) compensation values provided by the up sample component 114. Like the correction spatial map described above, the brightness adaptation LUT 116 may be generated based on optical testing during manufacturing of the display 26, real-time sensing data regarding various features of the pixel 82, predictive models, and the like.
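The up-sampling step might, as one hypothetical simplification, replicate each per-grid correction value across its pixel block (nearest-neighbor up-sampling; an actual implementation could instead interpolate, e.g., bilinearly, which the disclosure does not specify):

```python
def upsample_nearest(grid, block):
    """Expand a per-grid correction map (rows of values, one value per
    block x block pixel region) into a per-pixel map by replicating each
    grid entry across its region (nearest-neighbor up-sampling)."""
    pixel_rows = []
    for row in grid:
        # Repeat each grid value 'block' times horizontally...
        expanded = [value for value in row for _ in range(block)]
        # ...and repeat the expanded row 'block' times vertically.
        for _ in range(block):
            pixel_rows.append(list(expanded))
    return pixel_rows
```

The same routine would be applied independently to the gain map and the offset map for each color component.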
After applying the respective scaling factor (e.g., β1, β2) to the gain (α(x, y, c)) and offset (σ(x, y, c)) compensation values, the compensation system 30 may obtain the gain compensation factor (A(i, j, c)) and the offset compensation factor (δ(k, l, c)) to apply to the gray level data 102. To apply the gain compensation factor (A(i, j, c)) and the offset compensation factor (δ(k, l, c)) to the gray level data 102, the compensation system 30 may perform a number of transformations on the gray level data 102. That is, the gain compensation factor (A(i, j, c)) may be applied in a current domain, while the offset compensation factor (δ(k, l, c)) may be applied in a voltage domain. As such, the compensation system 30 may include a gray-to-current (G2I) transformation component 118, a current-to-voltage (I2V) transformation component 120, and a voltage-to-gray (V2G) transformation component 122.
As shown in
The compensation system 30 may then apply the gain compensation factor (A(i, j, c)) to the current value. The resulting scaled current value may be provided to the I2V transformation component 120, which may convert the scaled current value to a voltage value. Like the G2I transformation component 118, the I2V transformation component 120 may receive a gray conversion LUT from the external memory 108 and convert the scaled current or resulting voltage value to a corresponding replacement current or voltage value based on the DBV 104.
After the I2V transformation component 120 generates the voltage value, the compensation system 30 may add the offset compensation factor (δ(k, l, c)) to the generated voltage value. The resulting compensated voltage value may be provided to the V2G transformation component 122 to convert the offset-compensated voltage value to a compensated gray value. Like the G2I transformation component 118 and the I2V transformation component 120 described above, the V2G transformation component 122 may also receive the gray conversion LUT from the external memory 108 and convert the compensated voltage value to a corresponding replacement gray value based on the DBV 104.
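The chain of domain transformations described above can be summarized as follows; the g2i, i2v, and v2g callables below stand in for the gray-conversion LUTs (hypothetical names and a simplified interface, with the gain applied multiplicatively in the current domain and the offset added in the voltage domain, per the description above):

```python
def compensate_gray(gray, A, delta, g2i, i2v, v2g):
    """Gray level -> current (G2I), gain applied in the current domain,
    current -> voltage (I2V), offset applied in the voltage domain,
    voltage -> compensated gray level (V2G)."""
    current = g2i(gray) * (1 + A)    # apply gain in the current domain
    voltage = i2v(current) + delta   # apply offset in the voltage domain
    return v2g(voltage)              # convert back to a gray level
```

With identity transfer functions this reduces to (1 + A) * gray + delta, which makes the role of each factor easy to check in isolation.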
In any case, after the V2G transformation component 122 outputs the compensated gray value, a dithering component 124 receives the compensated gray value and may apply dithering effects to the compensated gray value to improve the quality of the resulting image displayed via the display 26. The dithering component 124 may apply a spatial or temporal dithering effect to the compensated gray value. The resulting gray value output by the dithering component 124 may include the compensated gray level data 106, which may be provided to a pixel driving circuit (e.g., controller 84, image driver 86B, power driver 86A) for display.
Although the G2I transformation component 118, the I2V transformation component 120, and the V2G transformation component 122 are described above as receiving and employing the gray conversion LUT, which may be stored in the external memory 108, it should be noted that, in some embodiments, the gray conversion LUT may not be used by the compensation system 30. In this case, the compensation system 30 may perform each of the respective actions described above without determining a respective replacement value.
To display the compensated gray level data 106, the compensation system 30 may send the compensated gray level data 106 to a source driver 132 as illustrated in
Given the compensation techniques described above, it becomes apparent that the compensated gray level data 106 includes a gray level adjustment (±ΔG) as compared to the original input gray level data 102. That is, the compensated gray level data 106 may be higher or lower than the gray level data 102. As such, the compensated gray level data 106 may be outside a range of values (e.g., 0-255) that correspond to valid gray level data that can be depicted by a respective pixel 82 and may result in saturated pixel data being depicted by the display 26.
With this in mind, in some embodiments, a total correction range may be calculated for each display 26 manufactured by the same entity. The total correction range may include a maximum gray level adjustment (±ΔG) that each pixel 82 may expect to be adjusted based on the techniques described above. In some embodiments, the maximum gray level adjustment (±ΔG) may be determined based on performing the compensation techniques described above for a collection of displays 26 provided by a particular manufacturer, as the distortion of the gray level may be consistent for each supplier. After determining the maximum gray level adjustment (±ΔG), a pre-scale component 142 may be employed to scale the gray level data 102 prior to performing the compensation techniques described above with respect to
With the foregoing in mind,
Referring now to
At block 154, the compensation system 30 may pre-scale the gray level data 102 to add or subtract the maximum gray level adjustment (±ΔG), such that the compensated gray level data 106 may remain within the range of available gray level values that may be depicted by the respective pixel 82. At block 156, the compensation system 30 may determine a gain compensation factor (e.g., A(i, j, c)) and an offset compensation factor (e.g., δ(k, l, c)) for each gray level data 102 of each pixel 82 of the display 26. As discussed above, the gain compensation factor (e.g., A(i, j, c)) and the offset compensation factor (e.g., δ(k, l, c)) may be determined based on the location of the respective pixel 82 associated with the gray level data 102, the color data, the correction spatial map, the brightness adaptation LUT, and the gray conversion LUT, as described above.
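A pre-scaling step of the kind performed at block 154 might be sketched as a linear remapping that reserves headroom of ±ΔG at both ends of the gray range, so that any subsequent adjustment stays within the valid range. The function below is an assumed illustration, not the pre-scale component 142 itself.

```python
def pre_scale(gray, delta_g_max, g_max=255):
    """Linearly map [0, g_max] into [delta_g_max, g_max - delta_g_max],
    so a later adjustment of up to +/- delta_g_max cannot push the
    compensated gray level outside the valid range [0, g_max]."""
    span = g_max - 2 * delta_g_max
    return delta_g_max + round(gray * span / g_max)
```

The cost of this headroom is a slightly compressed usable gray range, which is why ΔG is bounded by the worst-case correction measured for the supplier's panels rather than chosen conservatively large.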
At block 158, the compensation system 30 may apply the gain compensation factor (e.g., A(i, j, c)) and the offset compensation factor (e.g., δ(k, l, c)) to the gray level data 102 in accordance with the embodiments described herein. As discussed above, the gain compensation factor (e.g., A(i, j, c)) may be applied to a current value determined based on the G2I transformation component 118, the pre-scaled gray level data 102, and the DBV 104. The offset compensation factor (e.g., δ(k, l, c)) may be added to a voltage value output by the I2V transformation component 120.
After applying the gain compensation factor (e.g., A(i, j, c)) and the offset compensation factor (e.g., δ(k, l, c)) to the gray level data 102, at block 160, the compensation system 30 may transmit the compensated gray level data 106 to the respective pixel driving circuitry (e.g., source driver 132), which may provide a corresponding voltage or current signal to the respective pixel 82 and cause the respective pixel 82 to illuminate to a particular gray level based on the provided voltage or current signal. In certain embodiments, prior to transmitting the compensated gray level data 106, the compensation system may convert a voltage value output by the I2V transformation component 120 and adjusted by the offset compensation factor (e.g., δ(k, l, c)) to a gray level value, which may be dithered in order to generate the compensated gray level data 106.
Referring briefly back to
By way of example,
In addition to the uncompressed gain (α(i, j, c)) and offset (σ(i, j, c)) compensation values for the column 182, the spatial correction map may include compressed difference or delta values that correspond to gain (α(i, j, c)) and offset (σ(i, j, c)) compensation values for the remaining columns of the display 26. For example, the spatial correction map may include 8-bit uncompressed data for each row of pixels 82 that are on the column 182. In addition, the spatial correction map may include a 4-bit encoded delta value that represents a difference between the uncompressed gain (α(i, j, c)) and offset (σ(i, j, c)) compensation values for a respective pixel 82 in column 182 and another pixel 82 in a different column of the display 26.
By way of example, the 8-bit uncompressed data for a pixel 82 located at row i and column 0 (e.g., column 182) may be represented as: x(i, 0). The 4-bit compressed or encoded delta for the pixel 82 located at row i and column j may be represented as: Δ(i, j). With this in mind, when determining the gain (α(i, j, c)) and offset (σ(i, j, c)) compensation values to provide to the up-sample component 114, the compensation system 30 may retrieve the spatial correction map from the external memory 108 and/or the SRAM 110, and the de-compression component 112 may decompress the 4-bit compressed or encoded delta for the pixel 82 located at row i and column j (e.g., Δ(i, j)). The compensation system 30 may then determine the gain (α(i, j, c)) and offset (σ(i, j, c)) compensation values for the pixel located at (i, j) as provided below:
xd(i,1)=x(i,0)−Δ(i,1)
xd(i,j+1)=xd(i,j)−Δ(i,j+1)
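The recurrence above can be sketched in a few lines; the function name and list-based layout are assumptions for illustration:

```python
def decompress_row(x0, deltas):
    """Reconstruct one row of correction values from the uncompressed
    seed x(i, 0) and the encoded deltas Delta(i, 1..N), following
    x_d(i, j+1) = x_d(i, j) - Delta(i, j+1)."""
    values = [x0]
    for d in deltas:
        values.append(values[-1] - d)
    return values
```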
By storing a portion of the spatial correction map as uncompressed data and the remaining portion of the spatial correction map as compressed data, the compensation system 30 may preserve more memory in the external memory 108 or the SRAM 110 for other data. That is, the encoded delta values that are stored as part of the spatial correction map include fewer bits than the corresponding uncompressed values.
In addition to the example provided above, uncompressed (8-bit) subpixel (e.g., R/G/B) offset/gain data for a 1st column of the display 26 may be stored in the spatial correction map, along with (4-bit) subpixel (e.g., R/G/B) delta offset/gain data between horizontally adjacent pixels for the remaining columns. In another embodiment, uncompressed (8-bit) subpixel (e.g., R/G/B) offset/gain data for the 1st row of the display 26 may be stored in the spatial correction map, along with the (4-bit) subpixel (e.g., R/G/B) delta offset/gain data between vertically adjacent pixels for the remaining rows. In yet another embodiment, uncompressed (8-bit) subpixel (e.g., R/G/B) offset/gain data for a 1st pixel in the display 26 may be stored in the spatial correction map, along with (4-bit) subpixel (e.g., R/G/B) delta offset/gain data between adjacent pixels scanning the display 26 on a serpentine or other suitable path.
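For the serpentine-path embodiment, one illustrative scan order (function name assumed) is:

```python
def serpentine_order(rows, cols):
    """Yield (row, col) coordinates along a serpentine (boustrophedon)
    path: left-to-right on even rows, right-to-left on odd rows, so
    consecutive pixels in the delta chain are always adjacent on panel."""
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield (r, c)
```

Keeping consecutive entries of the delta chain physically adjacent is what makes the small (4-bit) deltas plausible, since compensation values tend to vary smoothly across the panel.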
It should be noted that in some embodiments, the encoded or compressed data of the spatial correction map may be encoded using an iterative encoding scheme. The iterative encoding scheme may account for a maximum compression quantization error to prevent error propagation and limit the maximum compression quantization error to [−Min_precision/2, +Min_precision/2].
In addition, the encoding scheme may be combined with variable length coding schemes (e.g., Huffman coding) to achieve an improved compression ratio. This scheme could also be extended to take advantage of potential color redundancy between R/G/B sub-pixels. For example, 4 bits may be used to store the red value of a given pixel, and 3 bits may be used to store the values for the green and blue components.
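One way to realize an error-bounded iterative delta encoder of the kind described is to quantize each delta against the decoder-side reconstruction rather than against the raw previous value, so quantization error cannot accumulate down the chain. The sketch below is an assumed illustration, with a quantization `step` standing in for Min_precision; the subsequent variable-length (e.g., Huffman) coding of the deltas is omitted.

```python
def encode_deltas(values, step=4):
    """Encode each value as a quantized delta against the *reconstructed*
    previous value, matching x_d(i, j+1) = x_d(i, j) - Delta(i, j+1).
    Returns the uncompressed seed and the list of quantized deltas."""
    deltas, recon = [], values[0]
    for v in values[1:]:
        d = round((recon - v) / step)   # quantized delta in units of `step`
        deltas.append(d)
        recon = recon - d * step        # track what the decoder will rebuild
    return values[0], deltas
```

Because each delta is computed from the reconstructed value, the per-sample error stays within ±step/2 regardless of how long the row is, which is the bounded-error property described above.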
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
This application is a Non-Provisional application claiming priority to U.S. Provisional Patent Application No. 62/623,946, entitled “Applying Gain and Offset Correction Factors for Pixel Uniformity Compensation in Display Panels”, filed Jan. 30, 2018, which is herein incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
8610781 | Wei et al. | Dec 2013 | B2 |
9135851 | Rykowski | Sep 2015 | B2 |
20080042943 | Cok | Feb 2008 | A1 |
20130321671 | Cote | Dec 2013 | A1 |
20150002378 | Nathan et al. | Jan 2015 | A1 |
20170032742 | Piper et al. | Feb 2017 | A1 |
20180158173 | Gao | Jun 2018 | A1 |
Number | Date | Country | |
---|---|---|---|
20190237001 A1 | Aug 2019 | US |
Number | Date | Country | |
---|---|---|---|
62623946 | Jan 2018 | US |