This disclosure relates to gathering burn-in statistics (BIS) and applying burn-in compensation (BIC) for electronic displays with time multiplexed or otherwise pulsed display pixels.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Numerous electronic devices—including televisions, portable phones, computers, wearable devices, vehicle dashboards, virtual-reality glasses, and more—display images on an electronic display. To display an image, an electronic display may control light emission of its display pixels based at least in part on corresponding image data. As electronic displays are used over time, the pixels thereof may become increasingly susceptible to image artifacts, such as burn-in related aging of pixels, which may be compensated by image processing.
Burn-in is a phenomenon whereby pixels degrade over time owing to the different amounts of utilization (e.g., light emission) experienced by different pixels. In other words, pixels may age at different rates depending on their relative utilization and/or environment. For example, pixels used more than others may age more quickly, and thus may gradually emit less light when given the same amount of driving current or voltage values. This may produce undesirable burn-in image artifacts on the electronic display. In general, the estimated aging due to pixels' utilization may be stored, accumulated, and referenced when compensating for burn-in effects. However, in some scenarios, such as micro-light-emitting-diode (LED) displays, the physical implementation of how the display works (e.g., via pulsed light emissions) may change the efficacy of traditional burn-in compensation systems.
A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
Burn-in statistics collection and burn-in compensation may take into account the timing of pulses in electronic displays (e.g., micro-light-emitting-diode (LED) displays) that use pulsed light emissions. In these types of displays, the time averaged luminance output of a pixel is equivalent to the desired luminance level of the image data for that pixel. For example, a single image frame may be broken up into multiple (e.g., two, four, eight, sixteen, thirty-two, and so on) sub-frames, and a particular pixel may be illuminated (e.g., pulsed) or deactivated during each sub-frame such that the aggregate luminance output over the total image frame is equivalent to the desired luminance output of the particular pixel. In other words, the duration and frequency of the pixel emissions (e.g., pulses) during an image frame may be regulated to maintain an average luminance output during the image frame that appears to a viewer as the desired luminance output.
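For illustration, the sub-frame scheduling described above may be sketched as follows; the even-spreading scheduler and the function names are hypothetical assumptions, and the model assumes equal-duration sub-frames with binary (on/off) emission:

```python
def schedule_subframes(desired_level: float, num_subframes: int) -> list[int]:
    """Return a per-sub-frame on/off (1/0) schedule whose mean emission
    approximates desired_level (a fraction in [0, 1])."""
    on_count = round(desired_level * num_subframes)
    # Spread the "on" sub-frames as evenly as possible across the frame.
    schedule = [0] * num_subframes
    acc = 0.0
    for i in range(num_subframes):
        acc += on_count / num_subframes
        if acc >= 1.0:
            schedule[i] = 1
            acc -= 1.0
    return schedule

def average_luminance(schedule: list[int]) -> float:
    """Time-averaged emission over the frame, as perceived by a viewer."""
    return sum(schedule) / len(schedule)
```

In this sketch, a desired level of 0.5 over eight sub-frames yields four pulses distributed across the frame, so the aggregate output over the frame matches the desired luminance.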
Furthermore, display image processing techniques may be performed on input image data such that the power (e.g., current and/or voltage) applied to the pixels drives the pixels to produce the desired amount of light. For example, as the pixels are utilized over the life of the display, the pixels may incur burn-in related aging, whereby the pixels emit less light when given the same amount of driving current or voltage values. As such, one display image processing technique may be to track the estimated aging of pixels and compensate the current or time for which the current is applied to the pixels to counter the effects of burn-in related aging.
Additionally, in some embodiments, a sub-pixel uniformity correction may be utilized to adjust the driving current, voltage, and/or activation timing for each pixel to account for differences in manufacturing between the pixels. For example, some pixels may exhibit different luminance outputs at the same voltage/current than other pixels, and such differences may be noted and/or preprogrammed during manufacturing to account for such differences.
However, in micro-LED displays, the physical implementation of how the display works (e.g., via pulsed light emissions) may change the efficacy of burn-in compensation systems that track desired luminance levels and provide compensations in the luminance domain of the image data. As such, in some embodiments, display image processing techniques that alter the desired luminance, sub-frame timings, or otherwise change the uncompensated total current (e.g., time integrated current) that would otherwise be provided to a pixel may be performed prior to burn-in compensation and sub-pixel uniformity correction.
Furthermore, in some embodiments, the burn-in compensation may be performed with, immediately subsequent to, immediately prior to, or between stages of the sub-pixel uniformity correction. For example, the sub-pixel uniformity gain correction may be applied prior to the burn-in compensation, and a digital code conversion of the sub-pixel uniformity correction, which sets the digital code to be sent to the display panel that corresponds to the desired current/luminance, may occur after the burn-in compensation. As such, modifications to the image data due to burn-in compensation may be performed such that the modeled aging of the pixels and the compensation therefor have increased accuracy (e.g., aligned with what is sent to the pixels) and increased effectiveness at reducing image artifacts.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments of the present disclosure will be described below. These described embodiments are only examples of the presently disclosed techniques. Additionally, in an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but may nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Furthermore, the phrase A “based on” B is intended to mean that A is at least partially based on B. Moreover, the term “or” is intended to be inclusive (e.g., logical OR) and not exclusive (e.g., logical XOR). In other words, the phrase A “or” B is intended to mean A, B, or both A and B.
Electronic devices often use electronic displays to present visual information. Such electronic devices may include computers, mobile phones, portable media devices, tablets, televisions, virtual-reality headsets, and vehicle dashboards, among many others. To display an image, an electronic display controls the luminance (and, as a consequence, the color) of its display pixels based on corresponding image data received at a particular resolution. For example, an image data source may provide image data as a stream of pixel data, in which data for each pixel indicates a target luminance (e.g., brightness and/or color) of one or more display pixels located at corresponding pixel positions. In some embodiments, image data may indicate luminance per color component, for example, via red component image data, blue component image data, and green component image data, collectively referred to as RGB image data (e.g., RGB, sRGB). As should be appreciated, color components other than RGB may also be used such as CMY (i.e., cyan, magenta, and yellow). Additionally or alternatively, image data may be indicated by a luma channel and one or more chrominance channels (e.g., YCbCr, YUV, etc.), grayscale (e.g., gray level), or other color basis. It should be appreciated that image data and/or particular channels of image data (e.g., a luma channel), as disclosed herein, may encompass linear, non-linear, and/or gamma-corrected luminance values.
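As a concrete illustration of the relationship between gamma-corrected and linear luminance values noted above, the following sketch converts gamma-corrected color components to a linear relative luminance; the simple 2.2 exponent and the use of Rec. 709 luma weights are illustrative choices (full sRGB uses a piecewise transfer curve):

```python
def to_linear(v: float, gamma: float = 2.2) -> float:
    """Convert a gamma-corrected component in [0, 1] to linear light."""
    return v ** gamma

def relative_luminance(r: float, g: float, b: float) -> float:
    """Relative luminance from gamma-corrected RGB, using Rec. 709
    luma weights applied to the linearized components."""
    return 0.2126 * to_linear(r) + 0.7152 * to_linear(g) + 0.0722 * to_linear(b)
```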
To display images, the electronic display may illuminate one or more pixels according to the image data. In general, electronic displays may take a variety of forms and operate by reflecting/regulating a light emission from an illuminator (e.g., backlight, projector, etc.) or by generating light at the pixel level, for example, using self-emissive pixels such as micro-light-emitting diodes (LEDs) or organic light-emitting diodes (OLEDs). In some embodiments, the electronic display may display an image by pulsing light emissions from pixels such that the time averaged luminance output is equivalent to the desired luminance level of the image data. For example, a single image frame may be broken up into multiple (e.g., two, four, eight, sixteen, thirty-two, and so on) sub-frames, and a particular pixel may be illuminated (e.g., pulsed) or deactivated during each sub-frame such that the aggregate luminance output over the total image frame is equivalent to the desired luminance output of the particular pixel. In other words, the duration and frequency (e.g., as opposed to the brightness) of the pixel emissions during an image frame may be regulated to maintain an average luminance output during the image frame that appears to the human eye as the desired luminance output.
In some embodiments, the electronic display may be a micro-LED display having active matrixes of micro-LEDs, pixel drivers (e.g., micro-drivers), anodes, and arrays of row and column drivers. While discussed herein as relating to micro-LED displays, as should be appreciated, the features discussed herein may be applicable to any suitable display that uses time multiplexed (e.g., pulsed) light emissions to generate an image on the electronic display. Each micro-driver may drive a number of display pixels on the electronic display. For example, each micro-driver may be connected to numerous anodes, and each anode may selectively connect to one of multiple different display pixels. Thus, a collection of display pixels may share a common anode connected to a micro-driver. The micro-driver may drive a display pixel by providing a driving signal across an anode to one of the display pixels. Any suitable number of display pixels may be located on respective anodes of the micro-LED display. Moreover, in some embodiments, the collection of display pixels connected to each anode may be of the same color component (e.g., red, green, or blue).
Additionally, the image data may be processed to account for one or more physical or digital effects associated with displaying the image data. For example, display image data may be compensated and/or enhanced to account for pixel aging (e.g., burn-in compensation), sub-pixel uniformity, cross-talk between electrodes within the electronic device, transitions from previously displayed image data (e.g., pixel drive compensation), warps, contrast control, and/or other factors that may otherwise cause distortions or artifacts perceivable to a viewer.
As discussed herein, as pixels are utilized over the life of the display, the pixels may incur burn-in related aging, whereby the pixels emit less light when given the same amount of driving current or voltage values. As such, burn-in statistics may be gathered to estimate and track an estimated amount of sub-pixel aging, and compensation may be performed to counter the effects of burn-in related aging. Additionally, in some embodiments, a sub-pixel uniformity correction may be utilized to adjust the driving current, voltage, and/or activation timing for each pixel to account for differences between the pixels (e.g., due to non-uniformity in manufacturing, non-uniformity in materials, etc.). For example, some pixels may exhibit different luminance outputs at the same voltage/current than other pixels, and such differences may be noted and/or preprogrammed during manufacturing to account for such differences.
In general, micro-LED displays utilize a digital code to operate sub-pixels as either enabled or disabled and may be time multiplexed to achieve the desired luminance output. As such, the physical current/voltages provided to the individual pixels, on which the burn-in compensation is based, may not be known until after each display image processing technique has been completed. As such, in some embodiments, display image processing techniques that alter the desired luminance, sub-frame timings, or otherwise change the uncompensated current total (e.g., time integrated current over the image frame) that would otherwise be provided to a pixel may be performed prior to burn-in compensation and sub-pixel uniformity correction. Furthermore, in some embodiments, the burn-in compensation may be performed with, immediately subsequent to, immediately prior to, or between stages of the sub-pixel uniformity correction. For example, the burn-in compensation may be performed between a gain correction and a digital code conversion of the sub-pixel uniformity correction such that modifications to the image data due to burn-in compensation may be performed in accordance with the post-processed image data and the total current provided to each pixel.
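The ordering described above may be sketched, in simplified form, as follows; all stage functions and names are hypothetical placeholders, the point being that burn-in compensation operates on the gain-corrected values before the digital code conversion:

```python
def process_pixel(value: float,
                  luminance_stages,   # e.g., contrast control, enhancement
                  spuc_gain: float,
                  bic_gain: float,
                  levels: int = 256) -> int:
    """Map a normalized pixel value in [0, 1] to a digital code for the panel."""
    # 1. Stages that alter the desired luminance run first.
    for stage in luminance_stages:
        value = stage(value)
    # 2. Sub-pixel uniformity gain correction.
    value *= spuc_gain
    # 3. Burn-in compensation applied to the gain-corrected value.
    value *= bic_gain
    # 4. Digital code conversion maps the result to the code sent to the panel.
    return min(levels - 1, max(0, round(value * (levels - 1))))
```

Because the code conversion is last, the compensated value reflects the total utilization the pixel will actually receive.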
With the foregoing in mind,
The electronic device 10 may include one or more electronic displays 12, input devices 14, input/output (I/O) ports 16, a processor core complex 18 having one or more processors or processor cores, local memory 20, a main memory storage device 22, a network interface 24, a power source 26, and image processing circuitry 28. The various components described in
The processor core complex 18 is operably coupled with local memory 20 and the main memory storage device 22. Thus, the processor core complex 18 may execute instructions stored in local memory 20 or the main memory storage device 22 to perform operations, such as generating or transmitting image data to display on the electronic display 12. As such, the processor core complex 18 may include one or more general purpose microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable logic arrays (FPGAs), or any combination thereof.
In addition to program instructions, the local memory 20 or the main memory storage device 22 may store data to be processed by the processor core complex 18. Thus, the local memory 20 and/or the main memory storage device 22 may include one or more tangible, non-transitory, computer-readable media. For example, the local memory 20 may include random access memory (RAM) and the main memory storage device 22 may include read-only memory (ROM), rewritable non-volatile memory such as flash memory, hard drives, optical discs, or the like.
The network interface 24 may communicate data with another electronic device or a network. For example, the network interface 24 (e.g., a radio frequency system) may enable the electronic device 10 to communicatively couple to a personal area network (PAN), such as a BLUETOOTH® network, a local area network (LAN), such as an 802.11x Wi-Fi network, or a wide area network (WAN), such as a 4G, Long-Term Evolution (LTE), or 5G cellular network.
The power source 26 may provide electrical power to operate the processor core complex 18 and/or other components in the electronic device 10. Thus, the power source 26 may include any suitable source of energy, such as a rechargeable lithium polymer (Li-poly) battery and/or an alternating current (AC) power converter.
The I/O ports 16 may enable the electronic device 10 to interface with various other electronic devices. The input devices 14 may enable a user to interact with the electronic device 10. For example, the input devices 14 may include buttons, keyboards, mice, trackpads, and the like. Additionally or alternatively, the electronic display 12 may include touch sensing components that enable user inputs to the electronic device 10 by detecting occurrence and/or position of an object touching its screen (e.g., surface of the electronic display 12).
The electronic display 12 may display a graphical user interface (GUI) (e.g., of an operating system or computer program), an application interface, text, a still image, and/or video content. The electronic display 12 may include a display panel with one or more display pixels to facilitate displaying images. Additionally, each display pixel may represent one of the sub-pixels that control the luminance of a color component (e.g., red, green, or blue). Although sometimes used to refer to a collection of sub-pixels (e.g., red, green, and blue subpixels), as used herein, a display pixel or pixel refers to an individual sub-pixel (e.g., red, green, or blue subpixel).
As described above, the electronic display 12 may display an image by controlling the luminance output (e.g., light emission) of the sub-pixels based on corresponding image data. In some embodiments, pixel or image data may be generated by an image source, such as the processor core complex 18, a graphics processing unit (GPU), or an image sensor (e.g., camera). Additionally, in some embodiments, image data may be received from another electronic device 10, for example, via the network interface 24 and/or an I/O port 16. Moreover, in some embodiments, the electronic device 10 may include multiple electronic displays 12 and/or may perform image processing (e.g., via the image processing circuitry 28) for one or more external electronic displays 12, such as connected via the network interface 24 and/or the I/O ports 16.
The electronic device 10 may be any suitable electronic device. To help illustrate, one example of a suitable electronic device 10, specifically a handheld device 10A, is shown in
The handheld device 10A may include an enclosure 30 (e.g., housing) to, for example, protect interior components from physical damage and/or shield them from electromagnetic interference. The enclosure 30 may surround, at least partially, the electronic display 12. In the depicted embodiment, the electronic display 12 is displaying a graphical user interface (GUI) 32 having an array of icons 34. By way of example, when an icon 34 is selected either by an input device 14 or a touch-sensing component of the electronic display 12, an application program may launch.
Input devices 14 may be accessed through openings in the enclosure 30. Moreover, the input devices 14 may enable a user to interact with the handheld device 10A. For example, the input devices 14 may enable the user to activate or deactivate the handheld device 10A, navigate a user interface to a home screen, navigate a user interface to a user-configurable application screen, activate a voice-recognition feature, provide volume control, and/or toggle between vibrate and ring modes. Moreover, the I/O ports 16 may also open through the enclosure 30. Additionally, the electronic device may include one or more cameras 36 to capture pictures or video. In some embodiments, a camera 36 may be used in conjunction with a virtual reality or augmented reality visualization on the electronic display 12.
Another example of a suitable electronic device 10, specifically a tablet device 10B, is shown in
Turning to
As discussed above, the electronic device 10 may include one or more electronic displays 12 of any suitable type. In some embodiments, the electronic display 12 may be a micro-LED display having a display panel 40 that includes an array of micro-LEDs (e.g., red, green, and blue micro-LEDs) as display pixels. Support circuitry 42 may receive display image data 44 (e.g., digital coded image data) and send control signals 46 to an array 48 of micro-drivers 50. As should be appreciated, the display image data 44 may be of any suitable format depending on the implementation (e.g., type of display). In some embodiments, the support circuitry 42 may include a video timing controller (video TCON) and/or emission timing controller (emission TCON) that receives and uses the display image data 44 in a serial bus to determine a data clock signal and/or an emission clock signal to control the provisioning of the display image data 44 to the display panel 40. The video TCON may also pass the display image data 44 to serial-to-parallel circuitry that may deserialize the display image data 44 into several parallel image data signals. That is, the serial-to-parallel circuitry may collect the display image data 44 into the control signals 46 that are passed on to specific columns of the display panel 40. The control signals 46 (e.g., data/row scan controls, data clock signals, and/or emission clock signals) for each column of the array 48 may contain luminance values corresponding to pixels in the first column, second column, third column, fourth column . . . and so on, respectively. Moreover, the control signals 46 may be arranged into more or fewer columns depending on the number of columns that make up the display panel 40.
The micro-drivers 50 may be arranged in an array 48, and each micro-driver 50 may drive a number of display pixels 52. Different display pixels 52 (e.g., display sub-pixels) may include different colored micro-LEDs (e.g., a red micro-LED, a green micro-LED, or a blue micro-LED) to emit light according to the display image data 44. Moreover, in some embodiments, the subset of display pixels 52 located at each anode 54 may be associated with a particular color (e.g., red, green, blue). Furthermore, although shown for only a single color channel, it should be appreciated that each anode 54 may have a respective cathode 56 associated with the particular color channel. For example, the depicted cathodes 56 may correspond to red color channels (e.g., subset of red display pixels 52). Indeed, there may be a second set of cathodes 56 that couple to a green color channel (e.g., subset of green display pixels 52) and a third set of cathodes 56 that couple to a blue color channel (e.g., subset of blue display pixels 52), but these are not expressly illustrated in
Additionally, a power supply 58 may provide a reference voltage (VREF) 60 (e.g., to drive the micro-LEDs of the display pixels 52), a digital power signal 62, and/or an analog power signal 64. In some cases, the power supply 58 may provide more than one reference voltage 60 signal. For example, display pixels 52 of different colors may be driven using different reference voltages, and the power supply 58 may generate each reference voltage 60 (e.g., VREF for red, VREF for green, and VREF for blue display pixels 52). Additionally or alternatively, other circuitry on the display panel 40 may step a single reference voltage 60 up or down to obtain different reference voltages and drive the different colors of display pixels 52.
The micro-drivers 50 may include pixel data buffer(s) 70 and/or a digital counter 72, as shown in
The counter 72 may receive the emission clock signal 78 and output a digital counter signal 80 indicative of the number of edges (only rising, only falling, or both rising and falling edges) of the emission clock signal 78. The digital data signal 76 and the digital counter signal 80 may enter a comparator 82 that outputs an emission control signal 84 in an “on” state when the digital counter signal 80 does not exceed the digital data signal 76, and an “off” state otherwise. The emission control signal 84 may be routed to driving circuitry (not shown) for the display pixel 52 being driven on or off. The longer the selected display pixel 52 is driven “on” by the emission control signal 84, the greater the amount of light that will be perceived by the human eye as originating from the display pixel 52.
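The counter/comparator behavior described above may be modeled, for illustration, as follows; the function names are hypothetical and the model ignores the driving circuitry's electrical details:

```python
def emission_states(data_value: int, num_edges: int) -> list[bool]:
    """Emission control state at each counted clock edge: "on" while the
    counter does not exceed the digital data value, "off" otherwise."""
    states = []
    for counter in range(num_edges):
        states.append(counter <= data_value)
    return states

def on_time(data_value: int, num_edges: int) -> int:
    """Number of edges for which the pixel is driven on; a longer on-time
    is perceived as a brighter pixel."""
    return sum(emission_states(data_value, num_edges))
```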
To help illustrate, the timing diagram 86 of
In some embodiments, the steps between gray levels, reflected by the steps between emission clock signal 78 edges, may be of consistent width (e.g., linearly additive) or changing width (e.g., indicative of a gamma domain). For example, based on the way humans perceive light, the difference between lower gray levels may be more perceptible than the difference between higher gray levels. The emission clock signal 78 may, therefore, increase the time between clock edges as the frame progresses. The particular pattern of the emission clock signal 78, as generated by the emission TCON, may have increasingly longer differences between edges (e.g., periods) so as to provide a gamma encoding of the gray level of the display pixel 52 being driven.
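The gamma-spaced edge timing described above may be illustrated as follows; the gamma exponent and the normalization to the frame time are assumptions for illustration rather than any particular emission TCON's encoding:

```python
def edge_times(num_levels: int, frame_time: float, gamma: float = 2.2) -> list[float]:
    """Emission clock edge times for each gray level step. For gamma > 1,
    the spacing between consecutive edges grows as the frame progresses,
    so lower gray levels get finer luminance resolution."""
    return [frame_time * (k / num_levels) ** gamma
            for k in range(1, num_levels + 1)]
```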
As discussed above, an electronic display 12 may display an image by pulsing light emissions from display pixels 52 such that the time averaged luminance output is equivalent to the desired luminance level of the display image data 44. Furthermore, a single image frame may be broken up into multiple (e.g., two, four, eight, sixteen, thirty-two, and so on) sub-frames, and a particular pixel may be illuminated (e.g., pulsed) or deactivated during each sub-frame such that the aggregate luminance output over the total image frame is equivalent to the desired luminance output of the particular pixel. In other words, in addition to regulating the duration of the pixel emission during a sub-frame (e.g., as discussed above with reference to
Due at least in part to the time-multiplexed nature of operating the electronic display 12 (e.g., micro-LED display), it may be difficult to ascertain the physical utilization of individual pixels until the display image data 44 is, or is ready to be, generated, such as after other display image processing compensations, corrections, enhancements, etc. have been performed. Indeed, as discussed herein, the display image data 44 may be in a digitally coded format (e.g., non-linear gray code) indicative of desired luminance levels to be displayed. Prior to being sent to the display panel 40, the display image data 44 may be generated by converting image data (e.g., via a sub-pixel uniformity digital code conversion of the image processing circuitry 28) from a luminance or current domain to the digital code domain. In general, the image processing circuitry 28 may correct, compensate, enhance, or otherwise alter image data in a luminance domain (e.g., linear domain), gamma domain (e.g., non-linear domain), current domain, voltage domain, etc. to reduce or eliminate image artifacts and/or improve perceived image quality.
To help illustrate, a portion of the electronic device 10, including image processing circuitry 28, is shown in
In addition to the display panel 40, the electronic device 10 may also include an image data source 90 and/or a controller 92 in communication with the image processing circuitry 28. In some embodiments, the controller 92 may control operation of the image processing circuitry 28, the image data source 90, and/or the display panel 40. To facilitate controlling operation, the controller 92 may include a controller processor 94 and/or controller memory 96. As should be appreciated, the controller processor 94 may be included in the processor core complex 18, the image processing circuitry 28, the electronic display 12, a separate processing module, or any combination thereof and execute instructions stored in the controller memory 96. Moreover, the controller memory 96 may be included in the local memory 20, the main memory storage device 22, a separate tangible, non-transitory, computer-readable medium, or any combination thereof. In general, the image processing circuitry 28 may process source image data 98 for display on one or more electronic displays 12. For example, the image processing circuitry 28 may include a display pipeline, memory-to-memory scaler and rotator (MSR) circuitry, warp compensation circuitry, or additional hardware or software means for processing image data. The source image data 98 may be processed by the image processing circuitry 28 to reduce or eliminate image artifacts, compensate for one or more different software or hardware related effects, and/or format the image data for display on one or more electronic displays 12. As should be appreciated, the present techniques may be implemented in standalone circuitry, software, and/or firmware, and may be considered a part of, separate from, and/or parallel with a display pipeline or MSR circuitry.
The image processing circuitry 28 may receive source image data 98 corresponding to a desired image to be displayed on the electronic display 12 from the image data source 90. The source image data 98 may indicate target characteristics (e.g., luminance data) corresponding to the desired image using any suitable source format, such as an RGB format, an aRGB format, a YCbCr format, and/or the like. Moreover, the source image data 98 may be fixed or floating point and be of any suitable bit-depth. Furthermore, the source image data 98 may reside in a linear color space, a gamma-corrected color space, or any other suitable color space. The image data source 90 may include captured images from cameras 36, images stored in memory, graphics generated by the processor core complex 18, or a combination thereof. Additionally, the image processing circuitry 28 may include one or more sets of image data processing blocks 100 (e.g., circuitry, modules, or processing stages) such as a burn-in compensation/burn-in statistics (BIC/BIS) block 102 and/or a sub-pixel uniformity correction (SPUC) block 104. As should be appreciated, multiple other processing blocks 106 may also be incorporated into the image processing circuitry 28, such as a color management block, image enhancement block, a pixel contrast control (PCC) block, a dither block, a scaling/rotation block, etc. before the BIC/BIS block 102 and/or SPUC block 104. The image data processing blocks 100 may receive and process source image data 98 and output display image data 44 in a format (e.g., digital format and/or resolution) interpretable by the display panel 40 and/or its support circuitry 42. Further, the functions (e.g., operations) performed by the image processing circuitry 28 may be divided between various image data processing blocks 100, and, while the term “block” is used herein, there may or may not be a logical or physical separation between the image data processing blocks 100.
In general, the BIC/BIS block 102 may compensate the image data for burn-in related aging of the display pixels 52. For example, as the display pixels 52 are utilized over the life of the display panel 40, the display pixels 52 may incur burn-in related aging, whereby the pixels emit less light when given the same amount of driving current or voltage values. Moreover, as different display pixels 52 may be used differently and/or have different environments (e.g., temperature), the display pixels 52 may age non-uniformly. As such, the BIC/BIS block 102 may include a burn-in statistics (BIS) sub-block 110, as shown in
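For illustration, the gathering of burn-in statistics and the application of a compensating gain may be sketched as follows; the linear aging model, the aging rate, and the class and method names are hypothetical assumptions:

```python
class BurnInModel:
    """Toy per-pixel aging tracker: utilization accumulates into an aging
    estimate, and a compensating gain boosts aged pixels toward uniform output."""

    def __init__(self, num_pixels: int, aging_rate: float = 1e-6):
        self.aging = [0.0] * num_pixels  # accumulated aging estimate per pixel
        self.aging_rate = aging_rate

    def gather_statistics(self, frame_utilization: list[float]) -> None:
        # Accumulate estimated aging in proportion to each pixel's utilization.
        for i, u in enumerate(frame_utilization):
            self.aging[i] += u * self.aging_rate

    def compensate(self, values: list[float]) -> list[float]:
        # An aged pixel's efficiency drops; apply the inverse as a gain.
        return [v / (1.0 - a) for v, a in zip(values, self.aging)]
```

Under this sketch, a heavily used pixel accrues a larger aging estimate and therefore receives a slightly larger compensating gain than an unused one.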
Additionally, in some embodiments, the image processing circuitry 28 may include the SPUC block 104 having a sub-pixel uniformity gain correction sub-block 114 and a sub-pixel uniformity digital code conversion sub-block 116. In general, the sub-pixel uniformity gain correction sub-block 114 applies individual gains to pixel values to account for non-uniformities in luminance output efficiency between different display pixels 52. For example, manufacturing variations may cause different display pixels 52 to have different luminance output responses given the same current and voltage. As such, the sub-pixel uniformity gain correction sub-block 114 may apply a gain to the pixel values to adjust an amount of current, voltage, or timing at which the current/voltage is supplied to each display pixel 52 to normalize the differences in manufacturing between display pixels 52. Such gains may be pre-programmed and/or set on a per-display-panel basis at manufacturing (e.g., in response to per-display-panel testing). Moreover, the amount of gain may vary based on the desired luminance level (e.g., pixel value). Additionally, the sub-pixel uniformity digital code conversion sub-block 116 converts the pixel values from a luminance or current domain into a digital code (e.g., grayscale) interpretable by the electronic display 12 (e.g., the support circuitry 42 of the electronic display 12). Moreover, in some embodiments, the sub-pixel uniformity digital code conversion sub-block 116 may also compensate for non-linearities in luminance outputs with regard to the time modulation of the display pixels 52. Indeed, different display pixels 52 (e.g., due to manufacturing tolerances/differences) may exhibit different amounts of luminance output given the same (e.g., compensated via the sub-pixel uniformity gain correction sub-block 114) current and time multiplexing.
For example, the voltage swings of the time multiplexing may cause different luminance outputs for different display pixels 52. Moreover, in some embodiments, the sub-pixel uniformity gain correction sub-block 114 may perform a normalization of gains that are implemented by the sub-pixel uniformity digital code conversion sub-block 116 during the conversion to the digital code.
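The per-pixel gain correction described above can be sketched as a simple element-wise scaling. This is an illustrative sketch only, not the actual implementation; the function name, data layout, and example gain values are assumptions.

```python
# Hypothetical sketch of sub-pixel uniformity gain correction (sub-block 114):
# each pixel value is scaled by a per-pixel, calibrated gain (set at panel
# manufacturing) so that pixels with different luminance efficiencies produce
# matching output. Names and the nested-list layout are illustrative.

def apply_uniformity_gains(pixel_values, gain_map):
    """Multiply each pixel value by its calibrated uniformity gain.

    pixel_values: list of rows, each a list of per-pixel luminance values.
    gain_map: same shape, per-pixel gains (1.0 = nominal efficiency).
    """
    return [
        [value * gain for value, gain in zip(row_vals, row_gains)]
        for row_vals, row_gains in zip(pixel_values, gain_map)
    ]

# A pixel that is 20% less efficient than nominal would receive a 1.25x gain
# so its luminance output matches its neighbors.
corrected = apply_uniformity_gains([[100.0, 100.0]], [[1.0, 1.25]])
```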
As discussed herein, due at least in part to the time multiplexing of display pixels 52, it may be desired to perform burn-in compensation for the display pixels 52 after other display image processing techniques (e.g., via other processing blocks 106) or any change to the image data that would cause a change in total current draw (e.g., time integrated current over the image frame) of the display pixels 52. Indeed, any changes in luminance affecting the current after the BIC/BIS block 102 would not be taken into account when estimating the age of the display pixels 52 and/or may alter the desired burn-in compensation. Moreover, as the sub-pixel uniformity digital code conversion sub-block 116 performs the conversion to the display image data 44, such conversion may be after the BIC/BIS block 102. As such, in some embodiments, the SPUC block 104 may be separated into the sub-pixel uniformity gain correction sub-block 114 and the sub-pixel uniformity digital code conversion sub-block 116, with the BIC/BIS block 102 disposed (e.g., physically and/or functionally) therebetween.
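The ordering constraint above (uniformity gain correction, then burn-in compensation, then digital-code conversion) can be sketched as a simple composed pipeline. All function names and stand-in gain values here are assumptions for illustration; the real sub-blocks would be hardware stages, not Python functions.

```python
# Illustrative ordering of the split SPUC block around BIC/BIS, per the text:
# current-affecting gain adjustments happen before burn-in compensation, and
# the luminance-to-digital-code conversion happens after it.

def spuc_gain_correction(v):
    return v * 1.5            # stand-in per-pixel uniformity gain (sub-block 114)

def bic_bis(v):
    return v * 0.5            # stand-in burn-in compensation gain (block 102)

def spuc_digital_code_conversion(v):
    return round(v)           # stand-in luminance -> grayscale code (sub-block 116)

def process_pixel(v):
    # BIC/BIS sits between the two SPUC sub-blocks so it sees (and can account
    # for) all current-affecting changes before the final code conversion.
    return spuc_digital_code_conversion(bic_bis(spuc_gain_correction(v)))

code = process_pixel(100.0)   # 100.0 * 1.5 * 0.5 = 75.0 -> 75
```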
To help illustrate,
As discussed above, the BIC/BIS block 102 tracks an estimated amount of aging of each display pixel 52 or grouping of display pixels 52 and compensates the image data (e.g., input image data 118, gain corrected image data 120, or source image data 98) for burn-in related aging of the display pixels 52. Additionally, in some embodiments, the BIC/BIS block 102 may also compensate for temperature-based current scaling due to the temperature of the display pixels 52. As should be appreciated, although discussed above in the context of the SPUC block 104, the features of the BIC/BIS block 102 may be implemented with or without the SPUC block 104 or other image processing blocks 106.
To calculate the gains 124, the BIC sub-block 112 may utilize one or more gain maps 130 corresponding to the estimated aging (e.g., burn-in history map(s)) and/or a normalization factor 132. The gain maps 130 may be two-dimensional (2D) maps of per-color-component pixel gains generated based on a cumulative estimated aging of each display pixel 52 or grouping of multiple display pixels 52. Additionally, in some embodiments, the gain maps 130 may be upsampled (e.g., depending on implementation) to spatially support the pixel-resolution of the display panel 40. For example, the burn-in history map(s) storing the estimated aging of the display pixels 52 may be downsampled compared to the pixel-resolution of the display panel 40 (e.g., for storage and/or bandwidth reduction), and the burn-in history map(s) and/or gain maps 130 derived therefrom may be upsampled accordingly. Moreover, in some embodiments, the normalization factor 132 may be used to normalize the luminance output of the display pixels 52 with respect to a maximum gain for each color component.
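The upsampling of a downsampled gain map to panel resolution can be sketched with simple linear interpolation. This is a simplified, one-axis sketch; the actual block would operate in 2D per color component, and the factor-of-2 grid here is an assumption.

```python
# Simplified sketch of upsampling a downsampled gain-map row to panel
# resolution via linear interpolation (one axis only, for clarity).

def upsample_linear(coarse, factor):
    """Linearly interpolate a 1D coarse gain row to `factor`x resolution."""
    fine = []
    for i in range(len(coarse) - 1):
        a, b = coarse[i], coarse[i + 1]
        for k in range(factor):
            t = k / factor
            fine.append(a + t * (b - a))   # blend between neighboring entries
    fine.append(coarse[-1])                # keep the final coarse entry
    return fine

# A 3-entry coarse row becomes a 5-entry fine row at 2x,
# approximately [1.0, 1.1, 1.2, 1.1, 1.0].
row = upsample_linear([1.0, 1.2, 1.0], 2)
```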
In addition to the gain maps 130 and/or normalization factor 132, in some embodiments, the BIC sub-block 112 may utilize a current adaptation factor 134 to account for a change in current due to the temperature 136 of the display pixels 52 and/or brightness setting 138 of the electronic display 12. As should be appreciated, the desired brightness setting of a time multiplexed display panel 40 may be related to the current at which the display pixels 52 are driven. However, the actual current delivered to the display pixels 52 may vary depending on temperature. As such, a global current value 140, based on the brightness setting 138 (e.g., global brightness setting), may be used (e.g., via a global current look-up-table 142) to ascertain a global current indicative of the actual current delivered for a preset temperature. Additionally, in some embodiments, a temperature grid 144 may provide temperatures 136 at one or more locations across the electronic device 10. As should be appreciated, the temperature grid 144 may be uniformly spaced or non-uniformly spaced across the display panel 40. Moreover, in some embodiments, the temperatures 136 for each display pixel 52 or groups of display pixels 52 may be interpolated from the temperature grid 144. Furthermore, in some scenarios, a single temperature value (e.g., measured, estimated, or preset value) may be utilized instead of individual temperatures 136. The temperatures 136 (or single temperature value) may undergo a temperature scale/offset to define a temperature differential 146 indicative of the local temperature's delta from a preset temperature. The temperature differential 146 may be utilized with the global current 140 (e.g., via a look-up-table) to perform a temperature-based current scaling and generate values of the local currents 148. As should be appreciated, the temperature differential 146 or temperature 136 may be utilized in the temperature-based current scaling depending on implementation.
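The temperature-based current scaling above can be sketched as follows. Note that the text describes look-up-tables rather than closed-form math; the linear temperature coefficient, reference temperature, and sensor positions below are illustrative assumptions only.

```python
# Hypothetical sketch of temperature-based current scaling: a local
# temperature is interpolated from a sparse temperature grid, its delta from
# a preset reference temperature (the temperature differential) is formed,
# and the global current is scaled into a local current.

PRESET_TEMP_C = 25.0
TEMP_CURRENT_COEFF = 0.002   # assumed fractional current change per deg C

def interpolate_temperature(grid_temps, x):
    """Linearly interpolate between two neighboring grid sensors (1D sketch).

    grid_temps: two (position, temperature) pairs bracketing position x.
    """
    (x0, t0), (x1, t1) = grid_temps
    w = (x - x0) / (x1 - x0)
    return t0 + w * (t1 - t0)

def local_current(global_current, local_temp):
    delta = local_temp - PRESET_TEMP_C          # temperature differential
    return global_current * (1.0 + TEMP_CURRENT_COEFF * delta)

temp = interpolate_temperature([(0, 30.0), (100, 40.0)], 50)   # midpoint: 35.0
current = local_current(10.0, temp)                            # 10.0 * 1.02
```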
The local currents 148 for each display pixel 52 or groups of display pixels 52 may be used to calculate the current adaptation factor 134 (e.g., via a look-up-table). As should be appreciated, while look-up-tables are discussed herein, the calculations of the BIC/BIS block 102 may be, in whole or in part, performed via hardware or software (e.g., via the controller processor 94 and controller memory 96). By taking into account the current adaptation factor 134, the normalization factor 132, and the gain maps 130, the gains 124 may be calculated and applied to the input pixel values 126 to generate the output pixel values 128 (e.g., compensated image data 122).
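The composition of the final burn-in compensation gain can be sketched as below. Combining the factors multiplicatively is an assumption for illustration; the actual combination may differ by implementation, and all names are hypothetical.

```python
# Illustrative composition of the burn-in compensation gain: the per-pixel
# gain-map entry, the normalization factor, and the current adaptation factor
# are combined (here multiplicatively, an assumption) and applied to the
# input pixel value to produce the compensated output pixel value.

def bic_output(input_value, gain_map_value, normalization, current_adaptation):
    gain = gain_map_value * normalization * current_adaptation
    return input_value * gain

# An aged pixel (gain-map entry > 1) is boosted so its luminance matches
# less-aged neighbors after normalization.
out = bic_output(200.0, 1.25, 0.8, 1.0)   # gain = 1.25 * 0.8 * 1.0 = 1.0
```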
To maintain the estimated amount of aging (e.g., via one or more burn-in history maps) the burn-in statistics sub-block 110 may calculate history updates 150 to be aggregated over time using the output pixel values 128 of the burn-in compensation sub-block 112, as shown in
To generate the history update 150, the temperature adaptation factor 152 and the current aging factor 154 may be combined (e.g., via multiplication) with a duty cycle factor 158. Indeed, while the temperature adaptation factor 152 and the current aging factor 154 account for the temperature and current utilization, respectively, of the display pixels 52, the duty cycle factor 158 accounts for how long (e.g., duty cycle per image frame) the pixels are active during the time multiplexed image frame in accordance with the output pixel values 128. For example, the brightness setting 138 may be used to calculate a global duty cycle 160 (e.g., via a look-up-table), which may be combined with the output pixel values 128 to generate the duty cycle factor 158. As such, by combining the temperature adaptation factor 152, the current aging factor 154, and the duty cycle factor 158, a history update 150 is generated to estimate the amount of aging that has occurred for each display pixel 52 or group of display pixels 52. As should be appreciated, each look-up-table discussed above, or the calculation represented thereby, may be shared by or specific to each color component display pixel type. For example, the global duty cycle LUT, the current aging LUT, the temperature adaptation LUT, and/or other LUTs discussed herein may be different for red, green, and/or blue display pixels 52.
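The per-frame history update described above can be sketched as follows. The multiplicative combination follows the text; the specific scaling of duty cycle by pixel value, the 8-bit pixel range, and all function names are assumptions for illustration.

```python
# Hypothetical sketch of the per-frame burn-in history update: the temperature
# adaptation factor, current aging factor, and duty cycle factor are combined
# via multiplication and accumulated into a running burn-in history value per
# display pixel (or group of pixels).

def duty_cycle_factor(global_duty_cycle, output_pixel_value, max_value=255.0):
    # Assume on-time within the frame scales with the output pixel value.
    return global_duty_cycle * (output_pixel_value / max_value)

def history_update(temp_adaptation, current_aging, duty_factor):
    return temp_adaptation * current_aging * duty_factor

def accumulate(history, update):
    """Aggregate this frame's estimated aging into the burn-in history."""
    return history + update

duty = duty_cycle_factor(0.5, 255.0)        # fully-on pixel at 50% global duty
update = history_update(1.2, 1.0, duty)     # hotter pixel ages 1.2x faster
history = accumulate(10.0, update)          # running aging estimate grows
```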
Although the above-referenced flowchart 170 is shown in a given order, in certain embodiments, process/decision blocks may be reordered, altered, deleted, and/or occur simultaneously. Additionally, the referenced flowchart 170 is given as an illustrative tool, and further decision and process blocks may also be added depending on implementation.
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).