The present disclosure relates generally to electronic displays and, more particularly, to devices and methods for achieving improvements in sensing attributes of a light emitting diode (LED) electronic display or attributes affecting an LED electronic display.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Flat panel displays, such as active matrix organic light emitting diode (AMOLED) displays, micro-LED (μLED) displays, and the like, are commonly used in a wide variety of electronic devices, including such consumer electronics as televisions, computers, and handheld devices (e.g., cellular telephones, audio and video players, gaming systems, and so forth). Such display panels typically provide a flat display in a relatively thin package that is suitable for use in a variety of electronic goods. In addition, such devices may use less power than comparable display technologies, making them suitable for use in battery-powered devices or in other contexts where it is desirable to minimize power usage.
LED displays typically include picture elements (e.g., pixels) arranged in a matrix to display an image that may be viewed by a user. Individual pixels of an LED display may generate light as a voltage is applied to each pixel. The voltage applied to a pixel of an LED display may be regulated by, for example, thin film transistors (TFTs). For example, a circuit switching TFT may be used to regulate current flowing into a storage capacitor, and a driver TFT may be used to regulate the voltage being provided to the LED of an individual pixel. The growing reliance on electronic devices having LED displays has generated interest in improving the operation of these displays.
A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
The present disclosure relates to devices and methods for improved determination of the performance of certain electronic display devices including, for example, light emitting diode (LED) displays, such as organic light emitting diode (OLED) displays, active matrix organic light emitting diode (AMOLED) displays, or micro LED (μLED) displays. Under certain conditions, non-uniformity of a display induced by process non-uniformity, temperature gradients, or other factors across the display should be compensated for to increase performance of the display (e.g., reduce visible anomalies). The non-uniformity of pixels in a display may vary between devices of the same type (e.g., two similar phones, tablets, wearable devices, or the like), may vary over time and usage (e.g., due to aging and/or degradation of the pixels or other components of the display), and/or may vary with temperature, as well as in response to additional factors.
To improve display panel uniformity, compensation techniques related to adaptive correction of the display may be employed. For example, pixel response (e.g., luminance and/or color) can vary due to component processing, temperature, usage, aging, and the like. In one embodiment, to compensate for non-uniform pixel response, a property of the pixel (e.g., a current or a voltage) may be measured (e.g., sensed via a sensing operation) and compared to a target value, for example, stored in a lookup table or the like, to generate a correction value that is applied to correct pixel illumination to match a desired gray level. In this manner, modified data values may be transmitted to the display to generate compensated image data (e.g., image data that accurately reflects the intended image to be displayed by adjusting for non-uniform pixel responses).
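For illustration only, the following Python sketch outlines the general lookup-table comparison just described. The function and variable names (compensate_pixel, target_lut, sensed_current, gain) are hypothetical, and the linear error-to-correction model is an assumption rather than the specific compensation used by any particular embodiment.

```python
def compensate_pixel(gray_level, sensed_current, target_lut, gain=0.5):
    """Derive a corrected data value for one pixel.

    gray_level     : intended gray level (0-255) from the image data
    sensed_current : current measured for this pixel during a sensing operation
    target_lut     : lookup table mapping gray level -> expected (target) current
    gain           : fraction of the measured error folded back into the correction
    """
    target_current = target_lut[gray_level]
    # Error between what the pixel should draw and what it actually draws.
    error = target_current - sensed_current
    # Scale the error into a data-value correction (hypothetical linear model).
    correction = gain * error
    corrected_value = gray_level + correction
    # Keep the corrected value inside the valid code range.
    return max(0, min(255, corrected_value))


# Example: a pixel that runs dimmer (draws less current) than the target
# receives a slightly higher data value so its luminance matches the gray level.
target_lut = {g: g * 0.1 for g in range(256)}   # hypothetical target currents
print(compensate_pixel(128, sensed_current=11.8, target_lut=target_lut))
```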
Various refinements of the features noted above may be made in relation to various aspects of the present disclosure. Further features may also be incorporated in these various aspects as well. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. The brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Furthermore, the phrase A “based on” B is intended to mean that A is at least partially based on B. Moreover, the term “or” is intended to be inclusive (e.g., logical OR) and not exclusive (e.g., logical XOR). In other words, the phrase A “or” B is intended to mean A, B, or both A and B.
Electronic displays are ubiquitous in modern electronic devices. As electronic displays gain ever-higher resolutions and dynamic range capabilities, image quality has increasingly grown in value. In general, electronic displays contain numerous picture elements, or “pixels,” that are programmed with image data. Each pixel emits a particular amount of light based on the image data. By programming different pixels with different image data, graphical content including images, videos, and text can be displayed.
Display panel sensing allows for operational properties of pixels of an electronic display to be identified to improve the performance of the electronic display. For example, variations in temperature and pixel aging (among other things) across the electronic display cause pixels in different locations on the display to behave differently. Indeed, the same image data programmed on different pixels of the display could appear to be different due to the variations in temperature and pixel aging. Without appropriate compensation, these variations could produce undesirable visual artifacts. However, compensation of these variations may hinge on proper sensing of differences in the images displayed on the pixels of the display. Accordingly, the techniques and systems described below may be utilized to enhance the compensation of operational variations across the display through improvements to the generation of reference images to be sensed to determine the operational variations.
With this in mind, a block diagram of an electronic device 10 is shown in
The electronic device 10 shown in
The processor core complex 12 may carry out a variety of operations of the electronic device 10, such as causing the electronic display 18 to perform display panel sensing and using the feedback to adjust image data for display on the electronic display 18. The processor core complex 12 may include any suitable data processing circuitry to perform these operations, such as one or more microprocessors, one or more application specific integrated circuits (ASICs), or one or more programmable logic devices (PLDs). In some cases, the processor core complex 12 may execute programs or instructions (e.g., an operating system or application program) stored on a suitable article of manufacture, such as the local memory 14 and/or the main memory storage device 16. In addition to instructions for the processor core complex 12, the local memory 14 and/or the main memory storage device 16 may also store data to be processed by the processor core complex 12. By way of example, the local memory 14 may include random access memory (RAM) and the main memory storage device 16 may include read only memory (ROM), rewritable non-volatile memory such as flash memory, hard drives, optical discs, or the like.
The electronic display 18 may display image frames, such as a graphical user interface (GUI) for an operating system or an application interface, still images, or video content. The processor core complex 12 may supply at least some of the image frames. The electronic display 18 may be a self-emissive display, such as an organic light emitting diode (OLED) display, a micro-LED display, or a micro-OLED type display, or may be a liquid crystal display (LCD) illuminated by a backlight. In some embodiments, the electronic display 18 may include a touch screen, which may allow users to interact with a user interface of the electronic device 10. The electronic display 18 may employ display panel sensing to identify operational variations of the electronic display 18. This may allow the processor core complex 12 to adjust image data that is sent to the electronic display 18 to compensate for these variations, thereby improving the quality of the image frames appearing on the electronic display 18.
The input structures 22 of the electronic device 10 may enable a user to interact with the electronic device 10 (e.g., pressing a button to increase or decrease a volume level). The I/O interface 24 may enable the electronic device 10 to interface with various other electronic devices, as may the network interface 26. The network interface 26 may include, for example, interfaces for a personal area network (PAN), such as a Bluetooth network, for a local area network (LAN) or wireless local area network (WLAN), such as an 802.11x Wi-Fi network, and/or for a wide area network (WAN), such as a cellular network. The network interface 26 may also include interfaces for, for example, broadband fixed wireless access networks (WiMAX), mobile broadband wireless networks (mobile WiMAX), asynchronous digital subscriber lines (e.g., ADSL, VDSL), digital video broadcasting-terrestrial (DVB-T) and its extension DVB Handheld (DVB-H), ultra wideband (UWB), alternating current (AC) power lines, and so forth. The power source 28 may include any suitable source of power, such as a rechargeable lithium polymer (Li-poly) battery and/or an alternating current (AC) power converter.
In certain embodiments, the electronic device 10 may take the form of a computer, a portable electronic device, a wearable electronic device, or other type of electronic device. Such computers may include computers that are generally portable (such as laptop, notebook, and tablet computers) as well as computers that are generally used in one place (such as conventional desktop computers, workstations and/or servers). In certain embodiments, the electronic device 10 in the form of a computer may be a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® mini, or Mac Pro® available from Apple Inc. By way of example, the electronic device 10, taking the form of a notebook computer 10A, is illustrated in
User input structures 22, in combination with the electronic display 18, may allow a user to control the handheld device 10B. For example, the input structures 22 may activate or deactivate the handheld device 10B, navigate a user interface to a home screen or a user-configurable application screen, and/or activate a voice-recognition feature of the handheld device 10B. Other input structures 22 may provide volume control, or may toggle between vibrate and ring modes. The input structures 22 may also include a microphone that may obtain a user's voice for various voice-related features, and a speaker that may enable audio playback and/or certain phone capabilities. The input structures 22 may also include a headphone input that may provide a connection to external speakers and/or headphones.
Turning to
Similarly,
As illustrated, the system 50 includes aging/temperature determination circuitry 56 that may determine or facilitate determining the non-uniformity of the pixels in the display 18 due to, for example, aging and/or degradation of the pixels or other components of the display 18. The aging/temperature determination circuitry 56 may also determine or facilitate determining the non-uniformity of the pixels in the display 18 due to, for example, temperature.
The image correction circuitry 52 may send the image data 54 (for which the non-uniformity of the pixels in the display 18 has or has not been compensated for by the image correction circuitry 52) to an analog-to-digital converter 58 of a driver integrated circuit 60 of the display 18. The analog-to-digital converter 58 may digitize the image data 54 when it is in an analog format. The driver integrated circuit 60 may send signals across gate lines to cause a row of pixels of a display panel 62, including pixel 64, to become activated and programmable, at which point the driver integrated circuit 60 may transmit the image data 54 across data lines to program the pixels, including the pixel 64, to display a particular gray level (e.g., individual pixel brightness). By supplying different pixels of different colors with the image data 54 to display different gray levels, full-color images may be programmed into the pixels. The driver integrated circuit 60 may also include a sensing analog front end (AFE) 66 to perform analog sensing of the response of the pixels to data input (e.g., the image data 54) to the pixels.
The processor core complex 12 may also send sense control signals 68 to cause the display 18 to perform display panel sensing. In response, the display 18 may send display sense feedback 70 that represents digital information relating to the operational variations of the display 18. The display sense feedback 70 may be input to the aging/temperature determination circuitry 56, and take any suitable form. Output of the aging/temperature determination circuitry 56 may take any suitable form and be converted by the image correction circuitry 52 into a compensation value that, when applied to the image data 54, appropriately compensates for non-uniformity of the display 18. This may result in greater fidelity of the image data 54, reducing or eliminating visual artifacts that would otherwise occur due to the operational variations of the display 18. In some embodiments, the processor core complex 12 may be part of the driver integrated circuit 60, and as such, be part of the display 18.
The display 18 senses (process block 82) operational variations of the display 18 itself. In particular, the processor core complex 12 may send one or more instructions (e.g., sense control signals 68) to the display 18. The instructions may cause the display 18 to perform display panel sensing. The operational variations may include any suitable variations that induce non-uniformity in the display 18, such as process non-uniformity, temperature gradients, aging of the display 18, and the like.
The processor core complex 12 then adjusts (process block 84) the display 18 based on the operational variations. For example, the processor core complex 12 may receive display sense feedback 70 that represents digital information relating to the operational variations from the display 18 in response to receiving the sense control signals 68. The display sense feedback 70 may be input to the aging/temperature determination circuitry 56, and take any suitable form. Output of the aging/temperature determination circuitry 56 may take any suitable form and be converted by the image correction circuitry 52 into a compensation value. For example, processor core complex 12 may apply the compensation value to the image data 54, which may then be sent to the display 18. In this manner, the processor core complex 12 may perform the method 80 to increase performance of the display 18 (e.g., by reducing visible anomalies).
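The sense-then-adjust flow of process blocks 82 and 84 may be summarized, very roughly, by the following Python sketch. The objects and method names (send_sense_control, read_feedback, estimate, to_compensation, apply) are hypothetical stand-ins for the sense control signals 68, the display sense feedback 70, the aging/temperature determination circuitry 56, and the image correction circuitry 52; the sketch is illustrative only.

```python
def run_sense_and_adjust(display, aging_temp_circuitry, image_correction):
    """Minimal sketch of the sense-then-adjust flow (process blocks 82 and 84)."""
    # Process block 82: instruct the display to perform display panel sensing.
    display.send_sense_control()
    feedback = display.read_feedback()          # display sense feedback 70

    # Process block 84: convert the feedback into a compensation value and
    # apply it to the outgoing image data.
    variations = aging_temp_circuitry.estimate(feedback)
    compensation = image_correction.to_compensation(variations)
    return image_correction.apply(compensation)
```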
To accurately display an image frame, an electronic display may control light emission (e.g., actual luminance) from its display pixels based on, for example, environmental operational parameters (e.g., ambient temperature, humidity, brightness, and the like) and/or display-related operational parameters (e.g., light emission, current signal magnitude which may affect light emission, and the like).
To help illustrate, a portion 134 of the electronic device 10 including a display pipeline 136 is shown in
As depicted, the portion 134 of the electronic device 10 also includes the power source 28, an image data source 138, a display driver 140, a controller 142, and a display panel 144. In some embodiments, the controller 142 may control operation of the display pipeline 136, the image data source 138, and/or the display driver 140. To control operation, the controller 142 may include a controller processor 146 and controller memory 148. In some embodiments, the controller processor 146 may execute instructions stored in the controller memory 148. Thus, in some embodiments, the controller processor 146 may be included in the processor core complex 12, a timing controller in the electronic display 18, a separate processing module, or any combination thereof. Additionally, in some embodiments, the controller memory 148 may be included in the local memory 14, the main memory storage device 16, a separate tangible, non-transitory, computer readable medium, or any combination thereof.
In the depicted embodiment, the display pipeline 136 is communicatively coupled to the image data source 138. In this manner, the display pipeline 136 may receive image data from the image data source 138. As described above, in some embodiments, the image data source 138 may be included in the processor core complex 12, or a combination thereof. In other words, the image data source 138 may provide image data to be displayed by the display panel 144.
Additionally, in the depicted embodiment, the display pipeline 136 includes an image data buffer 150 to store image data, for example, received from the image data source 138. In some embodiments, the image data buffer 150 may store image data to be processed by and/or already processed by the display pipeline 136. For example, the image data buffer 150 may store image data corresponding with multiple image frames (e.g., a previous image frame, a current image frame, and/or a subsequent image frame). Additionally, the image data buffer may store image data corresponding with multiple portions (e.g., a previous row, a current row, and/or a subsequent row) of an image frame.
To process the image data, the display pipeline 136 may include one or more image data processing blocks 152. For example, in the depicted embodiment, the image data processing blocks 152 include a content analysis block 154. Additionally, in some embodiments, the image data processing blocks 152 may include an ambient adaptive pixel (AAP) block, a dynamic pixel backlight (DPB) block, a white point correction (WPC) block, a sub-pixel layout compensation (SPLC) block, a burn-in compensation (BIC) block, a panel response correction (PRC) block, a dithering block, a sub-pixel uniformity compensation (SPUC) block, a content frame dependent duration (CDFD) block, an ambient light sensing (ALS) block, or any combination thereof.
To display an image frame, the content analysis block 154 may process the corresponding image data to determine content of the image frame. For example, the content analysis block 154 may process the image data to determine target luminance (e.g., greyscale level) of display pixels 156 for displaying the image frame. Additionally, the content analysis block 154 may determine control signals, which instruct the display driver 140 to generate and supply analog electrical signals to the display panel 144. To generate the analog electrical signals, the display driver 140 may receive electrical power from the power source 28, for example, via one or more power supply rails. In particular, the display driver 140 may control supply of electrical power from the one or more power supply rails to display pixels 156 in the display panel 144.
In some embodiments, the content analysis block 154 may determine pixel control signals that each indicates a target pixel current to be supplied to a display pixel 156 in the display panel 144 of the electronic display 18. Based at least in part on the pixel control signals, the display driver 140 may illuminate display pixels 156 by generating and supplying analog electrical signals (e.g., voltage or current) to control light emission from the display pixels 156. In some embodiments, the content analysis block 154 may determine the pixel control signals based at least in part on target luminance of corresponding display pixels 156.
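As a rough illustration of mapping a target luminance to a pixel drive level, the following Python sketch assumes a simple gamma-style relationship between gray level and target current. The function name, the gamma value, and the full-scale current are assumptions for illustration; the actual relationship used by the content analysis block 154 is not specified here.

```python
def target_current_from_gray(gray_level, max_current_ua=10.0, gamma=2.2, max_code=255):
    """Map a target gray level to a target pixel drive current (illustrative only)."""
    relative_luminance = (gray_level / max_code) ** gamma
    return max_current_ua * relative_luminance


# A mid-gray code maps to a small fraction of the full-scale current.
print(round(target_current_from_gray(128), 3))
```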
Additionally, in some embodiments, one or more sensors 158 may be used to sense (e.g., determine) information related to display performance of the electronic device 10 and/or the electronic display 18, such as display-related operational parameters and/or environmental operational parameters. For example, the display-related operational parameters may include actual light emission from a display pixel 156 and/or current flowing through the display pixel 156. Additionally, the environmental operational parameters may include ambient temperature, humidity, and/or ambient light.
In some embodiments, the controller 142 may determine the operational parameters based at least in part on sensor data received from the sensors 158. Thus, as depicted, the sensors 158 are communicatively coupled to the controller 142. In some embodiments, the controller 142 may include a sensing controller that controls performance of sensing operations and/or determines results (e.g., operational parameters and/or environmental parameters) of the sensing operations.
To help illustrate, one embodiment of a sensing controller 159 that may be included in the controller 142 is shown in
Additionally, in some embodiments, the sensing controller 159 may process the received data to determine control commands instructing the display pipeline 136 to perform control actions and/or determine control commands instructing the electronic display to perform control actions. In the depicted embodiment, the sensing controller 159 outputs control commands indicating sensing brightness, sensing time (e.g., duration), sense pixel density, sensing location, sensing color, and sensing interval. It should be understood that the described input data and output control commands are merely intended to be illustrative and not limiting.
As described above, the electronic device 10 may refresh an image or an image frame at a refresh rate, such as 60 Hz, 120 Hz, and/or 240 Hz. To refresh an image frame, the display driver 140 may refresh (e.g., update) image data written to the display pixels 156 on the display panel 144. For example, to refresh a display pixel 156, the electronic display 18 may toggle the display pixel 156 from a light emitting mode to a non-light emitting mode and write image data to the display pixel 156 such that the display pixel 156 emits light based on the image data when toggled back to the light emitting mode. Additionally, in some embodiments, display pixels 156 may be refreshed with image data corresponding to an image frame in one or more contiguous refresh pixel groups.
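For reference, the frame period implied by a given refresh rate, and a rough per-row refresh time if a single refresh pixel group rolls through every row once per frame, can be computed as in the following Python sketch (the row count is a hypothetical example).

```python
def refresh_timing(refresh_rate_hz, num_rows):
    """Compute the frame period and an approximate per-row refresh time.

    Assumes the refresh pixel group rolls through all rows once per frame,
    as in the timing diagrams discussed below.
    """
    frame_period_ms = 1000.0 / refresh_rate_hz
    row_time_ms = frame_period_ms / num_rows
    return frame_period_ms, row_time_ms


# 60 Hz -> ~16.6 ms per frame; 120 Hz -> ~8.3 ms; 240 Hz -> ~4.17 ms.
for rate in (60, 120, 240):
    frame_ms, row_ms = refresh_timing(rate, num_rows=2000)
    print(f"{rate} Hz: frame ~{frame_ms:.2f} ms, row ~{row_ms * 1000:.2f} us")
```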
To help illustrate, timing diagrams of a display panel 144 using different refresh rates to display an image frame are shown in
With regard to the first timing diagram 160, a new image frame is displayed by the display panel 144 approximately once every 16.6 milliseconds when using the 60 Hz refresh rate. In particular, at 0 ms, the refresh pixel group 164 is positioned at the top of the display panel 144 and the display pixels 156 below the refresh pixel group 164 illuminate based on image data corresponding with a previous image frame 162. At approximately 8.3 ms, the refresh pixel group 164 has rolled down to approximately halfway between the top and the bottom of the display panel 144. Thus, the display pixels 156 above the refresh pixel group 164 may illuminate based on image data corresponding to a next image frame 166 while the display pixels 156 below the refresh pixel group 164 illuminate based on image data corresponding with the previous image frame 162. At approximately 16.6 ms, the refresh pixel group 164 has rolled down to the bottom of the display panel 144 and, thus, each of the display pixels 156 above the refresh pixel group 164 may illuminate based on image data corresponding to the next image frame 166.
With regard to the second timing diagram 168, a new frame is displayed by the display panel 144 approximately once every 8.3 milliseconds when using the 120 Hz refresh rate. In particular, at 0 ms, the refresh pixel group 164 is positioned at the top of the display panel 144 and the display pixels 156 below the refresh pixel group 164 illuminate based on image data corresponding with a previous image frame 162. At approximately 4.17 ms, the refresh pixel group 164 has rolled down to approximately halfway between the top and the bottom of the display panel 144. Thus, the display pixels 156 above the refresh pixel group 164 may illuminate based on image data corresponding to a next image frame 166 while the display pixels 156 below the refresh pixel group 164 illuminate based on image data corresponding with the previous image frame 162. At approximately 8.3 ms, the refresh pixel group 164 has rolled down to the bottom of the display panel 144 and, thus, each of the display pixels 156 above the refresh pixel group 164 may illuminate based on image data corresponding to the next image frame 166.
With regard to the third timing diagram 170, a new frame is displayed by the display panel 144 approximately once every 4.17 milliseconds when using the 240 Hz refresh rate by using multiple noncontiguous refresh pixel groups—namely a first refresh pixel group 164A and a second refresh pixel group 164B. In particular, at 0 ms, the first refresh pixel group 164A is positioned at the top of the display panel 144 and the second refresh pixel group 164B is positioned approximately halfway between the top and the bottom of the display panel 144. Thus, the display pixels 156 between the first refresh pixel group 164A and the second refresh pixel group 164B may illuminate based on image data corresponding to a previous image frame 162, and the display pixels 156 below the second refresh pixel group 164B may likewise illuminate based on image data corresponding to the previous image frame 162.
At approximately 2.08 ms, the first refresh pixel group 164A has rolled down to approximately one quarter of the way between the top and the bottom of the display panel 144 and the second refresh pixel group 164B has rolled down to approximately three quarters of the way between the top and the bottom of the display panel 144. Thus, the display pixels 156 above the first refresh pixel group 164A illuminate based on image data corresponding to a next image frame 166, and the display pixels 156 between the position of the second refresh pixel group 164B at 0 ms and the second refresh pixel group 164B illuminate based on image data corresponding to the next image frame 166. At approximately 4.17 ms, the first refresh pixel group 164A has rolled approximately halfway down between the top and the bottom of the display panel 144 and the second refresh pixel group 164B has rolled to the bottom of the display panel 144. Thus, the display pixels 156 above the first refresh pixel group 164A and the display pixels 156 between the first refresh pixel group 164A and the second refresh pixel group 164B may illuminate based on image data corresponding to the next image frame 166.
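The rolling behavior described in the three timing diagrams can be approximated with a small calculation, sketched below in Python. The function assumes the refresh pixel groups start evenly spaced down the panel and roll at a constant rate, matching the 60 Hz, 120 Hz, and 240 Hz examples above; the names and the 1000-row panel are hypothetical.

```python
def refresh_group_positions(elapsed_ms, frame_period_ms, num_rows, num_groups=1):
    """Approximate the top row of each refresh pixel group at a given time.

    With multiple noncontiguous groups (e.g., 164A and 164B), the groups are
    assumed to start evenly spaced down the panel and roll at the same rate.
    """
    positions = []
    for group in range(num_groups):
        start_fraction = group / num_groups
        fraction = (start_fraction + elapsed_ms / (frame_period_ms * num_groups)) % 1.0
        positions.append(int(fraction * num_rows))
    return positions


# At ~2.08 ms into a 4.17 ms (240 Hz) frame with two groups on a 1000-row panel,
# the groups sit near one quarter and three quarters of the way down the panel.
print(refresh_group_positions(2.08, frame_period_ms=4.17, num_rows=1000, num_groups=2))
```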
As described above, refresh pixel groups 164 (including 164A and 164B) may be used to sense information related to display performance of the display panel 144, such as environmental operational parameters and/or display-related operational parameters. That is, the sensing controller 159 may instruct the display panel 144 to illuminate one or more display pixels 156 (e.g., sense pixels) in a refresh pixel group 164 to facilitate sensing the relevant information. In some embodiments, a sensing operation may be performed at any suitable frequency, such as once per image frame, once every 2 image frames, once every 5 image frames, once every 10 image frames, between image frames, and the like. Additionally, in some embodiments, a sensing operation may be performed for any suitable duration of time, such as between 20 μs and 500 μs (e.g., 50 μs, 75 μs, 100 μs, 125 μs, 150 μs, and the like).
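A simple once-every-N-frames sensing schedule of the kind mentioned above might be expressed as in the following Python sketch; the frame interval and the 100 microsecond duration are example values chosen from the ranges noted above, not prescribed ones.

```python
def should_sense(frame_index, frames_per_sense=5):
    """Decide whether a sensing operation runs on this frame.

    Assumes a simple 'once every N frames' schedule; the disclosure also
    allows per-frame or between-frame sensing.
    """
    return frame_index % frames_per_sense == 0


SENSE_DURATION_US = 100  # within the 20-500 microsecond range noted above

for frame in range(12):
    if should_sense(frame):
        print(f"frame {frame}: sense for ~{SENSE_DURATION_US} us")
```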
As discussed above, a sensing operation may be performed by using one or more sensors 158 to determine sensor data indicative of operational parameters. Additionally, the controller 142 may process the sensor data to determine the operational parameters. Based at least in part on the operational parameters, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data written to the display pixels 156, for example, to compensate for expected effects the operational parameters may have on perceived luminance.
Additionally, as described above, sense pixels may be illuminated during a sensing operation. Thus, when perceivable, illuminated sense pixels may result in undesired front of screen (FOS) artifacts. To reduce the likelihood of producing front of screen artifacts, characteristics of the sense pixels may be adjusted based on various factors expected to affect perceivability, such as content of an image frame and/or ambient light conditions.
To help illustrate, one embodiment of a process 174 for adjusting a characteristic—namely, a pattern—of the sense pixels is described in
Accordingly, in some embodiments, the controller 142 may receive display content and/or ambient light conditions (process block 176). For example, the controller 142 may receive content of an image frame from the content analysis block 154. In some embodiments, the display content may include information related to color, variety of patterns, amount of contrast, change of image data corresponding to an image frame compared to image data corresponding to a previous image frame, and/or the like. Additionally, the controller 142 may receive ambient light conditions from one or more sensors 158 (e.g., an ambient light sensor). In some embodiments, the ambient light conditions may include information related to the brightness/darkness of the ambient light.
Based at least in part on the display content and/or ambient light conditions, the controller 142 may determine a sense pattern used to illuminate the sense pixels (process block 178). In this manner, the controller 142 may determine the sense pattern to reduce the likelihood that illuminating the sense pixels causes a perceivable visual artifact. For example, when the content to be displayed includes solid, darker blocks, less variety of colors or patterns, and the like, the controller 142 may determine that a brighter, more solid pattern of sense pixels should not be used. On the other hand, when the content being displayed includes a large variety of different patterns and colors that change frequently from frame to frame, the controller 142 may determine that a brighter, more solid pattern of sense pixels may be used. Similarly, when there is little ambient light, the controller 142 may determine that a brighter, more solid pattern of sense pixels should not be used. On the other hand, when there is greater ambient light, the controller 142 may determine that a brighter, more solid pattern of sense pixels may be used.
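One possible way to express this content- and ambient-light-based selection is sketched below in Python. The normalized content metrics, the threshold values, and the returned pattern descriptions are hypothetical; they merely illustrate the decision logic described above.

```python
def choose_sense_pattern(content_variety, frame_delta, ambient_lux,
                         variety_threshold=0.5, delta_threshold=0.3, lux_threshold=200):
    """Pick a sense-pattern style from display content and ambient light.

    content_variety and frame_delta are assumed to be normalized (0-1) measures
    of pattern/color variety and frame-to-frame change supplied by the content
    analysis block 154; ambient_lux comes from an ambient light sensor.
    """
    busy_content = content_variety > variety_threshold and frame_delta > delta_threshold
    bright_room = ambient_lux > lux_threshold
    if busy_content and bright_room:
        # Illuminated sense pixels are hardest to notice: a brighter, more
        # solid (contiguous) pattern may be used.
        return {"brightness": "high", "layout": "contiguous_rows"}
    # Dark, static content or a dark room: use a dimmer, sparser pattern.
    return {"brightness": "low", "layout": "noncontiguous"}


print(choose_sense_pattern(content_variety=0.8, frame_delta=0.6, ambient_lux=500))
print(choose_sense_pattern(content_variety=0.2, frame_delta=0.1, ambient_lux=20))
```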
To help illustrate, examples of sense patterns that may be used to sense information related to display performance of the display panel 144 are depicted in
For example, with regard to the first sense pattern 180, one or more contiguous sense pixel rows in the refresh pixel group 164 are illuminated. Similarly, one or more contiguous sense pixel rows in the refresh pixel group 164 are illuminated in the third sense pattern 186. However, compared to the first sense pattern 180, the sense pixels 182 in the third sense pattern 186 may be a different color, may be at a different location on the display panel 144, and/or may include fewer rows.
To reduce perceivability, noncontiguous sense pixels 182 may be illuminated, as shown in the second sense pattern 184. Similarly, noncontiguous sense pixels 182 are illuminated in the fourth sense pattern 188. However, compared to the second sense pattern 184, the sense pixels 182 in the fourth sense pattern 188 may be a different color, may be at a different location on the display panel 144, and/or may include fewer rows. In this manner, the characteristics (e.g., density, color, location, configuration, and/or dimension) of sense patterns may be dynamically adjusted based at least in part on content of an image frame and/or ambient light to reduce perceivability of illuminated sense pixels 182. It should be understood that the sensing patterns described are merely intended to be illustrative and not limiting. In other words, in other embodiments, other sense patterns with varying characteristics may be implemented, for example, based on the operational parameter to be sensed.
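As one illustration of a noncontiguous layout such as the second and fourth sense patterns, the following Python sketch spreads sense pixels 182 evenly across a refresh pixel row at a chosen density. The function name, the density parameter, and the column-based layout are assumptions for illustration.

```python
def noncontiguous_sense_columns(panel_width, density):
    """Select evenly spaced sense-pixel columns within a refresh pixel row.

    density is the fraction of columns used as sense pixels; spreading the
    sense pixels out (rather than using a solid block) is one way to reduce
    perceivability of the sensing operation.
    """
    if density <= 0:
        return []
    step = max(1, int(1 / density))
    return list(range(0, panel_width, step))


# A 1% density on a 1000-column panel yields ten widely spaced sense pixels.
print(noncontiguous_sense_columns(panel_width=1000, density=0.01))
```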
One embodiment of a process 190 for sensing operational parameters using sense pixels 182 in a refresh pixel group 164 is described in
Accordingly, in some embodiments, the controller 142 may determine a sense pattern used to illuminate sense pixels 182 during a sensing operation (process block 192). As described above, the controller 142 may determine a sense pattern based at least in part on content of an image frame to be displayed and/or ambient light conditions to reduce the likelihood of the sensing operation causing perceivable visual artifacts. Additionally, in some embodiments, the sense patterns with varying characteristics may be predetermined and stored, for example, in the controller memory 148. Thus, in such embodiments, the controller 142 may determine the sense pattern by selecting and retrieving a stored sense pattern. In other embodiments, the controller 142 may determine the sense pattern by dynamically adjusting a default sensing pattern.
Based at least in part on the sense pattern, the controller 142 may instruct the display driver 140 to determine sense pixels 182 to be illuminated and/or sense data to be written to the sense pixels 182 to perform the sensing operation (process block 194). In some embodiments, the sensing pattern may indicate characteristics of the sense pixels 182 to be illuminated during the sensing operation. As such, the controller 142 may analyze the sensing pattern to determine characteristics such as density, color, location, configuration, and/or dimension of the sense pixels 182 to be illuminated.
Additionally, the controller 142 may determine when each display pixel row of the display panel 144 is to be refreshed (process block 196). As described above, display pixels 156 may be refreshed (e.g., updated) with image data corresponding with an image frame by propagating a refresh pixel group 164. Thus, when a row is to be refreshed, the controller 142 may determine whether the row includes sense pixels 182 (decision block 198).
When the row includes sense pixels 182, the controller 142 may instruct the display driver 140 to write sense data to the sense pixels 182 based at least in part on the sense pattern (process block 200). The controller 142 may then perform a sensing operation (process block 202). In some embodiments, to perform the sensing operation, the controller 142 may instruct the display driver 140 to write sensing image data to the sense pixels 182. Additionally, the controller 142 may instruct the display panel 144 to illuminate the sense pixels 182 based on the sensing image data, thereby enabling one or more sensors 158 to determine (e.g., measure) sensor data resulting from illumination of the sense pixels 182.
In this manner, the controller 142 may receive and analyze sensor data received from one or more sensors 158 indicative of environmental operational parameters and/or display-related operational parameters. As described above, in some embodiments, the environmental operational parameters may include ambient temperature, humidity, brightness, and the like. Additionally, in some embodiments, the display-related operational parameters may include an amount of light emission from at least one display pixel 156 of the display panel 144, an amount of current at the at least one display pixel 156, and the like.
When the row does not include sense pixels 182 and/or after the sensing operation is performed, the controller 142 may instruct the display driver 140 to write image data corresponding to an image frame to be displayed to each of the display pixels 156 in the row (process block 204). In this manner, the display pixels 156 may display the image frame when toggled back into the light emitting mode.
Additionally, the controller 142 may determine whether the row is the last display pixel row on the display panel 144 (decision block 206). When not the last row, the controller 142 may continue propagating the refresh pixel group 164 successively through rows of the display panel 144 (process block 196). In this manner, the display pixels 156 may be refreshed (e.g., updated) to display the image frame.
On the other hand, when the last row is reached, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data corresponding to subsequent image frames written to the display pixels 156 based at least in part on the sensing operation (e.g., determined operational parameters) (process block 208). In some embodiments, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data to compensate for determined changes in the operational parameters. For example, the display pipeline 136 may adjust image data written to a display pixel 156 based on determined temperature, which may affect perceived luminance of the display pixel. In this manner, the sensing operation may be performed to facilitate improving perceived image quality of displayed image frames.
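The row-by-row flow of the process 190 might be summarized, in simplified form, by the following Python sketch. The driver and sensor objects and their methods (write_sense, write_row, measure) are hypothetical stand-ins for the display driver 140 and the sensors 158, and the sketch omits details such as toggling rows between light emitting and non-light emitting modes.

```python
def run_refresh_with_sensing(rows, sense_rows, driver, sensor, next_frame, sense_data):
    """Minimal sketch of process 190: refresh rows one at a time and, when a row
    holds sense pixels, write sense data and perform a sensing operation before
    writing the row's image data for the next frame.
    """
    measurements = []
    for row in rows:                                   # process block 196
        if row in sense_rows:                          # decision block 198
            driver.write_sense(row, sense_data)        # process block 200
            measurements.append(sensor.measure(row))   # process block 202
        driver.write_row(row, next_frame[row])         # process block 204
    # Last row reached (decision block 206): the caller may adjust subsequent
    # frames based on the sensed operational parameters (process block 208).
    return measurements
```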
To help illustrate, timing diagram 210, shown in
With regard to the depicted embodiment, at time t0, pixel row 1 is included in the refresh pixel group 164 and, thus, in a non-light emitting mode. On the other hand, pixel rows 2-5 are illuminated based on image data 216 corresponding to a previous image frame. For the purpose of illustration, the controller 142 may determine a sense pattern that includes sense pixels 182 in pixel row 3. Additionally, the controller 142 may determine that pixel row 3 is to be refreshed at t1.
Thus, when pixel row 3 is to be refreshed at t1, the controller 142 may determine that pixel row 3 includes sense pixels 182. As such, the controller 142 may instruct the display driver 140 to write sensing image data to the sense pixels 182 in pixel row 3 and perform a sensing operation based at least in part on illumination of the sense pixels 182 to facilitate determining operational parameters. After the sensing operation is completed (e.g., at time t2), the controller 142 may instruct the display driver 140 to write image data 216 corresponding with a next image frame to the display pixels 156 in pixel row 3.
Additionally, the controller 142 may determine whether pixel row 3 is the last row in the display panel 144. Since additional pixel rows remain, the controller 142 may instruct the display driver 140 to successively write image data corresponding to the next image frame to the remaining pixel rows. Upon reaching the last pixel row (e.g., pixel row 5), the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data written to the display pixels 156 for displaying subsequent image frames based at least in part on the determined operational parameters. For example, when the determined operational parameters indicate that current output from a sense pixel 182 is less than expected, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to increase current supplied to the display pixels 156 for displaying subsequent image frames. On the other hand, when the determined operational parameters indicate that the current output from the sense pixel is greater than expected, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to decrease current supplied to the display pixels 156 for displaying subsequent image frames.
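A minimal sketch of this up/down adjustment, assuming a simple proportional correction with a hypothetical damping factor, is shown below in Python; it is not the specific compensation applied by the display pipeline 136.

```python
def adjust_drive_current(current_ua, expected_ua, measured_ua, step_fraction=0.1):
    """Nudge the supplied pixel current toward the expected response.

    If the sensed current is lower than expected the drive current is raised,
    and vice versa; step_fraction is a hypothetical damping factor.
    """
    error = expected_ua - measured_ua
    return current_ua + step_fraction * error


print(adjust_drive_current(current_ua=5.0, expected_ua=2.0, measured_ua=1.8))  # raised
print(adjust_drive_current(current_ua=5.0, expected_ua=2.0, measured_ua=2.3))  # lowered
```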
It should be noted that the process 190 of
To help illustrate, a process 220 for sensing (e.g., determining) operational parameters when using multiple noncontiguous refresh pixel groups 164 is described in
Accordingly, in some embodiments, the controller 142 may determine a sense pattern used to illuminate sense pixels 182 during a sensing operation (process block 222), as described in process block 192 of the process 190. Based at least in part on the sense pattern, the controller 142 may instruct the display driver 140 to determine sense pixels 182 to be illuminated and/or sense data to be written to the sense pixels 182 to perform a sensing operation (process block 224), as described in process block 194 of the process 190. Additionally, the controller 142 may determine when each display pixel row of the display panel 144 is to be refreshed (process block 226), as described in process block 196 of the process 190. When a row is to be refreshed, the controller 142 may determine whether the row includes sense pixels 182 (decision block 228), as described in decision block 198 of the process 190.
When the row includes sense pixels 182, the controller 142 may instruct the display driver 140 to stop refreshing each display pixel 156, such that the display pixel 156 is not refreshed until the display pixel 156 is instructed to resume refreshing (process block 230). That is, if a display pixel 156 of the display panel 144 is emitting light, or more specifically displaying image data 216, the controller 142 instructs the display pixel 156 to continue emitting light, and continue displaying the image data 216. If the display pixel 156 is not emitting light (e.g., is part of a refresh pixel group 164), the controller 142 instructs the display pixel 156 to continue not emitting light. In some embodiments, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to instruct the display pixels 156 to stop refreshing until instructed otherwise.
The controller 142 may then instruct the display driver 140 to write sense data to the sense pixels 182 based at least in part on the sense pattern (process block 232), as described in process block 200 of the process 190. The controller 142 may perform the sensing operation (process block 234), as described in process block 202 of the process 190.
The controller 142 may then instruct the display driver 140 to resume refreshing each display pixel 156 (process block 236). The display pixels 156 may then follow the next instruction from the display pipeline 136 and/or the display driver 140.
When the row does not include sense pixels 182 and/or after the sensing operation is performed, the controller 142 may instruct the display driver 140 to write image data corresponding to an image frame to be displayed to each of the display pixels 156 in the row (process block 238), as described in process block 204 of the process 190. Additionally, the controller 142 may determine whether the row is the last display pixel row on the display panel 144 (decision block 240), as described in decision block 206 of the process 190. When not the last row, the controller 142 may continue propagating the refresh pixel group 164 successively through rows of the display panel 144 (process block 226). In this manner, the display pixels 156 may be refreshed (e.g., updated) to display the image frame.
On the other hand, when the last row is reached, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data corresponding to subsequent image frames written to the display pixels 156 based at least in part on the sensing operation (e.g., determined operational parameters) (process block 242), as described in process block 208 of the process 190.
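The sense-row handling that distinguishes the process 220 from the process 190 might be sketched as follows in Python; the surrounding row loop would follow the process 190 sketch above. The pause_all and resume_all methods are hypothetical stand-ins for instructing every display pixel 156 to stop and resume refreshing.

```python
def handle_sense_row_pause_all(row, driver, sensor, sense_data):
    """Sense-row handling for process 220: freeze every display pixel in its
    current state (emitting or not) while the sensing operation runs, then
    resume refreshing.
    """
    driver.pause_all()                      # process block 230
    driver.write_sense(row, sense_data)     # process block 232
    measurement = sensor.measure(row)       # process block 234
    driver.resume_all()                     # process block 236
    return measurement
```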
To help illustrate, timing diagram 250, shown in
With regard to the depicted embodiment, at time t0, pixel row 1 is included in the refresh pixel group 164 and, thus, in a non-light emitting mode. On the other hand, pixel rows 2-9 are illuminated based on image data 216 corresponding to a previous image frame. For the purpose of illustration, the controller 142 may determine a sense pattern that includes sense pixels 182 in pixel row 6. Additionally, the controller 142 may determine that pixel row 6 is to be refreshed at t1.
Thus, when pixel row 6 is to be refreshed at t1, the controller 142 may determine that pixel row 6 includes sense pixels 182. As such, the controller 142 may instruct the display driver 140 to stop refreshing each display pixel 156 of the display panel 144, such that the display pixel 156 is not refreshed until the display pixel 156 is instructed to resume refreshing. That is, if a display pixel 156 of the display panel 144 is emitting light, or more specifically displaying image data 216, the controller 142 instructs the display pixel 156 to continue emitting light, and continue displaying the image data 216. If the display pixel 156 is not emitting light (e.g., is part of a refresh pixel group 164), the controller 142 instructs the display pixel 156 to continue not emitting light.
Additionally, the controller 142 may instruct the display driver 140 to write sensing image data to the sense pixels 182 in pixel row 6 and perform a sensing operation based at least in part on illumination of the sense pixels 182 to facilitate determining operational parameters. After the sensing operation is completed (e.g., at time t2), the controller 142 may instruct the display driver 140 to resume refreshing each display pixel 156. The display pixels 156 may then follow the next instruction from the display pipeline 136 and/or the display driver 140. The controller 142 may then instruct the display driver 140 to write image data 216 corresponding with a next image frame to the display pixels 156 in pixel row 6.
The controller 142 may then determine whether pixel row 6 is the last row in the display panel 144. Since additional pixel rows remain, the controller 142 may instruct the display driver 140 to successively write image data corresponding to the next image frame to the remaining pixel rows. Upon reaching the last pixel row (e.g., pixel row 9), the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data written to the display pixels 156 for displaying subsequent image frames based at least in part on the determined operational parameters. For example, when the determined operational parameters indicate that current output from a sense pixel 182 is less than expected, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to increase current supplied to the display pixels 156 for displaying subsequent image frames. On the other hand, when the determined operational parameters indicate that the current output from the sense pixel is greater than expected, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to decrease current supplied to the display pixels 156 for displaying subsequent image frames.
It should be noted that the process 220 of
To help illustrate,
The process 220 enables the controller 142 to sense environmental operational parameters and/or display-related operational parameters using sense pixels 182 in a refresh pixel group 164 displayed by the display panel 144. Because the sensing time does not have to fit within the duration of a refresh operation for a row that does not include sense pixels 182 (that is, the duration of such a refresh operation is unaltered), the circuitry used to implement the process 220 may be simpler, use fewer components, and be more appropriate for applications where saving space in the display panel 144 is a priority. It should be noted, however, that because the majority of display pixels 156 of the display panel 144 are emitting light (e.g., displaying the image data 216) rather than not emitting light, performing the process 220 may increase average luminance during sensing. In particular, stopping the display pixels 156 of the display panel 144 from refreshing during the sensing time may freeze a majority of display pixels 156 that are emitting light, which may increase perceivability of the sensing. As such, perceivability, via a change in average luminance of the display panel 144, may vary with the number of display pixels 156 emitting light and/or displaying image data 216.
Accordingly, in some embodiments, the controller 142 may determine a sense pattern used to illuminate sense pixels 182 during a sensing operation (process block 262), as described in process block 192 of the process 190. Based at least in part on the sense pattern, the controller 142 may instruct the display driver 140 to determine sense pixels 182 to be illuminated and/or sense data to be written to the sense pixels 182 to perform a sensing operation (process block 264), as described in process block 194 of the process 190. Additionally, the controller 142 may determine when each display pixel row of the display panel 144 is to be refreshed (process block 266), as described in process block 196 of the process 190. When a row is to be refreshed, the controller 142 may determine whether the row includes sense pixels 182 (decision block 268), as described in decision block 198 of the process 190.
When the row includes sense pixels 182, the controller 142 may instruct the display driver 140 to stop refreshing each display pixel 156 in a refresh pixel group 164 positioned below the row that includes the sense pixels 182, such that the display pixel 156 in the refresh pixel group 164 positioned below the row is not refreshed until the display pixel 156 is instructed to resume refreshing (process block 270). That is, if a display pixel 156 of the display panel 144 in the refresh pixel group 164 positioned below the row is emitting light, or more specifically displaying image data 216, the controller 142 instructs the display pixel 156 to continue emitting light, and continue displaying the image data 216. If the display pixel 156 in the refresh pixel group 164 positioned below the row is not emitting light (e.g., is part of the refresh pixel group 164), the controller 142 instructs the display pixel 156 to continue not emitting light. In some embodiments, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to instruct the display pixels 156 to stop refreshing until instructed otherwise.
The controller 142 may then instruct the display driver 140 to write sense data to the sense pixels 182 based at least in part on the sense pattern (process block 272), as described in process block 200 of the process 190. The controller 142 may perform the sensing operation (process block 274), as described in process block 202 of the process 190.
The controller 142 may then instruct the display driver 140 to resume refreshing each display pixel 156 in the refresh pixel group 164 positioned below the row that includes the sense pixels 182 (process block 276). The display pixels 156 in the refresh pixel group 164 positioned below the row may then follow the next instruction from the display pipeline 136 and/or the display driver 140.
When the row does not include sense pixels 182 and/or after the sensing operation is performed, the controller 142 may instruct the display driver 140 to write image data corresponding to an image frame to be displayed to each of the display pixels 156 in the row (process block 278), as described in process block 204 of the process 190. Additionally, the controller 142 may determine whether the row is the last display pixel row on the display panel 144 (decision block 280), as described in decision block 206 of the process 190. When not the last row, the controller 142 may continue propagating the refresh pixel group 164 successively through rows of the display panel 144 (process block 266). In this manner, the display pixels 156 may be refreshed (e.g., updated) to display the image frame.
On the other hand, when the last row is reached, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data corresponding to subsequent image frames written to the display pixels 156 based at least in part on the sensing operation (e.g., determined operational parameters) (process block 282), as described in process block 208 of the process 190.
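Similarly, the sense-row handling that distinguishes the process 260 might be sketched as follows in Python, again with the surrounding row loop following the process 190 sketch. The pause_below and resume_below methods are hypothetical stand-ins for freezing only the display pixels 156 in the refresh pixel group 164 positioned below the sense-pixel row.

```python
def handle_sense_row_pause_below(row, driver, sensor, sense_data):
    """Sense-row handling for process 260: freeze only the display pixels in the
    refresh pixel group positioned below the sense-pixel row, so rows above keep
    operating normally and average luminance is maintained.
    """
    driver.pause_below(row)                 # process block 270
    driver.write_sense(row, sense_data)     # process block 272
    measurement = sensor.measure(row)       # process block 274
    driver.resume_below(row)                # process block 276
    return measurement
```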
To help illustrate, timing diagram 290, shown in
With regard to the depicted embodiment, at time t0, pixel row 1 is included in the refresh pixel group 164 and, thus, in a non-light emitting mode. On the other hand, pixel rows 2-10 are illuminated based on image data 216 corresponding to a previous image frame. For the purpose of illustration, the controller 142 may determine a sense pattern that includes sense pixels 182 in pixel row 5. Additionally, the controller 142 may determine that pixel row 5 is to be refreshed at t1.
Thus, when pixel row 5 is to be refreshed at t1, the controller 142 may determine that pixel row 5 includes sense pixels 182. As such, the controller 142 may instruct the display driver 140 to stop refreshing each display pixel 156 in the refresh pixel group 164 positioned below pixel row 5, such that the display pixel 156 in the refresh pixel group 164 positioned below pixel row 5 is not refreshed until the display pixel 156 is instructed to resume refreshing. That is, if a display pixel 156 in the refresh pixel group 164 positioned below pixel row 5 is emitting light, or more specifically displaying image data 216, the controller 142 instructs the display pixel 156 to continue emitting light, and continue displaying the image data 216. If the display pixel 156 in the refresh pixel group 164 positioned below pixel row 5 is not emitting light (e.g., is part of the refresh pixel group 164), the controller 142 instructs the display pixel 156 to continue not emitting light.
Additionally, the controller 142 may instruct the display driver 140 to write sensing image data to the sense pixels 182 in pixel row 5 and perform a sensing operation based at least in part on illumination of the sense pixels 182 to facilitate determining operational parameters. After the sensing operation is completed (e.g., at time t2), the controller 142 may instruct the display driver 140 to resume refreshing each display pixel 156 in the refresh pixel group 164 positioned below pixel row 5. The display pixels 156 in the refresh pixel group 164 positioned below pixel row 5 may then follow the next instruction from the display pipeline 136 and/or the display driver 140. The controller 142 may then instruct the display driver 140 to write image data 216 corresponding with a next image frame to the display pixels 156 in pixel row 5.
The controller 142 may then determine whether pixel row 5 is the last row in the display panel 144. Since additional pixel rows remain, the controller 142 may instruct the display driver 140 to successively write image data corresponding to the next image frame to the remaining pixel rows. Upon reaching the last pixel row (e.g., pixel row 10), the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to adjust image data written to the display pixels 156 for displaying subsequent image frames based at least in part on the determined operational parameters. For example, when the determined operational parameters indicate that current output from a sense pixel 182 is less than expected, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to increase current supplied to the display pixels 156 for displaying subsequent image frames. On the other hand, when the determined operational parameters indicate that the current output from the sense pixel is greater than expected, the controller 142 may instruct the display pipeline 136 and/or the display driver 140 to decrease current supplied to the display pixels 156 for displaying subsequent image frames.
It should be noted that the process 260 of
To help illustrate,
The graph 300 of
The process 260 enables the controller 142 to sense environmental operational parameters and/or display-related operational parameters using sense pixels 182 in a refresh pixel group 164 displayed by the display panel 144. Because the sensing time is not required to fit within the duration of a refresh operation for a row that does not include sense pixels 182 (such that the duration of that refresh operation is unaltered), the circuitry used to implement the process 260 may be simpler, use fewer components, and be more appropriate for embodiments where saving space is a priority. Additionally, because only the display pixels 156 in a refresh pixel group 164 positioned below the respective display pixel row that includes the one or more sense pixels 182 are paused, while the display pixels 156 positioned above the respective display pixel row that includes the one or more sense pixels 182 continue to operate normally, not all display pixels 156 of the display panel 144 are “paused,” and as such, performing the process 260 may maintain average luminance during sensing.
However, during sensing, the instantaneous luminance of the display panel 144 may vary due to the display pixels 156 in a refresh pixel group 164 positioned below the respective display pixel row that includes the one or more sense pixels 182 not refreshing. As such, perceivability, via a change in instantaneous luminance of the display panel 144, may vary with the number of display pixels 156 in the refresh pixel group 164 positioned below the pixel row that includes the one or more sense pixels 182 that are emitting light and/or displaying image data 216.
Accordingly, the technical effects of the present disclosure include sensing environmental and/or operational information within a refresh pixel group of a frame displayed by an electronic display. In this manner, perceivability of the sensing may be reduced. In some embodiments, during sensing, each pixel of the display panel is instructed to stop refreshing. As such, a total time that a first display pixel row includes a continuous block of refresh pixels, wherein the first display pixel row is not instructed to stop refreshing at a time when the first display pixel row includes a refresh pixel, is less than a total time that a second display pixel row includes a continuous block of the refresh pixels and the sense pixels. In other embodiments, during sensing, each pixel of the display panel in a refresh pixel group positioned below a respective display pixel row that includes the sense pixels is instructed to stop refreshing. As such, a total time that a first display pixel row includes a continuous block of refresh pixels is the same as a total time used for a second display pixel row to illuminate a continuous block of refresh pixels and sense pixels.
Display panel sensing allows for operational properties of pixels of an electronic display to be identified to improve the performance of the electronic display. For example, variations in temperature and pixel aging (among other things) across the electronic display cause pixels in different locations on the display to behave differently. Indeed, the same image data programmed on different pixels of the display could appear to be different due to the variations in temperature and pixel aging. Without appropriate compensation, these variations could produce undesirable visual artifacts. However, compensation of these variations may hinge on proper sensing of differences in the images displayed on the pixels of the display. Accordingly, the techniques and systems described below may be utilized to enhance the compensation of operational variations across the display through improvements to the generation of reference images to be sensed to determine the operational variations.
As shown in
As previously discussed, since it may be desirable to compensate the image data 352, for example, based on manufacturing and/or operational variations of the electronic display 18, the processor core complex 12 may provide sense control signals 354 to cause the electronic display 18 to perform display panel sensing to generate display sense feedback 356. The display sense feedback 356 represents digital information relating to the operational variations of the electronic display 18. The display sense feedback 356 may take any suitable form, and may be converted by the image data generation and processing circuitry 350 into a compensation value that, when applied to the image data 352, appropriately compensates the image data 352 for the conditions of the electronic display 18. This results in greater fidelity of the image data 352, reducing or eliminating visual artifacts that would otherwise occur due to the operational variations of the electronic display 18.
The electronic display 18 includes an active area 364 with an array of pixels 366. The pixels 366 are schematically shown distributed substantially equally apart and of the same size, but in an actual implementation, pixels of different colors may have different spatial relationships to one another and may have different sizes. In one example, the pixels 366 may take a red-green-blue (RGB) format with red, green, and blue pixels, and in another example, the pixels 366 may take a red-green-blue-green (RGBG) format in a diamond pattern. The pixels 366 are controlled by a driver integrated circuit 368, which may be a single module or may be made up of separate modules, such as a column driver integrated circuit 368A and a row driver integrated circuit 368B. The driver integrated circuit 368 (e.g., 368B) may send signals across gate lines 370 to cause a row of pixels 366 to become activated and programmable, at which point the driver integrated circuit 368 (e.g., 368A) may transmit image data signals across data lines 372 to program the pixels 366 to display a particular gray level (e.g., individual pixel brightness). By supplying different pixels 366 of different colors with image data to display different gray levels, full-color images may be programmed into the pixels 366. The image data may be driven to an active row of pixels 366 via source drivers 374, which are also sometimes referred to as column drivers.
As described above, display 18 may display image frames through control of the luminance of its pixels 366 based at least in part on received image data. When a pixel 366 is activated (e.g., via a gate activation signal across a gate line 370 activating a row of pixels 366), luminance of a display pixel 366 may be adjusted by image data received via a data line 372 coupled to the pixel 366. Thus, as depicted, each pixel 366 may be located at an intersection of a gate line 370 (e.g., a scan line) and a data line 372 (e.g., a source line). Based on received image data, the display pixel 366 may adjust its luminance using electrical power supplied from a power source 28, for example, via power supply lines coupled to the pixel 366.
As illustrated in
Additionally, in the depicted embodiment, the gate of the driver TFT 382 is electrically coupled to the storage capacitor 378. As such, voltage of the storage capacitor 378 may control operation of the driver TFT 382. More specifically, in some embodiments, the driver TFT 382 may be operated in an active region to control the magnitude of supply current flowing through the LED 380 (e.g., from a power supply or the like providing Vdd). In other words, as gate voltage (e.g., storage capacitor 378 voltage) increases above its threshold voltage, the driver TFT 382 may increase the amount of its channel available to conduct electrical power, thereby increasing supply current flowing to the LED 380. On the other hand, as the gate voltage decreases while still being above its threshold voltage, the driver TFT 382 may decrease the amount of its channel available to conduct electrical power, thereby decreasing supply current flowing to the LED 380. In this manner, the luminance of the pixel 366 may be controlled and, when similar techniques are applied across the display 18 (e.g., to the pixels 366 of the display 18), an image may be displayed.
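As a point of reference only, a standard first-order saturation-region model (not recited in this disclosure, and not asserted to describe the driver TFT 382 exactly) relates the LED supply current to the gate-to-source voltage; the mobility, oxide capacitance, and geometry below are generic device parameters:

$$ I_{\mathrm{LED}} \;\approx\; \tfrac{1}{2}\,\mu_n C_{ox}\,\frac{W}{L}\,\left(V_{GS} - V_{TH}\right)^{2}, \qquad V_{GS} > V_{TH} $$

Under such a model, increasing the storage-capacitor (gate) voltage above the threshold voltage increases the conducted current, consistent with the behavior described above.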
As mentioned above, the pixels 366 may be arranged in any suitable layout with the pixels 366 having various colors and/or shapes. For example, the pixels 366 may appear in alternating red, green, and blue in some embodiments, but also may take other arrangements. The other arrangements may include, for example, a red-green-blue-white (RGBW) layout or a diamond pattern layout in which one column of pixels alternates between red and blue and an adjacent column of pixels is green. Regardless of the particular arrangement and layout of the pixels 366, each pixel 366 may be sensitive to changes on the active area 364 of the electronic display 18, such as variations in temperature of the active area 364, as well as the overall age of the pixel 366. Indeed, when each pixel 366 is a light emitting diode (LED), it may gradually emit less light over time. This effect is referred to as aging, and takes place over a slower time period than the effect of temperature on the pixel 366 of the electronic display 18.
Returning to
For example, to perform display panel sensing, the electronic display 18 may program one of the pixels 366 with test data (e.g., having a particular reference voltage or reference current). The sensing analog front end 384 then senses (e.g., measures, receives, etc.) at least one value (e.g., voltage, current, etc.) along a sense line 388 connected to the pixel 366 that is being tested. Here, the data lines 372 are shown to act as extensions of the sense lines 388 of the electronic display 18. In other embodiments, however, the display active area 364 may include other dedicated sense lines 388 or other lines of the display 18 may be used as sense lines 388 instead of the data lines 372. In some embodiments, other pixels 366 that have not been programmed with test data may also be sensed at the same time as a pixel 366 that has been programmed with test data is sensed. Indeed, by sensing a reference signal on a sense line 388 when a pixel 366 on that sense line 388 has not been programmed with test data, a common-mode noise reference value may be obtained. This reference signal can be removed from the signal from the test pixel 366 that has been programmed with test data to reduce or eliminate common mode noise.
The analog signal may be digitized by the sensing analog-to-digital conversion circuitry 386. The sensing analog front end 384 and the sensing analog-to-digital conversion circuitry 386 may operate, in effect, as a single unit. The driver integrated circuit 368 (e.g., 368A) may also perform additional digital operations, such as digital filtering, adding, or subtracting, to generate the display feedback 356, or such processing may be performed by the processor core complex 12.
In some embodiments, a correction map (e.g., stored as a look-up table or the like) may include correction values that correspond to or represent offsets or other values applied to generate compensated image data 352 being transmitted to the pixels 366 to correct, for example, for temperature differences at the display 18 or other characteristics affecting the uniformity of the display 18. This correction map may be part of the image data generation and processing circuitry 350 (e.g., stored in memory therein) or it may be stored in, for example, memory 14 or storage 16. Through the use of the correction map (i.e., the correction information stored therein), effects of the variation and non-uniformity in the display 18 may be corrected using the image data generation and processing circuitry 350 of the processor core complex 12. The correction map, in some embodiments, may correspond to the entire active area 364 of the display 18 or a sub-segment of the active area 364. For example, to reduce the size of the memory required to store the correction map (or the data therein), the correction map may include correction values that correspond only to predetermined groups or regions of the active area 364, whereby one or more correction values may be applied to a group of pixels 366. Additionally, in some embodiments, the correction map may be a reduced-resolution correction map that enables low-power and fast-response operations such that, for example, the image data generation and processing circuitry 350 may reduce the resolution of the correction values prior to their storage in memory so that less memory may be required, responses may be accelerated, and the like. Additionally, adjustment of the resolution of the correction map may be dynamic and/or the resolution of the correction map may be locally adjusted (e.g., adjusted at particular locations corresponding to one or more regions or groups of pixels 366).
The correction map (or a portion thereof, for example, data corresponding to a particular region or group of pixels 366), may be read from the memory of the image data generation and processing circuitry 350. The correction map (e.g., one or more correction values) may then (optionally) be scaled, whereby the scaling corresponds to (e.g., offsets or is the inverse of) a resolution reduction that was applied to the correction map. In some embodiments, whether this scaling is performed (and the level of scaling) may be based on one or more input signals received as display settings and/or system information by the image data generation and processing circuitry 350.
Conversion of the correction map may be undertaken via interpolation (e.g., Gaussian, linear, cubic, or the like), extrapolation (e.g., linear, polynomial, or the like), or other conversion techniques being applied to the data of the correction map. This may allow for accounting of, for example, boundary conditions of the correction map and may yield compensation driving data that may be applied to raw display content (e.g., image data) so as to generate compensated image data 352 that is transmitted to the pixels 366.
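By way of a hypothetical illustration only (the function names and the bilinear method below are assumptions, not the disclosed implementation), the following Python sketch shows how a reduced-resolution correction map might be interpolated up to panel resolution and applied to raw image data to produce compensated image data:

```python
import numpy as np

def upscale_correction_map(cmap, out_shape):
    """Bilinearly interpolate a reduced-resolution correction map up to the
    panel resolution. `cmap` is a 2-D array of per-region correction offsets;
    `out_shape` is (rows, cols) of the region being compensated."""
    in_rows, in_cols = cmap.shape
    out_rows, out_cols = out_shape
    # Map each output coordinate back into the reduced-resolution grid.
    ys = np.linspace(0, in_rows - 1, out_rows)
    xs = np.linspace(0, in_cols - 1, out_cols)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_rows - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_cols - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = (1 - wx) * cmap[np.ix_(y0, x0)] + wx * cmap[np.ix_(y0, x1)]
    bot = (1 - wx) * cmap[np.ix_(y1, x0)] + wx * cmap[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot

def apply_correction(raw_image, cmap, scale=1.0):
    """Apply (optionally scaled) correction offsets to raw image data to
    produce compensated image data."""
    full_map = upscale_correction_map(cmap, raw_image.shape)
    return np.clip(raw_image + scale * full_map, 0, 255)

# Example: a 4x4 correction map expanded to a 100x100 panel region.
panel = np.full((100, 100), 128.0)           # raw gray levels
cmap = np.random.uniform(-3, 3, (4, 4))      # hypothetical per-region offsets
compensated = apply_correction(panel, cmap)
```

The optional `scale` parameter stands in for the scaling step described above, which would offset a resolution reduction previously applied to the stored correction values.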
In some embodiments, the correction map may be updated, for example, based on input values generated from the display sense feedback 356 by the image data generation and processing circuitry 350. This updating of the correction map may be performed globally (e.g., affecting the entirety of the correction map) and/or locally (e.g., affecting less than the entirety of the correction map). The update may be based on real-time measurements of the active area 364 of the electronic display 18, transmitted as display sense feedback 356. Additionally and/or alternatively, a variable update rate of correction can be chosen, e.g., by the image data generation and processing circuitry 350, based on conditions affecting the display 18 (e.g., display 18 usage, power level of the device, environmental conditions, or the like).
As illustrated at time 400, the first frame 392 is completed and a second frame 402 (which may be referred to as frame n and may, for example, correspond to a frame refresh) begins. However, in other embodiments, frame 402 may begin at time 408 (discussed below) and, accordingly, the time between frames 392 and 402 may be considered a sensing frame (e.g., separate from frame 402 instead of part of frame 402). At time 400, a display panel sensing operation may begin whereby, for example, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may provide sense control signals 354 to cause the electronic display 18 to perform display panel sensing to generate display sense feedback 356. These sense control signals 354 may be used to program one of the pixels 366 with test data (e.g., having a particular reference voltage or reference current). For the purposes of discussion, test currents will be sensed as part of the display panel sensing operation; however, it is understood that the display panel sensing operation may instead operate to sense voltage levels from one or more components of the pixels 366, current levels from one or more components of the pixels 366, brightness of the LED 380, or any combination thereof based on test data supplied to the pixels 366.
As illustrated, when the test data is applied to a pixel 366, hysteresis (e.g., a lag between a present input and a past input affecting operation) of, for example, the driver TFT 382 of the pixel 366 or one or more transient conditions affecting the pixel 366 or one or more components therein can cause a transient state wherein the current to be sensed has not reached a steady state (e.g., such that measurements of the currents at this time would be unreliable). For example, at time 400 as the pixel is programmed with test data, when the pixel 366 previously had a driver TFT current 394 corresponding to a relatively high gray level, this current 394 swings below the threshold current value 396 corresponding to the test data gray level value. The driver TFT current 394 may continue to move towards a steady state. In some embodiments, the amount of time that the current 394 of the driver TFT 382 has to settle (e.g., the relaxation time) is illustrated as time period 404, which represents the time between time 400 and time 406 corresponding to a sensing of the current (e.g., the driver TFT 382 current). Time period 404 may be, for example, less than approximately 10 microseconds (μs), 20 μs, 30 μs, 40 μs, 50 μs, 75 μs, 100 μs, 200 μs, 300 μs, 400 μs, 500 μs, or a similar value. At time 408, the pixel 366 may be programmed again with a data value, returning the current 394 to its original level (assuming the data signal has not changed between frame 392 and frame 402).
Likewise, at time 400 as the pixel is programmed with test data, when the pixel 366 previously had a driver TFT current 398 corresponding to a relatively low gray level, this current 398 swings above the threshold current value 396 corresponding to the test data gray level value. The driver TFT current 398 may continue to move towards a steady state. In some embodiments, the amount of time that the current 398 of the driver TFT 382 has to settle (e.g., the relaxation time) is illustrated as time period 404. At time 408, the pixel 366 may be programmed again with a data value, returning the current 398 to its original level (assuming the data signal has not changed between frame 392 and frame 402).
As illustrated, the technique for updating the correction map illustrated in graph 390 in conjunction with a display panel sensing operation includes a double sided error (e.g., current 394 swinging below the threshold current value 396 corresponding to the test data gray level value and current 398 swinging above the threshold current value 396 corresponding to the test data gray level value) during time period 404. However, techniques may be applied to reduce the double sided error present in
For example,
As illustrated at time 400, the first frame 392 is completed and a second frame 402 (which, for example, may correspond to a frame refresh) begins. At time 400, a display panel sensing operation may begin whereby, for example, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may provide sense control signals 354 to cause the electronic display 18 to perform display panel sensing to generate display sense feedback 356. These sense control signals 354 may be used to program one of the pixels 366 with test data (e.g., having a particular reference voltage or reference current). For the purposes of discussion, test currents will be sensed as part of the display panel sensing operation; however, it is understood that the display panel sensing operation may instead operate to sense voltage levels from one or more components of the pixels 366, current levels from one or more components of the pixels 366, brightness of the LED 380, or any combination thereof based on test data supplied to the pixels 366.
As illustrated, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may dynamically provide sense control signals 354 to cause the electronic display 18 to perform display panel sensing to generate display sense feedback 356. For example, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may determine whether, in frame 392, the current 394 corresponds to a gray level or desired gray level for a pixel 366 above (or at or above) a reference gray level value that corresponds to threshold current value 396. Alternatively, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may determine whether, in frame 392, the gray level or desired gray level for a pixel 366 is above (or at or above) a reference gray level value that corresponds to threshold current value 396. If the current 394 in frame 392 corresponds to a gray level or desired gray level for a pixel 366 above (or at or above) a reference gray level value corresponding to threshold current value 396, or if the gray level or desired gray level for a pixel 366 in frame 392 is above (or at or above) a reference gray level value corresponding to threshold current value 396, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may produce and provide sense control signals 354 (e.g., test data) corresponding to the gray level or desired gray level of the pixel in frame 392 such that the current level to be sensed at time 406 is equivalent to the current level of the driver TFT 382 during frame 392. This allows for a time period 412 that the current 394 of the driver TFT 382 has to settle (e.g., the relaxation time), which represents the time between the start of frame 392 and time 406 corresponding to a sensing of the current (e.g., the driver TFT 382 current). Time period 412 may be, for example, less than approximately 20 milliseconds (ms), 15 ms, 10 ms, 9 ms, 8 ms, 7 ms, 6 ms, 5 ms, or a similar value.
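A minimal Python sketch of this dynamic test-data selection for a single pixel might look as follows; the function and parameter names are assumptions used only for illustration, and the logic simply reuses the previous frame's gray level as test data when that level is at or above the reference gray level that corresponds to the threshold current:

```python
def select_test_gray_level(previous_gray, reference_gray):
    """If the gray level driven in the previous frame is at or above the
    reference gray level, reuse it as test data so the sensed current is
    already near its steady state; otherwise fall back to the reference
    gray level and allow the current to settle."""
    if previous_gray >= reference_gray:
        return previous_gray   # sensed current equals the previous-frame current
    return reference_gray      # swing up to the reference level, then wait to settle

# Example usage (hypothetical gray levels)
print(select_test_gray_level(previous_gray=200, reference_gray=64))  # -> 200
print(select_test_gray_level(previous_gray=10,  reference_gray=64))  # -> 64
```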
As additionally illustrated in
In some embodiments, sensing errors (e.g., errors due to the sensed current not being able to reach or nearly reach a steady state) may be further reduced, for example, through selection of test data having a gray level corresponding to a threshold current value differing from threshold current value 396.
Current value 416 may be, for example, initially set at a predetermined level based upon, for example, an initial configuration of the device 10 (e.g., at the factory and/or during initial device 10 or display 18 testing) or may be dynamically determined and set (e.g., at predetermined intervals or in response to a condition, such as startup of the device). The current value 416 may be selected to correspond to the lowest gray level or desired gray level for a pixel 366 having a predetermined or desired reliability, a predetermined or desired signal-to-noise ratio (SNR), or the like. Alternatively, the current value 416 may be selected to correspond to a gray level within 2%, 5%, 10%, or another value of the lowest gray level or desired gray level for a pixel 366 having a predetermined or desired reliability, a predetermined or desired SNR, or the like. For example, selection of a current value 416 corresponding to a gray level 0 may introduce too much noise into any sensed current value. However, each device 10 may have a gray level (e.g., gray level 10, 15, 20, 30, or another level) at which a predetermined or desired reliability, a predetermined or desired SNR, or the like may be achieved, and this gray value (or a gray value within a percentage value above the minimum gray level if, for example, a buffer regarding the reliability, SNR, or the like is desirable) may be selected for test data, which corresponds to threshold current value 416. In some embodiments, the test data, which corresponds to threshold current value 416, can also be altered based on results from the sensing operation (e.g., altered in a manner similar to the alteration of the compensated image data 352).
Thus, as illustrated in
As illustrated at time 400, the first frame 392 is completed and a second frame 402 (which, for example, may correspond to a frame refresh) begins. At time 400, a display panel sensing operation may begin whereby, for example, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may provide sense control signals 354 to cause the electronic display 18 to perform display panel sensing to generate display sense feedback 356. These sense control signals 354 may be used to program one of the pixels 366 with test data (e.g., having a particular reference voltage or reference current). For the purposes of discussion, test currents will be sensed as part of the display panel sensing operation; however, it is understood that the display panel sensing operation may instead operate to sense voltage levels from one or more components of the pixels 366, current levels from one or more components of the pixels 366, brightness of the LED 380, or any combination thereof based on test data supplied to the pixels 366.
As illustrated, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may dynamically provide sense control signals 354 to cause the electronic display 18 to perform display panel sensing to generate display sense feedback 356. For example, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may determine whether, in frame 392, the current 394 corresponds to a gray level or desired gray level for a pixel 366 above (or at or above) a reference gray level value that corresponds to threshold current value 416. Alternatively, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may determine whether, in frame 392, the gray level or desired gray level for a pixel 366 is above (or at or above) a reference gray level value that corresponds to threshold current value 416. If the current 394 in frame 392 corresponds to a gray level or desired gray level for a pixel 366 above (or at or above) a reference gray level value corresponding to threshold current value 416, or if the gray level or desired gray level for a pixel 366 in frame 392 is above (or at or above) a reference gray level value corresponding to threshold current value 416, the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may produce and provide sense control signals 354 (e.g., test data) corresponding to the gray level or desired gray level of the pixel in frame 392 such that the current level to be sensed at time 406 is equivalent to the current level of the driver TFT 382 during frame 392. This allows for a time period 418 (e.g., less than time period 412) that the current 394 of the driver TFT 382 has to settle (e.g., the relaxation time), which represents the time between the start of frame 392 and time 406 corresponding to a sensing of the current (e.g., the driver TFT 382 current). Time period 418 may be, for example, less than approximately 20 ms, 15 ms, 10 ms, 9 ms, 8 ms, 7 ms, 6 ms, 5 ms, or a similar value.
As additionally illustrated in
Additionally and/or alternatively, sensing errors from hysteresis effects may appear as high frequency artifacts. Accordingly, suppression of a high frequency component of a sensing error may be obtained by having the sensing data run through a low pass filter, which may decrease the amount of visible artifacts. The low pass filter may be a two-dimensional spatial filter, such as a Gaussian filter, a triangle filter, a box filter, or any other two-dimensional spatial filter. The filtered data may then be used by the image data generation and processing circuitry 350 to determine correction factors and/or a correction map. Likewise, by grouping pixels 366 and filtering sensed data of the grouped pixels 366, sensing errors may further be reduced.
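As one hedged illustration of this filtering step (the kernel size and sigma below are arbitrary assumptions, and the function names are invented), a two-dimensional Gaussian low-pass filter applied to a block of sensed values could be sketched in Python as:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def low_pass_filter(sensed, size=5, sigma=1.0):
    """Suppress high-frequency sensing error by convolving the sensed data
    with a 2-D Gaussian kernel (edges handled with reflection padding)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(sensed, pad, mode="reflect")
    out = np.empty_like(sensed, dtype=float)
    rows, cols = sensed.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.sum(padded[r:r + size, c:c + size] * k)
    return out

# Example: filter a noisy 16x16 block of sensed currents (arbitrary units).
noisy = np.random.normal(loc=1.0, scale=0.05, size=(16, 16))
smoothed = low_pass_filter(noisy, size=5, sigma=1.2)
```

A triangle or box kernel could be substituted for the Gaussian kernel without changing the overall structure of the filtering step.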
In some embodiments, instead of performing a display panel sensing operation (e.g., performing display panel sensing) on each pixel 366 of the display 18, the display panel sensing can be performed on subsets of the group 428 of pixels 366 (e.g., a pixel 366 in an upper row and a lower row of a common column of the group 428). It should be noted that the group 428 size and/or dimensions and/or the subsets of the group 428 chosen can be dynamically and/or statically selected, and the present example is provided for reference and is not intended to be exclusive of other group 428 sizes and/or dimensions and/or alterations to the subsets of the group 428 (e.g., the number of pixels 366 in the subset of the group 428).
In one embodiment, a current passing through the driver TFT 382 of a pixel 366 at location x,y in a given subset of the group 428 of pixels 366 in frame 392 may correspond to a brightness level (e.g., a gray level) represented by Gx,y. Likewise, a current passing through the driver TFT 382 of a pixel 366 at location x,y−1 in the subset of the group 428 of pixels 366 (e.g., a location in the same column but a row below the pixel 366 of the subset of the group 428 corresponding to the brightness level represented by Gx,y) in frame 392 may correspond to a brightness level (e.g., a gray level) represented by Gx,y-1. Instead of the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) dynamically providing sense control signals 354 to cause the electronic display 18 to perform display panel sensing to generate display sense feedback 356 for each pixel 366 based on a gray level threshold comparison (as detailed above in conjunction with
An embodiment of a threshold comparison is described below. If the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) determines that Gx,y<Gthreshold and Gx,y-1<Gthreshold, whereby Gthreshold is equal to a reference gray level value that corresponds to threshold current value 416 (or the threshold current value 396), then Gtest(x,y)=Gthreshold and Gtest(x,y-1)=Gthreshold, whereby Gtest(x,y) is the test data gray level value (e.g., a reference gray level value that corresponds to threshold current value 416 or the threshold current value 396, depending on the operation of the processor core complex 12 or a portion thereof, such as image data generation and processing circuitry 350) at time 400. Thus, if each of the gray levels of the pixels 366 of a subset of the group of pixels 366 corresponds to a current level (e.g., current 398) below the threshold current value (e.g., threshold current value 416 or the threshold current value 396), the test data gray level that corresponds to threshold current value 416 or the threshold current value 396 is used in the sensing operation. These determinations are illustrated, for example, in regions 434 and 438 of
Likewise, if the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) determines that either Gx,y≧Gthreshold and/or Gx,y-1≧Gthreshold, then the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may choose one of Gx,y or Gx,y-1 to be applied as Gtest(x,y) at time 400, such that Gtest(x,y)=Gx,y and Gtest(x,y-1)=Gx,y or Gtest(x,y)=Gx,y-1 and Gtest(x,y-1)=Gx,y-1. Alternatively, if the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) determines that either Gx,y≧Gthreshold and/or Gx,y-1≧Gthreshold, then the processor core complex 12 (or a portion thereof, such as image data generation and processing circuitry 350) may choose one of Gx,y or Gx,y-1 to be applied at time 400 to one of the pixels 366 of the subset of the group 428 of pixels 366 and choose a lowest gray level value G0 to be applied to the other one of the pixels 366 of the subset of the group 428 of pixels 366, such that Gtest(x,y)=Gx,y and Gtest(x,y-1)=G0 or Gtest(x,y)=G0 and Gtest(x,y-1)=Gx,y-1. For example, it may be advantageous to apply separate test data values (one of which is the lowest available gray level or another gray level below Gthreshold) so that when the sensed values of the subset of the group 428 of pixels 366 are taken together and applied as correction values, the correction values can be averaged to a desired correction level when taken across the subset of the group 428 of pixels 366 (e.g., to generate a correction map average for the subset of the group 428 of pixels 366) to be applied as corrected feedback 356, which allows for increased accuracy in the correction values calculated, stored (e.g., in a correction map), and/or applied as compensated image data 352.
In some embodiments, a weighting operation may be performed and applied by the processor core complex 12 or a portion thereof, such as image data generation and processing circuitry 350, to select which of Gx,y and Gx,y-1 is supplied with test data G0. For example, test data gray level selection may be based on the weighting of each gray level of the pixels 366 of the subset of the group 428 of pixels 366 in frame 392, by weighting determined based on characteristics of the individual pixels 366 of the subset of the group 428 of pixels 366 (e.g., I-V characteristics, current degradation level of the pixels 366 of the subset, etc.), by weighting determined by the SNR of the respective sensing lines 388, and/or a combination of one or more of these determinations. For example, if the processor core complex 12 or a portion thereof, such as image data generation and processing circuitry 350, determines that, for example, Wx,y≧Wx,y-1, whereby Wx,y is the weight value of the pixel 366 at location x,y and Wx,y-1 is the weight value of the pixel 366 at location x,y−1 (e.g., a weighting factor determined and given to each pixel 366), then Gtest(x,y)=Gx,y and Gtest(x,y-1)=G0. These determinations are illustrated, for example, in regions 432 and 436 of
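The pairwise selection described above could be sketched roughly as follows in Python; the function and parameter names are assumptions, and the sketch simply combines the threshold comparison with the weighting-based choice of which pixel of the pair receives the lowest gray level G0:

```python
def select_group_test_data(g_upper, g_lower, g_threshold, g0,
                           w_upper=1.0, w_lower=1.0):
    """Select test gray levels for the two pixels of a group subset
    (locations (x, y) and (x, y-1)). Returns (g_test_upper, g_test_lower)."""
    # Both pixels below the threshold: sense both at the threshold gray level.
    if g_upper < g_threshold and g_lower < g_threshold:
        return g_threshold, g_threshold
    # Otherwise keep the frame gray level of the more heavily weighted pixel
    # and drive the other pixel with the lowest gray level G0, so that the
    # pair's sensed corrections can later be averaged across the subset.
    if w_upper >= w_lower:
        return g_upper, g0
    return g0, g_lower

# Example usage (hypothetical gray levels and weights)
print(select_group_test_data(g_upper=12,  g_lower=8,  g_threshold=32, g0=0))  # (32, 32)
print(select_group_test_data(g_upper=180, g_lower=40, g_threshold=32, g0=0))  # (180, 0)
```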
It may be appreciated that alternate weighting processes or selection of test data processes may additionally and/or alternatively be chosen. Additionally, in at least one embodiment, sensing circuitry (e.g., one or more sensors) may be present in, for example, AFE 384 to perform analog sensing of the response of more than one pixel 366 at a time (e.g., to sense each of the pixels 366 of a subset of the group 428 of pixels 366 in parallel) when, for example, the techniques described above in conjunction with
A sensing scan of an active area of pixels may result in artifacts detected via emissive pixels that emit light during a sensing mode scan. Such artifacts may be more apparent during certain conditions, such as low ambient light and dim user interface (UI) content. Furthermore, when sensing during a scan, some pixels (e.g., green and blue pixels) may display a more apparent artifact than other pixels (e.g., red pixels). Thus, in conditions where artifacts are likely to be more apparent (e.g., low ambient light, dim UI, eye contact), pixels that are more likely to display a more apparent artifact are treated differently than pixels that are less likely to display an apparent artifact. For instance, the pixels that are less likely to display an apparent artifact may be sensed more strongly (e.g., higher sensing current) and/or may include sensing of more pixels per line during a scan. In some situations where artifacts are likely to be more visible, certain pixel colors that are more likely to display visible artifacts may not be sensed at all. Also, a scanning scheme may vary within a single screen based on UI content varying throughout the screen. Furthermore, accounting for potential visibility of artifacts may be omitted when no eyes are detected, when detected eyes are beyond a threshold distance from the screen, and/or when detected eyes are not directed at the screen, since even apparent artifacts are unlikely to be seen if a user is too far from the screen or is not looking at the screen.
With the foregoing in mind,
To reduce visibility of scans during the scanning mode, scanning controller 458 of
As an illustration of a change in visibility of a scanning mode,
Lines 516, 518, and 520 respectively correspond to detectable levels of emission of red, blue, and green LEDs at a first level (e.g., 0 lux) of luminance of ambient light. Lines 522, 524, and 526 respectively correspond to visible emission of red, blue, and green LEDs at a second and higher level (e.g., 20 lux) of luminance of ambient light. As illustrated, red light is visible at a relatively similar current at both light levels. However, blue and green light are visible at a substantially lower current at the lower ambient light level. Furthermore, a sensing current 530 may be substantially above a maximum current at which the blue and green lights are visible at the lower level. Thus, red sensing may be on for temperature sensing and red pixel aging sensing regardless of ambient light level without risking detectability. However, blue and green light may be detectable at low ambient light if tested. Thus, the scanning controller 458 may disable blue and green sensing unless the ambient light level is above an ambient light threshold. Additionally or alternatively, a sensing strength (e.g., current, pixel density, duration, etc.) may be set based at least in part on ambient light.
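One purely illustrative way the scanning controller 458 could gate color sensing on ambient light is sketched below in Python; the threshold value, dictionary structure, and function name are assumptions rather than the disclosed implementation:

```python
def sensing_plan(ambient_lux, lux_threshold=20.0):
    """Red sensing stays enabled regardless of ambient light, while blue and
    green sensing are disabled (or weakened) below an ambient-light threshold
    at which their emission could be noticeable to a viewer."""
    plan = {"red": {"enabled": True, "strength": 1.0}}
    low_light = ambient_lux < lux_threshold
    for color in ("green", "blue"):
        plan[color] = {
            "enabled": not low_light,
            # Reduced sensing strength (e.g., current, pixel density) in dim rooms.
            "strength": 0.0 if low_light else 1.0,
        }
    return plan

print(sensing_plan(ambient_lux=5))    # red sensing only
print(sensing_plan(ambient_lux=150))  # all colors at full strength
```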
Moreover, in some embodiments, the lines 602, 604, and 606 may correspond to different color pixels being scanned. For example, the line 602 may correspond to a scan of red pixels, the line 604 may correspond to a scan of green pixels, and the line 606 may correspond to a scan of blue pixels. Furthermore, these different colors may be scanned using a similar scanning level or may deploy a scanning level that is based at least in part on visibility of the scan for the scanned pixel color. For example, the line 602 may be scanned at a relatively high level with the line 604 scanned at a similar level. However, the line 606 may be scanned at a relatively lower level (e.g., lower sensing current) during the scan. Alternatively, in the high ambient light and/or bright UI conditions, all scans may be driven using a common level regardless of the color being sensed.
The number of pixels skipped in a line may not be consistent between at least some of the scanned lines 612, 614, and 616. For example, more pixels may be skipped for colors (e.g., blue and green) that are more susceptible to being visible during low ambient light scans and/or dim UI scans. Additionally or alternatively, a sensing level may be inconsistent between at least some of the scanned lines 612, 614, and 616. For example, the line 612 may be scanned at a higher level (e.g., greater sensing current) than the lines 614 and 616, as reflected by the varying thickness of the lines in
As previously discussed, scanning of a screen may be varied as a function of UI brightness. However, this variation may also occur spatially throughout the UI. In other words, the scan may vary through various regions of content within a single screen.
In the darker UI regions 624 and 626, scanning may be treated differently. For example, lines 634, 636, and 638 may be treated similarly to the lines 612, 614, and 616 of
The processes 650 and 660 may be used in series with each other, such that the scanning scheme derived from a first process (e.g., process 650 or 660) may then be further modified by a second process (e.g., process 660 or 650). In some embodiments, some of the scanning schemes may be common to each process. For example, the processes may include a full scan scheme using all colors at the same level and frequency, a reduced level or frequency for some colors, and a scheme omitting scans of at least one color. Furthermore, in some embodiments, one process may be applied to select whether to reduce a number of pixels scanned in a row, while a different process may be applied to select levels at which pixels are to be scanned.
Furthermore, each process previously discussed may include more than a single threshold.
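As a non-limiting sketch of a multi-threshold selection (the threshold values, scheme names, and function name below are assumptions), the dimmer of the ambient-light condition and the UI-brightness condition could select the scanning scheme, for example:

```python
def select_scanning_scheme(ambient_lux, ui_brightness,
                           lux_thresholds=(5.0, 50.0),
                           ui_thresholds=(0.2, 0.6)):
    """Pick a scanning scheme from multi-threshold bands of ambient light and
    UI brightness; the more artifact-prone (dimmer) band wins."""
    def band(value, thresholds):
        low, high = thresholds
        if value < low:
            return 0            # most visible conditions
        if value < high:
            return 1
        return 2                # least visible conditions
    level = min(band(ambient_lux, lux_thresholds),
                band(ui_brightness, ui_thresholds))
    return ("omit_visible_colors",
            "reduced_level_or_frequency",
            "full_scan")[level]

print(select_scanning_scheme(ambient_lux=2, ui_brightness=0.8))    # omit_visible_colors
print(select_scanning_scheme(ambient_lux=200, ui_brightness=0.9))  # full_scan
```

In a series arrangement, one such selection could set which pixels are skipped in a row while another sets the levels at which the remaining pixels are scanned.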
Visibility of a scan may be dependent upon ambient light levels and/or UI content when eyes are viewing the display. However, if no eyes are viewing the display 18, a scan may not be visible regardless of levels, frequency, or colors used to scan. Thus, the processors 12 may use eye detection to determine whether visibility reduction should be deployed. Eye tracking may be implemented using the camera of the electronic device and software running on the processors. Additionally or alternatively, any suitable eye tracking techniques and/or systems may be used to implement such eye tracking, such as eye tracking solutions provided by iMotions, Inc. of Boston, Mass.
2. Display Panel Adjustment from Temperature Prediction
Display panel sensing involves programming certain pixels with test data and measuring a response by the pixels to the test data. The response by a pixel to test data may indicate how that pixel will perform when programmed with actual image data. In this disclosure, pixels that are currently being tested using the test data are referred to as “test pixels” and the response by the test pixels to the test data is referred to as a “test signal.” The test signal is sensed from a “sense line” of the electronic display. In some cases, the sense line may serve a dual purpose on the display panel. For example, data lines of the display that are used to program pixels of the display with image data may also serve as sense lines during display panel sensing.
Under certain conditions, display panel sensing may be too slow to identify operational variations due to thermal variations on an electronic display. For instance, when a refresh rate of the electronic display is set to a low refresh rate to save power, it is possible that portions of the electronic display could change temperature faster than could be detected through display panel sensing. To avoid visual artifacts that could occur due to these temperature changes, a predicted temperature effect may be used to adjust the operation of the electronic display.
In one example, an electronic device may store a prediction lookup table associated with independent heat-producing components of the electronic device that may create temperature variations on the electronic display. These heat-producing components could include, for example, a camera and its associated image signal processing (ISP) circuitry, wireless communication circuitry, data processing circuitry, and the like. Since these heat-producing components may operate independently, there may be a different heat source prediction lookup table for each one. In some cases, an abbreviated form of display panel sensing may be performed in which a reduced number of areas of the display panel are sensed. The reduced number of areas may correspond to portions of the display panel that are most likely to be affected by each heat source. In this way, a maximum temperature effect that may be indicated by the heat source prediction lookup tables may be compared to actual sensed conditions on the electronic display and scaled accordingly. The individual effects of the predictions of the individual heat source lookup tables may be additively combined into a correction lookup table to correct for image display artifacts due to heat from the various independent heat sources.
In addition, the image content itself that is displayed on a display could cause a local change in temperature when content of an image frame changes. For example, when a dark part of an image being displayed on the electronic display suddenly becomes very bright, that part of the electronic display may rapidly increase in temperature. Likewise, when a bright part of an image being displayed on the electronic display suddenly becomes very dark, that part of the electronic display may rapidly decrease in temperature. If these changes in temperature occur faster than would be identified by display panel sensing, display panel sensing alone may not adequately identify and correct for the change in temperature due to the change in image content.
Accordingly, this disclosure also discusses taking corrective action based on temperature changes due to changes in display panel content. For instance, blocks of the image frames to be displayed on the electronic display may be analyzed for changes in content from frame to frame. Based on the change in content, a rate of change in temperature over time may be predicted. The predicted rate of the temperature change over time may be used to estimate when the change in temperature is likely to be substantial enough to produce a visual artifact on the electronic display. Thus, to avoid displaying a visual artifact, the electronic display may be refreshed sooner than it would otherwise have been refreshed to allow the display panel to display new image data that has been adjusted to compensate for the new display temperature.
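A rough Python sketch of this content-based prediction is shown below; the proportionality constant, the artifact threshold, and the function names are assumed values used only for illustration and are not derived from the disclosure:

```python
import numpy as np

def predicted_heating_rate(prev_block, next_block, k_deg_per_unit=0.02):
    """Estimate a local rate of temperature change (deg C per second) from
    the change in mean gray level of an image block between consecutive
    frames; k_deg_per_unit is an assumed panel-dependent constant."""
    return k_deg_per_unit * (np.mean(next_block) - np.mean(prev_block))

def seconds_until_artifact(rate_deg_per_s, artifact_delta_deg=2.0):
    """Estimate how long until the temperature change is large enough to be
    visible, so the panel can be refreshed with recompensated image data
    before that point."""
    if abs(rate_deg_per_s) < 1e-9:
        return float("inf")
    return artifact_delta_deg / abs(rate_deg_per_s)

# Example: a dark block (gray 10) suddenly becomes bright (gray 240).
prev = np.full((64, 64), 10.0)
nxt = np.full((64, 64), 240.0)
rate = predicted_heating_rate(prev, nxt)     # about 4.6 deg C / s with the assumed k
print(rate, seconds_until_artifact(rate))    # refresh sooner than this interval
```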
As shown in
The electronic display 18 includes an active area or display panel 764 with an array of pixels 766. The pixels 766 are schematically shown distributed substantially equally apart and of the same size, but in an actual implementation, pixels of different colors may have different spatial relationships to one another and may have different sizes. In one example, the pixels 766 may take a red-green-blue (RGB) format with red, green, and blue pixels, and in another example, the pixels 766 may take a red-green-blue-green (RGBG) format in a diamond pattern. The pixels 766 are controlled by a driver integrated circuit 768, which may be a single module or may be made up of separate modules, such as a column driver integrated circuit 768A and a row driver integrated circuit 768B. The driver integrated circuit 768 (e.g., 768B) may send signals across gate lines 770 to cause a row of pixels 766 to become activated and programmable, at which point the driver integrated circuit 768 (e.g., 768A) may transmit image data signals across data lines 772 to program the pixels 766 to display a particular gray level (e.g., individual pixel brightness). By supplying different pixels 766 of different colors with image data to display different gray levels, full-color images may be programmed into the pixels 766. The image data may be driven to an active row of pixels 766 via source drivers 774, which are also sometimes referred to as column drivers.
As mentioned above, the pixels 766 may be arranged in any suitable layout with the pixels 766 having various colors and/or shapes. For example, the pixels 766 may appear in alternating red, green, and blue in some embodiments, but also may take other arrangements. The other arrangements may include, for example, a red-green-blue-white (RGBW) layout or a diamond pattern layout in which one column of pixels alternates between red and blue and an adjacent column of pixels is green. Regardless of the particular arrangement and layout of the pixels 766, each pixel 766 may be sensitive to changes on the active area 764 of the electronic display 18, such as variations in temperature of the active area 764, as well as the overall age of the pixel 766. Indeed, when each pixel 766 is a light emitting diode (LED), it may gradually emit less light over time. This effect is referred to as aging, and takes place over a slower time period than the effect of temperature on the pixel 766 of the electronic display 18.
Display panel sensing may be used to obtain the display sense feedback 756, which may enable the processor core complex 12 to generate compensated image data 752 to negate the effects of temperature, aging, and other variations of the active area 764. The driver integrated circuit 768 (e.g., 768A) may include a sensing analog front end (AFE) 776 to perform analog sensing of the response of pixels 766 to test data. The analog signal may be digitized by sensing analog-to-digital conversion circuitry (ADC) 778.
For example, to perform display panel sensing, the electronic display 18 may program one of the pixels 766 with test data. The sensing analog front end 776 then senses a sense line 780 connected to the pixel 766 that is being tested. Here, the data lines 772 are shown to act as the sense lines 780 of the electronic display 18. In other embodiments, however, the display active area 764 may include other dedicated sense lines 780 or other lines of the display may be used as sense lines 780 instead of the data lines 772. Other pixels 766 that have not been programmed with test data may be sensed at the same time as a pixel that has been programmed with test data is sensed. Indeed, by sensing a reference signal on a sense line 780 when a pixel on that sense line 780 has not been programmed with test data, a common-mode noise reference value may be obtained. This reference signal can be removed from the signal from the test pixel that has been programmed with test data to reduce or eliminate common mode noise.
The analog signal may be digitized by the sensing analog-to-digital conversion circuitry 778. The sensing analog front end 776 and the sensing analog-to-digital conversion circuitry 778 may operate, in effect, as a single unit. The driver integrated circuit 768 (e.g., 768A) may also perform additional digital operations, such as digital filtering, adding, or subtracting, to generate the display feedback 756, or such processing may be performed by the processor core complex 12.
A variety of sources can produce heat that could cause a visual artifact to appear on the electronic display 18 if the image data 752 is not compensated for the thermal variations on the electronic display 18. For example, as shown in a thermal diagram 790 of
As shown in
Because the amount of heating on the active area 764 of the electronic display 18 may change faster than the temperature lookup table (LUT) 800 could be updated using display panel sensing, in some embodiments, predictive compensation may be performed based on the current frame rate of the electronic display 18. However, it should be understood that, in other embodiments, predictive compensation may be performed at all times or when activated by the processor core complex 12. An example of determining to perform predictive compensation based on the current frame rate of the electronic display 18 is shown by a flowchart 810 of
Indeed, as shown in a flowchart 830 of
A predictive heat correction system 860 is shown in a block diagram of
Each heat source correction loop 862 may have an operation that is similar to the first heat source correction loop 862A, but which relates to a different heat source. That is, each heat source correction loop 862 can be used to update the temperature lookup table (LUT) 800 to correct for visual artifacts due to that particular heat source (but not other heat sources). Thus, referring particularly to the first heat source correction loop 862A, a first heat source prediction lookup table (LUT) 866 may be used to update the temperature lookup table (LUT) 800 for a particular reference value of the amount of heat being emitted by the first heat source (e.g., heat source 792). Yet because the amount of heat emitted by the first heat source may vary, to account for the variations in the amount of heat that could be emitted by the first heat source (e.g., heat source 792), the first heat source prediction lookup table (LUT) 866 can be scaled up or down depending on how closely the first heat source prediction lookup table (LUT) 866 matches current conditions on the active area 764.
The first heat source correction loop 862A may receive a reduced form of display sense feedback 756A at least from pixels that are located on the active area 764 where the first heat source will most prominently affect the active area 764. The display sense feedback 756A may be an average, for example, of multiple pixels 766 that have been sensed on the active area 764. In the particular example shown in
Since the amount of correction that may be used to correct for the first heat source may scale with this amount of heat, the values of the first heat source prediction LUT 866 may be scaled based on the comparison of the values from the display sense feedback 756A and the predicted first heat source correction value 868 from the same row as the display sense feedback 756A. This comparison may identify a relationship between the predicted heat source row correction values (predicted first heat source correction value 868) and the measured first heat source row correction values (display sense feedback 756A) to obtain a scaling factor “a”. The entire set of values of the first heat source prediction lookup table 866 may be scaled by the scaling factor “a” and applied to a first heat source temperature lookup table (LUT) 800A. Each of the other heat source correction loops 862B, 862C, . . . 862N may similarly populate a respective heat source temperature lookup table (not shown) similar to the first heat source temperature lookup table (LUT) 800A, which may be added together into the overall temperature lookup table (LUT) 800 that is used to compensate the image data 802 to obtain the compensated image data 752.
Additional corrections may be made using the residual correction loop 864. The residual correction loop 864 may receive other display sense feedback 756B that may be from a location on the active area 764 of the electronic display 18 other than one that is most greatly affected by one of the heat sources 1, 2, 3, . . . N. The display sense feedback 756B may be converted to appropriate correction factor(s) using the correction factor LUT 820 and these correction factors may be used to populate a temperature lookup table (LUT) 800B, which may also be added to the overall temperature lookup table (LUT) 800.
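As an illustrative sketch only (a least-squares ratio is assumed here as one possible way to obtain the scaling factor "a", and the function names are invented), the scaling of each heat source prediction LUT and the additive combination into the overall temperature LUT 800 could be expressed in Python as:

```python
import numpy as np

def scale_factor(measured_row, predicted_row):
    """Relate the measured row correction values (reduced display sense
    feedback) to the predicted row correction values from the heat-source
    prediction LUT; a least-squares ratio is assumed for illustration."""
    denom = float(np.dot(predicted_row, predicted_row))
    return float(np.dot(measured_row, predicted_row)) / denom if denom else 0.0

def combined_temperature_lut(prediction_luts, measured_rows, predicted_rows,
                             residual_lut=None):
    """Scale each heat-source prediction LUT by its factor 'a' and add the
    scaled LUTs (plus any residual corrections) into one overall LUT."""
    total = np.zeros_like(prediction_luts[0], dtype=float)
    for lut, meas, pred in zip(prediction_luts, measured_rows, predicted_rows):
        total += scale_factor(meas, pred) * lut
    if residual_lut is not None:
        total += residual_lut
    return total

# Example with two hypothetical heat sources on an 8x8 region grid.
luts = [np.random.rand(8, 8), np.random.rand(8, 8)]
pred_rows = [lut[0, :] for lut in luts]                 # predicted row corrections
meas_rows = [1.5 * pred_rows[0], 0.7 * pred_rows[1]]    # sensed row corrections
overall = combined_temperature_lut(luts, meas_rows, pred_rows)
```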
To summarize, as shown by a flowchart 890 of
Display panel sensing involves programming certain pixels with test data and measuring a response by the pixels to the test data. The response by a pixel to test data may indicate how that pixel will perform when programmed with actual image data. In this disclosure, pixels that are currently being tested using the test data are referred to as “test pixels” and the response by the test pixels to the test data is referred to as a “test signal.” The test signal is sensed from a “sense line” of the electronic display and may be a voltage or a current, or both a voltage and a current. In some cases, the sense line may serve a dual purpose on the display panel. For example, data lines of the display that are used to program pixels of the display with image data may also serve as sense lines during display panel sensing.
To sense the test signal, it may be compared to some reference value. Although the reference value could be static (an approach referred to as “single-ended” testing), using a static reference value may cause too much noise to remain in the test signal. Indeed, the test signal often contains both the signal of interest, which may be referred to as the “pixel operational parameter” or “electrical property” that is being sensed, as well as noise due to any number of electromagnetic interference sources near the sense line. This disclosure provides a number of systems and methods for mitigating the effects of noise on the sense line that contaminate the test signal. These include, for example, differential sensing (DS), difference-differential sensing (DDS), correlated double sampling (CDS), and programmable capacitor matching. These various display panel sensing systems and methods may be used individually or in combination with one another.
Differential sensing (DS) involves performing display panel sensing not in comparison to a static reference, as is done in single-ended sensing, but instead in comparison to a dynamic reference. For example, to sense an operational parameter of a test pixel of an electronic display, the test pixel may be programmed with test data. The response by the test pixel to the test data may be sensed on a sense line (e.g., a data line) that is coupled to the test pixel. The sense line of the test pixel may be sensed in comparison to a sense line coupled to a reference pixel that was not programmed with the test data. The signal sensed from the reference pixel does not include any particular operational parameters relating to the reference pixel in particular, but rather contains common-mode noise that may be occurring on the sense lines of both the test pixel and the reference pixel. In other words, since the test pixel and the reference pixel are both subject to the same system-level noise—such as electromagnetic interference from nearby components or external interference—differentially sensing the test pixel in comparison to the reference pixel results in at least some of the common-mode noise being subtracted away from the signal of the test pixel.
Difference-differential sensing (DDS) involves differentially sensing two differentially sensed signals to mitigate the effects of remaining differential common-mode noise. Thus, a differential test signal may be obtained by differentially sensing a test pixel that has been programmed with test data and a reference pixel that has not been programmed with test data, and a differential reference signal may be obtained by differentially sensing two other reference pixels that have not been programmed with the test data. The differential test signal may be differentially compared to the differential reference signal, which further removes differential common-mode noise.
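As an illustrative sketch only (a hypothetical Python formulation, not the circuitry described in this disclosure), differential sensing and difference-differential sensing of digitized samples reduce to simple subtractions:

```python
def differential_sense(test_signal, reference_signal):
    """Differential sensing (DS): subtract the signal sensed from a reference
    pixel (not programmed with test data) so common-mode noise cancels."""
    return test_signal - reference_signal

def difference_differential_sense(test_signal, reference_signal, ref_a, ref_b):
    """Difference-differential sensing (DDS): compare a differential test
    signal against a differential reference signal formed from two other
    reference pixels to remove residual differential common-mode noise."""
    differential_test = test_signal - reference_signal
    differential_reference = ref_a - ref_b
    return differential_test - differential_reference
```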
Correlated double sampling (CDS) involves performing display panel sensing at least two different times and digitally comparing the signals to remove temporal noise. At one time, a test sample may be obtained by performing display panel sensing on a test pixel that has been programmed with test data. At another time, a reference sample may be obtained by performing display panel sensing on the same test pixel but without programming the test pixel with test data. Any suitable display panel sensing technique may be performed, such as differential sensing or difference-differential sensing, or even single-ended sensing. There may be temporal noise that is common to both of the samples. As such, the reference sample may be subtracted out of the test sample to remove temporal noise.
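In the same hedged spirit, correlated double sampling might be expressed as a subtraction of two samples of the same pixel taken at different times (names are illustrative):

```python
def correlated_double_sample(test_sample, reference_sample):
    """Correlated double sampling (CDS): subtract a sample taken without test
    data from a sample taken with test data to cancel temporal noise common
    to both samples. Either sample may itself come from single-ended,
    differential, or difference-differential sensing."""
    return test_sample - reference_sample
```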
Programmable integration capacitance may further reduce the impact of display panel noise. In particular, different sense lines that are connected to a particular sense amplifier may have different capacitances. These capacitances may be relatively large. To cause the sense amplifier to sense signals on these sense lines as if the sense line capacitances were equal, the integration capacitors may be programmed to have the same ratio as the ratio of capacitances on the sense lines. This may account for noise due to sense line capacitance mismatch.
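A minimal sketch, assuming a hypothetical two-line sense amplifier, of how integration capacitances might be chosen in the same ratio as the sense-line capacitances:

```python
def match_integration_capacitors(c_line_a, c_line_b, c_int_base):
    """Program the integration capacitors so their ratio equals the ratio of
    the sense-line capacitances, making mismatched lines appear equal to the
    sense amplifier (values in farads; helper is illustrative only)."""
    ratio = c_line_b / c_line_a
    return c_int_base, c_int_base * ratio  # (C_int for line A, C_int for line B)
```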
As shown in
The electronic display 18 includes an active area 1164 with an array of pixels 1166. The pixels 1166 are schematically shown distributed substantially equally apart and of the same size, but in an actual implementation, pixels of different colors may have different spatial relationships to one another and may have different sizes. In one example, the pixels 1166 may take a red-green-blue (RGB) format with red, green, and blue pixels, and in another example, the pixels 1166 may take a red-green-blue-green (RGBG) format in a diamond pattern. The pixels 1166 are controlled by a driver integrated circuit 1168, which may be a single module or may be made up of separate modules, such as a column driver integrated circuit 1168A and a row driver integrated circuit 1168B. The driver integrated circuit 1168 may send signals across gate lines 1170 to cause a row of pixels 1166 to become activated and programmable, at which point the driver integrated circuit 1168 (e.g., 1168A) may transmit image data signals across data lines 1172 to program the pixels 1166 to display a particular gray level. By supplying different pixels 1166 of different colors with image data to display different gray levels or different brightness, full-color images may be programmed into the pixels 1166. The image data may be driven to an active row of pixels 1166 via source drivers 1174, which are also sometimes referred to as column drivers.
As mentioned above, the pixels 1166 may be arranged in any suitable layout with the pixels 1166 having various colors and/or shapes. For example, the pixels 1166 may appear in alternating red, green, and blue in some embodiments, but also may take other arrangements. The other arrangements may include, for example, a red-green-blue-white (RGBW) layout or a diamond pattern layout in which one column of pixels alternates between red and blue and an adjacent column of pixels is green. Regardless of the particular arrangement and layout of the pixels 1166, each pixel 1166 may be sensitive to changes on the active area 1164 of the electronic display 18, such as variations in temperature of the active area 1164, as well as the overall age of the pixel 1166. Indeed, when each pixel 1166 is a light emitting diode (LED), it may gradually emit less light over time. This effect is referred to as aging, and takes place over a slower time period than the effect of temperature on the pixel 1166 of the electronic display 18.
Display panel sensing may be used to obtain the display sense feedback 1156, which may enable the processor core complex 12 to generate compensated image data 1152 to negate the effects of temperature, aging, and other variations of the active area 1164. The driver integrated circuit 1168 (e.g., 1168A) may include a sensing analog front end (AFE) 1176 to perform analog sensing of the response of pixels 1166 to test data. The analog signal may be digitized by sensing analog-to-digital conversion circuitry (ADC) 1178.
For example, to perform display panel sensing, the electronic display 18 may program one of the pixels 1166 with test data. The sensing analog front end 1176 then senses a sense line 1180 connected to the pixel 1166 that is being tested. Here, the data lines 1172 are shown to act as the sense lines 1180 of the electronic display 18. In other embodiments, however, the display active area 1164 may include other dedicated sense lines 1180 or other lines of the display may be used as sense lines 1180 instead of the data lines 1172. Other pixels 1166 that have not been programmed with test data may be sensed at the same time as a pixel that has been programmed with test data. Indeed, as will be discussed below, by sensing a reference signal on a sense line 1180 when a pixel on that sense line 1180 has not been programmed with test data, a common-mode noise reference value may be obtained. This reference signal can be removed from the signal from the test pixel that has been programmed with test data to reduce or eliminate common mode noise.
The analog signal may be digitized by the sensing analog-to-digital conversion circuitry 1178. The sensing analog front end 1176 and the sensing analog-to-digital conversion circuitry 1178 may operate, in effect, as a single unit. The driver integrated circuit 1168 (e.g., 1168A) may also perform additional digital operations, such as digital filtering, adding, or subtracting, to generate the display feedback 1156, or such processing may be performed by the processor core complex 12.
The single-ended display panel sensing shown in
Although the single-ended approach of
i. Differential Sensing (DS)
Differential sensing involves sensing a test pixel that has been driven with test data in comparison to a reference pixel that has not been programmed with test data. By doing so, common-mode noise that is present on the sense lines 1180 of both the test pixel and the reference pixel may be excluded.
As shown by a process 1250 of
As a result, the signal-to-noise ratio of the sensed test pixel 1166 data may be substantially better using the differential sensing approach than using a single-ended approach. Indeed, this is shown in a plot 1260 of
Differential sensing may take place by comparing a test pixel 1166 from one column with a reference pixel 1166 from any other suitable column. For example, as shown in
One reason different electrical characteristics could occur on the sense lines 1180 of different columns of pixels 1166 is illustrated by
Such layer misalignment is shown in
ii. Difference-Differential Sensing (DDS)
The different capacitances on the data lines 1172A and 1172B may mean that even differential sensing may not fully remove all common-mode noise appearing on two different data lines 1172 that are operating as sense lines 1180, as shown in
Difference-differential sensing may mitigate the effect of differential common-mode noise that remains after differential sensing due to differences in capacitance on different data lines 1172 when those data lines 1172 are used as sense lines 1180 for display panel sensing.
A process 1300 shown in
Difference-differential sensing may also take place in the analog domain. For example, as shown in
iii. Correlated Double Sampling (CDS)
Correlated double sampling involves sensing the same pixel 1166 for different samples at different times, at least one of the samples involving programming the pixel 1166 with test data and sensing a test signal and at least another of the samples involving not programming the pixel 1166 with test data and sensing a reference signal. The reference signal may be understood to contain temporal noise that can be removed from the test signal. Thus, by subtracting the reference signal from the test signal, temporal noise may be removed. Indeed, in some cases, there may be noise due to the sensing process itself. Thus, correlated double sampling may be used to cancel out such temporal sensing noise.
One manner of performing correlated double sampling is described by a flowchart 1370 of
It should be appreciated that correlated double sampling may be performed in a variety of manners, such as those shown by way of example in
A reference sample 1338 and a test sample 1340 may not necessarily occur sequentially. Indeed, as shown in
Correlated double sampling may lend itself well for use in combination with differential sensing or difference-differential sensing, as shown in
iv. Capacitance Balancing
Capacitance balancing represents another way of improving the signal quality used in differential sensing by equalizing the effect of a capacitance difference (ΔC) between two sense lines 1180 (e.g., data lines 1172A and 1172B). In an example shown in
Placing additional capacitor structures between the conductive lines 1268 and some of the data lines 1172 (e.g., the data lines 1172A), however, may involve relatively large capacitors that take up a substantial amount of space. Thus, additionally or alternatively, a much smaller programmable capacitor may be programmed to a value that is proportional to the difference in capacitance (ΔC) between the two data lines 1172A and 1172B (shown in
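One hedged way to picture the programmable balancing capacitor is as a value proportional to the capacitance difference between the two data lines; the proportionality constant k below is an assumption for illustration only:

```python
def program_balancing_capacitor(c_data_line_a, c_data_line_b, k=1.0):
    """Set a small programmable capacitor to a value proportional to the
    capacitance difference (delta C) between two data lines used as sense
    lines, so their effective loading is balanced."""
    delta_c = abs(c_data_line_a - c_data_line_b)
    return k * delta_c
```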
v. Combinations of Approaches
While many of the techniques discussed above have been discussed generally as independent noise-reduction techniques, it should be appreciated that these may be used separately or in combination with one another. Indeed, the specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
vi. Edge Column Differential Sensing
Display panel sensing involves programming certain pixels with test data and measuring a response by the pixels to the test data. The response by a pixel to test data may indicate how that pixel will perform when programmed with actual image data. In this disclosure, pixels that are currently being tested using the test data are referred to as “test pixels” and the response by the test pixels to the test data is referred to as a “test signal.” The test signal is sensed from a “sense line” of the electronic display and may be a voltage or a current, or both a voltage and a current. In some cases, the sense line may serve a dual purpose on the display panel. For example, data lines of the display that are used to program pixels of the display with image data may also serve as sense lines during display panel sensing.
To sense the test signal, it may be compared to some reference value. Although the reference value could be static (an approach referred to as “single-ended” testing), using a static reference value may cause too much noise to remain in the test signal. Indeed, the test signal often contains both the signal of interest, which may be referred to as the “pixel operational parameter” or “electrical property” that is being sensed, as well as noise due to any number of electromagnetic interference sources near the sense line. Differential sensing (DS) may be used to cancel out common mode noise of the display panel during sensing.
Differential sensing involves performing display panel sensing not in comparison to a static reference, as is done in single-ended sensing, but instead in comparison to a dynamic reference. For example, to sense an operational parameter of a test pixel of an electronic display, the test pixel may be programmed with test data. The response by the test pixel to the test data may be sensed on a sense line (e.g., a data line) that is coupled to the test pixel. The sense line of the test pixel may be sensed in comparison to a sense line coupled to a reference pixel that was not programmed with the test data. The signal sensed from the reference pixel does not include any particular operational parameters relating to the reference pixel in particular, but rather contains common-mode noise that may be occurring on the sense lines of both the test pixel and the reference pixel. In other words, since the test pixel and the reference pixel are both subject to the same system-level noise—such as electromagnetic interference from nearby components or external interference—differentially sensing the test pixel in comparison to the reference pixel results in at least some of the common-mode noise being subtracted away from the signal of the test pixel. The resulting differential sensing may be used in combination with other techniques, such as difference-differential sensing, correlated double sampling, and the like.
It may be beneficial to perform differential sensing using two lines with similar electrical characteristics. For example, every other sense line may have electrical characteristics that are more similar than adjacent sense lines. An electronic display panel with an odd number of electrically similar sense lines may not perform differential sensing with every other sense line without having one remaining sense line that is left out. Accordingly, this disclosure provides systems and methods to enable differential sensing of sense lines in a display panel even when the display panel contains odd numbers of electrically similar sense lines. In one example, some or all of the sense lines may be routed to sense amplifiers to be differentially sensed with different sense lines at different points in time. These may be considered to be “dancing channels” that are not fixed in place, but rather may dance from sense amplifier to sense amplifier in a way that mitigates odd pairings.
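The pairing idea behind the dancing channels can be sketched, hypothetically and without reference to the actual routing circuitry, as two alternating pass patterns so that an odd number of electrically similar sense lines are all differentially sensed over time:

```python
def dancing_channel_pairs(num_similar_lines):
    """Return two alternating pairings of electrically similar sense lines.
    A line left unpaired in one pass is paired in the other, so an odd line
    count does not leave any sense line permanently unsensed."""
    pass_1 = [(i, i + 1) for i in range(0, num_similar_lines - 1, 2)]
    pass_2 = [(i, i + 1) for i in range(1, num_similar_lines - 1, 2)]
    return pass_1, pass_2

# With five similar lines: pass_1 -> [(0, 1), (2, 3)], pass_2 -> [(1, 2), (3, 4)]
```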
As shown in
The electronic display 18 includes an active area 1564 with an array of pixels 1566. The pixels 1566 are schematically shown distributed substantially equally apart and of the same size, but in an actual implementation, pixels of different colors may have different spatial relationships to one another and may have different sizes. In one example, the pixels 1566 may take a red-green-blue (RGB) format with red, green, and blue pixels, and in another example, the pixels 1566 may take a red-green-blue-green (RGBG) format in a diamond pattern. The pixels 1566 are controlled by a driver integrated circuit 1568, which may be a single module or may be made up of separate modules, such as a column driver integrated circuit 1568A and a row driver integrated circuit 1568B. The driver integrated circuit 1568 may send signals across gate lines 1570 to cause a row of pixels 1566 to become activated and programmable, at which point the driver integrated circuit 1568 (e.g., 1568A) may transmit image data signals across data lines 1572 to program the pixels 1566 to display a particular gray level. By supplying different pixels 1566 of different colors with image data to display different gray levels or different brightness, full-color images may be programmed into the pixels 1566. The image data may be driven to an active row of pixels 1566 via source drivers 1574, which are also sometimes referred to as column drivers.
As mentioned above, the pixels 1566 may be arranged in any suitable layout with the pixels 1566 having various colors and/or shapes. For example, the pixels 1566 may appear in alternating red, green, and blue in some embodiments, but also may take other arrangements. The other arrangements may include, for example, a red-green-blue-white (RGBW) layout or a diamond pattern layout in which one column of pixels alternates between red and blue and an adjacent column of pixels is green. Regardless of the particular arrangement and layout of the pixels 1566, each pixel 1566 may be sensitive to changes on the active area 1564 of the electronic display 18, such as variations in temperature of the active area 1564, as well as the overall age of the pixel 1566. Indeed, when each pixel 1566 is a light emitting diode (LED), it may gradually emit less light over time. This effect is referred to as aging, and takes place over a slower time period than the effect of temperature on the pixel 1566 of the electronic display 18.
Display panel sensing may be used to obtain the display sense feedback 1556, which may enable the processor core complex 12 to generate compensated image data 1552 to negate the effects of temperature, aging, and other variations of the active area 1564. The driver integrated circuit 1568 (e.g., 1568A) may include a sensing analog front end (AFE) 1576 to perform analog sensing of the response of pixels 1566 to test data. The analog signal may be digitized by sensing analog-to-digital conversion circuitry (ADC) 1578.
For example, to perform display panel sensing, the electronic display 18 may program one of the pixels 1566 with test data. The sensing analog front end 1576 then senses a sense line 1580 connected to the pixel 1566 that is being tested. Here, the data lines 1572 are shown to act as the sense lines 1580 of the electronic display 18. In other embodiments, however, the display active area 1564 may include other dedicated sense lines 1580 or other lines of the display may be used as sense lines 1580 instead of the data lines 1572. Other pixels 1566 that have not been programmed with test data may be sensed at the same time as a pixel that has been programmed with test data. Indeed, as will be discussed below, by sensing a reference signal on a sense line 1580 when a pixel on that sense line 1580 has not been programmed with test data, a common-mode noise reference value may be obtained. This reference signal can be removed from the signal from the test pixel that has been programmed with test data to reduce or eliminate common mode noise.
The analog signal may be digitized by the sensing analog-to-digital conversion circuitry 1578. The sensing analog front end 1576 and the sensing analog-to-digital conversion circuitry 1578 may operate, in effect, as a single unit. The driver integrated circuit 1568 (e.g., 1568A) may also perform additional digital operations, such as digital filtering, adding, or subtracting, to generate the display feedback 1556, or such processing may be performed by the processor core complex 12.
The single-ended display panel sensing shown in
Although the single-ended approach of
Differential sensing involves sensing a test pixel that has been driven with test data in comparison to a reference pixel that has not been programmed with test data. By doing so, common-mode noise that is present on the sense lines 1580 of both the test pixel and the reference pixel may be excluded.
As shown by a process 1650 of
As a result, the signal-to-noise ratio of the sensed test pixel 1566 data may be substantially better using the differential sensing approach than using a single-ended approach. Indeed, this is shown in a plot 1660 of
Differential sensing may take place by comparing a test pixel 1566 from one column with a reference pixel 1566 from any other suitable column. For example, as shown in
One reason different electrical characteristics could occur on the sense lines 1580 of different columns of pixels 1566 is illustrated by
Such layer misalignment is shown in
These different capacitances on the data lines 1572A compared to 1572B suggest that differential sensing may be enhanced by differentially sensing a data line 1572A with another data line 1572A, and sensing a data line 1572B with another data line 1572B. When there are an even number of electrically similar data lines 1572A and an even number of electrically similar data lines 1572B, differential sensing can take place in the manner described above with reference to
A few approaches to differential sensing that can accommodate an odd number of electrically similar data lines 1572A or 1572B are described with reference to the subsequent drawings. Namely, as shown in
Another example is shown in
A variation of the circuitry of
Another manner of differentially sensing an odd number of electrically similar columns is shown in
When the active area 1564 of the electronic display 18 includes an even number of electrically similar columns, such as an even number of data lines 1572A and an even number of data lines 1572B, the routing circuitry 1710 may route all of the columns to the main fixed channels 1712. When the active area 1564 of the electronic display 18 includes an odd number N of the data lines 1572A or 1572B, the routing circuitry 1710 may route at least three of the data lines 1572A and at least three of the data lines 1572B to the dancing channels 1714. In this example, the electronic display 18 includes an active area 1564 with an odd number N of groups of columns, each of which includes two data lines 1572A and 1572B that are more electrically similar to other respective data lines 1572A and 1572B than to each other (i.e., a data line 1572A may be more electrically similar to another data line 1572A, and a data line 1572B may be more electrically similar to another data line 1572B). For ease of explanation, only sense amplifiers 1590A, 1590B, 1590C, and 1590D that are used to sense the data lines 1572A are shown. However, it should be understood that similar circuitry may be used to differentially sense the other electrically similar data lines 1572B. Here, the last three groups of columns N, N−1, and N−2 are routed to the dancing channels 1714.
The dancing channels 1714 allow differential sensing of the odd number of electrically similar columns using switches 1716 and 1718. The switches 1716 and 1718 may be used to selectively route the data line 1572A from the N−1 group of columns to (1) the sense amplifier 1590C for comparison with the data line 1572A from the N−2 group of columns or (2) the sense amplifier 1590D for comparison with the data line 1572A from the N group of columns. Dummy switches 1720 and 1722 may be provided for load-matching purposes to offset the loading effects of the switches 1716 and 1718.
Thus, the dancing channels 1714 shown in
The dancing channels shown in
An example of dancing channels that use current sensing is shown in
The pattern shown in
Dancing channels shown in
An example of dancing channels shown in
Visual artifacts, such as images that remain on the display subsequent to powering off the display, changing the image, ceasing to drive the image to the display, or the like, can be reduced and/or eliminated through the use of active panel conditioning during times when one or more portions of the display is off (e.g., powered down or otherwise has no image being driven thereto). The active panel conditioning can be chosen, for example, based on the image most recently driven to the display (e.g., the image remaining on the display) and/or characteristics unique to the display so as to effectively reduce hysteresis of driver TFTs of the display.
To help illustrate, one embodiment of a display 18 is described in
As described above, display 18 may display image frames by controlling luminance of its display pixels 1940 based at least in part on received image data. To facilitate displaying an image frame, a timing controller may determine and transmit timing data 1942 to the gate driver 1936 based at least in part on the image data. For example, in the depicted embodiment, the timing controller may be included in the source driver 1934. Accordingly, in such embodiments, the source driver 1934 may receive image data that indicates desired luminance of one or more display pixels 1940 for displaying the image frame, analyze the image data to determine the timing data 1942 based at least in part on what display pixels 1940 the image data corresponds to, and transmit the timing data 1942 to the gate driver 1936. Based at least in part on the timing data 1942, the gate driver 1936 may then transmit gate activation signals to activate a row of display pixels 1940 via a gate line 1944.
When activated, luminance of a display pixel 1940 may be adjusted by image data received via data lines 1946. In some embodiments, the source driver 1934 may generate the image data signals by receiving the image data and determining a corresponding voltage for the image data. The source driver 1934 may then supply the image data to the activated display pixels 1940. Thus, as depicted, each display pixel 1940 may be located at an intersection of a gate line 1944 (e.g., scan line) and a data line 1946 (e.g., source line). Based on received image data, the display pixel 1940 may adjust its luminance using electrical power supplied from the power supply 1938 via power supply lines 1948.
As depicted, each display pixel 1940 includes a circuit switching thin-film transistor (TFT) 1950, a storage capacitor 1952, an LED 1954, and a driver TFT 1956 (whereby each of the storage capacitor 1952 and the LED 1954 may be coupled to a common voltage, Vcom). However, variations of display pixel 1940 may be utilized in place of display pixel 1940 of
Additionally, in the depicted embodiment, the gate of the driver TFT 1956 is electrically coupled to the storage capacitor 1952. As such, voltage of the storage capacitor 1952 may control operation of the driver TFT 1956. More specifically, in some embodiments, the driver TFT 1956 may be operated in an active region to control magnitude of supply current flowing from the power supply line 1948 through the LED 1954. In other words, as gate voltage (e.g., storage capacitor 1952 voltage) increases above its threshold voltage, the driver TFT 1956 may increase the amount of its channel available to conduct electrical power, thereby increasing supply current flowing to the LED 1954. On the other hand, as the gate voltage decreases while still being above its threshold voltage, the driver TFT 1956 may decrease the amount of its channel available to conduct electrical power, thereby decreasing supply current flowing to the LED 1954. In this manner, the display 18 may control luminance of the display pixel 1940. The display 18 may similarly control luminance of other display pixels 1940 to display an image frame.
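For readers who want a quantitative picture, a textbook square-law model (an assumption for illustration, not a characterization of the TFTs in this disclosure) relates the driver TFT gate-source voltage to the supply current delivered to the LED:

```python
def driver_tft_supply_current(v_gs, v_th, k_prime, w_over_l):
    """Square-law saturation approximation of driver TFT current: zero below
    the threshold voltage, and roughly proportional to (Vgs - Vth)^2 above it."""
    if v_gs <= v_th:
        return 0.0
    return 0.5 * k_prime * w_over_l * (v_gs - v_th) ** 2
```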
As described above, image data may include a voltage indicating desired luminance of one or more display pixels 1940. Accordingly, operation of the one or more display pixels 1940 to control luminance should be based at least in part on the image data. In the display 18, a driver TFT 1956 may facilitate controlling luminance of a display pixel 1940 by controlling magnitude of supply current flowing into its LED 1954 (e.g., its OLED). Additionally, the magnitude of supply current flowing into the LED 1954 may be controlled based at least in part on voltage supplied by a data line 1946, which is used to charge the storage capacitor 1952.
Furthermore, the controller processor 1960 may interact with one or more tangible, non-transitory, machine-readable media (e.g., memory 1962) that stores instructions executable by the controller to perform the method and actions described herein. By way of example, such machine-readable media can include RAM, ROM, EPROM, EEPROM, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by the controller processor 1960 or by any processor, controller, ASIC, or other processing device of the controller 1958.
The controller 1958 may receive information related to the operation of the display 18 and may generate an output 1964 that may be utilized to control operation of the display pixels 1940. The output 1964 may be utilized to generate, for example, control signals in the source driver 1934 for control of the display pixels 1940. Additionally, in some embodiments, the output 1964 may be an active panel conditioning signal utilized to reduce hysteresis in driver TFTs 1956 of the LEDs 1954. Likewise, the memory 1962 may be utilized to store the most recent image data transmitted to the display 18 such that, for example, the controller processor 1960 may operate to actively select characteristics (e.g., amplitude, frequency, duty cycle values) of the output 1964 (e.g., a common mode waveform) based on the most recent image displayed on the LED 1954. Additionally or alternatively, the output 1964 may be selected, for example, by the controller processor 1960, based on stored characteristics of the LED 1954 that may be unique to each device 10.
Active panel conditioning may be undertaken when the display 18 is turned off. In some embodiments, a gate source voltage (Vgs) value may be transmitted to and applied to the driver TFTs 1956, for example, as an active panel conditioning signal, which may be part of output 1964 or may be output 1964. In some embodiments, the active panel conditioning signal (e.g., the Vgs signal) may be a fixed value (e.g., a fixed bias voltage level or value) while in other embodiments, the active panel conditioning signal may be a waveform, which will be discussed in greater detail with respect to
As previously noted, elimination of the emission of light from the display 18 may coincide with application of an active panel control signal.
Additionally, alteration or selection of the characteristics of the active panel conditioning control signal 1972 (e.g., adjustment of one or more of the frequency 1974, the duty cycle 1976, and/or the amplitude 1978) may be chosen based on device 10 characteristics (e.g., characteristics of the display panel 1932) such that the active panel conditioning control signal 1972 may be optimized for a particular device 10. Additionally and/or alternatively, the most recent image displayed on the display 18 may be stored in memory (e.g., memory 1962) and the processor 1960, for example, may perform alteration or selection of the characteristics of the active panel conditioning control signal 1972 (e.g., adjustment of one or more of the frequency 1974, the duty cycle 1976, and/or the amplitude 1978) based on the saved image data such that the active panel conditioning control signal 1972 may be optimized for a particular image. However, in some embodiments, a waveform as the active panel conditioning control signal 1972 may not be the only type of signal that may be used as part of the active panel conditioning of a display 18.
As illustrated in
Alteration or selection of a fixed bias level for an active panel conditioning control signal may be chosen based on device 10 characteristics (e.g., characteristics of the display panel 1932) such that the active panel conditioning control signal may be optimized for a particular device 10. Additionally and/or alternatively, the most recent image displayed on the display 18 may be stored in memory (e.g., memory 1962) and the processor 1960, for example, may perform alteration or selection of a fixed bias level for an active panel conditioning control signal based on the saved image data such that the active panel conditioning control signal may be optimized for a particular image.
During the second period of time 1990, the active panel conditioning control signal 1972 may be transmitted to each of the pixels 1940 of the display 18 (or to a portion of the pixels 1940 of the display 18) for a third period of time 1996, which may be a subset of time of the second period of time 1990 that begins at time 1998 between the first period of time 1986 and the second period of time 1990 (e.g., where time 1998 corresponds to a time at which the display 18 is turned off or otherwise deactivated). Through application of the active panel conditioning control signal 1972 to the respective pixels 1940, the hysteresis of the driving TFTs 1956 associated with the respective pixels 1940 may be reduced so that at the completion of the second period of time 1990, the Vgs values 1992 and 1994 will be reduced from their levels illustrated in the first period of time 1986 so that the image being displayed during the first period of time 1986 will not be visible or will be visually lessened in intensity (e.g., to reduce or eliminate any ghost image, image retention, etc. of the display 18).
Effects from the aforementioned active panel conditioning are illustrated in the timing diagram 2000 of
Additionally, during the periods of time 1990, an active panel conditioning control signal (e.g., active panel conditioning control signal 1972 or active panel conditioning control signal 1980) may be transmitted to each of the pixels 1940 of the display 18 (or to a portion of the pixels 1940 of the display 18) for the periods of time 1996, which may be a subset of times 1990 that begin at times 1998. As illustrated, through application of the active panel conditioning control signal to the respective pixels 1940, the hysteresis of the driving TFTs 1956 associated with the respective pixels 1940 may be reduced so that at the completion of times 1990, the Vgs values 1992 and 1994 are reduced from their levels illustrated in the respective periods of time 1986 so that images corresponding to the Vgs values 1992 and 1994 of a prior period of time 1986 are not carried over into a subsequent period of time 1986 (e.g., reducing or eliminating any ghost image, image retention, etc. of the display 18 from previous content during subsequent display time periods 1986).
As illustrated in
As illustrated in the timing diagram 2007 of
Additionally, during the period of time 1990, an active panel conditioning control signal (e.g., active panel conditioning control signal 1972 or active panel conditioning control signal 1980) may be transmitted to each of the pixels 1940 of the display 18 for the period of time 1996, which may be a subset of time 1990 that begins at time 1998. Alternatively, as will be discussed in conjunction with
As illustrated in
Display panel uniformity can be improved by estimating or measuring a parameter (e.g., current) through a pixel, such as an organic light emitting diode (OLED). Based on the measured parameter, a corresponding correction value may be applied to compensate for any offsets from an intended value. Per-pixel sensing schemes can employ the use of filters and other processing steps to help reduce or eliminate the unwanted effects of pixel leakage, noise, and other error sources. Although the application generally relates to sensing individual pixels, some embodiments may group pixels for sensing and observation such that at least one channel senses more than a single pixel. However, some external noise and error sources, such as capacitively coupled fluctuations in local supply voltage that result in common-mode error, may not be fully removable through the filtering process, resulting in erroneous correction values that compromise the effectiveness of the non-uniformity compensation. Moreover, this common-mode error is amplified by the inherent mismatches of parasitic capacitance values between different sensing channels within a display as a result of imperfect device process variations.
To address this common-mode error, when a given pixel current is being sensed through a channel (i.e., the sensing channel), a nearby pixel is also sensed through its own channel (i.e., the observation channel) while keeping the pixel emission off for the observation channel. The sensed parameter (e.g., current) value from the observation channel is scaled according to the relative mismatches of the sensing and observation channels as determined through an initial calibration process. Then, the scaled parameter is subtracted from the sensed current value from the sensing channel to determine a compensated sensing value.
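A minimal sketch of the dual-channel idea, assuming digitized current samples and a previously calibrated scaling factor (all names hypothetical):

```python
def compensate_sensed_current(sensing_channel_current, observation_channel_current, scaling_factor):
    """Scale the observation-channel reading by the channel-mismatch scaling
    factor and subtract it from the sensing-channel reading, leaving a value
    substantially attributable to the pixel current being sensed."""
    return sensing_channel_current - scaling_factor * observation_channel_current
```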
The proximity of the nearby pixel, and hence the observation channel, is dependent on the accuracy level to be used in the system and correspondingly determines the spatial correlation to be used to achieve this accuracy level.
The differential input mismatch of the observation channel may be adjustable to ensure that the component of the sensed value attributed to noise and error is higher in the observation channel than it is in the sensing channel. Sensing from both the sensing channel and observation channel may occur at the same time to establish high time correlation. Moreover, the observation channel and/or the sensing channel may utilize single-ended and/or differential sensing channels.
The single-channel current sensing scheme 2100 detects at least some issues for the target pixel. But, common-mode noise sources, such as the noise source 2110, may be picked up by the current sensing system 2104 and converted into differential input by any inherent mismatches in the sensing channel 2106. This differential input may result in an error in the sensed current and a resultant error in the pixel current compensation of the output 2108.
Instead of using a single channel to sense current, two channels may be used.
To ensure that only noise is passed through the observation channel 2150, the observation channel 2150 may be decoupled from a corresponding current source 2154 via a switch 2155. A sensed observation current 2156 is scaled at scaling circuitry 2158 and subtracted from a sensed current 2160 at summing circuitry 2162 to generate a compensated output 2164 indicative of current through the sensing channel 2146 substantially attributable to the current provided by the current source 2142. The scaling factor may be determined during a calibration of the display panel in which an output of each channel is measured in response to an aggressor image/injected signal, thereby characterizing the channel properties and the common-mode error between channels.
Furthermore, the dual-channel current sensing scheme 2140 may include amplifiers, filters, analog-to-digital converters, digital-to-analog converters, and/or other circuitry used for processing in the dual-channel current sensing scheme 2140 that have been omitted from
Each channel may include differential inputs. In embodiments with differential input channels, a sensing channel may utilize an inherent differential input mismatch while the observation channel may utilize an intentionally induced differential input mismatch to sense a time-correlated common-mode error.
For another pixel (e.g., a pixel near to the target pixel), a sensing system 2190 is used to detect current through an observation channel 2192 that receives current from a noise source 2194 (e.g., capacitive coupling). The observation channel 2192 includes an induced differential input mismatch 2196 that is induced to sense a time-correlated common-mode error with the sensing channel 2186. In other words, the observation channel 2192 is used to observe noise (e.g., common-mode noise) in the observation channel 2192 during driving of the sensing channel 2186 to determine a magnitude of the noise (e.g., common-mode noise).
To ensure that only noise is passed through the observation channel 2192, the observation channel 2192 may be decoupled from a corresponding current source 2198 using a switch 2200. The current source 2198 is used to supply data to a pixel corresponding to the observation channel 2192. A sensed observation current 2202 is scaled at scaling circuitry 2204 and subtracted from a sensed current 2206 at summing circuitry 2208 to generate a compensated output 2210 indicative of current through the sensing channel 2186 substantially attributable to the current provided by the current source 2182.
Furthermore, the dual-channel current sensing scheme 2180 may include amplifiers, filters, analog-to-digital converters, digital-to-analog converters, and/or other circuitry used for processing in the dual-channel current sensing scheme 2180 that have been omitted from
The scaling factor may be determined during a calibration of the display panel in which an output of each channel is measured in response to an aggressor image/injected signal, thereby characterizing the channel properties and the common-mode error between channels.
The channel is also tested with an induced differential mismatch by inducing a differential mismatch in the channel (block 2226). While in the induced mismatch state, the current (e.g., using the same aggressor image/injected signal) is passed into the channel (block 2228). A second output is sensed for the channel based on the current through the channel with the induced mismatch (block 2230).
Once these outputs are obtained for each channel to be calibrated, the outputs are stored in a lookup table used to establish the scaling factors (block 2232). For instance, the first output of the sensed channel (Gsi) is stored for each channel in an inherent differential sensing mode, and the second output of the sensed channel (Goi) is stored for each channel in an induced differential observing mode. These values may be stored in a lookup table, such as that shown below in Table 1.
These stored outputs may be used to determine a scaling factor using a relationship between outputs of a sensing channel and an observational channel. For example, the scaling factor that is used to scale observation channel sensed currents may be determined using the following Equation 1:
where channel i is the sensing channel, channel j is the observational channel, SFij is the scaling factor used to scale an output of the observational channel j when sensing via channel i, Goj is the output of channel j during induced differential mode calibration, and Gsi is the output of channel i during inherent differential mode calibration. As previously discussed, the scaling factor is used to scale the observational channel output before subtracting from the sensing channel output to ensure that the resulting compensated output is substantially attributable to the sensing channel's effects on the current through the channel without inappropriately applying common-mode noise to the compensation.
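Since Equation 1 itself is not reproduced above, the following sketch assumes one plausible form consistent with the variable definitions (the sensing channel's inherent-mismatch calibration output divided by the observation channel's induced-mismatch calibration output); the actual relationship in the disclosure may differ.

```python
def scaling_factor(g_s_i, g_o_j):
    """Assumed form of Equation 1: SF_ij = Gsi / Goj, i.e., the calibration
    output of sensing channel i (inherent mismatch) divided by the calibration
    output of observation channel j (induced mismatch)."""
    return g_s_i / g_o_j

# Used during sensing (see the dual-channel sketch above):
# compensated_i = sensed_i - scaling_factor(Gsi, Goj) * observed_j
```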
In some embodiments, calibration measurements may be conducted multiple times to average the results to improve a signal-to-noise ratio of the outputs.
The sensing channel mode 2252 generates a current that is sent through a channel of the display panel 2256 corresponding to one or more pixels that is sensed through a sensing channel 2258 having an inherent (e.g., non-induced) amount of differential input mismatch 2260. The current through the channel 2258 having the inherent differential input mismatch 2260 is sensed at a current sensing system 2262 producing an output (Gsi) 2264 that is stored in memory (e.g., lookup table illustrated in Table 1) for the inherent mismatch value used in scaling factor calculations.
During another calibration step before or after sensing channel mode 2252 analysis, an observational channel mode 2254 is employed. In the observational channel mode 2254, the same current is generated (e.g., using the same image or injected signal). However, the observation channel 2259 is now equipped with an induced differential input mismatch 2266. The amount of mismatch may be an amount of mismatch used in the observational channel operation during dual-channel sensing previously discussed or may differ to tune the scaling factor. The current in the channel 2259 with the induced differential input mismatch 2266 is sensed using the current sensing system 2262 and an output (Goi) 2268 is stored in memory (e.g., lookup table illustrated in Table 1) for the induced mismatch in scaling factor calculations.
A temperature prediction based on the change in content on the electronic display may also be used to prevent visual artifacts from appearing on the electronic display 18. For instance, as shown by a flowchart 910 of
Identifying a change in content may involve identifying a change in content within a particular block 920 of content on the active area 764 of the display, as shown in
The size of the blocks 920 may be fixed at a particular size and location or may be adaptive. For example, the size of the blocks that are analyzed for changes in content may vary depending on a particular frame rate. Namely, since a slower frame rate could produce a greater amount of local heating, blocks 920 may be smaller for slower frame rates and larger for faster frame rates. In another example, the blocks may be larger for slower frame rates to reduce the computing power used. Moreover, the blocks 920 may be the same size throughout the electronic display 18 or may have different sizes. For example, blocks 920 from areas of the electronic display 18 that may be more susceptible to thermal variations may be smaller, while blocks 920 from areas of the electronic display 18 that may be less susceptible to thermal variations may be larger.
As shown by a timing diagram 940, the content of a particular block 920 may vary upon a frame refresh 942, at which point content changes from that provided in a previous frame 946 to that provided in a current frame 948. When the current frame 948 begins to be displayed, a particular block 920 may have a change in the brightness from the previous frame 946 to the current frame 948. In the example of
Thus, as the content between the previous frame 946 and the current frame 948 has changed, the temperature also changes. If the temperature changes too quickly, even though the image data 752 may have been compensated for a correct temperature at the point of starting to display the current frame 948, the temperature may cause the appearance of the current frame 948 to have a visual artifact. Indeed, the temperature may change fast enough that the amount of compensation for the current frame 948 may be inadequate. This situation is most likely to occur when the refresh rate of the electronic display 18 is slower, such as during a period of reduced refresh rate to save power.
A baseline temperature 950 thus may be determined and predicted temperature changes accumulated based on the baseline temperature 950. The baseline temperature 950 may correspond to a temperature understood to be present at the time when the previous frame 946 finishes being displayed and the current frame 948 begins. In some cases, the baseline temperature 950 may be determined from an average of additional previous frames in addition to the most recent previous frame 946. Other functions than average may also be used (e.g., a weighted average of previous frames that weights the most recent frames more highly) to estimate the baseline temperature 950. From the baseline 950, a curve 952 shows a likely temperature change as the content increases in brightness 944 between the previous frame 946 and the current frame 948. There may be an artifact threshold 954 representing a threshold amount of temperature change, beyond which point a visual artifact may become visible at a time 956. To avoid having a visual artifact appear due to temperature change, at the time 956, a change in temperature over time (dT/dt) 958 may be identified. A new, early frame may be provided when the estimated rate of change in temperature (dT/dt) 958 crosses the artifact threshold 954.
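A hedged sketch of the accumulation logic (thresholds, units, and cycle counts are placeholders) that would trigger an early frame once the predicted temperature change from the baseline crosses the artifact threshold:

```python
def cycles_until_early_frame(dT_dt_per_cycle, artifact_threshold, max_cycles=10_000):
    """Accumulate the predicted per-cycle temperature change from the baseline
    and return the cycle at which its magnitude crosses the artifact threshold
    (i.e., when an early frame should be provided), or None if it never does."""
    accumulated = 0.0
    for cycle in range(1, max_cycles + 1):
        accumulated += dT_dt_per_cycle
        if abs(accumulated) >= artifact_threshold:
            return cycle
    return None

# Matches the numeric example discussed later: -1000 units per cycle crosses a
# 5000-unit threshold after 5 accumulation cycles, triggering a new frame.
assert cycles_until_early_frame(-1000.0, 5000.0) == 5
```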
One example of a system for operating the electronic display 18 to avoid visual artifacts due to temperature changes based on content appears in a block diagram of
The content-dependent temperature correction loop 970 may include circuitry or logic to determine changes in the content of various blocks 920 of content in the image data 972 (block 976). A content-dependent temperature correction lookup table (CDCT LUT) 978 may obtain a rate of temperature change estimated based on a previous content of a previous frame or an average of previous frames and the current frame of image data 972. An example of the content-dependent temperature correction lookup table (CDCT LUT) 978 will be discussed further below with reference to
When a new frame is caused to be sent to the electronic display 18 and the display sense feedback 756 for the block that triggered the new frame is obtained, the correction factor associated with that block may be provided to the content-dependent temperature correction loop 970. This may act as a new baseline temperature for predicting a new accumulation of temperature changes in block 980. In addition, virtual temperature sensing 984 (e.g., as provided by other components of the electronic device 10, such as an operating system running on processor core complex 12, or actual temperature sensors disposed throughout the electronic device 10) may also be used by the content-dependent temperature correction loop 970 to predict a temperature change accumulation at block 980 to trigger provision of new image frames and new display sense feedback 756 from the frame duration control/frame control circuitry or logic block 982.
Another example of performing the content-dependent temperature correction for a particular block of content is described by a timing diagram 990 of
Display block content is shown to begin upon writing a new frame 1036. In the example of
Upon receiving a subsequent frame 1042, the content of block B4 changes to become much darker. Here, the content of block B4 has an estimated rate of change in temperature per accumulation cycle of −1000 units, resulting in an accumulation of −5000 at point 1044, thereby crossing the threshold value of a magnitude of 5000 units of temperature change. This triggers a new frame 1046. A new temperature baseline for the content block B4 is established as zero and a new estimated rate of change in temperature (dT/dt) is estimated based on the average content of the previous frames for the content block B4. In this case, the estimated rate of change in temperature (dT/dt) for the content block B4 is now determined to be −700 units of temperature per accumulation cycle. In this way, even for relatively slow refresh rates, rapid changes in temperature may be predicted and visual artifacts based on temperature variation may be avoided.
Pixels may vary when a driving current/voltage is applied under variable conditions, such as different temperatures or different online times of different pixels in the display. External compensation using one or more processors may be used to compensate for these variations. During a scan, these variations of the display are scanned using test data, and the results are provided to image processing circuitry external to the display. Based on the sensed variations of the pixels, the image processing circuitry adjusts the image data before it is provided to the display. When the image data reaches the display, it has been compensated in advance for the expected display variations based on the scans.
However, the compensation loops used to compensate for variations may not be capable of fully compensating for more than a single factor (e.g., temperature, aging). Dual-loop compensation may be used to apply compensation for multiple variation types. However, loops directed to different classifications of variation may otherwise require filtering or may not be able to run simultaneously. Instead, the dual-loop compensation scheme may utilize a fast loop and a slow loop.
The fast loop is updated rapidly to cover variations with high temporal frequencies. The fast loop may also be populated with low-spatial-variance scans to handle low-spatial variations, such as a generally broad area of aging of pixels (e.g., low-spatial aging variations) and temperature variations. The fast loop will also handle the low-spatial aging variations even though the low-spatial aging variations may have a relatively low frequency of variation.
The slow loop may handle aging variations that are not handled by the fast loop. Specifically, the slow loop may be updated much more slowly than the fast loop and with a higher spatial frequency (e.g., finer granularity) than the fast loop. Thus, the slow loop will handle aging variations that have a low temporal frequency and a high spatial frequency.
Since the variations picked up by the fast loop and the slow loop do not overlap, their compensations may be applied independently without complicated processing between the calculated compensations. These compensations may be added together before application to image data and/or may be applied to image data compensation settings independently.
With the foregoing in mind,
The processors 12 are in communication with the scanning controller 2358 and/or the scanning driving circuitry 2356. The processors 12 compensate image data for results from scanning using the scanning driving circuitry 2356 using dual-loops of processing. For example,
In the first loop 2402, a digital-to-analog converter (DAC) 2408 sends test data to a panel 2406 for sensing characteristics of pixels in the panel 2406. Sensed data returning from the panel 2406 are submitted to an analog-to-digital converter (ADC) 2410. The digital sensed data is sent to processors 12 and compensated using temperature compensation logic 2412 running on the processors 12. Specifically, temperature fluctuations may cause a change in brightness of the resulting pixels. The temperature compensation logic 2412 compensates for variations that would occur from these temperature fluctuations by applying inverted versions of the temperature-induced changes to the image data to reduce or eliminate fluctuations in the transmitted image data.
In the second loop 2404, the digital-to-analog converter (DAC) 2408 sends test data to the panel 2406 for sensing characteristics of pixels in the panel 2406. Sensed data returning from the panel 2406 are submitted to the analog-to-digital converter (ADC) 2410. The digital sensed data is sent to processors 12 and compensated using aging compensation logic 2414 running on the processors 12. Specifically, since the electronic device 10 may be on standby, results of the sensed data may include only aging data without temperature variation effects. The aging compensation logic 2414 compensates for variations that would occur from aging of the circuitry of the panel 2406 by applying inverted versions of the aging-induced changes to the image data to reduce or eliminate fluctuations in the transmitted image data.
As illustrated, there is no interaction between the first loop 2402 and the second loop 2404. By allowing the first loop 2402 and the second loop 2404 to operate independently, implementation may be simpler and compensation may be generally less complex. However, aging data may be collected at a relatively low collection speed and corresponds to a relatively high visibility risk.
Temperature also varies little from pixel to pixel but rather fluctuates only with a relatively low spatial frequency 2456 of variance. However, aging may vary from pixel to pixel with a high spatial frequency 2458 of variance since adjacent pixels may have differing levels of usage. Aging may also vary with a low spatial frequency 2456 due to groups of pixels (e.g., the whole display, a notification area of a user interface, etc.) that are used substantially together. Neither aging nor temperature exhibits variation that has both a high temporal frequency 2454 and a high spatial frequency 2458. To cover both aging and temperature compensation, if a fast loop 2460 has a low spatial frequency or coarse scanning pattern in sensing scans and/or compensation, the slow loop 2462 may apply a high spatial frequency or more finely tuned pattern at less frequent intervals. This dual-loop scheme 2440 results in aging and temperature variations being compensated for properly. Furthermore, the dual-loop scheme 2440 may be deployed without filtering to remove temperature data from aging data or vice versa since the slow loop 2462 only handles high-spatial-frequency, low-temporal-frequency aging variations that are not handled by the fast loop 2460.
Furthermore, using only a single loop with low spatial variation would not properly address all issues arising from aging and temperature variations.
The processors 12 also store results from the scans in a second scan memory at a second rate (block 2536). The second rate may be low relative to the first rate, with scans (or at least storage of scan results) occurring only once every several minutes, once an hour, once every several hours, or at other relatively low temporal rates.
Using the sensing results stored in the first scan memory and the second scan memory, the processors 12 compensate image data (block 2538). The variations detected using each loop may be compensated for in series, with either the fast loop or the slow loop compensation performed first and the other performed after. For example, the fast loop compensation may be applied with the slow loop compensation applied after, or vice versa. This sequential compensation is feasible for the dual-loop scheme since each loop addresses non-overlapping areas of concern. Additionally or alternatively, a summed compensation may be applied. For example, the slow loop may indicate that a pixel's driving level (e.g., current or voltage) should be increased by a certain amount due to aging while the fast loop indicates that the pixel's driving level should be decreased by a certain amount. In such a case, the compensations may be compounded together by subtracting one value from the other.
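For illustration only, the following minimal Python sketch shows one way the summed compensation described above could be formed per pixel. The function name, array shapes, and 8-bit drive-level assumption are hypothetical and are not taken from the embodiments themselves.

```python
# Illustrative sketch only: combining fast-loop (e.g., temperature) and
# slow-loop (e.g., aging) corrections per pixel.  Names and ranges are
# hypothetical.
import numpy as np

def combine_compensations(image, fast_correction, slow_correction):
    """Sum the per-pixel driving-level deltas from both loops and apply them.

    Because the two loops address non-overlapping sources of variation, a
    decrease indicated by one loop simply subtracts from an increase
    indicated by the other.
    """
    total = fast_correction + slow_correction     # compounded correction
    return np.clip(image + total, 0, 255)         # assume 8-bit drive levels

# Example: slow loop asks for +4 (aging), fast loop asks for -1 (temperature).
image = np.full((4, 4), 128.0)
compensated = combine_compensations(image, np.full((4, 4), -1.0), np.full((4, 4), 4.0))
```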
Analysis of the sensed data is performed using two loops. In a “fast” loop, the sensed data is stored in a first memory location (block 2554). Before or after storage, the sensed data in the first memory location is spatially averaged to create a coarse scan (block 2556). As previously discussed, this coarse scan (sampled at a high temporal rate) results in the fast loop capturing low spatial frequency aging and temperature variations of both high and low temporal frequency. These variations are compensated for (block 2558) by inverting the expected image fluctuations in the image data, where the expected fluctuations are based on the spatially averaged data in the first memory location.
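For illustration only, a minimal Python sketch of the spatial averaging that produces the coarse scan for the fast loop follows; the block size and array shapes are hypothetical choices.

```python
# Illustrative sketch only: block-averaging full-resolution sense data into a
# coarse map so the fast loop tracks only low spatial frequency variation.
import numpy as np

def coarse_scan(sensed, block=8):
    """Average non-overlapping block x block tiles of the sensed frame."""
    rows, cols = sensed.shape
    r, c = rows // block, cols // block
    tiles = sensed[:r * block, :c * block].reshape(r, block, c, block)
    return tiles.mean(axis=(1, 3))    # one value per tile

sensed = np.random.rand(64, 64)       # stand-in for digitized sense results
fast_loop_map = coarse_scan(sensed)   # 8 x 8 coarse map for the first memory
```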
In the second loop, or the “slow” loop, the processors 12 determine whether a first threshold period has elapsed since the last scan of the slow loop (block 2560). For example, this threshold may be several minutes to several hours. If the threshold has not elapsed, no new data is sampled into the slow loop and a previous compensation using the slow loop is maintained. However, if the threshold period has elapsed, the processors 12 store the sensed data in a second memory location (block 2562). In some embodiments, the first threshold may be forgone if no data has been stored in the second memory location since start up of the electronic device 10. As previously noted, the data in the second memory location may have a fine-grained resolution (e.g., high spatial frequency) that captures variations due to high spatial frequency aging of pixels or small groups of pixels. These variations are compensated for (block 2564) based on the sensed data stored in the second memory location. The compensations from the first and second loops may be mathematically combined using an accumulator and/or each may be applied directly to the image data independently.
Once compensations using the fast and slow loops have been applied to image data, the compensated image data is displayed based on the compensations using the first and second memory locations (block 2566).
The rescan process is repeated once a second threshold elapses (block 2568). The second threshold may be used to control how often the fast loop obtains data. Therefore, the second threshold may be less than a second, a second, more than a second, a few minutes, or any value less than the first threshold. If the second threshold has not elapsed, current compensations are maintained, but if the second threshold has elapsed, a new scan is begun and at least fed to the fast loop. Since a single set of scan results may be used for both the fast loop and the slow loop, the loops may share scan data (prior to spatial averaging in the fast loop). Thus, the second threshold determines when to begin a new scan and the first threshold determines whether the new scan is submitted to the slow loop or only the fast loop. Additionally or alternatively, the first threshold may independently begin a new scan for the slow loop when the first threshold has elapsed.
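For illustration only, a minimal Python sketch of a scheduler consistent with the first and second thresholds described above follows; the period values, class name, and callback structure are hypothetical.

```python
# Illustrative sketch only: deciding when to begin a new scan and whether the
# results also feed the slow loop.  Threshold values are hypothetical.
class DualLoopScheduler:
    def __init__(self, fast_period_s=1.0, slow_period_s=1800.0):
        self.fast_period_s = fast_period_s   # "second threshold"
        self.slow_period_s = slow_period_s   # "first threshold"
        self.last_fast = float("-inf")
        self.last_slow = float("-inf")

    def tick(self, now_s, scan_fn, fast_loop, slow_loop):
        """Begin a scan when the fast threshold elapses; forward the same scan
        results to the slow loop only when its longer threshold has elapsed."""
        if now_s - self.last_fast < self.fast_period_s:
            return                           # maintain current compensations
        results = scan_fn()                  # one scan shared by both loops
        fast_loop(results)
        self.last_fast = now_s
        if now_s - self.last_slow >= self.slow_period_s:
            slow_loop(results)
            self.last_slow = now_s
```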
As previously noted, the fast loop may use a sample of data rather than spatially averaged values.
Furthermore, as previously noted, the processors 12 cause sensing of pixels (block 2552). However, unlike sensing in the process 2580, some scans of the display 18 may include sensing only a portion of the pixels of the display 18 rather than all of the pixels of the display 18. For example, when a threshold period has elapsed for the second threshold, a scan may be initiated, but the scan type may depend upon whether a threshold period has elapsed for the first threshold. If the first threshold has elapsed, the scan may be completed for every pixel to generate a fine scan with a high spatial frequency pattern, but if only the second threshold has elapsed, the scan may include only the pixels that are to be included in the first memory rather than sampling a full scan.
Process-, system-, and/or environmentally induced panel non-uniformities may be corrected by providing an area-based dynamic display uniformity correction. This area-based display uniformity correction can be applied at particular locations of the display or across the entirety of the display. In some embodiments, a lookup table of correction values may be a reduced resolution correction map to allow for reduced power consumption and increased response times. Additional techniques are disclosed to allow for dynamic and/or local adjustments of the resolution of the lookup table (e.g., a correction map), which also may be globally or locally updated based on real time measurements of the display, one or more system sensors, and/or virtual measurements of the display (e.g., estimates of temperatures affecting a display generated from measurements of power consumption, currents, voltages, or the like).
Additionally, per-pixel compensation may use large amounts of storage memory and computing power. Accordingly, reduced-size representative values may be stored in a look-up table, whereby the representative values may subsequently be decompressed, scaled, interpolated, or otherwise converted for application to the input data of a pixel. Furthermore, the update rate for the display image data and/or the lookup table may be variable or set at a preset rate. Dynamic reference voltages may also be applied to pixels of the display in conjunction with the corrective measures described above.
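For illustration only, a minimal Python sketch of reading a reduced-resolution correction map and converting it for a single pixel follows; the map size, scale factor, and the choice of bilinear interpolation are hypothetical examples of the conversion techniques mentioned above.

```python
# Illustrative sketch only: bilinear interpolation of a coarse correction map
# (one entry per scale x scale block of pixels) at display coordinates (x, y).
import numpy as np

def correction_for_pixel(coarse_map, x, y, scale):
    u, v = x / scale, y / scale
    x0 = min(int(u), coarse_map.shape[1] - 1)
    y0 = min(int(v), coarse_map.shape[0] - 1)
    x1 = min(x0 + 1, coarse_map.shape[1] - 1)
    y1 = min(y0 + 1, coarse_map.shape[0] - 1)
    fx, fy = u - x0, v - y0
    top = coarse_map[y0, x0] * (1 - fx) + coarse_map[y0, x1] * fx
    bot = coarse_map[y1, x0] * (1 - fx) + coarse_map[y1, x1] * fx
    return top * (1 - fy) + bot * fy

coarse_map = np.random.rand(12, 8)            # small, low-resolution lookup table
delta = correction_for_pixel(coarse_map, x=100, y=55, scale=32)
```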
Additional compensation techniques related to adaptive correction of the display are also described. Pixel response (e.g., luminance and/or color) can vary due to component processing, temperature, usage, aging, and the like. In one embodiment, to compensate for non-uniform pixel response, a property of the pixel (e.g., a current or a voltage) may be measured and compared to a target value to generate a correction value using an estimated pixel response as a correction curve. However, mismatch between the correction curve and the actual pixel response due to panel variation, temperature, aging, and the like can cause correction error across the panel and can cause display artifacts, such as luminance disparities, color differences, flicker, and the like, to be present on the display.
Accordingly, pixel response to input values may be measured and checked for differences against a target response. Corrected input values may be transmitted to the pixel in response to any differences determined in the pixel response. The pixel response may be checked again and a second correction (e.g., an offset) may be additionally applied to ensure that any residual errors are accounted for. The aforementioned correction values may supplement values transmitted to the pixel so that a target response of the pixel to an input is generated. This process may be done at an initial time (e.g., when the display is manufactured, when the device is powered on, etc.) and then repeated at one or more times to account for time-varying factors. In this manner, to accommodate mismatches, a correction curve can be monitored continuously (or at predetermined intervals) in real time and adaptively adjusted on the fly to minimize correction error.
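For illustration only, a minimal Python sketch of the two-step correction described above (a first correction from the measured-versus-target difference, then a residual offset) follows; the callable interfaces and gain are hypothetical.

```python
# Illustrative sketch only: two-pass correction with a residual offset.
def two_step_correction(measure_fn, apply_fn, target, gain=1.0):
    # First pass: measure the pixel property (e.g., a current) and correct it.
    first = measure_fn()
    correction = gain * (target - first)
    apply_fn(correction)

    # Second pass: re-measure and fold any residual error into an offset so
    # the pixel response matches the target.
    second = measure_fn()
    residual_offset = target - second
    apply_fn(correction + residual_offset)
    return correction, residual_offset
```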
As shown in
The electronic display 18 includes an active area 2664 with an array of pixels 2666. The pixels 2666 are schematically shown distributed substantially equally apart and of the same size, but in an actual implementation, pixels of different colors may have different spatial relationships to one another and may have different sizes. In one example, the pixels 2666 may take a red-green-blue (RGB) format with red, green, and blue pixels, and in another example, the pixels 2666 may take a red-green-blue-green (RGBG) format in a diamond pattern. The pixels 2666 are controlled by a driver integrated circuit 2668, which may be a single module or may be made up of separate modules, such as a column driver integrated circuit 2668A and a row driver integrated circuit 2668B. The driver integrated circuit 2668 (e.g., 2668B) may send signals across gate lines 2670 to cause a row of pixels 2666 to become activated and programmable, at which point the driver integrated circuit 2668 (e.g., 2668A) may transmit image data signals across data lines 2672 to program the pixels 2666 to display a particular gray level (e.g., individual pixel brightness). By supplying different pixels 2666 of different colors with image data to display different gray levels, full-color images may be programmed into the pixels 2666. The image data may be driven to an active row of pixels 2666 via source drivers 2674, which are also sometimes referred to as column drivers.
As mentioned above, the pixels 2666 may be arranged in any suitable layout with the pixels 2666 having various colors and/or shapes. For example, the pixels 2666 may appear in alternating red, green, and blue in some embodiments, but also may take other arrangements. The other arrangements may include, for example, a red-green-blue-white (RGBW) layout or a diamond pattern layout in which one column of pixels alternates between red and blue and an adjacent column of pixels is green. Regardless of the particular arrangement and layout of the pixels 2666, each pixel 2666 may be sensitive to changes on the active area 2664 of the electronic display 18, such as variations in temperature of the active area 2664, as well as the overall age of the pixel 2666. Indeed, when each pixel 2666 is a light emitting diode (LED), it may gradually emit less light over time. This effect is referred to as aging, and takes place over a slower time period than the effect of temperature on the pixels 2666 of the electronic display 18.
Display panel sensing may be used to obtain the display sense feedback 2656, which may enable the processor core complex 12 to generate compensated image data 2652 to negate the effects of temperature, aging, and other variations of the active area 2664. The driver integrated circuit 2668 (e.g., 2668A) may include a sensing analog front end (AFE) 2676 to perform analog sensing of the response of pixels 2666 to test data. The analog signal may be digitized by sensing analog-to-digital conversion circuitry (ADC) 2678.
For example, to perform display panel sensing, the electronic display 18 may program one of the pixels 2666 with test data. The sensing analog front end 2676 then senses a sense line 2680 connected to the pixel 2666 that is being tested. Here, the data lines 2672 are shown to act as extensions of the sense lines 2680 of the electronic display 18. In other embodiments, however, the display active area 2664 may include other dedicated sense lines 2680, or other lines of the display 18 may be used as sense lines 2680 instead of the data lines 2672. Other pixels 2666 that have not been programmed with test data may be sensed at the same time as a pixel that has been programmed with test data. Indeed, by sensing a reference signal on a sense line 2680 when a pixel on that sense line 2680 has not been programmed with test data, a common-mode noise reference value may be obtained. This reference signal can be removed from the signal from the test pixel that has been programmed with test data to reduce or eliminate common mode noise.
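For illustration only, a minimal Python sketch of the common-mode noise removal described above follows, in which a reference reading from a sense line whose pixel was not programmed with test data is subtracted from the test-pixel reading; the array names and sample values are hypothetical.

```python
# Illustrative sketch only: noise coupled onto all sense lines appears in both
# captures and cancels in the difference.
import numpy as np

def remove_common_mode(test_samples, reference_samples):
    return np.asarray(test_samples, dtype=float) - np.asarray(reference_samples, dtype=float)

cleaned = remove_common_mode([1.02, 1.05, 0.98], [0.02, 0.05, -0.02])
```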
The analog signal may be digitized by the sensing analog-to-digital conversion circuitry 2678. The sensing analog front end 2676 and the sensing analog-to-digital conversion circuitry 2678 may operate, in effect, as a single unit. The driver integrated circuit 2668 (e.g., 2668A) may also perform additional digital operations, such as digital filtering, adding, or subtracting, to generate the display feedback 2656, or such processing may be performed by the processor core complex 12.
In some embodiments, a variety of sources can produce heat that could cause a visual artifact to appear on the electronic display 18 if the image data 2652 is not compensated for the thermal variations on the electronic display 18. For example, as shown in a thermal diagram 2690 of
As further illustrated in
As shown in
The correction map 2696 (or a portion thereof, for example, data corresponding to a particular region 2692), may be read from the memory of the image data generation and processing system 2650. The correction map 2696 (e.g., one or more correction values) may then (optionally) be scaled (represented by step 2700), whereby the scaling corresponds to (e.g., offsets or is the inverse of) a resolution reduction that was applied to the correction map 2696. In some embodiments, whether this scaling is performed (and the level of scaling) may be based on one or more input signals 2702 received as display settings and/or system information.
In step 2704, conversion of the correction map 2696 may be undertaken via interpolation (e.g., Gaussian, linear, cubic, or the like), extrapolation (e.g., linear, polynomial, or the like), or other conversion techniques being applied to the data of the correction map 2696. This may allow for accounting of, for example, boundary conditions of the correction map 2696 and may yield compensation driving data that may be applied to raw display content 2706 (e.g., image data) so as to generate compensated image data 2652 that is transmitted to the pixels 2666. A visual example of this process of step 2704 is illustrated in
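For illustration only, a minimal Python sketch of the scaling (step 2700) and conversion (step 2704) described above follows; the use of SciPy's cubic zoom, the scale factor, and the additive application to raw content are hypothetical choices.

```python
# Illustrative sketch only: scale a reduced-resolution correction map back to
# full resolution and apply it to raw display content.
import numpy as np
from scipy.ndimage import zoom          # cubic interpolation, one possible choice

def apply_correction_map(raw_content, correction_map, scale):
    full_res = zoom(correction_map, scale, order=3)                 # undo resolution reduction
    full_res = full_res[:raw_content.shape[0], :raw_content.shape[1]]
    return raw_content + full_res                                   # compensated image data

raw = np.full((128, 128), 200.0)         # stand-in for raw display content
coarse = np.random.rand(16, 16) * 4.0    # stored correction map
compensated = apply_correction_map(raw, coarse, scale=8)
```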
Returning to
Graph 2722 represents an update at time n+1 (corresponding to, for example, a second frame refresh). An additional new data value 2724 may be generated based on the display sense feedback 2656 during the update at time n+1. As part of the update of the correction map 2696, as illustrated in graph 2718, the new data value 2724 may be applied to current look up table values 2716 associated with (e.g., proximate to) the new data value 2724. This results in shifting of the look up table values 2716 corresponding to pixels 2666 affected by the condition represented by the new data value 2724 to generate corrected look up table values 2726 (illustrated along with the former look up table values 2716 that were adjusted). The illustrated update process in
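For illustration only, a minimal Python sketch of folding a newly sensed data value into nearby lookup-table entries during an update follows; the neighborhood width and blending weight are hypothetical and are not taken from the graphs described above.

```python
# Illustrative sketch only: shift only the LUT entries near the new data value,
# leaving entries for unaffected pixels unchanged.
import numpy as np

def update_lut(lut, index, new_value, radius=2, weight=0.5):
    lut = lut.copy()
    lo, hi = max(0, index - radius), min(len(lut), index + radius + 1)
    lut[lo:hi] = (1 - weight) * lut[lo:hi] + weight * new_value
    return lut

current_values = np.linspace(0.0, 1.0, 16)              # current look up table values
corrected_values = update_lut(current_values, index=6, new_value=0.9)
```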
In some embodiments, dynamic correction voltages may be provided to the pixels 2666 singularly and/or globally.
Some pixels 2666 may use one terminal for image dependent voltage driving and a different terminal for global reference voltage driving. Accordingly, as illustrated in
Other techniques for corrections of non-uniformity of a display are additionally contemplated. For example, as illustrated in graph 2734 of
Additionally, the property of the pixel 2666 (e.g., a current or a voltage) may be measured 2752 at a second time, yielding a second measurement 2746 that allows for a residual correction (e.g., curve offset 2752) to be additionally applied with the correction value 2750 to generate a panel curve 2754 that may be utilized (e.g., in conjunction with a lookup table) to apply the combined value of the correction value 2750 and the curve offset 2752 to, for example, raw display content 2706 (e.g., image data) so as to generate compensated image data 2652 that is transmitted to the pixels 2666 (e.g., the panel curve 2754 may be used to choose offset voltages to be applied to the raw display content 2706 based on a target current to be achieved). This process may be performed prior to or subsequent to the corrections discussed in conjunction with
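For illustration only, a minimal Python sketch of deriving a per-panel curve from two measurements (a first correction plus a residual offset) follows; the exponential pixel response model and the numeric values are assumptions made purely for illustration.

```python
# Illustrative sketch only: shift an estimated I-V correction curve using two
# measurements so the resulting panel curve can be used to pick drive offsets.
import numpy as np

def build_panel_curve(estimated_current_fn, v1, i1_measured, v2, i2_measured):
    correction = i1_measured - estimated_current_fn(v1)                  # first measurement
    residual = i2_measured - (estimated_current_fn(v2) + correction)     # second measurement
    return lambda v: estimated_current_fn(v) + correction + residual     # adapted panel curve

def estimated_current(v):
    return 1e-6 * np.exp(v / 0.2)        # assumed pixel response model

panel_curve = build_panel_curve(estimated_current, 1.0, 1.6e-4, 1.2, 4.3e-4)
```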
As illustrated in
The aforementioned process may be performed on the fly (e.g., the panel curve 2754 and/or the adapted panel curve 2764 can be continuously monitored in real time and/or in near real time and adaptively adjusted on the fly to minimize correction error). Likewise, this process may be performed at regular intervals (e.g., in connection with the refresh rate of the display 18) to allow for enhanced correction accuracy for pixel 2666 response estimation. In other embodiments, for example, in order to further enhance curve adaptation (e.g., the slope of the curve), the above adaptation procedure can be performed at multiple different current levels. Furthermore, as each pixel 2666 may have its own I-V (current-voltage) curve, the above noted process may be done for each pixel 2666 of the display.
Many electronic devices may use display panels to provide user interfaces. Such display panels may be pixel-based panels, such as light-emitting diode (LED) panels, organic light emitting diode (OLED) panels, and/or plasma panels. In these panels, each pixel may be driven individually by a display driver. For example, a display driver may receive an image to be displayed, determine what intensity each pixel of the display should display, and drive that pixel individually. Minor distinctions between circuitry of the pixels due to fabrication variations, aging effects, and/or degradation may lead to differences between a target intensity and the actual intensity. These differences may lead to non-uniformities in the panel. To prevent or reduce the effects of such non-uniformities, displays may be provided with sensing and processing circuitry that measures the actual intensity being provided by a pixel, compares the measured intensity to a target intensity, and provides a correction map to the display driver.
The sensing circuitry may be susceptible to errors. These errors may lead to generation of incorrect correction maps, which in turn may lead to overcorrection in the display. The accumulated errors due to overcorrections, as well as due to delays associated with this correction process, may lead to visible artifacts such as luminance jumps, screen flickering, and non-uniform flickering. Embodiments described herein are related to methods and systems that reduce visible artifacts and lead to a more comfortable interface for users of electronic devices. In some embodiments, sensing errors from sensor hysteresis are addressed. In some embodiments, sensing errors from thermal noise are addressed. Embodiments may include spatial filters, such as 2D filters, feedforward sensing, and partial corrections to reduce the presence of visible artifacts due to sensing errors.
Sensing data may be provided to a sensor data processing circuitry 2808 from the sensing circuitry 2806. The sensor data processing circuitry 2808 may compare the target intensities with the measured intensities to provide a correction map 2810. As detailed below, in some embodiments, the sensor data processing circuitry 2808 may include image filtering schemes. In some embodiments, the sensor data processing circuitry 2808 may include feedforward sensing schemes that may be associated with the provision of partial correction maps 2810. These schemes may substantially decrease visual artifacts generated by undesired errors introduced in the sensing circuitry 2806 and provide an improved user experience.
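For illustration only, a minimal Python sketch of forming a correction map by comparing measured intensities against target intensities follows; the relative-error form and the clipping limits are hypothetical.

```python
# Illustrative sketch only: positive entries mean a pixel is dimmer than
# intended and should be driven harder; negative entries mean the opposite.
import numpy as np

def correction_map(target, measured, eps=1e-6):
    error = (target - measured) / np.maximum(measured, eps)
    return np.clip(error, -0.25, 0.25)      # bound corrections for safety

target = np.full((4, 4), 100.0)
measured = target * (1.0 + 0.02 * np.random.randn(4, 4))
cmap = correction_map(target, measured)
```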
As discussed above, sensing errors from hysteresis effects appear as high frequency artifacts while sensing errors from thermal effects appear as low frequency artifacts. Suppression of the high frequency component of the error may be obtained by having the sensing data run through a low pass filter, which may decrease the amount of visible artifacts, as discussed below.
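For illustration only, a minimal Python sketch of running the sensing data through a low pass filter to suppress the high frequency component of the error follows; the Gaussian kernel and its width are hypothetical filter choices.

```python
# Illustrative sketch only: low-pass filter the sensed frame so that high
# spatial frequency sensing errors (e.g., from hysteresis) are suppressed.
import numpy as np
from scipy.ndimage import gaussian_filter

def low_pass_sensed_data(sensed, sigma=3.0):
    return gaussian_filter(sensed, sigma=sigma)

sensed = np.random.rand(64, 64)          # raw digitized sense results
filtered = low_pass_sensed_data(sensed)
```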
The charts in
Filtering of high frequency sensing errors may lead to a reduced impact on the visual experience for a user of an electronic device. The chart 2970 in
The schematic diagram 2990 of
As discussed above, some artifacts may be generated by an overcorrection of the display luminance due to faulty sensing data. In some situations, this overcorrection may be minimized by employing a partial correction scheme. In such situations, a partial correction map is calculated from the total correction map, which is based on the differences between target luminance and sensed luminance. This partial correction map is used by the display driver. A system that employs partial corrections may present a more gradual change in the luminance, and artifacts from sensing errors such as the ones discussed above may go unperceived by the user of the display. In some implementations, this scheme may use partial corrections to generate images in the display, but may use the total correction map for adjusting the sensed data. This strategy may be known as a feedforward sensing scheme. Feedforward sensing schemes may be useful as they allow faster convergence of the correction map to the total correction map.
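For illustration only, a minimal Python sketch of a partial correction with feedforward sensing follows, in which only a fraction of the total correction map is applied to displayed images while the full map adjusts the sensed data; the fraction and variable names are hypothetical.

```python
# Illustrative sketch only: partial correction on screen, total correction fed
# forward into the sensing path for faster convergence.
def partial_correction_step(total_map, image, sensed, alpha=0.25):
    displayed = image + alpha * total_map     # gradual, partial correction
    adjusted_sense = sensed - total_map       # feedforward-adjusted sensed data
    return displayed, adjusted_sense
```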
With the foregoing in mind,
In certain situations, the partial correction and feedforward sensing scheme may be added to a sensing and correction system, such as system 2800 in
The charts in
The charts illustrated in
The use of per-frame partial corrections is illustrated in chart 3412 of
Image artifacts due to thermal variations on an electronic display (e.g., an organic light emitting diode (OLED) display panel) can be corrected using external compensation (e.g., using processors) by adjusting image data based on a correction profile derived from a sensed thermal profile of the electronic display. The thermal profile is the actual distribution of heat inside the electronic display, and the correction profile is the sensed heating and a resulting image data correction for each heat level. For instance, higher thermal levels may cause pixels to display brighter in response to image data. Once these levels are sensed, the processor may create a correction profile based on the sensed data that inverts the expected changes based on the thermal profile and applies them to the image data so that the correction and the thermal variation cancel each other out, causing the image to appear as the image data was stored.
After power cycling, a residual (or pre-existing) thermal profile from previous usage can cause significant artifacts until an external compensation loop corrects the artifacts using processors external to the display. The processors may use the external compensation loop to generate the correction profile. In addition, any thermal variation built up while the display is off, such as from LTE usage, light, and ambient temperature, can also cause artifacts. In this warm boot-up condition, sensing of variation due to temperature and correction of image data may be performed quickly to minimize initial artifacts. On every power cycle, sensing and correction of the whole screen can be performed during the power-on sequence. This may take place even before the panel starts to display images or even establishes communication with the processors used to externally compensate for the thermal profile. Sensing and correction of the entire screen may involve programming the driving circuitry to conduct sensing after boot up, before establishing communication with the processors that would otherwise cause sensing during scanning phases of normal operation. Furthermore, since the scanning may be performed before establishment of communication with the processors for external compensation, sensing results may be stored in a local buffer (e.g., a group of line buffers) until communication with the processors 12 is established.
External or internal heat sources may heat at least a portion of the active area 3552. Operation of the electronic device 10 with the active area heated unevenly may result in display artifacts if these heat variations are not compensated for. For example, heat may change a threshold voltage of an access transistor of a respective pixel, causing power applied to the pixel to produce a different appearance than the same power would produce in adjacent pixels subject to a different amount of heat. During operation of the electronic device 10, compensation using the processors 12 may account for such artifacts due to ongoing sensing. However, during startup of the device 10, this external compensation may generally begin only after communication is established between the display 18 (e.g., the scanning driving circuitry 3556 and/or the scanning controller 3558) and the processors 12. During this startup time, if a thermal profile preexists the power cycle, the correction speed (e.g., τ=0.3 s) may be too slow to prevent a waving artifact issue.
Due to internal or external heat sources, heat in the regions 3610-3620 may vary throughout the active area 3552 due to light (e.g., sunlight), ambient air temperatures, and/or other outside heat sources. As illustrated, the region 3610 corresponds to a relatively high temperature. This temperature may correspond to a processing chip (e.g., camera chip, video processing chip) or other circuitry located underneath the active area 3552. When the electronic device 10 boots up while having the thermal profile 3600, the relatively high temperature of the region 3610 may result in an artifact, such as the artifact 3650 illustrated in
Furthermore, the thermal profile 3600 may be built prior to or during the power cycle. For example, heat may remain through the power cycle due to operation of the electronic device 10 during a previous ON state of the electronic device 10. Additionally or alternatively, the power cycle may correspond to only some portions of the electronic device 10 (e.g., the display 18) while other portions (e.g., the network interface 26, the I/O interface 24, and/or the power source 28) remain active and possibly generating heat. The thermal profile 3600 may be stored in the memory 14 upon shutdown of the previous ON state. However, this thermal profile 3600 is likely to change over time, and external compensation using the processors 12 is unlikely to be correct since the processors 12 may correct video data using a thermal profile 3600 that is no longer current. Thus, such embodiments may result in artifacts corresponding to an incorrect thermal profile. Instead, the thermal profile 3600 may be reset and correctly mapped during a sense phase of the display 18. However, the results of the sensing phase are generally sent to the processors 12 only after communication is established between the display 18 and the processors 12. In other words, sensing results traditionally reach the processors 12 at substantially the same time that the first image data is sent to the display 18 after start up, or after the first image data has been sent to the display 18.
As illustrated in
Furthermore, sensing of the pixels of the active area 3552 may include sensing only a portion of the pixels. For example, pixels in key locations, such as those near known heat sources, may be scanned. Additionally or alternatively, a sampling representative of the active area 3552 may be made. It is noted that the number of pixels scanned may be a function of available buffer space since the sensing data is stored in a local buffer (block 3706). The local buffer may be located in or near the scanning driving circuitry 3556 and/or the scanning controller 3558. The local buffer is used for boot up scanning since communication with the processors 12 has not been established in the boot up process before the sensing of pixels begins. As previously noted, the buffer size may be related to how many pixels are sensed during the sensing scan. For example, if only strategic locations are stored, the local buffer may include twenty line buffers, whereas over a thousand line buffers may be used if all pixels are sensed during the boot up scan.
Once communication is established between the display 18 and the processors 12, the sensing data is transferred to the processors 12 (block 3708). The processors 12 then modify image data to compensate for the potential artifacts (block 3710). For example, the image data may be modified to reduce luminance levels of pixels corresponding to locations indicating a relatively high temperature.
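For illustration only, a minimal Python sketch of a local boot-up scan buffer that is flushed to the host once communication with the processors is established follows; the buffer depth and interface names are hypothetical.

```python
# Illustrative sketch only: boot-up sensing results held in line buffers until
# a host link exists, then transferred for compensation.
from collections import deque

class BootScanBuffer:
    def __init__(self, max_lines=20):
        self.lines = deque(maxlen=max_lines)   # e.g., twenty line buffers

    def store(self, line_samples):
        """Called by the scan circuitry before any host link exists."""
        self.lines.append(line_samples)

    def flush(self, send_to_host):
        """Called once communication with the processors is established."""
        while self.lines:
            send_to_host(self.lines.popleft())
```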
Display panel quality and/or uniformity can be negatively affected by temperature. For example, as the temperature changes, a voltage (VHILO) across the high and low terminals of a light-emissive solid-state device may cause unintended variation of light emission from the light-emissive solid-state device. The light-emissive solid-state device may include an organic light emitting diode (OLED), a light emitting diode (LED), or the like. Herein, the following refers to an OLED, but some embodiments may include any other light-emissive solid-state devices.
Specifically, as the temperature changes in a pixel around the OLED, the voltage/current provided to the OLED by a corresponding driving transistor (e.g., a thin-film transistor (TFT)) fluctuates. Using a temperature index and a relationship between system temperature and a temperature of the OLED, VHILO may be predicted and compensated for even when direct measurement of the OLED temperature is impossible or impractical.
The brightness depicted by each respective pixel in the display 18 is generally controlled by varying an electric field associated with each respective pixel in the display 18. Keeping this in mind,
The self-emissive pixel array 3880 is shown having a controller 3884, a power driver 3886A, an image driver 3886B, and the array of self-emissive pixels 3882. The self-emissive pixels 3882 are driven by the power driver 3886A and image driver 3886B. Each power driver 3886A and image driver 3886B may drive one or more self-emissive pixels 3882. In some embodiments, the power driver 3886A and the image driver 3886B may include multiple channels for independently driving multiple self-emissive pixels 3882. The self-emissive pixels may include any suitable light-emitting elements, such as organic light emitting diodes (OLEDs), micro-light-emitting-diodes (μ-LEDs), and the like.
The power driver 3886A may be connected to the self-emissive pixels 3882 by way of scan lines S0, S1, . . . Sm-1, and Sm and driving lines D0, D1, . . . Dm-1, and Dm. The self-emissive pixels 3882 receive on/off instructions through the scan lines S0, S1, . . . Sm-1, and Sm and generate driving currents corresponding to data voltages transmitted from the driving lines D0, D1, . . . Dm-1, and Dm. The driving currents are applied to each self-emissive pixel 3882 to emit light according to instructions from the image driver 3886B through driving lines M0, M1, . . . Mn-1, and Mn. Both the power driver 3886A and the image driver 3886B transmit voltage signals through respective driving lines to operate each self-emissive pixel 3882 at a state determined by the controller 3884 to emit light. Each driver may supply voltage signals at a duty cycle and/or amplitude sufficient to operate each self-emissive pixel 3882.
The controller 3884 may control the color of the self-emissive pixels 3882 using image data generated by the processor core complex 12 and stored into the memory 14 or provided directly from the processor core complex 12 to the controller 3884. A sensing system 3888 may provide a signal to the controller 3884 to adjust the data signals transmitted to the self-emissive pixels 3882 such that the self-emissive pixels 3882 may depict substantially uniform color and luminance provided the same current input in accordance with the techniques that will be described in detail below.
With the foregoing in mind,
As shown in
In order to incorporate the sensing period 3902 into the progressive scans of the display 18, pixel driving circuitry may transmit data signals to pixels of each row of the display 18 and may pause its transmission of data signals during any portion of the progressive scan to determine the sensitivity properties of any pixel on any row of the display 18. Moreover, as sizes of displays decrease and smaller bezel or border regions are available around the display, integrated gate driver circuits may be developed using a similar thin film transistor process as used to produce the transistors of the pixels 3882. In some embodiments, the sensing periods may be between progressive scans of the display.
The output of the current source 3946 depends upon the voltage stored in the storage capacitor 3944. For example, the voltage stored in the storage capacitor 3944 may equal a gate-source voltage VGS of a TFT of the current source 3946. However, the voltage in the storage capacitor 3944 may change due to parasitic capacitances represented by the capacitor 3948. The amount of parasitic capacitance may change with temperature, which causes operation of the current source 3946 to vary, thereby causing changes in emission of the OLED 3942 based at least in part on temperature fluctuations. Temperature may also cause other fluctuations in the pixel current through the OLED 3942, such as fluctuations in operation of the TFTs making up the current source 3946 and/or operation of the OLED 3942 itself.
Furthermore, grayscale levels may also affect a change in an amount of shift in VHILO and its corresponding IOLED.
With this in mind, the pixel driving circuitry 3970 may include switches 3974, 3978, and 3980 along with a transistor 3976. These switches may include any type of suitable circuitry, such as transistors. Transistors (e.g., the transistor 3976) may include N-type and/or P-type transistors. That is, depending on the type of transistors used within the pixel driving circuitry 3970, the waveforms or signals provided to each transistor should be coordinated in a manner that causes the pixel driving circuitry 3970 to operate as intended.
As shown in
As illustrated in
where CGATE is the parasitic capacitance at the gate and CST is the capacitance of the storage capacitor 3988.
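The expression referenced above does not appear in the text as reproduced here. A plausible reconstruction, offered only as an assumption consistent with the surrounding description (increasing the storage capacitance reduces the sensitivity ratio) and not as the original formula, is the capacitive-divider form

\Delta V_{GS} \approx \Delta V_{HILO} \cdot \frac{C_{GATE}}{C_{GATE} + C_{ST}}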
Although the pixel sensitivity ratio may be reduced by increasing capacitance of the storage capacitor, size in the pixel control circuitry 3970 may be limited due to display size, compactness of pixels (i.e., pixels-per-inch), part costs, and/or other constraints. In other words, the VHILO sensitivity cannot be reasonably eliminated. Thus, in realistic situations, as previously discussed, VHILO may shift due to temperature and/or other causes.
In other words, this ΔVgs error is created by parasitic capacitance on the gate of the transistor 3976 in a source-follower-type pixel. In other embodiments, the error may be shifted around to other locations due to other parasitic capacitances.
To address these problems, a predictive VHILO model may be used to mitigate the effect of temperature on VHILO.
The processor core complex 12 then predicts a change in VHILO based at least in part on the indication of the temperature (block 4004). If the indication of temperature corresponds to an overall system temperature, the indication of temperature may be interpolated from a system temperature to a temperature for a pixel or group of pixels based on a location of the pixel or group of pixels relative to heat sources in the electronic device 10, operating states (e.g., camera running, high processor usage, etc.) of the electronic device, an outside temperature (e.g., received via the network interface(s) 26), and/or other temperature factors.
Using either the received indication directly or an interpolation based on the received indication, the prediction may be performed using a lookup table that has been populated using empirical data reflecting how ΔVHILO is related to temperature for the pixel in an array of pixels in a display panel, a grid of the panel, an entire panel, and/or a batch of panels. This empirical data may be derived at manufacture of the panels. In some embodiments, the empirical data may be captured multiple times and averaged together to reduce noise in the correlation between ΔVHILO and temperature. In some embodiments, instead of a lookup table with empirically derived data, the empirical data may be used to derive a transfer function that is formed from a curve fit of one or more empirical data gathering passes.
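For illustration only, a minimal Python sketch of predicting ΔVHILO from temperature using an empirically populated lookup table with interpolation between entries follows; the table contents are made-up values, not empirical data.

```python
# Illustrative sketch only: lookup-table prediction of the VHILO shift.
import numpy as np

temps_c = np.array([0.0, 20.0, 40.0, 60.0, 80.0])          # temperature grid (degrees C)
delta_vhilo_mv = np.array([-12.0, 0.0, 9.0, 21.0, 35.0])    # assumed shifts (mV)

def predict_delta_vhilo(temperature_c):
    """Interpolate the table; a fitted transfer function could be used instead."""
    return float(np.interp(temperature_c, temps_c, delta_vhilo_mv))

shift_mv = predict_delta_vhilo(47.5)
```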
As previously noted, in addition to temperature, ΔVHILO may depend on grayscale levels and/or the emission color of the OLED 3972. Thus, the prediction of the ΔVHILO may also be empirically gathered for color effects and/or grayscale levels. In other words, the predicted ΔVHILO may be based at least in part on the temperature, the (upcoming) grayscale level of the OLED 3972, the color of the OLED 3972, or any combination thereof.
The processor core complex 12 compensates a pixel voltage inside the pixel control circuitry 3970 based at least in part on the predicted ΔVHILO (block 4006). Compensation includes offsetting the voltage based on the predicted ΔVHILO by submitting a voltage having an opposite polarity but similar amplitude to the pixel voltage (e.g., VANODE). The compensation may also include compensating for other temperature-dependent (e.g., transistor properties) or temperature-independent factors. Furthermore, since variations at some grayscale levels are more likely to be visible due to human detection factors or properties of the grayscale level and ΔVHILO, in some embodiments, the compensation voltage may be applied for some grayscale level content but not applied for other grayscale level content.
The correlation model 4020 is used by the processor core complex 12 to predict VHILO (block 4026) based on the temperature index and a current ΔV as determined from a sensing control 4028 used to determine how to drive voltages for operating a pixel 4030. The sensing control 4028 is used to control voltages used during an emission state based on results of a sensing phase. Additionally or alternatively, a transfer function may be used based on the temperature index and ΔV. This prediction may be made using a first lookup table that converts ΔV and a temperature index to a predicted ΔVHILO. The predicted ΔVHILO is then used to determine a VSENSE level that is used in a sensing state to offset the ΔVHILO, using the processor to access a second lookup table (block 4032). Additionally or alternatively, a transfer function may be used from ΔVHILO to determine the VSENSE compensating for the ΔVHILO.
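For illustration only, a minimal Python sketch of the two-stage lookup described above follows, with a first table indexed by temperature index and ΔV producing a predicted ΔVHILO and a second table mapping ΔVHILO to a VSENSE level; all table contents and the nearest-entry indexing are hypothetical.

```python
# Illustrative sketch only: two-stage table lookup for VSENSE determination.
import numpy as np

temp_axis = np.array([0, 1, 2, 3])                  # temperature index bins
dv_axis = np.array([0.00, 0.05, 0.10, 0.15])        # sensed delta-V bins (V)
table1 = np.array([[0.00, 0.01, 0.02, 0.03],        # predicted dVHILO (V)
                   [0.01, 0.02, 0.03, 0.05],
                   [0.02, 0.04, 0.06, 0.08],
                   [0.03, 0.06, 0.09, 0.12]])

dvhilo_axis = np.array([0.00, 0.04, 0.08, 0.12])    # second lookup input
vsense_table = np.array([1.00, 0.96, 0.92, 0.88])   # VSENSE levels (V)

def predict_vsense(temp_index, delta_v):
    i = int(np.argmin(np.abs(temp_axis - temp_index)))           # nearest temperature bin
    j = int(np.argmin(np.abs(dv_axis - delta_v)))                 # nearest delta-V bin
    d_vhilo = table1[i, j]                                        # first lookup
    return float(np.interp(d_vhilo, dvhilo_axis, vsense_table))   # second lookup

vsense = predict_vsense(temp_index=2, delta_v=0.07)
```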
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
This application claims priority to and benefit from U.S. Provisional Application No. 62/394,595, filed Sep. 14, 2016, entitled “Systems and Methods for In-Frame Sensing and Adaptive Sensing Control”; U.S. Provisional Application No. 62/483,237, filed Apr. 7, 2017, entitled “Sensing Considering Image”; U.S. Provisional Application No. 62/396,659, filed Sep. 19, 2016, entitled “Low-Visibility Display Sensing;” U.S. Provisional Application No. 62/397,845, filed Sep. 21, 2016, entitled “Noise Mitigation for Display Panel Sensing;” U.S. Provisional Application No. 62/398,902, filed Sep. 23, 2016, entitled “Edge Column Differential Sensing Systems and Methods;” U.S. Provisional Application No. 62/483,264, filed Apr. 7, 2017, entitled “Device And Method For Panel Conditioning;” U.S. Provisional Application No. 62/511,812, filed May 26, 2017, entitled “Common-Mode Noise Compensation;” U.S. Provisional Application No. 62/396,538, filed Sep. 19, 2016, entitled “Dual-Loop Display Sensing For Compensation;” U.S. Provisional Application No. 62/399,371, filed Sep. 24, 2016, entitled “Display Adjustment;” U.S. Provisional Application No. 62/483,235, filed Apr. 7, 2017, entitled “Correction Schemes For Display Panel Sensing;” U.S. Provisional Application No. 62/396,547, filed Sep. 19, 2016, entitled “Power Cycle Display Sensing”; and U.S. Provisional Application No. 62/511,818, filed on May 26, 2017, entitled “Predictive Temperature Compensation”; the contents of which are incorporated by reference in their entirety for all purposes.
Number | Date | Country
--- | --- | ---
62394595 | Sep 2016 | US
62483237 | Apr 2017 | US
62396659 | Sep 2016 | US
62397845 | Sep 2016 | US
62398902 | Sep 2016 | US
62483264 | Apr 2017 | US
62511812 | May 2017 | US
62396538 | Sep 2016 | US
62399371 | Sep 2016 | US
62483235 | Apr 2017 | US
62396547 | Sep 2016 | US
62511818 | May 2017 | US