A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
Pixel-based display panels may generate images by providing driving signals (e.g., a voltage or a current) to the individual pixels of the display. Due to inhomogeneities across the pixels of a display, the brightness produced by a pixel in response to a specific electrical signal may vary from pixel to pixel. Compensation circuitry that receives data from sensing circuitry may be used to correct the driving signals and prevent image artifacts from appearing. However, coupling the sensing circuitry to a pixel may change the electrical characteristics of the pixel circuitry. To account for the changes caused by the presence of the sensing circuitry, such differences may be calibrated and correction factors may be programmed into the compensation circuitry of the display. Embodiments described herein include systems and methods that are capable of performing such calibrations and employing the correction factors during compensation using measurements from the sensing circuitry. The use of the embodiments described herein may improve the quality of the images provided by the display.
In one embodiment, an electronic device is described. The electronic device may include a pixel panel having multiple pixels, sensing circuitry that can be coupled to or decoupled from the pixels, and compensation circuitry that may process image signals for the pixels. Processing of the image signals may use data including an image signal received from processing circuitry of the electronic device and measurements received from the sensing circuitry. The compensation circuitry may also employ a correction formula that may use the received image signal, the received measurements, and a correction factor that is calculated to compensate for an effect of the sensing circuitry on the pixels. Using the received data, as well as the correction factor, the compensation circuitry may generate a compensated signal, which may be provided to the pixel.
In another embodiment, a method for calibration is described. The method may include a determination of a current-voltage characteristic for pixels of the pixel panel in a condition in which the sensing circuitry is not coupled to the pixels or does not affect the pixels. The method may also include a determination of a current-voltage characteristic for pixels of the pixel panel in a condition in which the sensing circuitry is coupled to the pixels. Based on the two current-voltage characteristics, a correction factor may be determined. The correction factor may be stored in compensation circuitry of the pixel panel and may be used as part of a formula for compensation of signals.
In another embodiment, a method for compensating brightness in a pixel panel is described. The method may include receiving a driving signal from processing circuitry that is expected to generate a target current in the pixel, the target current being associated with a target brightness for the pixel. The method may also include receiving a measurement of an actual current generated in the pixel in response to the driving signal. The method may also include generating a compensated signal that takes into account a difference between the target current and the actual current in the pixel, as well as a correction factor that may be stored in the compensation circuitry. The correction factor may be calculated to compensate for the impact of the sensing circuitry on the pixel. The compensated signal may be provided to the pixel.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
Many electronic devices may use display panels to display images or provide user interfaces. The displays may be line-based displays, such as cathode-ray tube (CRT) displays, or pixel-based displays, such as light-emitting diode (LED) displays, organic LED (OLED) displays, active-matrix OLED (AMOLED) displays, electronic-ink displays, and electronic paper displays, among others. Pixel-based displays may operate by means of driving circuitry (or circuitries) that provides an electrical signal (e.g., a current or a voltage) to each pixel. In response to the electrical signal, the circuitry of each pixel may provide a specific level of brightness or color. For example, in LED displays, each pixel circuitry may receive a voltage corresponding to a target brightness and may drive a current through the LED. In this example, the brightness of the pixel may be associated with the current passing through the LED.
Many electronic devices, such as televisions, smartphones, computer panels, smartwatches, and automobile dashboards, among others, include electronic displays that can display content and provide user interfaces. The electronic displays may employ pixel panels, which may be operatively coupled to image generation circuitry in the electronic device. The electronic display may receive image data from image generation circuitry or processing circuitry and generate driving signals for the individual pixels in the pixel panel. As an example, in panels using pixels formed from light-emitting diodes (LEDs) or organic light-emitting diodes (OLEDs), pixel-driving circuitry in the display may receive image data, set a target pixel brightness for each pixel, and form the image by providing a voltage signal to the individual pixels. The current induced through the LED or OLED in response to the voltage signal may produce the target brightness.
Due to variations that may arise from, for example, fabrication artifacts, component age, temperature, humidity variations, or material variations, different pixels may respond differently to the driving signals. For example, in an OLED-based pixel panel, different OLED pixel circuits may induce different currents in response to a given input voltage. To correct such errors, pixel circuitry may be coupled to sensing circuitry, and the data generated by the sensing circuitry may be used to adjust the input voltage. The use of compensation circuitry may improve the quality of images and prevent artifacts in the display panel due to pixel inhomogeneities across the display.
As an example of inhomogeneities in a display, consider an OLED-based panel in which each pixel is driven using a voltage signal received from the driver. A transistor associated with the pixel may receive the voltage signal and may drive a current through the OLED of the pixel. The brightness of the OLED pixel may be proportional to the source current (IS), which may be determined, among other things, by the gate-source voltage (VGS) of the transistor and the impedance presented by the OLED. The relationship between the VGS and the IS (i.e., the IV characteristic of the pixel circuitry) may differ across the display panel due to differences among the transistors or the OLEDs.
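As one illustrative example, and not as a limitation of the embodiments, a driving transistor operating in saturation may be modeled by the conventional square-law relationship IS ≈ (β/2)(VGS − Vth)², in which β is a transconductance parameter and Vth is the threshold voltage of the transistor. Under such a model, pixel-to-pixel variations in β or Vth would produce different source currents, and therefore different brightness levels, for the same VGS.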
In order to prevent variations in the IV characteristic from causing visual artifacts in the display panel, compensation systems may be used. Compensation systems may include sensing circuitry that can measure the actual source current IS obtained in response to the input electrical signal, and compensation circuitry that may use the measured IS to adjust the input electrical signal based on the measurements. However, the IV characteristic of the pixel circuitry during sensing may be different from the IV characteristic of the pixel circuitry under normal conditions (i.e., when not sensing). This may be caused, for example, by the impact of the coupling to the sensing circuitry, as detailed below. This effect may impact the quality of the compensation system in the display panel. The embodiments of the present application detailed below include methods and systems that take into account the impact of the sensing circuitry to perform a calibration and generate a correction factor for compensation systems. Detailed below are method and system embodiments that employ the calibrated correction factor to improve the image quality of the display.
With the foregoing in mind, a general description of suitable electronic devices with reduced bezel dimensions that may use compensation circuitry for pixels, as discussed herein, is provided below. Turning first to
By way of example, the electronic device 10 may represent a block diagram of the notebook computer depicted in
In the electronic device 10 of
In certain embodiments, the display 18 may be a liquid-crystal display (LCD), which may allow users to view images generated on the electronic device 10. In some embodiments, the display 18 may include a touch screen, which may allow users to interact with a user interface of the electronic device 10. Furthermore, it should be appreciated that, in some embodiments, the display 18 may include one or more organic light emitting diode (OLED) displays, or some combination of LCD panels and OLED panels. The display 18 may receive images, data, or instructions from the processor(s) 12 or the memory 14, and provide an image on the display 18 for interaction.
The input structures 22 of the electronic device 10 may enable a user to interact with the electronic device 10 (e.g., pressing a button to increase or decrease a volume level). The I/O interface 24 may enable the electronic device 10 to interface with various other electronic devices, as may the network interface 26. The network interface 26 may include, for example, one or more interfaces for a personal area network (PAN), such as a Bluetooth network, for a local area network (LAN) or wireless local area network (WLAN), such as an 802.11x Wi-Fi network, and/or for a wide area network (WAN), such as a 3rd generation (3G) cellular network, 4th generation (4G) cellular network, long term evolution (LTE) cellular network, or long term evolution license assisted access (LTE-LAA) cellular network. The network interface 26 may also include one or more interfaces for, for example, broadband fixed wireless access networks (WiMAX), mobile broadband wireless networks (mobile WiMAX), asynchronous digital subscriber lines (e.g., ADSL, VDSL), digital video broadcasting-terrestrial (DVB-T) and its extension DVB Handheld (DVB-H), ultra-wideband (UWB), alternating current (AC) power lines, and so forth. As further illustrated, the electronic device 10 may include a power source 28. The power source 28 may include any suitable source of power, such as a rechargeable lithium polymer (Li-poly) battery and/or an alternating current (AC) power converter.
In certain embodiments, the electronic device 10 may take the form of a computer, a portable electronic device, a wearable electronic device, or other type of electronic device. Such computers may include computers that are generally portable (such as laptop, notebook, and tablet computers) as well as computers that are generally used in one place (such as conventional desktop computers, workstations, and/or servers). In certain embodiments, the electronic device 10 in the form of a computer may be a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® mini, or Mac Pro® available from Apple Inc.
By way of example, the electronic device 10, taking the form of a notebook computer 10A, is illustrated in
User input structures 22, in combination with the display 18, may allow a user to control the handheld device 10B. For example, the input structures 22 may activate or deactivate the handheld device 10B, navigate a user interface to a home screen or a user-configurable application screen, and/or activate a voice-recognition feature of the handheld device 10B. Other input structures 22 may provide volume control or may toggle between vibrate and ring modes. The input structures 22 may also include a microphone that may obtain a user's voice for various voice-related features, and a speaker that may enable audio playback and/or certain phone capabilities. The input structures 22 may also include a headphone input that may provide a connection to external speakers and/or headphones.
Turning to
Similarly,
As shown in
The scan lines S0, S1, . . . , and Sm and driving lines D0, D1, . . . , and Dm may connect the power driver 86A to the pixels 82. The pixels 82 may receive on or off instructions through the scan lines S0, S1, . . . , and Sm and may generate programming voltages corresponding to data voltages transmitted from the driving lines D0, D1, . . . , and Dm. The programming voltages may be transmitted to each of the pixels 82 and cause emission of light according to instructions from the image driver 86B through driving lines M0, M1, . . . , and Mn. Both the power driver 86A and the image driver 86B may transmit voltage signals at programmed voltages through respective driving lines to operate each pixel 82 at a state determined by the controller 84 to emit light. Each driver may supply voltage signals at a duty cycle or amplitude sufficient to operate each pixel 82.
The target brightness of each of the pixels 82 may be defined by the received image data. In this way, a pixel 82 may emit a first brightness of light in response to a first value of the image data and may emit a second brightness of light in response to a second value of the image data. Thus, image data may form images by generating driving signals for each individual pixel 82 that cause the individual pixels 82 to provide the target brightness.
The controller 84 may retrieve image data stored in the storage device(s) 14 indicative of the target brightness for the colored light outputs of individual pixels 82. In some embodiments, the processing circuit(s) 12 may provide image data directly to the controller 84. The controller 84 may coordinate the signals provided to each pixel 82 from the power driver 86A or the image driver 86B. The pixel 82 may include pixel circuitry, which may include a controllable element, such as a transistor, one example of which is a MOSFET. The pixel circuitry may process the signals received from the power driver 86A or the image driver 86B and may generate the target brightness. However, any other suitable type of controllable element, including thin film transistors (TFTs), p-type and/or n-type MOSFETs, and other transistor types, may also be used.
The diagram in
To perform the measurement and obtain the measurement data 110, sensing circuitry 108 may be coupled to the pixels of the pixel panel 102 through an electrical coupling 112. The electrical coupling 112 may be configurable (e.g., switchable), such that the sensing circuitry 108 is coupled to the pixel circuitry during sensing, and uncoupled from the pixel circuitry during normal operations. However, as discussed above and detailed below, the sensing circuitry 108 may, through the electrical coupling 112, impact the IV characteristic of the pixel circuitry in the pixel panel 102. To compensate for differences in the IV characteristics between sensing and normal conditions, the compensation circuitry 106 may employ a correction factor in the compensation of the driving signal, which is detailed below.
The chart 130 in
As illustrated in chart 130, during sensing conditions 138, the IV characteristic may be shifted up relative to the normal conditions 136. For example, at a voltage of approximately 1.5V (voltage 140), the pixel current may be approximately 22 nA in normal conditions 136 and may be 31 nA in sensing conditions 138, resulting in a shift 142 of approximately 9 nA. As a result of the shift 142, a system employing data obtained during sensing may underestimate the VGS 134 required to produce a particular IS 132. Moreover, chart 130 illustrates that the difference is not uniform. As illustrated, at a voltage of approximately 2.5V (voltage 144), the pixel current may be approximately 80 nA in normal conditions 136, but may be 140 nA in sensing conditions 138, leading to a shift 146 at voltage 144 that is substantially different from the shift 142 at voltage 140. Therefore, the compensation strategy may benefit from employing a content-dependent (e.g., current-dependent, voltage-dependent, or brightness-dependent) correction factor.
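As a minimal illustration, and not as part of the disclosed embodiments, of how a content-dependent correction might be applied in software, the following Python sketch interpolates a correction term from a small table indexed by the target current, so that the correction applied at low currents differs from the correction applied at high currents. The table values, function names, and use of linear interpolation are hypothetical placeholders.

```python
# Hypothetical sketch: a current-dependent correction term obtained by
# linear interpolation over a small calibration table. The table values
# are placeholders, not measured data.
from bisect import bisect_left

# (target current in nA, correction in volts) -- placeholder calibration points
CORRECTION_TABLE = [(10.0, 0.02), (50.0, 0.05), (100.0, 0.09), (200.0, 0.15)]

def correction_for_current(target_current_na: float) -> float:
    """Linearly interpolate a correction term for the requested pixel current."""
    currents = [c for c, _ in CORRECTION_TABLE]
    if target_current_na <= currents[0]:
        return CORRECTION_TABLE[0][1]
    if target_current_na >= currents[-1]:
        return CORRECTION_TABLE[-1][1]
    i = bisect_left(currents, target_current_na)
    (c0, v0), (c1, v1) = CORRECTION_TABLE[i - 1], CORRECTION_TABLE[i]
    frac = (target_current_na - c0) / (c1 - c0)
    return v0 + frac * (v1 - v0)

# Example: the correction used for a 22 nA target differs from that for 80 nA.
print(correction_for_current(22.0), correction_for_current(80.0))
```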
One cause for the impact of the sensing circuitry on the measurements is illustrated in
VGS = Vref − VdataS + k(VdataS − Vsense). (1)
In the above equation, as well as in the following descriptions, k is determined by the voltage divider expression:
The diagram in
As a result, the VGS 158 in pixel 150 in the
VGS = Vref − VdataD + k(VdataD − Vanode). (3)
Note that the Vanode 178 may be different from the Vsense 168. As a result, if VdataS 166 and VdataD 176 are equal, the current IS, and thus the brightness, may be different. In order to prevent the difference in brightness, and because the VGS 158 may determine the brightness of the pixel, the VGS expressions (1) and (3) may be equated to identify a calibration curve, or compensation curve. To that end, an expression for the input voltage under normal conditions, VdataD 176, as a function of VdataS 166 may be identified as:
In the above expression, VOLED(I) corresponds to the correction applied in view of the current going through the OLED, and VSSEL(DBV) corresponds to a baseline or bias voltage that may be associated with the global display brightness level. The above expression allows calculation of the VdataD 176 that should be used under normal conditions to obtain a target brightness when the VdataS 166 is the voltage that provided that brightness during sensing.
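As a minimal algebraic sketch, and not the expression of the specification itself, equating expressions (1) and (3) and solving for VdataD gives VdataD = VdataS + [k/(1 − k)](Vsense − Vanode). In this simplified form, the offset between the normal-condition data voltage and the sensing-condition data voltage is governed by the factor k and by the difference between the Vsense 168 and the Vanode 178; under one reading, the VOLED(I) and VSSEL(DBV) terms noted above may describe the Vanode 178 contribution in the full expression.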
Diagram 200 in
The characterization described in diagram 200 may be performed during production of the electronic device 10 (e.g., during manufacturing, testing, or quality control), and the identified correction term 222 may be stored in the compensation circuitry or in a memory of the electronic device 10. The calibration and the correction term 222 may be generated automatically by a calibration electronic device. Such a calibration electronic device may include, or be coupled to, low-impedance current sensors, brightness sensors, or any other instrument capable of measuring currents or brightness without affecting any biasing voltage in the pixel circuitry. In some electronic devices 10, the calibration device may be included, and may be configured to perform the characterization process described in diagram 200 periodically (e.g., after a time period established by a wall clock, after a number of initializations, or after a number of hours of uptime of the device) to recalculate the correction term 222 and incorporate variations resulting from regular usage of the display after the initial programming of the compensation circuitry.
The process described by diagram 200 employs a spatial averaging 212. As a result, the correction term 222 described may be specific to a region of the display panel. For example, a display panel having 1920×1080 pixels may be divided into 200 regions in a 20×10 grid with 96×108 pixels in each region, and the compensation circuitry may store one correction term 222 for each region. In some embodiments, the process illustrated by diagram 200 may bypass the spatial averaging 212, and the compensation circuitry may perform compensation on an individual pixel basis.
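As a minimal sketch of the per-region storage described above, assuming only the example 20×10 division (the grid layout, variable names, and function name below are hypothetical and not part of the embodiments), compensation logic might map a pixel coordinate to its region and retrieve the stored correction term as follows:

```python
# Hypothetical sketch: mapping a pixel coordinate to its region in a 20x10
# grid over a 1920x1080 panel (96x108 pixels per region) and retrieving the
# per-region correction term stored by the compensation circuitry.
PANEL_W, PANEL_H = 1920, 1080
GRID_COLS, GRID_ROWS = 20, 10
REGION_W, REGION_H = PANEL_W // GRID_COLS, PANEL_H // GRID_ROWS  # 96 x 108

# One correction term per region; the values here are placeholders.
correction_grid = [[0.0 for _ in range(GRID_COLS)] for _ in range(GRID_ROWS)]

def correction_for_pixel(x: int, y: int) -> float:
    """Return the correction term of the region containing pixel (x, y)."""
    col = min(x // REGION_W, GRID_COLS - 1)
    row = min(y // REGION_H, GRID_ROWS - 1)
    return correction_grid[row][col]

# Example: pixel (1000, 500) falls in column 10, row 4 of the grid.
print(correction_for_pixel(1000, 500))
```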
In the above expression, the discrete differences ΔVdata (i.e., the differential data voltage) and ΔVsense (i.e., the differential sense voltage) are calculated with respect to a baseline data voltage Vdata and a baseline sense voltage Vsense that provide a matching current IS, and for which ΔVdata and ΔVsense lead to a similar change in current. The diagram on
In the first stage 252, the pixel circuitry may be set to a baseline iteratively. The iterations loop between steps 262 and 264. A Vsense voltage may be set in step 264. With the Vsense voltage set, a search for the Vdata voltage that reaches a target current may be applied in step 262. The search for Vdata may proceed by testing voltages over a range of values. In the second stage 254, the pixel circuitry may iterate between steps 272 and 274. A Vsense+ΔVsense voltage may be set in step 274. In step 272, a search may be performed for a ΔVdata such that Vdata+ΔVdata causes the pixel to provide the target current. The correction factor may then be calculated by the expression (5), shown above. The search for ΔVdata may proceed by testing voltages over a range of values. Stages 252 and 254 may be repeated for multiple baselines of Vsense and Vdata.
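A minimal Python sketch of stages 252 and 254 is shown below. The helper callables for setting voltages and measuring current (set_vsense, set_vdata, measure_current) are hypothetical stand-ins for the panel and sensing interfaces, the search simply tests voltages over a range of values as described above, and the final formula is an assumption consistent with expression (1) at constant VGS; expression (5) in the specification may take a different form.

```python
# Hypothetical sketch of the two-stage calibration of stages 252 and 254.
# The helper callables stand in for actual panel/sensing hardware interfaces.

def search_vdata(target_current_na, set_vdata, measure_current,
                 v_min=0.0, v_max=5.0, steps=500):
    """Sweep the data voltage over a range of values and return the voltage
    whose measured pixel current is closest to the target current."""
    best_v, best_err = v_min, float("inf")
    for i in range(steps + 1):
        v = v_min + i * (v_max - v_min) / steps
        set_vdata(v)
        err = abs(measure_current() - target_current_na)
        if err < best_err:
            best_v, best_err = v, err
    return best_v

def calibrate_correction_factor(target_current_na, vsense, delta_vsense,
                                set_vsense, set_vdata, measure_current):
    # Stage 252 (steps 262/264): set the baseline Vsense and search for the
    # baseline Vdata that produces the target current.
    set_vsense(vsense)
    vdata = search_vdata(target_current_na, set_vdata, measure_current)
    # Stage 254 (steps 272/274): set Vsense + dVsense and search for the
    # Vdata + dVdata that restores the same target current.
    set_vsense(vsense + delta_vsense)
    vdata_shifted = search_vdata(target_current_na, set_vdata, measure_current)
    delta_vdata = vdata_shifted - vdata
    # Assumed correction-factor form, consistent with expression (1) when VGS
    # is held constant; expression (5) in the specification may differ.
    return delta_vdata / (delta_vdata - delta_vsense)
```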
Stages 252 and 254 may be performed with every pixel of the display panel or may be implemented over a sparse subset of the pixels. The third stage 256 illustrates a sparse implementation. The sampling 282 illustrates a division that may be used for sparse calibration. For example, in a panel having 1920×1080 pixels, the display may be divided into 200 regions in a 20×10 grid with 96×108 pixels in each region, and stages 252 and 254 may be performed in one or a few pixels for each region. The correction factors for the tested pixels in each region may then be averaged (process 284) to produce a grid 286 of correction factors. The correction factor for each region of the grid 286 may be applied to all pixels of the region. The data for each region of the grid may be stored in the compensation circuitry, as discussed above.
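As a minimal sketch of process 284 (the data layout and names below are hypothetical), the correction factors of the sampled pixels in each region may be averaged into the per-region grid 286 as follows:

```python
# Hypothetical sketch of process 284: averaging the correction factors of the
# sampled pixels in each region to produce one value per region of grid 286.
from collections import defaultdict

def build_correction_grid(samples, grid_rows=10, grid_cols=20):
    """samples: iterable of (row, col, correction_factor) for sampled pixels,
    where (row, col) identifies the region containing the sampled pixel."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for row, col, k in samples:
        sums[(row, col)] += k
        counts[(row, col)] += 1
    grid = [[0.0] * grid_cols for _ in range(grid_rows)]
    for (row, col), total in sums.items():
        grid[row][col] = total / counts[(row, col)]
    return grid

# Example: two sampled pixels in region (0, 0) are averaged.
grid = build_correction_grid([(0, 0, 0.12), (0, 0, 0.10), (3, 7, 0.09)])
print(grid[0][0])  # approximately 0.11
```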
In process 306, a correction factor may be determined based on the data obtained in processes 302 and 304. It should be noted that the correction factor determination in process 306 may be integral to processes 302 and 304. For example, the calibration process may perform processes 302 and 304 simultaneously and may determine the correction factor using process 306 at the same time, without long-term storage of the intermediate values. As discussed above, the correction factor calculated in process 306 may be used to provide improved images in process 308. Process 308 may include programming the compensation circuitry using the calculated correction factor. As discussed above, the compensation circuitry may employ the correction factor for each individual pixel or for all pixels in a region. The grouping of pixels into regions may be based on spatial location, as discussed above.
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ” it is intended that such elements are to be interpreted under 35 U.S.C. § 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. § 112(f).
This application claims priority from and the benefit of U.S. Provisional Application Ser. No. 62/669,898, entitled “EXTERNAL COMPENSATION FOR DISPLAYS USING SENSING AND EMISSION DIFFERENCES,” filed May 10, 2018, which is hereby incorporated by reference in its entirety for all purposes.