Embodiments of the disclosure generally relate to displays and, more particularly, to mura correction for a light-emitting diode (LED) display.
In a light-emitting diode (LED) display, variations of the thin-film transistors (TFTs) in each sub-pixel, and variations of the sub-pixels themselves, cause visually detectable non-uniformities in displayed images. Manufacturers rely on an externally measured luminosity of each sub-pixel as a function of grey level to program their display drivers to equalize the luminance of all sub-pixels and thereby compensate for the non-uniformities. Present equalization techniques may fail to fully compensate for the non-uniformity when parasitic resistances are present in the sub-pixels.
In an embodiment, a processing system for driving a display having light-emitting diode (LED) sub-pixels is described. The processing system comprises a memory that stores a first plurality of values for a first LED sub-pixel of the LED sub-pixels and a processing circuit coupled to the memory. The processing circuit is configured to: receive image data for the display, the image data comprising a first input digital code for the first LED sub-pixel; and generate a first output digital code using a non-linear function having an argument and a plurality of constant parameters, where the processing circuit is configured to set the argument to the first input digital code and the plurality of constant parameters to the first plurality of values. The processing system further comprises a display driver, coupled to the display and the processing circuit, comprising a first digital-to-analog converter (DAC) configured to control intensity of emitted light from the first LED sub-pixel in response to the first output digital code.
In another embodiment, an input device comprises a display having light-emitting diode (LED) sub-pixels and a processing system coupled to the display. The processing system comprises a memory that stores a first plurality of values for a first LED sub-pixel of the LED sub-pixels and a processing circuit coupled to the memory. The processing circuit is configured to: receive image data for the display, the image data comprising a first input digital code for the first LED sub-pixel; and generate a first output digital code using a non-linear function having an argument and a plurality of constant parameters, where the processing circuit is configured to set the argument to the first input digital code and the plurality of constant parameters to the first plurality of values. The processing system further comprises a display driver, coupled to the display and the processing circuit, comprising a first digital-to-analog converter (DAC) configured to control intensity of emitted light from the first LED sub-pixel in response to the first output digital code.
In another embodiment, a method of driving a display comprising light-emitting diode (LED) sub-pixels is described. The method comprises receiving image data for the display, the image data comprising a first input digital code for a first LED sub-pixel of the LED sub-pixels; obtaining a first plurality of values for the first LED sub-pixel from a memory; generating, using a processing circuit coupled to the memory, a first output digital code using a non-linear function having an argument and a plurality of constant parameters, where the processing circuit is configured to set the argument to the first input digital code and the plurality of constant parameters to the first plurality of values; and supplying the first output digital code to a first digital-to-analog converter (DAC) configured to control intensity of emitted light from the first LED sub-pixel in response to the first output digital code.
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. The drawings referred to here should not be understood as being drawn to scale unless specifically noted. Also, the drawings are often simplified and details or components omitted for clarity of presentation and explanation. The drawings and discussion serve to explain principles discussed below, where like designations denote like elements.
The input device 100 can be implemented as a physical part of the electronic system, or can be physically separate from the electronic system. As appropriate, the input device 100 may communicate with parts of the electronic system using any one or more of the following: buses, networks, and other wired or wireless interconnections. Examples include I2C, SPI, PS/2, Universal Serial Bus (USB), Bluetooth, RF, and IRDA.
In
The sensing region 120 encompasses any space above, around, in and/or near the input device 100 in which the input device 100 is able to detect user input (e.g., user input provided by one or more input objects 140). The sizes, shapes, and locations of particular sensing regions may vary widely from embodiment to embodiment. In some embodiments, the sensing region 120 extends from a surface of the input device 100 in one or more directions into space until signal-to-noise ratios prevent sufficiently accurate object detection. The distance to which this sensing region 120 extends in a particular direction, in various embodiments, may be on the order of less than a millimeter, millimeters, centimeters, or more, and may vary significantly with the type of sensing technology used and the accuracy desired. Thus, some embodiments sense input that comprises no contact with any surfaces of the input device 100, contact with an input surface (e.g. a touch surface) of the input device 100, contact with an input surface of the input device 100 coupled with some amount of applied force or pressure, and/or a combination thereof. In various embodiments, input surfaces may be provided by surfaces of casings within which the sensor electrodes reside, by face sheets applied over the sensor electrodes or any casings, etc. In some embodiments, the sensing region 120 has a rectangular shape when projected onto an input surface of the input device 100.
The input device 100 may utilize any combination of sensor components and sensing technologies to detect user input in the sensing region 120. The input device 100 comprises one or more sensing elements for detecting user input. As several non-limiting examples, the input device 100 may use capacitive, elastive, resistive, inductive, magnetic, acoustic, ultrasonic, and/or optical techniques. In an embodiment, the processing system 110 operates the sensing elements to implement touch nodes 125. A touch node 125 is an area in sensing region 120 in which the processing system 110 can detect a change in capacitance due to the presence of input objects 140.
Some implementations are configured to provide images that span one, two, three, or higher dimensional spaces. Some implementations are configured to provide projections of input along particular axes or planes.
In some capacitive implementations of the input device 100, voltage or current is applied to create an electric field. Nearby input objects cause changes in the electric field, and produce detectable changes in capacitive coupling that may be detected as changes in voltage, modulated current, or the like.
Some capacitive implementations utilize arrays or other regular or irregular patterns of capacitive sensing elements to create electric fields. In some capacitive implementations, separate sensing elements may be ohmically shorted together to form larger sensor electrodes. Some capacitive implementations utilize resistive sheets, which may be uniformly resistive. Some sensing elements may be integrated or combined with the display device (e.g. diode anode or cathode) or they may be separate (e.g. on another electrically isolated layer) from the display device electrodes.
Some capacitive implementations utilize “self capacitance” (or “absolute capacitance”) sensing methods based on changes in the capacitive coupling between sensor electrodes and an input object. In various embodiments, an input object near the sensor electrodes alters the electric field near the sensor electrodes, thus changing the measured capacitive coupling. In one implementation, an absolute capacitance sensing method operates by modulating sensor electrodes with respect to a reference voltage (e.g. system ground), and by detecting the capacitive coupling between the sensor electrodes and input objects.
A processing system 110 is shown as part of the input device 100. The processing system 110 is configured to operate the hardware of the input device 100 to detect input in the sensing region 120. The processing system 110 comprises parts of or all of one or more integrated circuits (ICs) and/or other circuitry components. For example, a processing system for a mutual capacitance sensor device may comprise transmitter circuitry configured to transmit signals with transmitter sensor electrodes, and/or receiver circuitry configured to receive signals with receiver sensor electrodes (e.g. the receiver electrodes may be segmented cathode electrodes of the display). In some embodiments, the processing system 110 also comprises electronically-readable instructions, such as firmware code, software code, and/or the like. In some embodiments, components composing the processing system 110 are located together, such as near sensing element(s) of the input device 100. In other embodiments, components of processing system 110 are physically separate with one or more components close to sensing element(s) of input device 100, and one or more components elsewhere. For example, the input device 100 may be a peripheral coupled to a desktop computer, and the processing system 110 may comprise software configured to run on a central processing unit of the desktop computer and one or more ICs (perhaps with associated firmware) separate from the central processing unit. As another example, the input device 100 may be physically integrated in a phone, and the processing system 110 may comprise circuits and firmware that are part of a main processor of the phone. In some embodiments, the processing system 110 is dedicated to implementing the input device 100. In other embodiments, the processing system 110 also performs other functions, such as operating display screens, driving haptic actuators, etc.
The processing system 110 may be implemented as a set of modules that handle different functions of the processing system 110. Each module may comprise circuitry that is a part of the processing system 110, firmware, software, or a combination thereof. In various embodiments, different combinations of modules may be used. Example modules include hardware operation modules for operating hardware such as sensor electrodes and display screens, data processing modules for processing data such as sensor signals and positional information, and reporting modules for reporting information. Further example modules include sensor operation modules configured to operate sensing element(s) to detect input, identification modules configured to identify gestures such as mode changing gestures, and mode changing modules for changing operation modes.
In some embodiments, the processing system 110 responds to user input (or lack of user input) in the sensing region 120 directly by causing one or more actions. Example actions include changing operation modes, as well as GUI actions such as cursor movement, selection, menu navigation, and other functions. In some embodiments, the processing system 110 provides information about the input (or lack of input) to some part of the electronic system (e.g. to a central processing system of the electronic system that is separate from the processing system 110, if such a separate central processing system exists). In some embodiments, some part of the electronic system processes information received from the processing system 110 to act on user input, such as to facilitate a full range of actions, including mode changing actions and GUI actions.
For example, in some embodiments, the processing system 110 operates the sensing element(s) of the input device 100 to produce electrical signals indicative of input (or lack of input) in the sensing region 120. The processing system 110 may perform any appropriate amount of processing on the electrical signals in producing the information provided to the electronic system. For example, the processing system 110 may digitize analog electrical signals obtained from the sensor electrodes. As another example, the processing system 110 may perform filtering or other signal conditioning. As yet another example, the processing system 110 may subtract or otherwise account for a baseline, such that the information reflects a difference between the electrical signals and the baseline. As yet further examples, the processing system 110 may determine positional information, recognize inputs as commands, recognize handwriting, and the like.
“Positional information” as used herein broadly encompasses absolute position, relative position, velocity, acceleration, and other types of spatial information. Exemplary “zero-dimensional” positional information includes near/far or contact/no contact information. Exemplary “one-dimensional” positional information includes positions along an axis. Exemplary “two-dimensional” positional information includes motions in a plane. Exemplary “three-dimensional” positional information includes instantaneous or average velocities in space. Further examples include other representations of spatial information. Historical data regarding one or more types of positional information may also be determined and/or stored, including, for example, historical data that tracks position, motion, or instantaneous velocity over time.
In some embodiments, the input device 100 is implemented with additional input components that are operated by the processing system 110 or by some other processing system. These additional input components may provide redundant functionality for input in the sensing region 120, or some other functionality.
In some embodiments, the input device 100 comprises a touch screen interface, and the sensing region 120 overlaps at least part of an active area of a display screen. For example, the input device 100 may comprise substantially transparent sensor electrodes overlaying the display screen and provide a touch screen interface for the associated electronic system. The display screen may be any type of dynamic display capable of displaying a visual interface to a user, and may include any type of light emitting diode (LED), organic LED (OLED), cathode ray tube (CRT), liquid crystal display (LCD), plasma, electroluminescence (EL), or other display technology. The input device 100 and the display screen may share physical elements. For example, some embodiments may utilize some of the same electrical components for displaying and sensing. As another example, the display screen may be operated in part or in total by the processing system 110. In one embodiment, OLED display driver circuitry and touch sensing circuitry may be combined into a single Integrated Circuit (TDDI).
It should be understood that while many embodiments of the disclosure are described in the context of a fully functioning apparatus, the mechanisms of the present disclosure are capable of being distributed as a program product (e.g., software) in a variety of forms. For example, the mechanisms of the present disclosure may be implemented and distributed as a software program on information bearing media that are readable by electronic processors (e.g., non-transitory computer-readable and/or recordable/writable information bearing media readable by the processing system 110). Additionally, the embodiments of the present disclosure apply equally regardless of the particular type of medium used to carry out the distribution. Examples of non-transitory, electronically readable media include various discs, memory sticks, memory cards, memory modules, and the like. Electronically readable media may be based on flash, optical, magnetic, holographic, or any other storage technology.
The display 202 includes a plurality of light-emitting diode (LED) sub-pixels 204, such as OLED sub-pixels or the like. The display 202 can also include sensor electrodes 206. The sensor electrodes 206 implement the touch nodes 125 used for capacitive input sensing. In an embodiment, the sensor electrodes 206 are dedicated electrodes disposed on one or more substrates of the display 202. In another embodiment, the sensor electrodes 206 are also electrodes used by the LED sub-pixels 204 (e.g., cathode electrodes, anode electrodes, etc.). In another embodiment, the sensor electrodes 206 include both dedicated sensor electrodes and electrodes of the LED sub-pixels 204. In some embodiments, the processing system 110 operates the sensor electrodes 206 using absolute capacitive sensing to obtain capacitive images. In other embodiments, the processing system 110 operates some sensor electrodes 206 as transmitters and other sensor electrodes 206 as receivers and obtains capacitive images using transcapacitive sensing.
The processing system 110 includes sensor circuitry 208, a touch processor 210, a display panel driver 212, processing circuits 224, and a host interface 218. The processing circuits 224 include a buffer 213, a correction circuit 220, a gamma circuit 218, a parameter generator circuit 214, a memory 216, and an address generator 222. The sensor circuitry 208 is coupled to the sensor electrodes 206. The sensor circuitry 208 operates the sensor electrodes 206 to receive resulting signals using either an absolute capacitive or transcapacitive sensing scheme. The sensor circuitry 208 can include charge measurement circuits (e.g., charge integrators, current conveyers, etc.), demodulators, filters, analog-to-digital converter(s), and the like. The sensor circuitry 208 supplies the resulting signals to the touch processor 210. The touch processor 210 determines changes in capacitance, object proximity, object location, object force, or the like by processing the resulting signals. The touch processor 210 can include special purpose processors (e.g., digital signal processor(s)), general purpose processors executing software/firmware, or a combination thereof. The touch processor 210 can further include various support circuitry, including memory, input/output (IO) circuits, and the like.
The buffer 213 stores image data received from the host interface 218. For example, the buffer 213 can be a frame buffer. The stored image data includes digital codes representing the grey scale value of each of the LED sub-pixels 204. For example, the digital codes can be 8-bit codes representing a grey scale value between 0 and 255. An output of the buffer 213 is coupled to an input of the correction circuit 220. The correction circuit 220 is configured to modify the digital codes stored in the buffer 213 before the digital codes are supplied to the display panel driver 212. As discussed further below, the correction circuit 220 modifies the digital codes of the image data to compensate for variations of the LED sub-pixels 204 that, if uncompensated, would cause detectable non-uniformities in the displayed images (e.g., mura compensation). The memory 216 stores a set of values for each LED sub-pixel 204 that control the applied compensation. The address generator 222 generates addresses for the memory 216, which outputs the sets of values. The parameter generator 214 generates parameters from the sets of values. The correction circuit 220 includes multipliers, adders, and the like that implement a non-linear function having an argument and a plurality of constant parameters. The buffer 213 supplies the argument of the function and the parameter generator 214 supplies the values of the constant parameters. The gamma circuit 218 applies a gamma function to the digital codes output by the correction circuit 220. The gamma circuit 218 outputs digital codes to the display panel driver 212.
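As an illustrative sketch (not the disclosed hardware implementation), the data path from the buffer through the correction circuit and gamma circuit might be modeled as follows. The polynomial form of the correction, the gamma value, and all names and constants here are assumptions for illustration:

```python
# Hypothetical model of the processing-circuit data path: buffer -> correction -> gamma.
# The two-parameter polynomial and the gamma value are illustrative assumptions.

MAX_CODE = 255   # 8-bit grey-scale codes, as in the example above
GAMMA = 2.2      # assumed display gamma applied by the gamma circuit

def correct_code(c_in, params):
    """Correction circuit: scale the input code by m(c) = p1 + p2*c, so that
    c_out = p1*c + p2*c^2 is a second-order polynomial of the input code."""
    p1, p2 = params                           # per-sub-pixel constants from memory
    c_out = round((p1 + p2 * c_in) * c_in)
    return max(0, min(MAX_CODE, c_out))       # clamp to the DAC's input range

def gamma_encode(c):
    """Gamma circuit: one possible power-law mapping of the corrected code."""
    return round(MAX_CODE * (c / MAX_CODE) ** (1.0 / GAMMA))

# A reference sub-pixel needs no scaling: m(c) = 1 for every code.
assert correct_code(128, (1.0, 0.0)) == 128
```

A hardware correction circuit would realize the multiply-add datapath with the multipliers and adders mentioned above; the clamp mirrors the finite input range of the DAC.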
In an embodiment, the memory 216 stores a compressed representation of the values for each LED sub-pixel 204. This reduces the size of the memory 216. In such an embodiment, the parameter generator circuit 214 includes a decompressor configured to obtain the values for each LED sub-pixel 204 from the compressed data stored in the memory 216.
The display panel driver 212 can include source drivers or both source drivers and gate drivers. In some embodiments, the gate drivers can be part of the display 202 (e.g., a gate-in-panel (GIP) display). The source drivers drive the display 202 to display the image data via the LED sub-pixels 204. The source drivers include digital-to-analog converters (DACs) 215 that convert the digital codes output by the gamma circuit 218 into analog voltages. The gate drivers select lines of the display 202 based on timing control data. The gate drivers select LED sub-pixels 204 through gate switches, and the source drivers can update the drive current of the selected LED sub-pixels 204 according to the image data.
The source of the transistor M1 is coupled to a supply line 303. The capacitor Cst is coupled between the supply line 303 and the gate of the transistor M1. The gate and source of the transistor M1 are capacitively coupled by way of the capacitor Cst. The drain of the transistor M1 is coupled to the anode of the LED 306. A gate of the transistor M2 is coupled to a gate line 302. A drain of the transistor M2 is coupled to the gate of the transistor M1. A source of the transistor M2 is coupled to a source line 304. The source line 304 is coupled to a DAC 215. The DAC 215 can be formed on the display 202 or be part of the display panel driver 212. The gate line 302 can be coupled to a gate driver (not shown) formed either on the display 202 or as part of the display panel driver 212.
To emit light (when displaying an image), the LED 306 can be forward-biased (and can thus have current flowing through it). To forward-bias the LED 306, the voltage at the gate line 302 can be sufficiently high to turn on the transistor M2. When the transistor M2 is on, the transistor M2 can act substantially as a short-circuit and can cause the voltage at the source line 304 to be substantially mirrored at the gate of the transistor M1 and stored as the voltage on Cst. The voltage at the source line 304, and thus the voltage at the gate of the transistor M1, can be sufficiently low relative to the anode supply voltage to turn on the current-controlling transistor M1. When the transistor M1 is on, the transistor M1 can act substantially as a current source and can cause the voltage at the anode of the LED 306 to be maintained at a voltage for a controlled current through the LED 306. For the LED 306 to be forward biased, the voltage at the anode must be higher than the voltage at the cathode electrode 307. The configuration of the LED sub-pixel shown in
The grey scale of the LED sub-pixel 204 is controlled by changing the gate-to-source voltage of the transistor M1. The gate-to-source voltage of the transistor M1 is set by the DAC 215 (when transistor M2 is turned on). The DAC 215 receives input digital codes from the processing circuit 224 and outputs analog voltages for setting the gate-to-source voltage of the transistor M1. The DAC 215 is configured to control intensity of emitted light from the first LED sub-pixel in response to the input digital code. The DAC 215 generates a voltage in response to the input digital code that controls a current through the LED.
In an ideal case, Rs is zero and R is infinite (the current through R is zero). In the ideal case, the gate-to-source voltage of the transistor M1 can be expressed as:
V_GS = V_th − √(A·c^n)  Eq. 1,
where V_GS is the gate-to-source voltage of the transistor M1, V_th is the threshold voltage of the transistor M1, c is the digital code input to the DAC 310 (in integer form), n is a real number representing gamma, and A is a real number that is a property of the LED sub-pixel 204. The values of V_th and A can differ across different LED sub-pixels 204 in the display.
If the DAC 215 is programmed to output a specific voltage as a function of the digital code c, then there exists a relationship between the digital codes of two differing sub-pixels (due to factors such as differing V_th, A, etc.) that makes their luminosities equal:
V_th^1 − √(A^1·c^(n^1)) = V_th^2 − √(A^2·c^(n^2))  Eq. 2,
where the superscripts 1 and 2 represent a first and a second LED sub-pixel, respectively. To simplify the calculations, it can be assumed that the threshold voltages of the two sub-pixels are equal to each other. In order to compensate for the differences in A^1 and A^2, the digital code for one of the sub-pixels can be scaled by a factor m:
√(A^1·c^(n^1)) = √(A^2·(m·c)^(n^2))  Eq. 3.
After setting the threshold voltages to be equal, solving Eq. 3 for m results in:
m = (A^1/A^2)^(1/n^2) · c^((n^1 − n^2)/n^2)  Eq. 4.
If the gammas of the two sub-pixels are the same, as is typically the case when the sub-pixels are in close proximity, then the above equation reduces to a constant value for the scaling factor m. In other words, in order to keep the brightness of the two sub-pixels the same, the second sub-pixel should be addressed with the code c scaled by m, whereas the first sub-pixel should be addressed at c with a scale factor of 1. In equation form:
C_out = m·C_in  Eq. 5.
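With equal gammas, Eq. 3 reduces to the constant scale factor m = (A^1/A^2)^(1/n), which can be verified numerically. The sub-pixel properties below are made-up values chosen only to illustrate the equality:

```python
import math

# Illustrative sub-pixel properties (assumed values, not from the disclosure).
A1, A2 = 1.00e-4, 1.21e-4   # the second sub-pixel is "stronger" than the first
n = 2.2                      # shared gamma exponent

# Eq. 3 with n1 = n2 = n yields a constant scale factor:
m = (A1 / A2) ** (1.0 / n)

# Verify Eq. 3 holds at several codes: sqrt(A1*c^n) == sqrt(A2*(m*c)^n).
for c in (16, 64, 128, 255):
    lhs = math.sqrt(A1 * c ** n)
    rhs = math.sqrt(A2 * (m * c) ** n)
    assert abs(lhs - rhs) < 1e-9
```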
As shown in Equation 4, m is a constant for the ideal LED sub-pixel. However, in practice, the LED sub-pixel is not ideal. Thus, Rs is not zero and R is not infinite. In the non-ideal case, Equation 1 becomes:
V_GS = V_th − √(A·c^n + B·ln(1 + D·c^n))  Eq. 6,
where B and D are real-number properties of the LED sub-pixel 204. Equation 2 becomes:
V_th^1 − √(A^1·c^n + B^1·ln(1 + D^1·c^n)) = V_th^2 − √(A^2·c^n + B^2·ln(1 + D^2·c^n))  Eq. 7.
Finally, setting the threshold voltages and the gammas to be equal (for simplicity, as before) results in:
√(A^1·c^n + B^1·ln(1 + D^1·c^n)) = √(A^2·(m·c)^n + B^2·ln(1 + D^2·(m·c)^n))  Eq. 8.
Equation 8 cannot be satisfied using a constant value of m for arbitrary values of A, B, and D. In order to make the equality in Equation 8 hold, m has to be a function of digital code c.
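This code-dependence can be demonstrated numerically. With assumed (illustrative) values of A, B, and D for two sub-pixels, solving Eq. 8 for m by bisection at two different codes yields two different scale factors:

```python
import math

# Illustrative non-ideal sub-pixel properties (assumed values).
A1, B1, D1 = 1.00e-4, 2.0e-3, 5.0e-3
A2, B2, D2 = 1.21e-4, 1.0e-3, 8.0e-3
n = 2.2

def drive(A, B, D, c):
    """Radicand of Eq. 6/Eq. 8: A*c^n + B*ln(1 + D*c^n), increasing in c."""
    cn = c ** n
    return A * cn + B * math.log(1.0 + D * cn)

def solve_m(c, lo=0.1, hi=10.0, iters=100):
    """Bisection: find m such that drive(A2,B2,D2, m*c) equals drive(A1,B1,D1, c)."""
    target = drive(A1, B1, D1, c)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if drive(A2, B2, D2, mid * c) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# In the non-ideal case, the required scale factor depends on the code c.
m_low, m_high = solve_m(16), solve_m(255)
assert abs(m_low - m_high) > 1e-4   # m is NOT constant across codes
```

Bisection is valid here because the drive term is monotonically increasing in the code for positive codes.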
Returning to
In an embodiment, the correction circuit 220 implements a second-order polynomial and the memory 216 stores a pair of values to be used as parameters of the polynomial for each LED sub-pixel. There is a tradeoff between accuracy, the complexity of the correction circuit 220, and the number of parameters needed for each LED sub-pixel. A second-order polynomial can provide sufficient accuracy while requiring generation and storage of only two values for each LED sub-pixel. In other embodiments, higher-order polynomials can be used, which would require generation and storage of more than two parameters for each LED sub-pixel, as well as additional multipliers and adders in the correction circuit 220. The parameters for each LED sub-pixel can be generated during manufacture of the display by externally measuring the luminosity of each LED sub-pixel as a function of grey level. The luminosity of each LED sub-pixel at various grey levels can be compared with that of a reference LED sub-pixel to generate the values for m in Equation 8. A non-linear function is then fit to the determined values of m, and the constant parameters of the non-linear function are stored in the memory 216 for each LED sub-pixel.
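The parameter-generation step might be sketched as follows. The sketch assumes the two stored values are the coefficients of m(c) = p1 + p2·c, so that the corrected code (p1 + p2·c)·c = p1·c + p2·c² is a second-order polynomial of the input code; the measured m values are made-up numbers:

```python
# Sketch of generating per-sub-pixel parameters at manufacture (illustrative).
# Assumption: m(c) = p1 + p2*c, so only the pair (p1, p2) is stored per sub-pixel.

def fit_pair(samples):
    """Least-squares fit of m(c) = p1 + p2*c over measured (code, m) samples,
    solved via the 2x2 normal equations in pure Python."""
    k = len(samples)
    sc = sum(c for c, _ in samples)
    scc = sum(c * c for c, _ in samples)
    sm = sum(m for _, m in samples)
    scm = sum(c * m for c, m in samples)
    det = k * scc - sc * sc
    p1 = (sm * scc - sc * scm) / det
    p2 = (k * scm - sc * sm) / det
    return p1, p2

# Externally measured scale factors m at a few grey levels (made-up numbers).
measured = [(32, 0.924), (96, 0.922), (160, 0.920), (224, 0.918)]
p1, p2 = fit_pair(measured)

def correct(c):
    """Apply the stored pair: c_out = (p1 + p2*c) * c."""
    return (p1 + p2 * c) * c

# The fitted m(c) should track the measurements closely.
assert all(abs((p1 + p2 * c) - m) < 1e-3 for c, m in measured)
```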
In an embodiment, the memory 216 also stores a code offset for each LED sub-pixel and the correction circuit 220 applies the offset for each LED sub-pixel. The offset is used to compensate for differences in threshold voltage of the driving transistor across different LED sub-pixels. The offsets for each LED sub-pixel can be stored in the memory 216 and updated over time.
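One possible (assumed) form of the offset correction adds the stored per-sub-pixel offset after the multiplicative scaling; the placement of the offset and the parameter values here are illustrative assumptions:

```python
# Sketch of applying a stored per-sub-pixel code offset (assumed form).
# Assumption: the offset is added after the multiplicative scaling, then clamped.

def correct_with_offset(c_in, p1, p2, offset, max_code=255):
    """c_out = clamp((p1 + p2*c_in) * c_in + offset) to the DAC code range."""
    c_out = round((p1 + p2 * c_in) * c_in + offset)
    return max(0, min(max_code, c_out))

assert correct_with_offset(128, 1.0, 0.0, 3) == 131
```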
The embodiments and examples set forth herein were presented in order to best explain the embodiments in accordance with the present technology and its particular application and to thereby enable those skilled in the art to make and use the disclosure. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the disclosure to the precise form disclosed.
In view of the foregoing, the scope of the present disclosure is determined by the claims that follow.