The present disclosure generally relates to displays such as active matrix organic light emitting diode displays that monitor the values of selected parameters of the display and compensate for non-uniformities in the display.
Displays can be created from an array of light emitting devices each controlled by individual circuits (i.e., pixel circuits) having transistors for selectively controlling the circuits to be programmed with display information and to emit light according to the display information. Thin film transistors (“TFTs”) fabricated on a substrate can be incorporated into such displays. TFTs tend to demonstrate non-uniform behavior across display panels and over time as the displays age. Compensation techniques can be applied to such displays to achieve image uniformity across the displays and to account for degradation in the displays as the displays age.
Some schemes for providing compensation to displays to account for variations across the display panel and over time utilize monitoring systems to measure time dependent parameters associated with the aging (i.e., degradation) and/or fabrication of the pixel circuits. The measured information can then be used to inform subsequent programming of the pixel circuits so as to ensure that any measured degradation is accounted for by adjustments made to the programming. Such monitored pixel circuits may require the use of additional transistors and/or lines to selectively couple the pixel circuits to the monitoring systems and provide for reading out information. The incorporation of additional transistors and/or lines may undesirably reduce the pixel density (i.e., increase the pixel pitch).
In accordance with one embodiment, a system is provided for compensating for structural non-uniformities in an array of solid state devices in a display panel. The system displays images on the panel and extracts, across the panel, the outputs of a pattern based on the structural non-uniformities of the panel, for each area of the structural non-uniformities. The non-uniformities are then quantified based on the values of the extracted outputs, and the input signals to the display panel are modified to compensate for the non-uniformities.
In one implementation, the extracting is done with image sensors, such as optical sensors, associated with a pattern matching the structural non-uniformities. The non-uniformities may be modified at multiple response points by modifying the input signals, and the response points may be used to interpolate an entire response curve for the display panel. The response curve can then be used to create a compensated image.
In another implementation, black values are inserted for selected areas of said pattern to reduce the effect of optical cross talk.
In accordance with another embodiment, a system is provided for compensating for random non-uniformities in an array of solid state devices in a display panel. The system extracts low-frequency non-uniformities across the panel by applying patterns and taking images of the patterns. The area and resolution of each image are adjusted to match the panel by creating values for the pixels in the display, and the low-frequency non-uniformities across the panel are then compensated based on the created values.
In accordance with a further embodiment, a system is provided for compensating for non-uniformities in an array of solid state devices in a display panel. The system creates target points in the input-output characteristics of the panel, extracts structural non-uniformities by optical measurement using patterns matching the structural non-uniformities, compensates for the structural non-uniformities, extracts low-frequency non-uniformities by applying flat field and extracting the patterns, and compensates for the low-frequency non-uniformities.
The foregoing and additional aspects and embodiments of the present invention will be apparent to those of ordinary skill in the art in view of the detailed description of various embodiments and/or aspects, which is made with reference to the drawings, a brief description of which is provided next.
The foregoing and other advantages of the invention will become apparent upon reading the following detailed description and upon reference to the drawings.
While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
For illustrative purposes, the display system 50 in
Each pixel 10 includes a driving circuit (“pixel circuit”) that generally includes a driving transistor and a light emitting device. Hereinafter the pixel 10 may refer to the pixel circuit. The light emitting device can optionally be an organic light emitting diode (OLED), but implementations of the present disclosure apply to pixel circuits having other electroluminescence devices, including current-driven light emitting devices. The driving transistor in the pixel 10 can optionally be an n-type or p-type amorphous silicon thin-film transistor, but implementations of the present disclosure are not limited to pixel circuits having a particular polarity of transistor or only to pixel circuits having thin-film transistors. The pixel circuit can also include a storage capacitor for storing programming information and allowing the pixel circuit to drive the light emitting device after being addressed. Thus, the display panel 20 can be an active matrix display array.
As illustrated in
With reference to the top-left pixel 10 shown in the display panel 20, the select line 24i is provided by the address driver 8, and can be utilized to enable, for example, a programming operation of the pixel 10 by activating a switch or transistor to allow the data line 22j to program the pixel 10. The data line 22j conveys programming information from the data driver 4 to the pixel 10. For example, the data line 22j can be utilized to apply a programming voltage or a programming current to the pixel 10 in order to program the pixel 10 to emit a desired amount of luminance. The programming voltage (or programming current) supplied by the data driver 4 via the data line 22j is a voltage (or current) appropriate to cause the pixel 10 to emit light with a desired amount of luminance according to the digital data received by the controller 2. The programming voltage (or programming current) can be applied to the pixel 10 during a programming operation of the pixel 10 so as to charge a storage device within the pixel 10, such as a storage capacitor, thereby enabling the pixel 10 to emit light with the desired amount of luminance during an emission operation following the programming operation. For example, the storage device in the pixel 10 can be charged during a programming operation to apply a voltage to one or more of a gate or a source terminal of the driving transistor during the emission operation, thereby causing the driving transistor to convey the driving current through the light emitting device according to the voltage stored on the storage device.
Generally, in the pixel 10, the driving current that is conveyed through the light emitting device by the driving transistor during the emission operation of the pixel 10 is a current that is supplied by the first supply line 26i and is drained to a second supply line 27i. The first supply line 26i and the second supply line 27i are coupled to the supply voltage 14. The first supply line 26i can provide a positive supply voltage (e.g., the voltage commonly referred to in circuit design as “Vdd”) and the second supply line 27i can provide a negative supply voltage (e.g., the voltage commonly referred to in circuit design as “Vss”). Implementations of the present disclosure can be realized where one or the other of the supply lines (e.g., the supply line 27i) is fixed at a ground voltage or at another reference voltage.
The display system 50 also includes a monitoring system 12. With reference again to the top left pixel 10 in the display panel 20, the monitor line 28j connects the pixel 10 to the monitoring system 12. The monitoring system 12 can be integrated with the data driver 4, or can be a separate stand-alone system. In particular, the monitoring system 12 can optionally be implemented by monitoring the current and/or voltage of the data line 22j during a monitoring operation of the pixel 10, and the monitor line 28j can be entirely omitted. Additionally, the display system 50 can be implemented without the monitoring system 12 or the monitor line 28j. The monitor line 28j allows the monitoring system 12 to measure a current or voltage associated with the pixel 10 and thereby extract information indicative of a degradation of the pixel 10. For example, the monitoring system 12 can extract, via the monitor line 28j, a current flowing through the driving transistor within the pixel 10 and thereby determine, based on the measured current and based on the voltages applied to the driving transistor during the measurement, a threshold voltage of the driving transistor or a shift thereof.
The monitoring system 12 can also extract an operating voltage of the light emitting device (e.g., a voltage drop across the light emitting device while the light emitting device is operating to emit light). The monitoring system 12 can then communicate signals 32 to the controller 2 and/or the memory 6 to allow the display system 50 to store the extracted degradation information in the memory 6. During subsequent programming and/or emission operations of the pixel 10, the degradation information is retrieved from the memory 6 by the controller 2 via memory signals 36, and the controller 2 then compensates for the extracted degradation information in subsequent programming and/or emission operations of the pixel 10. For example, once the degradation information is extracted, the programming information conveyed to the pixel 10 via the data line 22j can be appropriately adjusted during a subsequent programming operation of the pixel 10 such that the pixel 10 emits light with a desired amount of luminance that is independent of the degradation of the pixel 10. In an example, an increase in the threshold voltage of the driving transistor within the pixel 10 can be compensated for by appropriately increasing the programming voltage applied to the pixel 10.
The driving circuit for the pixel 110 also includes a storage capacitor 116 and a switching transistor 118. The pixel 110 is coupled to a select line SEL, a voltage supply line Vdd, a data line Vdata, and a monitor line MON. The driving transistor 112 draws a current from the voltage supply line Vdd according to a gate-source voltage (Vgs) across the gate and source terminals of the drive transistor 112. For example, in a saturation mode of the drive transistor 112, the current passing through the drive transistor 112 can be given by Ids=β(Vgs−Vt)², where β is a parameter that depends on device characteristics of the drive transistor 112, Ids is the current from the drain terminal to the source terminal of the drive transistor 112, and Vt is the threshold voltage of the drive transistor 112.
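The following is a minimal numerical sketch, in Python, of the saturation-mode relation above. The parameter values (β, Vt, and the Vgs sweep) are purely illustrative assumptions and are not taken from the disclosure.

```python
def drive_current(v_gs, v_t, beta):
    """Saturation-mode drain-source current: Ids = beta * (Vgs - Vt)^2.

    Returns 0 when the transistor is below threshold (Vgs <= Vt).
    """
    overdrive = v_gs - v_t
    return beta * overdrive ** 2 if overdrive > 0 else 0.0

# Illustrative (hypothetical) values: beta in A/V^2, voltages in volts.
beta = 1.2e-6
v_t = 2.0
for v_gs in (3.0, 4.0, 5.0):
    ids = drive_current(v_gs, v_t, beta)
    print(f"Vgs={v_gs:.1f} V -> Ids={ids * 1e6:.2f} uA")
```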
In the pixel 110, the storage capacitor 116 is coupled across the gate and source terminals of the drive transistor 112. The storage capacitor 116 has a first terminal, which is referred to for convenience as a gate-side terminal, and a second terminal, which is referred to for convenience as a source-side terminal. The gate-side terminal of the storage capacitor 116 is electrically coupled to the gate terminal of the drive transistor 112. The source-side terminal 116s of the storage capacitor 116 is electrically coupled to the source terminal of the drive transistor 112. Thus, the gate-source voltage Vgs of the drive transistor 112 is also the voltage charged on the storage capacitor 116. As will be explained further below, the storage capacitor 116 can thereby maintain a driving voltage across the drive transistor 112 during an emission phase of the pixel 110.
The drain terminal of the drive transistor 112 is connected to the voltage supply line Vdd, and the source terminal of the drive transistor 112 is connected to (1) the anode terminal of the OLED 114 and (2) a monitor line MON via a read transistor 119. A cathode terminal of the OLED 114 can be connected to ground or can optionally be connected to a second voltage supply line, such as the supply line Vss shown in
The switching transistor 118 is operated according to the select line SEL (e.g., when the voltage on the select line SEL is at a high level, the switching transistor 118 is turned on, and when the voltage SEL is at a low level, the switching transistor is turned off). When turned on, the switching transistor 118 electrically couples node A (the gate terminal of the driving transistor 112 and the gate-side terminal of the storage capacitor 116) to the data line Vdata.
The read transistor 119 is operated according to the read line RD (e.g., when the voltage on the read line RD is at a high level, the read transistor 119 is turned on, and when the voltage RD is at a low level, the read transistor 119 is turned off). When turned on, the read transistor 119 electrically couples node B (the source terminal of the driving transistor 112, the source-side terminal of the storage capacitor 116, and the anode of the OLED 114) to the monitor line MON.
During the second cycle 154, the SEL line is low to turn off the switching transistor 118, and the drive transistor 112 is turned on by the charge on the capacitor 116 at node A. The voltage on the read line RD goes high to turn on the read transistor 119 and thereby permit a first sample of the drive transistor current to be taken via the monitor line MON, while the OLED 114 is off. The voltage on the monitor line MON is Vref, which may be at the same level as the voltage Vb in the previous cycle.
During the third cycle 158, the voltage on the select line SEL is high to turn on the switching transistor 118, and the voltage on the read line RD is low to turn off the read transistor 119. Thus, the gate of the drive transistor 112 is charged to the voltage Vd2 of the data line Vdata, and the source of the drive transistor 112 is set to VOLED by the OLED 114. Consequently, the gate-source voltage Vgs of the drive transistor 112 is a function of VOLED (Vgs=Vd2−VOLED).
During the fourth cycle 162, the voltage on the select line SEL is low to turn off the switching transistor, and the drive transistor 112 is turned on by the charge on the capacitor 116 at node A. The voltage on the read line RD is high to turn on the read transistor 119, and a second sample of the current of the drive transistor 112 is taken via the monitor line MON.
If the first and second samples of the drive current are not the same, the programming voltage Vd2 on the Vdata line is adjusted, and the sampling and adjustment operations are repeated until the second sample of the drive current is the same as the first sample. When the two samples of the drive current are the same, the two gate-source voltages should also be the same. Since the gate-source voltage for the first sample is set by fixed programming and reference voltages, Vd2−VOLED is then equal to a fixed constant, so VOLED tracks Vd2.
After some operation time (t), the change in VOLED between time 0 and time t is ΔVOLED=VOLED(t)−VOLED(0)=Vd2(t)−Vd2(0). Thus, the difference between the two programming voltages Vd2(t) and Vd2(0) can be used to extract the OLED voltage.
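The following is a minimal sketch, in Python, of the sampling-and-adjustment loop described above. The callback `sample_drive_current`, the step size, and the tolerance are hypothetical placeholders, since the actual driver and readout interface are not specified in the text.

```python
def extract_vd2(sample_drive_current, vd1, vd2_initial,
                step=0.01, tolerance=1e-9, max_iter=1000):
    """Adjust Vd2 until the second current sample (taken with the OLED
    conducting) matches the first sample (taken with the monitor line held
    at Vref and the OLED off).

    sample_drive_current(vdata, oled_connected) is a hypothetical callback
    that programs the pixel and returns the measured drive current.
    The change in the returned Vd2 over time, Vd2(t) - Vd2(0), tracks the
    change in the OLED voltage, per the text above.
    """
    i_ref = sample_drive_current(vd1, oled_connected=False)      # first sample
    vd2 = vd2_initial
    for _ in range(max_iter):
        i_meas = sample_drive_current(vd2, oled_connected=True)  # second sample
        if abs(i_meas - i_ref) < tolerance:
            break
        # Raise Vd2 if the current is low, lower it if the current is high.
        vd2 += step if i_meas < i_ref else -step
    return vd2
```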
During the first cycle 200 of the exemplary timing diagram in
When multiple readout circuits are used, multiple levels of calibration can be used to make the readout circuits identical. However, there are often remaining non-uniformities among the readout circuits that measure multiple columns, and these non-uniformities can cause steps in the measured data across any given row. One example of such a step is illustrated in
The above adjustment technique can be executed on each row independently, or an average row may be created based on a selected number of rows. Then the delta values are calculated based on the average row, and all the rows are adjusted based on the delta values for the average row.
Another technique is to design the panel in a way that the boundary columns between two readout circuits can be measured with both readout circuits. Then the pixel values in each readout circuit can be adjusted based on the difference between the values measured for the boundary columns, by the two readout circuits.
If the variations are not too great, a general curve fitting (or low pass filter) can be used to smooth the rows and then the pixels can be adjusted based on the difference between real rows and the created curve. This process can be executed for all rows based on an average row, or for each row independently as described above.
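A minimal sketch of the curve-fitting (low-pass) variant follows, assuming the measured data is available as a NumPy array of shape (rows, columns). The moving-average filter and its window size are assumed stand-ins for the general curve fitting or low-pass filter mentioned above.

```python
import numpy as np

def correct_readout_steps(measured, window=31):
    """Remove readout-circuit steps by comparing the average row against a
    smoothed version of itself and applying the per-column delta to all rows."""
    avg_row = measured.mean(axis=0)                      # average row
    kernel = np.ones(window) / window
    smooth = np.convolve(avg_row, kernel, mode="same")   # low-pass "created curve"
    delta = avg_row - smooth                             # step component per column
    return measured - delta                              # adjust every row
```

The same routine could be applied to each row independently by replacing the average row with the individual row, as described above.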
The readout circuits can be corrected externally by using a single reference source (or calibrated sources) to adjust each readout circuit (ROC) before the measurement. The reference source can be an external current source or one or more pixels calibrated externally. Another option is to measure a few sample pixels coupled to each readout circuit with a single measurement readout circuit, and then adjust all the readout circuits based on the difference between the original measurements and the values measured by the single measurement readout circuit.
The OLED layer 210 includes a substantially transparent anode 220, e.g., indium-tin-oxide (ITO), adjacent the glass substrate 214, an organic semiconductor stack 221 engaging the rear surface of the anode 220, and a cathode 222 engaging the rear surface of the stack 221. The cathode 222 is made of a transparent or semi-transparent material, e.g., thin silver (Ag), to allow light to pass through the OLED layer 210 to the solar panel 211. (The anode 220 and the semiconductor stack 221 in OLEDs are typically at least semi-transparent, but the cathode in previous OLEDs has often been opaque and sometimes even light-absorbing to minimize the reflection of ambient light from the OLED.)
Light that passes rearwardly through the OLED layer 210, as illustrated by the right-hand arrow in
One or more switches may be connected to the terminals 232 and 233 to permit the solar panel 211 to be controllably connected to either (1) an electrical energy storage device such as a rechargeable battery or one or more capacitors, or (2) to a system that uses the solar panel 211 as a touch screen, to detect when and where the front of the display is “touched” by a user.
In the illustrative embodiment of
One example of a suitable semitransparent OLED layer 210 includes the following materials:
Anode 220
Semiconductor Stack 221
Semitransparent Cathode 222
The performance of the above OLED layer in an integrated device using a commercial solar panel was compared with a reference device, which was an OLED with exactly the same semiconductor stack and a metallic cathode (Mg/Ag). The reflectance of the reference device was very high, due to the reflection from the metallic electrode; in contrast, the reflectance of the integrated device was very low. The reflectance of the integrated device with the transparent electrode was much lower than the reflectances of both the reference device (with the metallic electrode) and the reference device equipped with a circular polarizer.
The current efficiency-current density characteristics of the integrated device with the transparent electrode and the reference device are shown in
For both the integrated device and the reference device described above, all materials were deposited sequentially at a rate of 1-3 Å/s using vacuum thermal evaporation at a pressure below 5×10⁻⁶ Torr on ITO-coated glass substrates. The substrates were cleaned with acetone and isopropyl alcohol, dried in an oven, and finally cleaned by UV ozone treatment before use. In the integrated device, the solar panel was a commercial Sanyo Energy AM-1456CA amorphous silicon solar cell with a short circuit current of 6 μA and a voltage output of 2.4V. The integrated device was fabricated using the custom cut solar cell as encapsulation glass for the OLED layer.
The optical reflectance of the device was measured using a Shimadzu UV-2501PC UV-Visible spectrophotometer. The current density (J)-luminance (L)-voltage (V) characteristics of the device were measured with an Agilent 4155C semiconductor parameter analyzer and a silicon photodiode pre-calibrated by a Minolta Chromameter. The ambient light was room light, and the tests were carried out at room temperature. The performances of the fabricated devices were compared with each other and with the reference device equipped with a circular polarizer.
Overall, the integrated device shows a higher current efficiency than the reference device with a circular polarizer, and further recycles the energy of the incident ambient light and the internal luminance of the top OLED, demonstrating a display system with significantly lower power consumption.
Conventional touch displays stack a touch panel on top of an LCD or AMOLED display. The touch panel reduces the luminance output of the display beneath the touch panel and adds extra cost to the fabrication. The integrated device described above is capable of functioning as an optical-based touch screen without any extra panels or cost. Unlike previous optical-based touch screens which require extra IR-LEDs and sensors, the integrated device described here utilizes the internal illumination from the top OLED as an optical signal, and the solar cell is utilized as an optical sensor. Since the OLED has very good luminance uniformity, the emitted light is evenly spread across the device surface as well as the surface of the solar panel. When the front surface of the display is touched by a finger or other object, a portion of the emitted light is reflected off the object back into the device and onto the solar panel, which changes the electrical output of the solar panel. The system is able to detect this change in the electrical output, thereby detecting the touch. The benefit of this optical-based touch system is that it works for any object (dry finger, wet finger, gloved finger, stylus, pen, etc.), because detection of the touch is based on the optical reflection rather than a change in the refractive index, capacitance or resistance of the touch panel.
When the front of the display is touched or obstructed by a finger 240 (
The solar panel may also be used for imaging, in addition to functioning as a touch screen. An algorithm may be used to capture multiple images, using different pixels of the display to provide different levels of brightness for compressive sensing.
In a modified embodiment, the solar panel is calibrated with different OLED and/or ambient brightness levels, and the values are stored in a lookup table (LUT). Touching the surface of the display changes the optical behavior of the stacked structure, and an expected value for each cell can be fetched from the LUT based on the OLED luminance and the ambient light. The output voltage or current from the solar cells can then be read, and a profile created based on differences between expected values and measured values. A predefined library or dictionary can be used to translate the created profile to different gestures or touch functions.
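The LUT comparison described above might be sketched as follows in Python. The LUT indexing scheme, the nearest-neighbour gesture matching, and the threshold are hypothetical assumptions, since the embodiment does not fix a particular data layout or matching method.

```python
import numpy as np

def touch_profile(lut, oled_level, ambient_level, measured):
    """Compare measured solar-cell outputs against the expected values
    stored in the LUT for the current OLED luminance and ambient light."""
    expected = lut[oled_level, ambient_level]   # expected value per cell
    return measured - expected                  # profile of differences

def classify(profile, gesture_library, threshold=0.05):
    """Match the difference profile to the closest entry in a predefined
    gesture library (hypothetical nearest-neighbour matching; the threshold
    is an arbitrary illustrative value)."""
    best, best_dist = None, np.inf
    for name, template in gesture_library.items():
        dist = np.linalg.norm(profile - template)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < threshold else None
```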
In another modified embodiment, each solar cell unit represents a pixel or sub-pixel, or a cluster of pixels or sub-pixels. The solar cells are calibrated as smaller units (pixel resolution) with reference light sources at different colors and brightness levels, and the resulting values are stored in LUTs or used to construct functions. Different gray scales may be applied while measuring the values of each solar cell unit and storing the values in a LUT. Calibrating the input video signals with the values stored in the LUTs can then compensate for non-uniformity and aging. The calibration measurements can be repeated during the display lifetime by the user or at defined intervals based on the usage of the display.
Alternatively, each solar cell unit can represent a pixel or sub-pixel, calibrated as smaller units (pixel resolution) with reference light sources at different colors and brightness levels with the values being stored in LUTs or used to make functions, and then applying different patterns (e.g., created as described in U.S. Patent Application Publication No. 2011/0227964, which is incorporated by reference in its entirety herein) to each cluster and measuring the values of each solar cell unit. The functions and methods described in U.S. Patent Application Publication No. 2011/0227964 may be used to extract the non-uniformities/aging for each pixel in the clusters, with the resulting values being stored in a LUT. The input video signals may then be calibrated with the values stored in LUTs to compensate for non-uniformity and aging. The measurements can be repeated during the display lifetime either by the user or at defined intervals based on display usage.
The solar panel can also be used for initial uniformity calibration of the display. One of the major problems with OLED panels is non-uniformity. Common sources of non-uniformity are the manufacturing process and differential aging during use. While in-pixel compensation can improve the uniformity of a display, the limited compensation level attainable with this technique is not sufficient for some displays, thereby reducing the yield. With the integrated OLED/solar panel, the output current of the solar panel can be used to detect and correct non-uniformities in the display. Specifically, calibrated imaging can be used to determine the luminance of each pixel at various levels. The theory has also been tested on an AMOLED display, and
As can be seen from the foregoing description, the integrated display can be used to provide AMOLED displays with a low ambient light reflectance without employing any extra layers (polarizer), low power consumption with recycled electrical energy, and functionality as an optical based touch screen without an extra touch panel, LED sources or sensors. Moreover, the output of the solar panel can be used to detect and correct the non-uniformity of the OLED panel. By carefully choosing the solar cell and adjusting the semitransparent cathode of the OLED, the performance of this display system can be greatly improved.
Arrayed solid state devices, such as active matrix organic light emitting diode (AMOLED) displays, are prone to structural and/or random non-uniformity. The structural non-uniformity can be caused by several different sources, such as the driving components, the fabrication procedure, the mechanical structure, and more. For example, the routing of signals through the panel may cause different delays and resistive drops, and can therefore produce a non-uniformity pattern.
In one example of driver-induced structural non-uniformity, the select (address) signals are generated by a central source at the edge of the panel, and when they are distributed to different columns or rows they can experience different delays. Although the delays can be partially matched by adjusting the trace widths through different patterning, the accuracy is limited by the limited area available for routing.
In another example of driver-induced structural non-uniformity, the measurement units used to extract the pixel non-uniformity do not match each other exactly. Therefore, the measured data can have an offset (or gain) variation across the measurement units.
In an example of fabrication-induced structural non-uniformity, the patterning can cause a repeated pattern, especially if a step-and-repeat process is used, in which a smaller mask is moved across the substrate to pattern the entire area with the same pattern.
In another example of fabrication-induced structural non-uniformity, a material development process such as laser annealing can create a repeated pattern oriented along the direction of the process.
An example of mechanical structural non-uniformity is the effect of mechanical stress caused by the conformal structure of the device.
Also, the random non-uniformity can consist of low-frequency and high-frequency patterns. Here, the low-frequency patterns are considered global non-uniformities and the high-frequency patterns are called local non-uniformities.
Invention Overview
Array structure solid state devices such as active matrix OLED (AMOLED) displays are prone to structural non-uniformity caused by drivers, the fabrication process, and/or physical conditions. An example of driver-induced structural non-uniformity is the mismatch between different drivers used in one array device (panel). These drivers may provide signals to the panel or extract signals from the panel to be used for compensation. For example, multiple measurement units are used in an AMOLED panel to extract the electrical non-uniformity of the panel. The data is then used to compensate for the non-uniformity. Fabrication non-uniformity can be caused by process steps; in one case, the step-and-repeat process used in patterning can result in structural non-uniformity across the panel. Also, mechanical stress resulting from packaging can result in structural non-uniformity.
In one embodiment, some images (e.g. flat-field or patterns based on structural non-uniformity) are displayed in the panel; image/optical sensors in association with a pattern matching the structural non-uniformity are used to extract the output of the patterns across the panel for each area of the structural non-uniformity. For example, if the non-uniformities are vertical bands caused by the drivers (or measurement units), a value for each band is extracted. These values are used to quantify the non-uniformities and compensate for them by modifying the input signals.
In another aspect of the invention, some images (e.g., flat-field images or patterns based on the structural non-uniformity) are displayed on the panel, and image/optical sensors in association with a pattern matching the structural non-uniformity are used to extract the output of the patterns across the panel for each area of the structural non-uniformity. For example, if the non-uniformities are vertical bands caused by the drivers (or measurement units), a value for each band is extracted. These values are used to quantify the non-uniformities and compensate for them at several response points by modifying the input signals. Those response points are then used to interpolate (or curve fit) the entire response curve of the pixels, and the response curve is used to create a compensated image for each input signal.
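The following is a minimal sketch, in Python, of interpolating a response curve from a few compensated response points and using it to create a compensated image. The use of piecewise-linear interpolation (np.interp), the dictionary layout of the response points, and the 0-255 grayscale domain are assumptions of this sketch rather than details fixed by the text.

```python
import numpy as np

def build_response_curve(response_points, gray_levels=256):
    """Interpolate the full response curve of the panel from a few
    compensated response points.

    response_points: hypothetical dict mapping an input gray level to the
    compensated output level derived at that response point.
    """
    inputs = sorted(response_points)
    outputs = [response_points[g] for g in inputs]
    grid = np.arange(gray_levels)
    return np.interp(grid, inputs, outputs)   # piecewise-linear curve fit

def compensate_image(image, curve):
    """Map every input gray level through the interpolated response curve
    to create a compensated image (image holds integer gray levels)."""
    return curve[image]
```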
In another aspect of the invention, black values (or different values) can be inserted for some of the areas in the structural pattern to reduce optical cross talk.

For example, if the panel has vertical bands, the odd bands can be set to black and the even bands to a desired value. In this case, the effect of cross talk is reduced significantly.

In another example, where the structural non-uniformity takes the form of a two-dimensional (2D) pattern, a checkerboard approach can be used, or one area is programmed with the desired value and all the surrounding areas are programmed with different values (e.g., black).

This can be done for any pattern, and more than two different values can be used to differentiate the areas in the pattern.

For example, if the patterns are too small (e.g., the vertical or horizontal bands are very narrow, or the checkerboard boxes are very small), more than one adjacent area can be programmed with different values (e.g., black).
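A sketch of generating such cross-talk-reduction templates follows, assuming a NumPy representation of the panel in which black is zero; the band width, box size, and on-value are hypothetical parameters.

```python
import numpy as np

def band_templates(height, width, band_width, value=255):
    """Return two templates: one with the odd vertical bands blacked out,
    one with the even bands blacked out."""
    col_band = (np.arange(width) // band_width) % 2        # 0 = even band, 1 = odd band
    even_on = np.where(col_band == 0, value, 0)            # odd bands set to black
    odd_on = np.where(col_band == 1, value, 0)             # even bands set to black
    return np.tile(even_on, (height, 1)), np.tile(odd_on, (height, 1))

def checkerboard_template(height, width, box, value=255):
    """Checkerboard pattern for two-dimensional structural non-uniformity:
    each driven area is surrounded by areas of a different (black) value."""
    rows = (np.arange(height) // box) % 2
    cols = (np.arange(width) // box) % 2
    return np.where(rows[:, None] ^ cols[None, :], 0, value)
```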
In another embodiment, low-frequency non-uniformities across the panel are extracted by applying the patterns (flat field) and taking images of the panel. The images are corrected to eliminate non-idealities such as field-of-view effects and other factors, and their area and resolution are adjusted to match the panel by creating a value for each pixel in the display; these values are then used to compensate the low-frequency non-uniformities across the panel.
Under ideal conditions, after compensation (either in-pixel or external compensation) the uniformity should be within expected specifications.
For external compensation, each measurement attained through the system yields the voltage (or current) required to produce a specified output current (or voltage) for each and every sub-pixel. These values are then used to create a compensated value for the entire panel or for a point in the output response of the display. Thus, after applying the compensated values to create a flat field, the display should produce a perfectly uniform response. In reality, however, several factors may contribute to a non-perfect response. For instance, a mismatch in calibration between measurement circuits may artificially induce parasitic vertical banding into each measurement. Alternatively, loading effects on the panel coupled with non-idealities in the panel layout may introduce darker or brighter horizontal waves known as ‘gate bands.’ In general, these issues are easiest to solve through external, optical correction.
Two applications of optical correction are (1) structural non-uniformity correction and (2) global non-uniformity correction.
Structural Non-Uniformity Caused by Measurement Units
Here the process to fix the structural non-uniformity caused by the measurement units is described, but it will be understood that the process can be modified to compensate for other structural non-uniformities.
After the panel is measured at a few different operating points, compensated patterns (e.g., flat-field images) are created based on the measurement.
The optical measurement equipment (e.g., camera) is tuned to the appropriate exposure for maximum variation detection. In the case of vertical (or horizontal) bands, two templates can be used: the first template turns off the even bands and the second template turns off the odd bands. In this way, regions can be easily detected and the average variation determined for each region. Once the photographs are taken, the average variation is calculated. As mentioned above, each measurement should have a uniform response. Thus, the goal is to apply the following inverse to the entire measurement:

Mcorrected=Mraw/LM

where Mcorrected is the corrected measurement, Mraw is the raw measurement, and LM is the optically measured luminance variation.
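A minimal sketch of applying such an inverse follows. It assumes that LM is available per band (or per pixel) and that normalizing LM to the panel mean before the element-wise division is acceptable; both assumptions are choices of this sketch rather than details fixed by the text.

```python
import numpy as np

def apply_inverse(m_raw, lm):
    """Apply the inverse of the optically measured luminance variation to
    the raw electrical measurement.

    m_raw and lm are arrays of matching shape (per band or per pixel).
    lm is normalised so a perfectly uniform panel gives lm == 1 everywhere
    (an assumption of this sketch).
    """
    lm_norm = lm / lm.mean()
    return m_raw / lm_norm
```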
The following is one example of a detailed procedure:
1. Set Up the Optical Measurement Device (e.g., Camera)
Adjust the optical measurement device (OMD) to be as straight and level as possible. The internal level on the optical measurement device can be used in conjunction with a level held vertically against the front face of the lens. Fix the position of the OMD.
2. Set Up the Panel
The panel should be centered in the frame of the camera. This can be done using guides, such as the grid lines in the viewfinder, if available. In one method, physical levels can be used to check that the panel is aligned. Also, a pre-adjusted gantry can be used for the panels: as the panels arrive for measurement, they are aligned with the gantry, which can have physical markers that the panels can rest against or be aligned with. In addition, alignment patterns shown on the display can be used to align the panel by moving or rotating it based on the output of an OMD (which can be the same as the main OMD) and the alignment pattern. Moreover, the measurement image of the alignment patterns can be used to preprocess the actual measurement images taken by the OMD for non-uniformity correction.
3. Photograph the Template Images
Two template files are created, one of which blacks out all the even bands and the other all the odd bands. These are used to create template images for extracting the measurement structural non-uniformity data. These masks can be directly applied to the target compensated images created based on the externally measured data. The resulting files can now be displayed with only the selected sub-pixel (for example white) enabled. Since the bands in this case are all of equal width, the OMD settings should be adjusted such that the pixel width of bright areas is approximately equal to the pixel width of dark areas in the resulting images. One picture is needed of each of the template variations. The same OMD settings should be used for both.
4. Photograph the Curve Fit Points
While the correction data can be extracted directly from the above two images, in another implementation of the invention an image of each of the target points in the output response of the display is taken. Here, the target points are compensated first based on the electrically measured data. The same OMD settings and adjustments described in step 2 are used. It was found experimentally that extracting the variance in white and applying it to all colors gave good final results while reducing the number of images and the amount of data processing required. The position of the camera and the panel should remain fixed throughout steps 3 and 4.
5. Image Correction
In an effort to produce optimal correction, both the template images and curve-fit points should be corrected for artifacts introduced by the OMD. For instance, image distortion and chromatic aberration are corrected using parameters specified by the OMD and applied using standard methods. As a result, the images attained from the OMD can directly be matched to defects seen in electrically measured data for each curve-fit point.
For template images, boundaries at the edges of mask regions are first de-skewed and then further cropped using a threshold. As a result, each of the resulting edges is smooth, preventing adjacent details in the underlying image from leaking in. For instance, the underlying image to which the mask is being applied may have a bright region adjacent to a dark region. Rough edges on the applied mask may introduce inaccuracy in later stages as the bright region's OMD reading may leak into that of the dark region.
6. Find Image Co-Ordinates
Here, the alignment mark images can be used to identify the image coordinates in relation to the display pixels. Since the alignment marks are displayed at known display pixel indices, the image can now be cropped to roughly the panel area. This reduces the amount of data processing required in subsequent steps.
7. Generate the Template Image Masks
In this case, the target point images are used to extract non-uniformities, and the two patterned images are used as masks. The rough crop from step 6 can be used so that only the portion of the template image that contains the panel is processed. Where the brightness in those template images is higher than a threshold, the pixel is set to 1 (or another value), and where the brightness is lower than the threshold it is set to zero. In this case, the pattern images become bands of black and white. These bands can be used to identify the boundaries of the bands in the target point images.
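A minimal sketch of this thresholding step, assuming the template photographs are grayscale NumPy arrays already cropped to the panel region; the midpoint default threshold is an assumption.

```python
import numpy as np

def template_mask(template_image, threshold=None):
    """Binarise a template photograph: 1 where brightness exceeds the
    threshold, 0 elsewhere, yielding black-and-white bands whose edges
    mark the band boundaries in the target-point images."""
    if threshold is None:
        threshold = 0.5 * (template_image.max() + template_image.min())
    return (template_image > threshold).astype(np.uint8)
```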
8. Apply Generated Templates to Curve-Fit Points
Either using the patterned images or the target point images, a value is created for each band based on the OMD output using a data/image processing tool (e.g., MATLAB). The measured luminance values for each region are corrected for outliers (typically 2σ-3σ) and averaged.
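The per-band statistic might be computed as follows, using NumPy in place of the tooling mentioned above; the 2.5σ cut is an arbitrary value inside the 2σ-3σ range quoted.

```python
import numpy as np

def band_average(luminance, mask, n_sigma=2.5):
    """Average the optically measured luminance over one band, after
    rejecting outliers more than n_sigma standard deviations from the mean."""
    values = luminance[mask.astype(bool)]
    mu, sigma = values.mean(), values.std()
    kept = values[np.abs(values - mu) <= n_sigma * sigma]
    return kept.mean() if kept.size else mu
```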
9. Apply and Tune the Correction Factors
Using the overall panel average and the averages for each band, the created target points can be corrected by scaling each band by a fixed gain for each color and applying it to the original file. The gain required for each color of each level is determined by generating files with a range of gain factors, then displaying them on the panel.
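A minimal sketch of the scaling described above follows. The definition of the gain as the panel average divided by the band average is an assumption consistent with the goal of a uniform response, and the on-panel sweep over a range of gain factors used for tuning is not reproduced here.

```python
import numpy as np

def correct_target_point(target, band_masks, band_averages):
    """Scale each band of a target-point image by a fixed gain so that all
    bands match the overall panel average.

    band_masks: hypothetical dict mapping a band id to its boolean mask.
    band_averages: dict mapping the same band ids to their measured averages.
    """
    panel_average = np.mean(list(band_averages.values()))
    corrected = target.astype(float).copy()
    for band, mask in band_masks.items():
        gain = panel_average / band_averages[band]   # fixed gain per band
        corrected[mask.astype(bool)] *= gain
    return corrected
```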
In the case where the electrical measurement value is the grayscale required for each pixel to provide a fixed current, the target point is the measured data, although some correction may be applied to compensate for some of the non-idealities.
Low-Frequency Non-Uniformity Correction
Although low-frequency compensation can be applied to the original target points or to a raw panel, low-frequency uniformity correction is generally applied once the structural and high-frequency compensation procedures described above have been completed for the panel. The following is one example of a detailed procedure:
1. Photograph the Structural Non-Uniformity Compensated Target Points
For each compensated target point, an image is captured for each of the sub-pixels (or combinations of sub-pixels). For two target points, this results in a total of 8 images. The exposure of the OMD is then adjusted such that the histogram peak is at approximately 20%. This value can be different for different OMD devices and settings. To adjust, the target image is displayed with only the one sub-pixel enabled. The same settings are then used to image each of the remaining colors individually for a given level. However, different settings can be used for each sub-pixel.
2. Find the Corner Co-Ordinates
The same process as before can be applied to find the matching coordinates between the images and the display pixels using the alignment marks. Also, if the display has not been moved, the same coordinates from the previous setup can be used.
3. Correct the Image
Using the coordinates found in step 2, the image can be adjusted so that the resulting image matches the rectangular resolution of the display. In an effort to produce optimal correction, both the template images and curve-fit points should be corrected for artifacts introduced by the OMD. Image distortion and chromatic aberration are corrected using parameters specified by the OMD and applied using standard methods. If necessary, a projective transform or other standard method can be used to square the image. Once square, the resolution can be scaled to match that of the panel. As a result, the images attained from the OMD can be directly matched to defects seen in the electrically measured data for each curve-fit point.
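One way to carry out this step is sketched below, assuming OpenCV is available, that the four panel corners were located in step 2 (ordered top-left, top-right, bottom-right, bottom-left), and that distortion and chromatic aberration have already been corrected; the warp in a single step to the panel resolution is a choice of this sketch.

```python
import cv2
import numpy as np

def square_and_scale(image, corners, panel_width, panel_height):
    """Warp the OMD image so the panel becomes a rectangle matching the
    display resolution.

    corners: the four detected panel corners in image coordinates, ordered
    top-left, top-right, bottom-right, bottom-left.
    """
    src = np.float32(corners)
    dst = np.float32([[0, 0],
                      [panel_width - 1, 0],
                      [panel_width - 1, panel_height - 1],
                      [0, panel_height - 1]])
    transform = cv2.getPerspectiveTransform(src, dst)          # projective transform
    return cv2.warpPerspective(image, transform, (panel_width, panel_height))
```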
4. Apply and Tune the Correction Factors
The images created in step 3 can be used to adjust the target points for global non-uniformity correction. One method is to scale the extracted images and add them to the target points. In another method, the extracted image can be scaled by a factor and the target point images can then be scaled by the modified image.
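A minimal sketch of the first method (scale the extracted image and combine it with the target points) follows; the subtraction of the low-frequency deviation and the sign convention are assumptions that depend on how the target points are defined, and the scale factor is tuned as described below.

```python
import numpy as np

def adjust_target_points(target, extracted_image, scale=1.0):
    """Adjust a target-point image for global (low-frequency) non-uniformity.

    extracted_image: the squared, panel-resolution OMD image from step 3.
    scale: correction strength, tuned with sensors at a few panel locations
    or by visual inspection (per the text).
    """
    deviation = extracted_image - extracted_image.mean()   # low-frequency profile
    # Sign convention is an assumption: brighter-than-average regions get a
    # lower target value, and vice versa.
    return target.astype(float) - scale * deviation
```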
To extract the correction factors in any of the above methods, one can place sensors at a few points on the panel and modify the factors until the variation in the sensor readings is within the specifications. In another method, visual inspection can be used to arrive at the correction factors. In both cases, the correction factors can be reused for other panels if the setup and the panel characteristics do not change.
While particular embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and compositions disclosed herein and that various modifications, changes, and variations can be apparent from the foregoing descriptions without departing from the spirit and scope of the invention as defined in the appended claims.
This application is a continuation of U.S. patent application Ser. No. 16/112,161, filed Aug. 24, 2018, now allowed, which is a continuation of U.S. patent application Ser. No. 14/255,132, filed Apr. 17, 2014, now U.S. Pat. No., which is a continuation-in-part of and claims priority to U.S. patent application Ser. No. 14/204,209, filed Mar. 11, 2014, Now U.S. Pat. No. 9,324,268, which claims the benefit of U.S. Provisional Application No. 61/787,397, filed Mar. 15, 2013, each of which is hereby incorporated by reference herein in its entirety. U.S. patent application Ser. No. 14/255,132, filed Apr. 17, 2014, is also a continuation-in-part of and claims priority to U.S. patent application Ser. No. 13/689,241, filed Nov. 29, 2012, now U.S. Pat. No. 9,385,169, which claims the benefit of U.S. Provisional Application No. 61/564,634 filed Nov. 29, 2011, each of which is hereby incorporated by reference herein in its entirety.