This application is related to U.S. patent application Ser. No. 11/731,094, entitled “Reduction of mura effects,” filed on Mar. 29, 2007 and U.S. patent application Ser. No. 12/008,470, entitled “Correction of visible mura distortions in displays by use of flexible system for memory resources and mura characteristics,” filed on Jan. 11, 2008, and the above-listed U.S. Patent Applications are hereby incorporated herein by reference in their entirety.
Embodiments of the present invention comprise methods and systems for display correction, in particular, for compression of display non-uniformity correction data and use of compressed display non-uniformity correction data.
Inspection and testing of flat panel displays using a human operator may be costly, time consuming and based on the operator's perception. Therefore, the quality of human-operator-based inspections may be dependent on the individual operator and may yield subjective results that may be prone to error. Some automated inspection techniques may rely on a pixel-by-pixel measurement and correction of display non-uniformity. These techniques may require a prohibitive amount of memory for storage of the correction data, and methods and systems for reducing the storage requirements for the correction data may be desirable.
Some embodiments of the present invention comprise methods and systems for compressing display non-uniformity correction data, in particular, correction images.
In some embodiments of the present invention, a correction image may be compressed by fitting a data model to the correction image and encoding the model parameter values. In some embodiments of the present invention, a piecewise polynomial model may be used. In alternative embodiments of the present invention a B-spline may be used. In some embodiments of the present invention, the correction image may be decomposed into two images: an image containing the vertically and horizontally aligned structures of the correction image and a smoothly varying image. The smoothly varying image may be compressed by fitting a data model to the smoothly varying image.
In some embodiments of the present invention, multiple correction images may be compressed by determining eigenvectors that describe the distribution of the multiple correction images. Projection coefficients may be determined by projecting each correction image onto the determined eigenvectors. In some embodiments of the present invention, an eigen-image associated with an eigenvector may be compressed according to the single-correction-image compression methods and systems of embodiments of the present invention.
Some embodiments of the present invention comprise methods and systems for using compressed display non-uniformity correction data. In some embodiments of the present invention, correction data for a display may be reconstructed from parameters stored on the display system. In some embodiments of the present invention, a plurality of correction images may be reconstructed from encoded eigen-images stored on the display system.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
Embodiments of the present invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The figures listed above are expressly incorporated as part of this detailed description.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the methods and systems of the present invention is not intended to limit the scope of the invention, but it is merely representative of the presently preferred embodiments of the invention.
Elements of embodiments of the present invention may be embodied in hardware, firmware and/or software. While exemplary embodiments revealed herein may only describe one of these forms, it is to be understood that one skilled in the art would be able to effectuate these elements in any of these forms while remaining within the scope of the present invention.
Mura defects are contrast-type defects in which one, or more, pixels on a display are brighter, or darker, than the surrounding pixels when the display is driven at a constant gray level and should display uniform luminance. For example, when an intended flat region of color is displayed, various imperfections in the display components may result in undesirable modulations of the luminance. Mura defects may also be referred to as “Alluk” defects or, generally, non-uniformity distortions. Generically, such contrast-type defects may be identified as “blobs,” “bands,” “streaks,” and other terms indicative of non-uniformity. There are many stages in the manufacturing process that may result in mura defects on the display.
Mura correction on a display may require pixel-by-pixel correction using stored correction data for the display. In some embodiments of the present invention, stored correction data may comprise data associated with a correction image, which may be denoted I_{c,l}(i,j), associated with a color component and a gray level. Some of these embodiments may comprise three color components, which may be denoted c, 0≤c≤2. Some of these embodiments may comprise 256 gray levels, which may be denoted l, 0≤l≤255.
Some embodiments of the present invention may be described in relation to
Some embodiments of the present invention may be described in relation to
In some embodiments of the present invention described in relation to
Some embodiments of the present invention may comprise a piecewise polynomial model. In these embodiments, the input correction image 2 may be partitioned into one, or more, two-dimensional (2D) regions, each of which may be referred to as a patch. In some embodiments of the present invention, the partition grid may be spatially uniform. In alternative embodiments, the partition grid may be adaptive. In some embodiments of the present invention, the density of the partition grid may be related to the variation in the correction image. In some of these embodiments, the partition grid may be denser in areas of the correction image in which there is greater variation. In some embodiments of the present invention, information defining the partition grid may be stored with the encoded model parameters.
In embodiments of the present invention comprising a piecewise polynomial model, a region, which may be denoted P_p(i,j), of the input correction image 2 may be approximated by a planar model according to:

\tilde{P}_p(i,j) = a_p i + b_p j + k_p,

where a_p, b_p and k_p denote the model parameters associated with the image region P_p(i,j) and \tilde{P}_p(i,j) denotes the approximated region. The model fitting 4 may minimize a measure of the discrepancy between the image region P_p(i,j) and the approximated region \tilde{P}_p(i,j). In some embodiments of the present invention, the image region may be a rectangular patch. In alternative embodiments of the present invention, the shape of the region may be non-rectangular.
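The planar fitting above may be sketched as an ordinary least-squares problem, one plane per patch. The following is an illustrative sketch only, assuming NumPy; the function names are hypothetical and not taken from the specification:

```python
import numpy as np

def fit_planar_patch(patch):
    """Least-squares fit of the planar model a*i + b*j + k to a 2D patch."""
    H, W = patch.shape
    i, j = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Design matrix: one row [i, j, 1] per pixel.
    A = np.column_stack([i.ravel(), j.ravel(), np.ones(H * W)])
    (a, b, k), *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    return a, b, k

def reconstruct_patch(a, b, k, shape):
    """Evaluate the fitted plane over the patch grid."""
    H, W = shape
    i, j = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return a * i + b * j + k
```

Here the discrepancy measure is the sum of squared errors; the specification leaves the particular measure open.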
The parameters a_p, b_p and k_p for all regions in a partition may be encoded 8 and stored 12. A reconstructed correction image 26 may be reconstructed by decoding 20 the parameters 18 for each region and calculating 24 the approximated region.

Alternative embodiments of the present invention may comprise a model selecting between two planar fittings: \tilde{P}_{1,p}(i,j) = a_p i + k_{1,p} and \tilde{P}_{2,p}(i,j) = b_p j + k_{2,p}. In these embodiments, a binary mode indicator, which may be denoted d_p, may indicate which of the two fittings is selected for a given region.

The parameter d_p and the associated model parameters (a_p, k_{1,p}) or (b_p, k_{2,p}) for all regions in a partition may be encoded 8 and stored 12. A reconstructed correction image 26 may be reconstructed by decoding 20 the parameters 18 for each region and calculating 24 the approximated region using the appropriate model indicated by the binary mode indicator.
In some embodiments of the present invention, the mode decision for a region in the model fitting 4 may be made based on which planar fitting generates the best fit to the input region of the correction image.
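The binary mode decision may be sketched as fitting both single-gradient planes and keeping whichever yields the smaller squared error. This is a hedged sketch assuming NumPy; the function name and the squared-error criterion are assumptions, since the specification only says "best fit":

```python
import numpy as np

def select_mode(patch):
    """Choose between P~1(i,j) = a*i + k1 (row-direction gradient only)
    and P~2(i,j) = b*j + k2 (column-direction gradient only).
    Returns the mode flag d (0 or 1) and the winning parameter pair."""
    H, W = patch.shape
    i, j = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    A1 = np.column_stack([i.ravel(), np.ones(H * W)])
    A2 = np.column_stack([j.ravel(), np.ones(H * W)])
    p1, *_ = np.linalg.lstsq(A1, patch.ravel(), rcond=None)
    p2, *_ = np.linalg.lstsq(A2, patch.ravel(), rcond=None)
    e1 = np.sum((A1 @ p1 - patch.ravel()) ** 2)
    e2 = np.sum((A2 @ p2 - patch.ravel()) ** 2)
    return (0, tuple(p1)) if e1 <= e2 else (1, tuple(p2))
```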
Alternative embodiments of the present invention may comprise fitting a two-dimensional B-spline surface to the input correction image 2. In some of these embodiments, the knot points may be spaced uniformly, that is, equidistantly. In alternative embodiments, the knot points may be placed adaptively, that is, non-uniformly. In some embodiments of the present invention, the density of knot points may be related to the variation in the correction image. In some of these embodiments, the knot points may be denser in areas of the correction image in which there is greater variation. In some embodiments of the present invention, information defining the location of the knot points may be stored with the encoded knot values.
In some embodiments of the present invention, the basis B-splines for degree n may be shifted copies of each other. In these embodiments, given knot values, which may be denoted g(v,h), an approximated correction image, which may be denoted \tilde{I}_{c,l}(i,j), may be determined by up-sampling the knot samples and then convolving with a B-spline kernel, which may be denoted b^n(i,j). The approximated correction image may be determined according to:

\tilde{I}_{c,l}(i,j) = [g]_{\uparrow m_v, m_h} * b^n_{m_v, m_h},

where m_v and m_h are sub-sampling ratios in the vertical and horizontal spatial dimensions, respectively.
Some embodiments of the present invention may comprise a uniform B-spline of degree 1. These embodiments may be equivalent to bilinear sub-sampling where the knot values may be sub-sampled pixel intensity values.
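The degree-1 case can be sketched as separable zero-insertion up-sampling followed by convolution with the triangle (degree-1 B-spline) kernel, which reproduces bilinear interpolation between the knot values. A minimal sketch assuming NumPy and an odd-length centered kernel; the function names are hypothetical:

```python
import numpy as np

def upsample_linear_1d(knots, m):
    """Zero-insert upsampling by factor m, then convolution with the
    degree-1 B-spline (triangle) kernel -> linear interpolation."""
    n = len(knots)
    up = np.zeros((n - 1) * m + 1)
    up[::m] = knots
    t = np.arange(-(m - 1), m)
    kernel = 1.0 - np.abs(t) / m          # triangle of width 2m-1
    return np.convolve(up, kernel, mode="same")

def reconstruct_bspline1(knot_grid, mv, mh):
    """Separable reconstruction of an approximated correction image
    from a sub-sampled knot grid (uniform B-spline of degree 1)."""
    rows = np.apply_along_axis(upsample_linear_1d, 1, knot_grid, mh)
    return np.apply_along_axis(upsample_linear_1d, 0, rows, mv)
```

For mv = mh = 2 this doubles the knot grid resolution, inserting the bilinear midpoints between stored knot samples.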
In some embodiments of the present invention comprising fitting 4 a B-spline surface to the input correction image 2, the model fitting 4 may comprise determining knot values that minimize an error measure between the input correction image 2 I_{c,l}(i,j) and the correction image reconstructed using the model \tilde{I}_{c,l}(i,j). Exemplary error measures include mean-square error (MSE), mean absolute error (MAE), root mean-square error (RMSE) and other error measures known in the art. In some embodiments of the present invention comprising the MSE, the spline approximation may be solved by recursive filtering. In alternative embodiments, the spline approximation may be solved by systems of linear equations.
In some embodiments of the present invention, predictive coding may be used to encode 8 the model parameters representative of the correction image 2.
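The specification does not fix a particular predictive coder; one simple possibility, shown here purely as an illustrative sketch, is first-order DPCM with uniform quantization of the prediction residuals (NumPy-free; names hypothetical):

```python
def dpcm_encode(values, step=1.0):
    """Predict each parameter by the previously *reconstructed* one and
    transmit the quantized residual, so encoder and decoder stay in sync."""
    residuals = []
    pred = 0.0
    for v in values:
        q = int(round((v - pred) / step))
        residuals.append(q)
        pred = pred + q * step   # decoder-matched reconstruction
    return residuals

def dpcm_decode(residuals, step=1.0):
    """Invert dpcm_encode: accumulate dequantized residuals."""
    out, pred = [], 0.0
    for q in residuals:
        pred = pred + q * step
        out.append(pred)
    return out
```

The reconstruction error per parameter is bounded by half the quantization step, which is why predictive coding suits the slowly varying model parameters of neighboring regions.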
In some embodiments of the present invention described in relation to
I_{c,l}(i,j) = S_{c,l}(i,j) + N_{c,l}(i,j),

where S_{c,l}(i,j) denotes the vertically and horizontally aligned structures 35 and N_{c,l}(i,j) denotes the smoothly varying component 36. In some embodiments, the vertically and horizontally aligned structures 35 may be represented by a column vector and a row vector, which may be denoted Col_{H×1} and Row_{1×W}, respectively, where W and H refer to the width and height of the correction image 32, respectively. In some of these embodiments, the vertically and horizontally aligned structures S_{c,l}(i,j) may be determined according to:

S_{c,l}(i,j) = Col_{H×1} * Row_{1×W}.

In alternative embodiments, the vertically and horizontally aligned structures 35 S_{c,l}(i,j) may be determined according to:

S_{c,l}(i,j) = Col_{H×1} * 1_{1×W} + 1_{H×1} * Row_{1×W},

where 1_{1×W} and 1_{H×1} denote a row vector and a column vector of all “1” entries, respectively.

In some embodiments of the present invention, the column vector Col_{H×1} and the row vector Row_{1×W} may be stored 46 directly as part of the encoded image information. In alternative embodiments, the column vector Col_{H×1} and the row vector Row_{1×W} may be encoded prior to storage.
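The additive decomposition I = S + N can be sketched as follows. The specification leaves the estimator for Col and Row open; per-row and per-column means are one simple choice, used here as an assumption (NumPy assumed; names hypothetical):

```python
import numpy as np

def decompose_aligned(I):
    """Split a correction image I (H x W) into aligned structure
    S = Col * 1_{1xW} + 1_{Hx1} * Row and a remainder N, with I = S + N.
    Estimator (an assumption, not from the specification):
    Col = per-row means, Row = per-column means of the row-adjusted residual."""
    col = I.mean(axis=1, keepdims=True)          # H x 1 column vector
    row = (I - col).mean(axis=0, keepdims=True)  # 1 x W row vector
    S = col + row                                # broadcasting realizes the sum of outer products
    N = I - S                                    # smoothly varying component
    return col, row, S, N
```

N may then be compressed with the single-image model fitting described above, while Col and Row are stored directly or encoded.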
The above-described embodiments of the present invention relate to one correction image. In some embodiments of the present invention, multiple correction images may be encoded for storage or other use.
Some embodiments of the present invention may be described in relation to
A mean-adjusted correction-image vector, which may be denoted Φ_p, corresponding to each correction-image vector, Γ_p, may be determined by subtracting 56 the mean correction-image vector, Ψ, from each correction-image vector, Γ_p, according to:

Φ_p = Γ_p − Ψ.
The covariance matrix, which may be denoted Cov, may be formed 58 according to:
Principal Component Analysis (PCA) may be applied by determining 60 the eigenvalues and eigenvectors of the covariance matrix, Cov. An eigenvector and its associated eigenvalue may be denoted u_q and λ_q, respectively, where 0≤q≤m−1. Each input correction-image vector, Γ_p, may be projected 62 onto the eigenspace corresponding to the eigenvectors and eigenvalues according to:

ω_p(q) = u_q^T · Φ_p,  0≤q≤m−1.

The mean correction-image vector, Ψ, and the eigenvectors, u_q, may be compressed 64, 66 according to any of the methods and systems described herein for compressing single correction images, by un-stacking the vectors back to image form. The un-stacked eigenvectors in image form may be referred to as eigen-images, and the un-stacked mean correction-image vector may be referred to as the mean correction-image. The encoded mean correction-image, eigen-images and projection coefficients, ω_p(q), 0≤q≤m−1, 0≤p≤3K−1, may be stored 68. In some embodiments of the present invention, the projection coefficients may be stored 68 directly. In alternative embodiments of the present invention, the projection coefficients may be encoded and stored.
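The mean-subtraction, eigen-analysis, and projection steps may be sketched as follows. This is an illustrative sketch assuming NumPy; the Gram-matrix shortcut (computing eigenvectors of the small Φ^T Φ instead of the large pixel-space covariance) is an added efficiency assumption, not stated in the text:

```python
import numpy as np

def pca_encode(images):
    """Stack each correction image into a column vector, subtract the mean
    image, derive eigenvectors, and project every image onto them."""
    X = np.stack([im.ravel() for im in images], axis=1)  # pixels x M
    mean = X.mean(axis=1, keepdims=True)                 # Psi
    Phi = X - mean                                       # mean-adjusted vectors
    # Gram trick: eigenvectors of Phi^T Phi yield those of Phi Phi^T.
    w, V = np.linalg.eigh(Phi.T @ Phi)
    order = np.argsort(w)[::-1]                          # descending eigenvalue
    U = Phi @ V[:, order]                                # eigenvectors u_q
    U /= np.linalg.norm(U, axis=0, keepdims=True) + 1e-12
    coeffs = U.T @ Phi                                   # omega_p(q) = u_q^T Phi_p
    return mean, U, coeffs

def pca_decode(mean, U, coeffs):
    """Reconstruct the stacked correction-image vectors."""
    return mean + U @ coeffs
```

Un-stacking each column of U back to H x W form yields the eigen-images referred to above.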
In some embodiments of the present invention described in relation to
for gray levels and color components corresponding to captured mura-correction images. In some embodiments of the present invention, missing gray-level correction images may be determined by linear interpolation between the two closest neighboring captured gray levels. In some embodiments of the present invention, the interpolation may be applied to the projected coordinates in the eigenspace.
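Interpolating in the eigenspace means blending the projection coefficient vectors of the two closest captured levels rather than the full images. A minimal sketch assuming NumPy; the function name is hypothetical:

```python
import numpy as np

def interpolate_level(levels, coeffs, target):
    """Linearly interpolate the eigenspace projection coefficients of a
    missing gray level from its two closest captured neighbours.
    `levels`: sorted captured gray levels; `coeffs[k]`: coefficient
    vector omega for levels[k]; `target`: the missing gray level."""
    levels = np.asarray(levels, dtype=float)
    hi = int(np.searchsorted(levels, target))
    lo = max(hi - 1, 0)
    hi = min(hi, len(levels) - 1)
    if lo == hi:                      # target at or beyond the captured range
        return np.asarray(coeffs[lo], dtype=float)
    t = (target - levels[lo]) / (levels[hi] - levels[lo])
    return (1 - t) * np.asarray(coeffs[lo]) + t * np.asarray(coeffs[hi])
```

The interpolated coefficient vector is then decoded through the stored eigen-images to produce the missing correction image.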
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalence of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
Number | Date | Country | |
---|---|---|---|
20100008591 A1 | Jan 2010 | US |