The present invention generally relates to brightness and color measurement. More particularly, several aspects of the present invention are related to methods and apparatuses for measuring and calibrating the output from large, visual display signs.
Visual display signs have become commonplace in sports stadiums, arenas, and public forums throughout the world. The signs are typically very large, often measuring several hundred feet across. Because of their immense size, the signs must be assembled and installed on-site using a series of smaller panels, each of which is in turn composed of a series of modules. The modules are internally connected to each other by way of a bus system. A computer or central control unit sends graphic information to the different modules, which then display the graphic information as images and text on the sign.
Each module in turn is made up of hundreds of individual light-emitting elements, or “pixels.” Each pixel, in turn, is made up of a plurality of light-emitting points, e.g., one red, one green, and one blue. The light-emitting points are termed “subpixels.” During calibration of each module, the color and brightness of each pixel are adjusted so that the pixels can display a particular color. The adjustment to each pixel necessary to create a color is then stored in software or firmware that controls the module.
Although each module is calibrated before leaving the factory, the individual pixels often do not exactly match each other in terms of brightness or color because of manufacturing tolerances. Furthermore, the electronics powering the various modules have tolerances that affect the power and temperature of the subpixels, which in turn affect the color and brightness of the individual pixels. As the sign ages, the light output of each subpixel may degrade. Because the degradation is not uniform for each color of subpixel, or even for each subpixel of the same color, the uniformity and color point of the sign will degrade over time. This can cause color shifts, visible edges around individual screen modules, and pixel-to-pixel non-uniformity.
Accordingly, the assembled visual display sign needs to be recalibrated periodically to maintain the ability to display colors clearly, uniformly, and accurately. However, the immense size of most visual display signs makes recalibration of the sign in a testing center impossible. Likewise, it is not cost-effective or practical to disassemble the sign in the field and bring in the individual modules to a testing center for recalibration.
On-site measurement and calibration presents its own challenges. For example, at a typical American football field the scoreboard may be 200 meters from a suitable measurement location. Measuring subpixels that may be only a few millimeters in size from a distance of 200 meters requires high-powered, specialized optics. Another problem with on-site measurement is the extraction and management of the massive amount of data that must be collected, stored, and used for calculation of new correction factors. A typical display sign will have well over two million subpixels that must each be measured and recorded.
In the following description, numerous specific details are provided, such as the identification of various system components, to provide a thorough understanding of embodiments of the invention. One skilled in the art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of various embodiments of the invention.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The imaging device 30 is positioned a distance from the sign 20 and configured to capture a series of images from an imaging area 22 on the sign 20. The captured image data is transferred from the imaging device 30 to an interface 50, which is operatively coupled to both the imaging device 30 and the sign 20. The interface 50 compiles and manages the image data from each imaging area 22, performs a series of calculations to determine the appropriate correction factors that should be made to the image data, and then stores the data. After capturing the data for imaging area 22, the imaging device 30 is repositioned to capture image data from a new imaging area on the sign 20. This process is repeated until images from the entire sign 20 have been obtained. After collection of all the necessary data, the processed correction data is then uploaded from the interface 50 to the sign 20 and used to recalibrate the display of the sign 20.
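The capture-and-reposition workflow described above can be summarized as a simple loop. The following Python sketch is illustrative only; all function names are hypothetical placeholders, not part of the actual system:

```python
def calibrate_sign(imaging_areas, capture_area, compute_corrections, upload):
    """Hypothetical sketch of the on-site workflow: capture each imaging
    area in turn, compute correction factors from the pooled image data,
    then upload the result to recalibrate the sign."""
    image_data = {}
    for area in imaging_areas:
        # The imaging device is repositioned to each new imaging area
        # before its image data is captured.
        image_data[area] = capture_area(area)
    corrections = compute_corrections(image_data)
    upload(corrections)
    return corrections
```

Here compute_corrections stands in for the correction-factor calculations performed by the interface 50, and upload for the final transfer of correction data back to the sign 20.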
The imaging device 30 also incorporates specialized optics that are necessary for high-resolution long-distance imaging. In one embodiment, the imaging device 30 is capable of measuring subpixels, which are only a few millimeters in size, from a distance of more than 200 meters.
The interface 50, which is operably coupled to both the imaging device 30 and the sign 20, is configured to manage the data that is collected, stored, and used for calculation of new correction factors that will be used to recalibrate the sign 20. A typical XGA-resolution visual display sign will have well over two million subpixels, each of which must be measured and recorded. The interface 50 controls the sign 20, automates the operation of the imaging device 30, and writes all the data into a database. The software is flexible enough to properly find and measure each subpixel, even when the alignment of the camera and the screen is not ideal. Further, the software in the interface 50 is adaptable to various sizes and configurations of visual display signs.
It should be understood that the division of the on-site calibration system 10 into three components is for illustrative purposes only and should not be construed to limit the scope of the invention. Indeed, the various components may be further divided into subcomponents, or the various components and functions may be combined and integrated. A detailed discussion of the various components and features of the on-site visual display calibration system 10 follows.
In addition to the digital camera 40, the imaging device 30 can also include a lens 90. In one embodiment, the lens 90 comprises a reflecting telescope operably coupled to the digital camera to enable the camera 40 to have sufficient resolution to resolve the imaging area 22 on the sign 20. In further embodiments, a variety of lenses may be used, so long as the particular lens provides sufficient resolution for the digital camera 40 to adequately capture image data within the imaging area 22.
The imaging device 30 is positioned at a distance L to the sign 20. The distance between the imaging device 30 and the sign 20 will vary depending on the screen size. In one embodiment, the imaging device 30 is positioned at a distance that is similar to the typical viewing distance of the sign 20. For example, in a sports stadium, the imaging device 30 may be placed in a seating area that is directly facing toward the sign 20. In other embodiments, however, the distance L can vary.
The on-site calibration system 10 further includes the interface 50. The interface 50 comprises image software to control the imaging device 30 as well as measurement software to find each subpixel in an image and extract the brightness and color data from the subpixel. In one embodiment, the interface 50 can be a personal computer with software for camera control, image data acquisition, and image data analysis. Optionally, in other embodiments various devices capable of operating the software can be used, such as handheld computers. Suitable software for the interface 50, such as ProMetric™ v. 7.2, is commercially available from the assignee of the present invention, Radiant Imaging, 15321 Main St. NE, Suite 310, Duvall, Wash.
The interface 50 also includes a database. The database is used to store data for each subpixel, including brightness, color coordinates, and calculated correction factors. In one embodiment, the database is a Microsoft® Access database designed by the assignee of the present invention, Radiant Imaging, 15321 Main St. NE, Suite 310, Duvall, Wash. The stored correction data is then uploaded to the module control system, which sends module control commands to the sign 20.
The sign 20 is assembled using a series of smaller panels 80, each of which is in turn composed of a series of modules 85. Each module 85 is made up of hundreds of individual light-emitting elements 60, or “pixels.” Each pixel 60, in turn, is made up of three light-emitting points, subpixels 70a-70c. In one embodiment, the subpixels 70a-70c are red, green, and blue respectively. In other embodiments, however, the number of subpixels may be more than three. For example, some pixels may have four subpixels, e.g., two green subpixels, one blue subpixel, and one red subpixel. Furthermore, in some embodiments, the red, green, and blue (RGB) color space may not be used. Rather, a different color space can serve as the basis for processing and display of color images on the sign 20. For example, the subpixels 70a-70c may be cyan, magenta, and yellow respectively.
The brightness level of each subpixel 70a-70c in the sign 20 can be varied. Accordingly, the additive primary colors represented by the red subpixel 70a, the green subpixel 70b, and the blue subpixel 70c, can be selectively combined to produce the colors within the color gamut defined by a color gamut triangle, as shown in
Calibration of the sign 20 requires highly accurate measurements of the color and brightness of each subpixel, often referred to as a light-emitting diode (LED). Typically, the accuracy required for measurement of the individual subpixels can only be achieved with a spectral radiometer. Subpixels are particularly difficult to measure accurately with a colorimeter because they are narrow-band sources: colorimeters rely on color filters that can have small imperfections in spectral response, and a small deviation in the filter response at the wavelength of a particular subpixel can result in significant measurement error. In the illustrated embodiment, however, the imaging device 30 utilizes a colorimeter. The problem with small measurement errors has been overcome by correcting for the errors using software in the interface 50 to match the results of a spectral radiometer. For a detailed overview of the software corrections, see “Digital Imaging Colorimeter for Fast Measurement of Chromaticity Coordinate and Luminance Uniformity of Displays”, Jenkins et al., Proc. SPIE Vol. 4295, Flat Panel Display Technology and Display Metrology II, Edward F. Kelley Ed., 2001. The article is incorporated herein by reference.
A two-stage Peltier cooling system using two back-to-back thermoelectric coolers 210 (TECs) operates to control the temperature of the CCD imaging array 200. The cooling of the CCD imaging array 200 within the camera 40 allows it to operate at 14-bit analog-to-digital conversion with approximately 2 bits of noise (i.e., 4 grayscale units of noise out of a possible 16,384 maximum dynamic range). A 14-bit CCD implies that up to 2^14, or 16,384, grayscale levels of dynamic range are available to characterize the amount of light incident on each pixel.
The CCD imaging array 200 comprises a plurality of light sensitive cells or pixels that are capable of producing an electrical charge proportional to the amount of light they receive. The pixels in the CCD imaging array 200 are arranged in a two-dimensional grid array. The number of pixels in the horizontal or x-direction, and the number of pixels in the vertical or y-direction, constitutes the resolution of the CCD imaging array 200. For example, in one embodiment the CCD imaging array 200 has 1536 pixels in the x-direction and 1024 pixels in the y-direction. Thus, the resolution of the CCD imaging array 200 is 1,572,864 pixels, or 1.6 megapixels.
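The dynamic-range and resolution figures quoted above can be checked with a few lines of Python:

```python
# 14-bit analog-to-digital conversion with ~2 bits of noise.
adc_bits = 14
grayscale_levels = 2 ** adc_bits   # 16,384 grayscale levels of dynamic range
noise_bits = 2
noise_levels = 2 ** noise_bits     # about 4 grayscale units of noise

# A 1536 x 1024 CCD imaging array.
x_pixels, y_pixels = 1536, 1024
resolution = x_pixels * y_pixels   # 1,572,864 pixels, i.e., ~1.6 megapixels

print(grayscale_levels, noise_levels, resolution)
```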
The resolution of the CCD imaging array 200 must be sufficient to resolve the imaging area 22 (
The method of the present invention is shown in
After the image is captured, at box 404 the image data is sent to the interface. The interface is programmed to calculate a three-by-three matrix of values that indicate the fractional amount of power at which to turn on each subpixel for each primary color. For example, when red is displayed on the screen, the matrix may specify that the screen turn on each red subpixel at 60% power, the green subpixels at 10% power, and the blue subpixels at 5% power. The following discussion details how this matrix is determined.
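As a sketch of how such a matrix might be applied, consider the following Python fragment. Only the red row (0.60, 0.10, 0.05) comes from the example above; the green and blue rows are hypothetical placeholder values, and the function name is illustrative:

```python
# Each row gives the fractional subpixel powers used when displaying
# one primary color. Only the red row comes from the text; the green
# and blue rows are hypothetical placeholders.
CORRECTION = [
    [0.60, 0.10, 0.05],   # drive fractions of R, G, B subpixels for "red"
    [0.08, 0.70, 0.04],   # hypothetical row for "green"
    [0.03, 0.06, 0.65],   # hypothetical row for "blue"
]

def subpixel_drive(rgb):
    """Map a requested (r, g, b) color, each component in [0, 1],
    to the fractional power applied to each physical subpixel."""
    r, g, b = rgb
    return tuple(
        r * CORRECTION[0][k] + g * CORRECTION[1][k] + b * CORRECTION[2][k]
        for k in range(3)
    )

print(subpixel_drive((1.0, 0.0, 0.0)))  # pure red selects the red row
```

Driving the sign with (1.0, 0.0, 0.0), i.e., pure red, simply selects the red row of the matrix.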
The goal is to determine the relative luminance levels of three given light sources, e.g., red, green and blue subpixels, to produce specified target chromaticity coordinates Cx and Cy. The first step is to compute the luminance target for each color. This can be done using the following equations, where L1, L2, and L3 are set to 1 and the source chromaticity values are just the target chromaticity values for each primary color. The following equations are used to calculate tristimulus values for each light source: Xi=Cxi·Li/Cyi, Yi=Li, and Zi=(1−Cxi−Cyi)·Li/Cyi, for i=1, 2, 3.
Next, calculate tristimulus values for the target chromaticity coordinates: Xt=Cxt·Lt/Cyt, Yt=Lt, and Zt=(1−Cxt−Cyt)·Lt/Cyt, where the target luminance Lt=L1+L2+L3.
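The tristimulus calculation referred to above follows the standard CIE relations between chromaticity coordinates (Cx, Cy), luminance L, and tristimulus values (X, Y, Z). A minimal Python sketch (the function name is illustrative):

```python
def xy_to_XYZ(cx, cy, lum):
    """Convert CIE chromaticity coordinates (cx, cy) and luminance lum
    to tristimulus values using the standard relations
    X = cx*lum/cy, Y = lum, Z = (1 - cx - cy)*lum/cy."""
    X = cx * lum / cy
    Y = lum
    Z = (1.0 - cx - cy) * lum / cy
    return X, Y, Z

# Equal-energy white (cx = cy = 1/3) yields X = Y = Z.
print(xy_to_XYZ(1 / 3, 1 / 3, 1.0))
```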
The next step is to determine the fractional luminance levels of the three light sources. Colors can be produced by combining the three light sources at different illumination levels. This is represented by the following equations: a·X1+b·X2+c·X3=Xt, a·Y1+b·Y2+c·Y3=Yt, and a·Z1+b·Z2+c·Z3=Zt.
In these equations, a, b, and c are the fractional values of luminance produced by the sources measured in the first step. For example, if a=0.5, then light source 1 should be turned on at 50% of the intensity measured in the first step to produce the desired color.
We can write the above system of equations as the single matrix equation A·(a, b, c)^T=(Xt, Yt, Zt)^T, where the columns of the matrix A are the tristimulus values (X1, Y1, Z1), (X2, Y2, Z2), and (X3, Y3, Z3) of the three sources.
We can then solve for a, b, and c by Cramer's Rule as a=Det(A1)/Det(A), b=Det(A2)/Det(A), and c=Det(A3)/Det(A), where A1, A2, and A3 are formed by replacing the first, second, and third columns, respectively, of the coefficient matrix A with the target vector (Xt, Yt, Zt), and Det(A)=X1·(Y2Z3−Y3Z2)−Y1·(X2Z3−X3Z2)+Z1·(X2Y3−X3Y2).
The calculated a, b and c fractions are the target luminance for each primary color.
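The Cramer's Rule computation described above can be sketched in Python as follows; the function names are illustrative:

```python
def det3(m):
    """Determinant of a 3x3 matrix, expanded along the first row,
    matching the Det(A) expansion given in the text."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve_fractions(sources, target):
    """Solve a*S1 + b*S2 + c*S3 = T for (a, b, c) by Cramer's Rule.
    sources is a list of three (X, Y, Z) tuples; target is (Xt, Yt, Zt)."""
    # Build the coefficient matrix A with the source tristimulus
    # values as its columns.
    A = [[sources[j][i] for j in range(3)] for i in range(3)]
    d = det3(A)
    fractions = []
    for col in range(3):
        # Replace column `col` of A with the target vector.
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][col] = target[i]
        fractions.append(det3(Ak) / d)
    return tuple(fractions)
```

When the source tristimulus vectors happen to be the unit vectors, solve_fractions simply returns the target vector itself, which is a convenient sanity check.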
At box 406, the next step is to compute the fractions for each primary color. Again, the same formulas as described above are applied. This time, however, the source luminance and chromaticity are those of each subpixel, as measured by the imaging device in box 402. The target is the chromaticity and luminance for each primary color, which was determined at box 404. The following equations are used to calculate tristimulus values for each light source: Xi=Cxi·Li/Cyi, Yi=Li, and Zi=(1−Cxi−Cyi)·Li/Cyi, for i=1, 2, 3.
Next, calculate tristimulus values for the target chromaticity coordinates: Xt=Cxt·Lt/Cyt, Yt=Lt, and Zt=(1−Cxt−Cyt)·Lt/Cyt, where the target luminance Lt=L1+L2+L3.
The next step is to determine the fractional luminance levels of the three light sources. Colors can be produced by combining the three light sources at different illumination levels. This is represented by the following equations: a·X1+b·X2+c·X3=Xt, a·Y1+b·Y2+c·Y3=Yt, and a·Z1+b·Z2+c·Z3=Zt.
In these equations, a, b, and c are the fractional values of luminance produced by the sources measured in the first step. We can write the above system of equations as the single matrix equation A·(a, b, c)^T=(Xt, Yt, Zt)^T, where the columns of the matrix A are the tristimulus values of the three sources.
We can then solve for a, b, and c by Cramer's Rule as a=Det(A1)/Det(A), b=Det(A2)/Det(A), and c=Det(A3)/Det(A), where A1, A2, and A3 are formed by replacing the first, second, and third columns, respectively, of the coefficient matrix A with the target vector (Xt, Yt, Zt), and Det(A)=X1·(Y2Z3−Y3Z2)−Y1·(X2Z3−X3Z2)+Z1·(X2Y3−X3Y2).
Now, a, b and c represent the fractional luminance levels of the three light sources needed to produce a target color of (Cx, Cy) at the maximum luminance possible. This calculation is repeated three times, once for each color. This provides three sets of three a, b and c fractions, which are the components of the three by three matrix discussed above.
Note that if any of the values a, b, or c are negative, the desired chromaticity coordinate cannot be produced by any combination of the three light sources, since it is outside the color gamut. A negative value would indicate a negative amount of luminance for a given subpixel, which of course cannot occur. The above formulas, however, do not take this into account. Accordingly, the negative fraction is set to zero, and the two other fractions, which would then produce more light than is needed to hit the target luminance, must be reduced. This is done as follows (illustrated for a negative value of a):
TotalLuminance=a*RedLuminance+b*GreenLuminance+c*BlueLuminance
ScaleFactor=TotalLuminance/(b*GreenLuminance+c*BlueLuminance)
b=b*ScaleFactor
c=c*ScaleFactor
a=0
Note that ScaleFactor will always be less than 1 because TotalLuminance includes the negative value. Also note that although we do achieve the target luminance, the target chromaticity is not quite achieved in this case.
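The rescaling steps above can be collected into a small Python helper. This sketch handles the case shown in the text, where the red fraction a has come out negative; the other two cases are analogous:

```python
def clamp_out_of_gamut(a, b, c, red_lum, green_lum, blue_lum):
    """If the red fraction a is negative, set it to zero and scale the
    remaining fractions down so the total luminance stays on target."""
    if a >= 0.0:
        return a, b, c  # inside the gamut; nothing to do
    total = a * red_lum + b * green_lum + c * blue_lum
    # ScaleFactor < 1 because `total` includes the negative contribution.
    scale = total / (b * green_lum + c * blue_lum)
    return 0.0, b * scale, c * scale
```

As the text notes, the target luminance is preserved by this adjustment, but the target chromaticity is only approximated.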
At box 408, the calculated correction determined above is uploaded from the interface to the firmware or software controlling the visual display sign. The visual display sign is then recalibrated using the new data for each subpixel.
One advantage of the foregoing embodiments of the on-site visual display calibration system is the efficiency and cost-effectiveness of recalibrating large visual display signs. It is impractical to disassemble the visual display sign in the field because of the sign's immense size. The on-site visual display calibration system provides an effective way of recalibrating the sign on-site without disassembling or in any way moving it.
Another advantage of the embodiments described above is the capability of the CCD digital camera to capture large amounts of data in a single image. For example, the two-dimensional array of pixels on the CCD imaging array is capable of capturing a large number of data points from the visual display sign in a single captured image. By capturing thousands, or even millions, of data points at once, the process of recalibrating the visual display sign is accurate and cost-effective.
While the invention is described and illustrated here in the context of a limited number of embodiments, the invention may be embodied in many forms without departing from the spirit or essential characteristics of the invention. The illustrated and described embodiments are therefore to be considered in all respects as illustrative and not restrictive. Thus, the scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein.
Related U.S. Application Data: Publication No. US 2004/0246273 A1, published Dec. 2004.