The present invention relates to three-dimensional (3D) display systems and, in particular, to color compensation for 3D displays that use shutter or polarized glasses.
Three dimensional (3D) displays have become very popular in the consumer electronics market in recent years. Most 3D displays on the market come with a pair of shutter glasses that end users or consumers wear to view 3D programs. The shutter glasses synchronize with a 3D projection, such as from a 3D television (TV), to separate the left and right views for the viewer's eyes so as to generate a “3D” viewing experience. Some 3D TVs use polarized glasses, which also act to separate the left and right eye views for an end user, although polarized glasses work in a different manner.
In principle, 3D display systems should not distort the colors of the original content. End users or consumers at home or in a theater should see exactly the same colors that colorists (or artists) see in the post-production house. However, this fidelity is not achieved by current 3D display systems.
Another common problem for 3D displays or 3D TVs is that the color distortion for the same content differs between their 2D and 3D modes, as shown in
This invention relates to color calibration of 3D display systems that use 3D glasses worn by a user. Typical 3D glasses include shutter or polarized glasses. Embodiments of the invention include a method to compensate for the color shifting that is caused by glasses in a 3D display system. The color shifting is modeled as different gains applied to the RGB channels. A calibration method is used to estimate the gain (color intensity) of each color channel, and a correction method is applied to compensate for such color shifting. Since the calibration process does not require professional photometer equipment, it can be easily implemented and integrated in consumer devices.
In one embodiment, a method performed by an apparatus for color correction of a 3D display device includes generating a pixel of a grey color having three pixel color channels. The grey color has channels with substantially equal color intensity values. The grey color is displayed on a display screen. A user or other input source provides adjusted color intensity values for two of the pixel color channels of the grey color. A corrected color intensity value for the two color channels is calculated using the adjusted color intensity values and the color intensity values of the original grey color. A multiplicity of different shades of grey is provided to the display for adjustment by the user. Look-up tables are generated that can be used to convert the original grey pixel color intensity values to the corrected color intensity values of the grey color.
The look-up tables can then be used to convert, pixel by pixel, the color intensity values of all pixels in an input frame to construct a corrected frame of pixels. The converted frame represents a pre-distorted input so that a user, viewing a display of the corrected pixel frames, perceives the displayed image as intended by the original input signal.
Additional features and advantages of the invention will be made apparent from the following detailed description of illustrative embodiments which proceeds with reference to the accompanying figures.
The color shifting introduced by 3D glasses may be modeled as gain factors applied to the three color channels. Let the color observed without glasses be [r, g, b] in the display's device-dependent color space. Then the linear color intensity from the 3D display can be written as:
[fr(r),fg(g),fb(b)] (1)
where fr(r), fg(g), fb(b) are the response curves of the display for the red, green, and blue channels, respectively. This is as described in functional block 110 of
When the display is viewed through the 3D glasses, the observed linear color intensity becomes:
[grfr(r),ggfg(g),gbfb(b)] (2)
where gr, gg, gb are the scaling factors representing the effect of the glasses. This is as described in functional block 120 of
The linear color intensity in (2) is different from the linear intensity in (1) as the three gain factors are usually not equal to each other. This models the color distortion introduced by the 3D glasses in a 3D display system.
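For illustration only, the gain model of equation (2) can be sketched in a few lines of Python. The gamma-2.2 response curve and the gain values below are assumptions, not values from the calibration described later:

```python
# Illustrative sketch (not from the patent's calibration): the per-channel
# gain model of equation (2), assuming a simple gamma display response and
# made-up gain values for the glasses.
GAMMA = 2.2  # assumed display gamma

def display_response(x, gamma=GAMMA):
    """Response curve f(x) of the display for a normalized channel value."""
    return x ** gamma

def through_glasses(rgb, gains):
    """Linear light reaching the eye: [gr*fr(r), gg*fg(g), gb*fb(b)]."""
    return [g * display_response(c) for c, g in zip(rgb, gains)]

# Hypothetical per-channel gains of a pair of 3D glasses:
gains = (0.60, 0.70, 0.55)
# A mid-grey input no longer looks grey through the glasses:
observed = through_glasses([0.5, 0.5, 0.5], gains)
```

Because the three gains differ, the three observed intensities differ even for a grey input, which is exactly the distortion described above.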
The goal of color compensation is to find another color [r′, g′,b′] such that
[grfr(r′),ggfg(g′),gbfb(b′)]=C[fr(r),fg(g),fb(b)] (3)
Here, the scaling factor C is either 1 or some other constant. This means that after color compensation, the displayed color is the same as the desired color (C=1), or the displayed color is uniformly brighter or darker (C≠1). The compensation for the brightness difference can be controlled via adjustment of the backlighting, which is available in many modern LCD TVs. Here the focus is on maintaining the same color for an end user regardless of whether the display is viewed through 3D glasses or not.
As mentioned above, the goal is to find a color value [r′, g′, b′] that satisfies equation (3). To simplify the problem, we let
g′=g (4)
In one embodiment, one need only adjust the red and blue components, leaving the green component unadjusted. From equation (3), one can then assert that
C=gg (5)
Since the red and blue components are treated in the same way in the following derivation, r′ is obtained below and b′ can be obtained in similar fashion.
Substituting (5) into (3), the following equation results for the red channel:
grfr(r′)=ggfr(r) (6)
Taking the logarithm of both sides of equation (6) gives:
log fr(r′)=log fr(r)+log(gg/gr) (7)
To simplify the notation, let
F(x)=log fr(x), C′=log(gg/gr) (8)
Then equation (7) can be rewritten in a simple form as follows:
F(r′)=F(r)+C′ (9)
or
r′=F⁻¹(F(r)+C′) (10)
Equation (10) is a closed-form solution useful for deriving the compensated color. However, equation (10) cannot be used directly because it contains several unknown factors: the response curve of the display device, and the gains gr, gg, gb characterized by the spectral response of the glasses, both of which are usually unavailable to a user.
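If the unknowns were available, equation (10) could be applied directly. A minimal sketch under that assumption (a gamma-2.2 response curve and made-up gains, both hypothetical) illustrates the closed form:

```python
import math

# For illustration only (assumed knowns, not the patent's calibration):
# equation (10) applied with an assumed response fr(x) = x**GAMMA and
# hypothetical glasses gains gr, gg.
GAMMA = 2.2
gr, gg = 0.60, 0.70  # hypothetical gains

def F(x):
    """F(x) = log fr(x), per equation (8)."""
    return math.log(x ** GAMMA)

def F_inv(y):
    """Inverse of F: x = exp(y / GAMMA)."""
    return math.exp(y / GAMMA)

C_prime = math.log(gg / gr)  # C' = log(gg/gr), per equation (8)

def compensate_red(r):
    """Equation (10): r' = F^-1(F(r) + C')."""
    return F_inv(F(r) + C_prime)

r = 0.5
r_prime = compensate_red(r)
```

By construction, gr·fr(r′) equals gg·fr(r), so equation (6) is satisfied and the red channel through the glasses matches the scaled desired color.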
Normally, special equipment such as photometers and associated calibration instruments is needed to determine the unknown factors, but such equipment is not widely available or easy to use for common end users. According to aspects of the invention, the following technique is used for calibration of a 3D display that includes the use of 3D glasses.
Initially, a grey color pixel sample with color intensity values [r1, g1, b1], where each color channel has the same color intensity value (that is, r1=g1=b1), is displayed as a plurality of same-colored pixels to the end user on a display. A grey color is the normal result of combining r, g, and b when the r, g, and b color intensity values of a pixel are substantially equal. Different tones or shades of grey result when the intensity of the r, g, b channels is changed. A plurality of grey pixels can form a grey portion of a display device. Generally, the plurality of grey color pixels presented to the end user may be shown on the entire display screen or on just a portion of it. Because of color shifting occurring in both the 3D display device and the 3D glasses, the grey color may not appear to the user as uniformly or completely grey. Then, while wearing the 3D glasses, the end user adjusts the red and blue channels such that, after adjustment, the color perceived by the user wearing the 3D glasses is closest to grey. Note that the green channel is held constant. In practice, any one of the three channels may be held constant as long as the other two are adjustable by the user. After adjustment, the user-adjusted color intensity values for the grey level are recorded as [r1u, g1, b1u]. These user-adjusted color intensity values are associated with the original color [r1, g1, b1].
Next, a different shade of grey is presented to the user on the display, where r2=g2=b2 represents the second shade of grey. Once again, the user adjusts the red and blue color intensity values to generate a uniform shade of grey, and the user-adjusted color intensity values are recorded as [r2u, g2, b2u], which correspond to the input [r2, g2, b2]. This step of presenting a new grey shade to the user and recording the user-adjusted color intensity values is repeated at least N times, where N≧3.
After the above process is done, a group of input color value sets [ri, gi, bi] and their corresponding user-adjusted color intensity values [riu, gi, biu] has been provided by the user. As human beings are good at distinguishing grey colors from non-grey ones, the above process can be done with reasonably high-quality results. The next step is to derive the unknown parameters in equation (10).
As the function F(x) in equation (10) is usually a smooth function, it can be modeled as a polynomial function. As an example, a second-order polynomial function is given below:
F(x)=a1x²+a2x+a3 (11)
From the user-assisted calibration process, we have a group of color pairs (i.e., [ri, gi, bi] and [riu, gi, biu]) that satisfy equation (9). Substituting (11) into (9), the following equation results:
a1riu²+a2riu+a3=a1ri²+a2ri+a3+C′ (12)
By rearranging the terms:
a1(riu²−ri²)+a2(riu−ri)−C′=0 (13)
Since there are N (N≧3) color pairs, equation (13) forms an over-constrained linear equation system that can be solved efficiently with a pseudo-inverse or other regression methods. Note that from the linear equations established based on (13), one cannot obtain the value of a3. However, as shown below, a3 can be ignored.
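A small fitting sketch (with made-up ground-truth parameters) illustrates solving equation (13). One observation worth noting: the system is homogeneous, so (a1, a2, C′) are only determined up to a common scale, which does not affect the compensated color; the sketch therefore fixes C′=1 as a normalization and solves the resulting two-unknown least-squares problem via its normal equations:

```python
import math

# Illustrative sketch: least-squares fit of equation (13), with
# hypothetical ground-truth parameters used to synthesize exact pairs.
true_a1, true_a2, true_C = 1.0, 0.5, 0.3  # made-up ground truth

def adjusted(r):
    """User-adjusted value riu satisfying F(riu) = F(ri) + C' exactly
    (positive root of the quadratic a1*x**2 + a2*x - rhs = 0)."""
    rhs = true_a1 * r * r + true_a2 * r + true_C
    return (-true_a2 + math.sqrt(true_a2 ** 2 + 4 * true_a1 * rhs)) / (2 * true_a1)

pairs = [(r, adjusted(r)) for r in (0.2, 0.4, 0.6, 0.8)]  # N = 4 >= 3

# Fit a1*u + a2*v = 1 (C' normalized to 1), u = riu^2 - ri^2, v = riu - ri,
# by accumulating and solving the 2x2 normal equations.
Suu = Suv = Svv = Su = Sv = 0.0
for r, ru in pairs:
    u, v = ru * ru - r * r, ru - r
    Suu += u * u; Suv += u * v; Svv += v * v
    Su += u; Sv += v

det = Suu * Svv - Suv * Suv
a1 = (Su * Svv - Suv * Sv) / det
a2 = (Suu * Sv - Suv * Su) / det
```

With exact pairs, the fitted (a1, a2, 1) equals the true triple divided by the true C′, and any such rescaling yields the same compensated color in the closed-form solution below.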
Introducing function G(x):
G(x)=F(x)−a3=a1x²+a2x (14)
Equation (9) can also be written as:
G(r′)=G(r)+C′ (15)
Thus:
r′=G⁻¹(G(r)+C′) (16)
where r′ is the red correction, or pre-distortion, color intensity value for the input color r. In a 3D system, the value r′ can be substituted for the value of r that is intended to be displayed. By displaying the pre-distorted value r′ instead of the desired value r, the end user, wearing 3D glasses, will see a color corresponding to the originally desired color r.
As all parameters in equation (16) are known, it can be used to compute the corrected red component. A closed-form solution to compensate the blue component is derived in the same fashion, producing:
b′=G⁻¹(G(b)+C′) (17)
For practical applications, once all parameters in equation (16) are obtained, look-up tables of color intensity values for the red channel and the blue channel can be generated and stored in memory for fast access. If the precision of the red channel is 8 bits, the look-up table for the red channel has 256 entries. The same is true for the blue channel.
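Generating the 256-entry red-channel table can be sketched as follows; the coefficients below are hypothetical stand-ins for real calibration results:

```python
import math

# Sketch (made-up parameters): a 256-entry red-channel look-up table from
# equation (16), r' = G^-1(G(r) + C'), with G(x) = a1*x^2 + a2*x and
# coefficients assumed to come from the calibration step.
a1, a2, C_prime = 1.0, 0.5, 0.3  # hypothetical calibration results

def G(x):
    return a1 * x * x + a2 * x

def G_inv(y):
    """Positive root of a1*x^2 + a2*x - y = 0."""
    return (-a2 + math.sqrt(a2 * a2 + 4 * a1 * y)) / (2 * a1)

# 8-bit channel: normalize each code to [0, 1], compensate, clamp to 8 bits.
T_r = []
for code in range(256):
    r = code / 255.0
    r_prime = G_inv(G(r) + C_prime)
    T_r.append(max(0, min(255, round(r_prime * 255.0))))
```

Since G and its inverse are increasing on the non-negative range, the resulting table is monotonic; the blue table is built the same way from equation (17).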
An example of a portion of a look up table for the red channel Tr would be as follows:
A blue look up table Tb would be similarly constructed.
After the user adjustment, the adjusted color intensity value of each of the two colors is recorded at step 320 as a color pair related to the constant third color. The number of adjusted color samples is tested at step 325 to determine whether enough colors have been sampled. In one embodiment, at least three different grey colors are displayed and corrected color pairs are recorded. If not enough colors have yet been sampled (i.e., fewer than N, where N≧3), then step 325 moves to step 305, where a different grey color is selected for presentation to the user. If enough samples have been taken at step 325, then step 330 is invoked to estimate the parameters of equation (13) (i.e., the coefficients a1 and a2, and C′) using the adjusted two color components. A CPU or other computing device within the TV, STB, or other equipment is employed to perform the coefficient determination. With the parameters of equation (13) determined, the values for the look-up tables for two color channels, such as red and blue channel color intensity values, can be generated in step 340 using equations (16) and (17). These tables can be placed into memory for subsequent fast lookup of corrected color intensity values for a given input or desired color intensity value. This completes the color calibration procedure 300 at step 345.
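The overall flow of the calibration procedure can be sketched as below. The helper names collect_user_pair, fit_parameters, and build_luts are hypothetical stand-ins for the steps described above, not an API defined by this description:

```python
# Sketch (hypothetical helpers): the control flow of the calibration
# procedure, collecting at least N user-adjusted grey pairs, fitting the
# parameters of equation (13), and building the look-up tables.

def run_calibration(greys, collect_user_pair, fit_parameters, build_luts, N=3):
    """Collect >= N user-adjusted pairs, fit (a1, a2, C'), build the LUTs."""
    pairs = []
    for grey in greys:                        # present a grey, record the pair
        pairs.append(collect_user_pair(grey))
        if len(pairs) >= N:                   # enough samples collected?
            break
    a1, a2, c_prime = fit_parameters(pairs)   # solve equation (13)
    return build_luts(a1, a2, c_prime)        # apply equations (16) and (17)
```

The same loop structure applies whether the adjustment input comes from a remote control, an on-screen menu, or another input source.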
Possible variations of the invention described herein include modifying equation (11) to include a higher order polynomial, or using other functions to model the function of equation (10). In the calibration process of
In one aspect of the invention, the calibration and compensation processes described in
Thus, using aspects of the invention, a color calibration can be performed for 2D image color correction and a 3D color calibration can be performed for 3D image color correction. The TV, STB, or other equipment could sense when the broadcaster is sending 3D images and thus use the look-up tables for 3D color correction during 3D image display. Likewise, when the broadcast changes from 3D to 2D, the TV, STB, or other equipment could use the look-up tables for 2D color correction during a 2D image broadcast. Naturally, subsequent detection of a 3D broadcast would switch the look-up tables used back to the 3D tables for proper color correction of the 3D image program. Thus, the user would receive proper color correction without intervention in instances where both 3D and 2D images were being presented.
Input video 505 is received by the device 500 and provides pixel information that may be stored in memory 520. CPU-Arithmetic unit 525 may serve as a controller and/or processing unit that performs the calibration method described herein, such as described with respect to
The device 500 may be used in the color correction mode by utilizing the look-up table information stored in memory 520 to correct incoming video pixels, frame by frame, into color corrected or pre-distorted pixels to be displayed. The CPU or equivalent video controller can accept the incoming pixels from the receiver 515, convert the pixels into color corrected pixels using look up tables in the memory 520, and populate a frame buffer in the output driver 530 for eventual display on display 550. According to another aspect of the invention, a 2D/3D detection control 507 could be used by the CPU 525 or equivalent video controller to determine whether to use the 3D look up tables for a detected 3D input video signal or to use the 2D look up tables for a detected 2D input video signal.
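The per-pixel correction mode can be illustrated with a short sketch; the tables and the tuple-per-pixel frame layout below are made up for illustration:

```python
# Sketch (hypothetical look-up tables and frame layout): applying the red
# and blue look-up tables pixel by pixel in correction mode, while the
# green channel passes through unchanged.

def correct_frame(frame, T_r, T_b):
    """frame: list of (r, g, b) 8-bit tuples; returns the corrected frame."""
    return [(T_r[r], g, T_b[b]) for (r, g, b) in frame]

# Stand-in tables: boost red slightly, leave blue unchanged (made up).
T_r = [min(255, round(v * 1.1)) for v in range(256)]
T_b = list(range(256))

frame = [(10, 20, 30), (200, 100, 50)]
corrected = correct_frame(frame, T_r, T_b)
```

In a real device this lookup would run per pixel as frames stream from the receiver into the output driver's frame buffer, which is why table lookup rather than evaluating equation (16) per pixel is preferred.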
The implementations described herein may be implemented in, for example, a method or process, an apparatus, or a combination of hardware and software. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, a hardware apparatus, hardware and software apparatus, or a computer-readable media). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to any processing device, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processing devices also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions may be stored on a processor or computer-readable media such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette, a random access memory (“RAM”), a read-only memory (“ROM”) or any other magnetic, optical, or solid state media. The instructions may form an application program tangibly embodied on a computer-readable medium such as any of the media listed above. As should be clear, a processor may include, as part of the processor unit, a computer-readable media having, for example, instructions for carrying out a process. The instructions, corresponding to the method of the present invention, when executed, can transform a general purpose computer into a specific machine that performs the methods of the present invention.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2010/002304 | 8/19/2010 | WO | 00 | 2/12/2013 |