The present invention relates to display devices, and in particular, to reconfiguration of display devices according to their current environment.
Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
A color appearance model (CAM, which may also be referred to as a “color model”) is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), the resulting set of colors is called a color space. Examples of color spaces include the tristimulus color space, the XYZ color space (developed by the International Commission on Illumination [CIE], and which may also be referred to as the “CIE 1931 color space”), the red-green-blue (RGB) color space, the hue-saturation-value (HSV) color space, the hue-saturation-lightness (HSL) color space, the long-medium-short (LMS) color space, and the cyan-magenta-yellow (CMY) color space.
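As a concrete illustration of colors as tuples of numbers, Python's standard colorsys module converts between two of the color spaces named above. This is a minimal sketch; the specific orange value is arbitrary:

```python
import colorsys

# The same color expressed as tuples of numbers in two color spaces
# (all components in the range 0.0 to 1.0).
r, g, b = 1.0, 0.5, 0.0                 # an orange, as an RGB tuple
h, s, v = colorsys.rgb_to_hsv(r, g, b)  # the same color as an HSV tuple
```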
CAMs are useful to match colors under different environment conditions that otherwise might be perceived to be different, according to the human visual system (HVS). In particular, a color captured (e.g., in an image) under one set of conditions may be perceived as a different color by an observer viewing that color in another set of conditions. The following are examples of factors that can contribute to perceptible color mismatches: the different chromaticities and/or luminance levels of different illuminants, different types of devices used to display the color, the relative luminance of the background, different conditions of the surrounding environment, as well as other factors. Conventional CAMs aim to compensate for these factors by adjusting an image viewed with a destination set of conditions so that it appears to be the same color as it was captured under a source set of conditions. Thus, CAMs can be used to convert a patch of color seen in one environment (e.g., the source environment) to an equivalent patch of color as it would be observed in a different environment (e.g., the target environment).
As an example, consider the most recent CAM ratified by the CIE, which is referred to as CIECAM02. CIECAM02 provides a limited ability to modify a color appearance model based on the environment of the display device. Three surround conditions (namely Average, Dim and Dark) provide the parameters given in TABLE 1:

Surround condition | SR | F | c | Nc
---|---|---|---|---
Average | SR ≥ 0.2 | 1.0 | 0.69 | 1.0
Dim | 0 < SR < 0.2 | 0.9 | 0.59 | 0.9
Dark | SR = 0 | 0.8 | 0.525 | 0.8
In TABLE 1, the surround ratio SR tests whether the surround luminance is darker or brighter than medium gray (0.2). The parameter F is a factor that determines a degree of adaptation. The parameter c is a factor that determines the impact of the surroundings. The parameter Nc is a chromatic induction factor. The color appearance model may be modified according to the parameters corresponding to the appropriate surround conditions.
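The surround classification and TABLE 1 parameters can be sketched as follows. This is an illustrative sketch assuming the published CIECAM02 values for F, c and Nc and the usual SR cutoffs; the function name and example luminances are hypothetical:

```python
# CIECAM02 surround parameters, keyed by surround condition.
SURROUND_PARAMS = {
    "average": {"F": 1.0, "c": 0.69,  "Nc": 1.0},
    "dim":     {"F": 0.9, "c": 0.59,  "Nc": 0.9},
    "dark":    {"F": 0.8, "c": 0.525, "Nc": 0.8},
}

def surround_condition(l_sw: float, l_dw: float) -> str:
    """Classify the surround from the surround ratio SR = L_sw / L_dw
    (surround white luminance over display white luminance, in cd/m^2)."""
    sr = l_sw / l_dw
    if sr >= 0.2:      # surround at least as bright as medium gray
        return "average"
    elif sr > 0.0:     # darker than medium gray, but not black
        return "dim"
    return "dark"

# Example: a 60 cd/m^2 surround against a 200 cd/m^2 display white.
params = SURROUND_PARAMS[surround_condition(l_sw=60.0, l_dw=200.0)]
```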
An embodiment of the present invention improves a color appearance model beyond a basic color appearance model. As discussed above, many basic CAMs (such as the CIECAM02 model as understood) provide only a limited ability to modify the CAM based on the environment of the display device. Furthermore, many basic CAMs (such as the CIECAM02 model as understood) do not define how various sensor results may be used to determine which of the three surround conditions is appropriate for a particular environment. In addition, many basic CAMs (such as the CIECAM02 model as understood) do not consider the interaction between a back modulator and a front modulator in a dual modulator display device.
According to an embodiment, a method adjusts a display device according to a display environment. The method includes sensing the display environment of the display device and generating environment data that corresponds to the display environment. The environment data includes color data. The method further includes adjusting a color appearance model according to the color data, generating a control signal according to the color appearance model having been adjusted, and controlling a backlight of the display device according to the control signal. In this manner, a viewer perceives the images displayed by the display device in the manner intended by the content creator, because the adjustments to the color appearance model compensate for the viewer's physiological response to the display environment.
The color appearance model may be adjusted according to the luminance of the display environment. Various parameters of the color appearance model may be adjusted, including the whitepoint achromatic response (Aw), the degree of adaptation (D), the induction factor (n), and the luminance level adaptation factor (Fl).
The display environment may be sensed with more than one sensor, and the color appearance model may be adjusted according to a weighted distance to the sensors.
A front modulator may be controlled by input video data such that the backlight and the front modulator display an image corresponding to the input video data. The backlight may be a back modulator that is also controlled by the input video data.
According to an embodiment, an apparatus includes a control circuit that implements the above-described method.
According to an embodiment, a display device includes a backlight, a sensor, and a control circuit that work together to implement the above-described method.
The following detailed description and accompanying drawings provide a further understanding of the nature and advantages of the present invention.
Described herein are techniques for improving image quality based on the environment. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
In the following description, various methods, processes and procedures are detailed. Although particular steps may be described in a certain order, such order is mainly for convenience and clarity. A particular step may be repeated more than once, may occur before or after other steps (even if those steps are otherwise described in another order), and may occur in parallel with other steps. A second step is required to follow a first step only when the first step must be completed before the second step is begun. Such a situation will be specifically pointed out when not clear from the context.
The following description uses the term “display device.” In general, this term refers to a device that displays visual information (such as video data or image data). An embodiment of the present invention is directed toward a display device that includes two elements that, in combination, control the display of the visual information. One example embodiment includes a backlight and a front panel. In general, the backlight may be implemented with LEDs, and the front panel may be implemented with LCDs. Another example embodiment includes a back modulator and a front modulator. In general, the back modulator may be implemented with LEDs, and the front modulator may be implemented with LCDs. Controlling the back modulator and front modulator together may be referred to as dual modulation. (When the distinction is unimportant, the terms backlight and back modulator may be used interchangeably, and the terms front panel and front modulator may be used interchangeably.)
The following description uses the term “backlight”. In general, this term refers to a light generating element that, in combination with the front panel, generates the output image.
In a dual modulation device, the term “back modulator” may be used to more precisely refer to the backlight.
Note that in the video display arts, the term “backlight” is sometimes used to refer to a feature different from the “backlight” of embodiments of the present invention. This different “backlight” refers to a light that illuminates the wall behind a display, to improve viewer depth perception, to reduce viewer eye strain, etc. It does not relate to the generation of the output image and is not related to the CAM. This different “backlight” is to be understood to be excluded from the term “backlight” in the following description of embodiments of the present invention.
The sensor interface 102 connects to a sensor (not shown). The sensor interface 102 receives environment data 120 from the sensor. The environment data 120 corresponds to the display environment. The display environment may include information such as the color and brightness of the light in the display environment. Specific details of the environment data are provided in subsequent paragraphs.
The memory circuit 104 stores a color appearance model (CAM). In general, the CAM is used to modify the characteristics of the display device so that the output video appears as intended by the creator of the video data input into the display device. More specifically as related to an embodiment of the present invention, the CAM is used to control the color of the backlight of the display device according to the display environment, as further described below. As further detailed below, the CAM may be implemented as a memory that contains lookup tables that were generated according to environmental parameters, and circuitry (e.g., a processor) that manipulates the data in the lookup tables. According to a further embodiment, when the backlight is modulated according to the input video data, the display environment modifies the CAM.
According to an embodiment, the CAM corresponds to a modified CIECAM02 color appearance model (International Commission on Illumination 2002 CAM). Other embodiments may implement other CAMs, with modifications, as desired according to design preferences. Examples of such CAMs include CIECAM97 and a revised CIECAM97s by Mark Fairchild. In addition, embodiments of the present invention may also be applied to chromatic adaptation transforms (CATs) or lookup tables of color appearance information. Specific details of the CAMs are provided in subsequent paragraphs.
The second interface circuit 108 generates control signals 124. The control signals 124 control the display elements of the display device (see
The processor circuit 106 adjusts the CAM according to the color data. According to an embodiment, the data in the lookup tables used by the CAM is regenerated based on the color data. The processor circuit 106 generates the control signals 124 that control a back modulator (or backlight) of the display device (see
As an example, if the display environment is more orange than normal (e.g., sunset light via a window into a room with the display device), the color appearance model is adjusted to take this information into account. When images are displayed, their color is adjusted so that a viewer perceives the images as intended, and does not perceive them in an unintended manner due to the excess orange color in the viewing environment. As another example, artificial light and daylight produce different viewing environments; an embodiment adjusts the CAM so that the backlight takes the environment into account, and the viewer perceives the images as intended.
Although the sensor interface 102 and the video interface 108 are shown as separate interfaces, such separation is shown mainly for ease in understanding and explanation. According to another embodiment, the functions of these two interfaces may be implemented with a single interface. According to another embodiment, the functions of these interfaces may be implemented with more than two interfaces (e.g., a sensor control interface, a sensor input interface, a video input interface, and a video output interface). The number and type of interfaces may be made according to design considerations such as the speed and amount of data to be processed. According to an embodiment, the control circuit 100 may include additional interfaces to implement additional functionality beyond the functionality described in the present disclosure. According to an embodiment, the control circuit 100 may be arranged to follow the other processing elements of a display device (e.g., the upscaler, the deinterlacer, etc.).
The backlight 202a receives the control signals 124 and generates backlight output signals 210a. The backlight output signals 210a generally correspond to light having a color that has been adjusted according to the environment. The backlight 202a may be implemented by light emitting diodes (LEDs). Each LED element may be implemented as one or more LED devices; for example, each LED element may include a red LED, a green LED and a blue LED that are controlled together to generate a particular color of light. The LEDs may be organic LEDs (OLEDs). According to an embodiment, the backlight 202a may be implemented by a field emission display (FED). According to an embodiment, the backlight 202a may be implemented by a surface-conduction electron-emitter display (SED).
The front panel 204a further modifies the backlight output signals 210a according to the video input signal 122 to produce front panel output signals 212. The front panel output signals 212 generally correspond to the image that is displayed by the device 200a. As a more specific example, the front panel selectively blocks the backlight output signals 210a to produce the front panel output signals 212. The front panel 204a may be implemented by liquid crystal elements of a liquid crystal display (LCD).
The sensor 206 senses the display environment 220 and generates the environment data 120. As discussed above, the environment data 120 may include information such as the color and brightness of the light in the display environment 220. Additional details of the environment data 120 are provided in subsequent paragraphs.
The control circuit 100b receives the environment data 120 and input video data 122, and generates the control signals 124. The input video data 122 may be still image data (e.g., pictures) in various formats, such as JPEG (Joint Photographic Experts Group) data, GIF (graphics interchange format) data, etc. The input video data 122 may be moving image data (e.g., television) in various formats, such as MPEG (Moving Picture Experts Group) data, WMV (Windows media video) data, etc. The input video data 122 may include metadata, for example Exif (Exchangeable image file format) data.
More specifically, the control signals 124 are based on both the input video data 122 and the environment data 120. According to an embodiment, the color appearance model (which is adjusted according to the environment data 120; see
The back modulator 202b generates back modulator output signals 210b in response to the control signals 124 from the control circuit 100b. The back modulator output signals 210b generally correspond to low resolution images. The back modulator 202b may be implemented by light emitting diodes (LEDs). Each LED element may be implemented as one or more LED devices; for example, each LED element may include a red LED, a green LED and a blue LED that are controlled together to generate a particular color of light. The LEDs may be organic LEDs (OLEDs). According to an embodiment, the back modulator 202b may be implemented by a field emission display (FED). According to an embodiment, the back modulator 202b may be implemented by a surface-conduction electron-emitter display (SED).
The front modulator 204b further modifies the back modulator output signals 210b according to the control signals 124 to produce front modulator output signals 212. The front modulator output signals 212 generally correspond to high resolution images. As a more specific example, the front modulator 204b selectively blocks the back modulator output signals (low resolution image) 210b to produce the front modulator output signals (high resolution image) 212. The front modulator 204b may be implemented by liquid crystal elements of a liquid crystal display (LCD).
The sensor 206 senses the display environment 220 and generates the environment data 120. As discussed above, the environment data 120 may include information such as the color and brightness of the light in the display environment 220. Additional details of the environment data 120 are provided in subsequent paragraphs.
Comparing the embodiment of
At 302, the display environment is sensed. The display environment corresponds to the color, brightness, etc. of the light in the environment in which the display device is located. The sensor 206 (see
At 304, environment data that corresponds to the display environment is generated. For example, the analog information sensed from the display environment (see 302) may be transformed into digital data for further processing by digital circuit components. The environment data includes color data. The sensor 206 (see
At 306, a color appearance model is adjusted according to the color data. More information regarding the specific adjustments performed is provided in subsequent paragraphs. According to an embodiment, the CAM may be implemented by lookup tables that store a set of initial values based on particular default assumptions regarding the source environment or the display environment. These initial values may be replaced according to changes in the source environment or the display environment. Changes to the source environment may be detected via the input video data, either directly or by metadata. Changes to the target environment may be detected by the sensor (see 302). The processor circuit 106 (see
At 308, the CAM information is provided to the backlight of the display device. The CAM information may include a target white point. Since the CAM has been adjusted according to the display environment (see 306), the target white point likewise depends upon the detected display environment (see 302). More specifically, the color of the target white point depends upon the color of the display environment. The video interface 108 (see
At 310, the backlight uses the CAM information (see 308) to generate its light. The color of the light generated by the backlight thus depends upon the detected display environment (see 302). The backlight 202a (see
At 312, the display device controls its front panel to generate an image corresponding to the input video data 122 (see
In summary, the method 300 is used to affect the viewer's perception of the input video data. By manipulating the color of the light emitted by the backlight, the perception of the image is altered to match the environment. For example, if the environment has an orange color, the backlight will be adjusted toward orange, so that the image accounts for the orange environment with respect to the senses of the viewer. This accounts for the fact that the viewer will adapt to the environment (e.g., an image of a white wall may be measured as orange because of the reflection of the orange light; however, it will still appear white when the viewer is adapted to this environment). For on-screen colors to appear as intended by the content creator, the backlight is adjusted to match the environment.
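One simple way to picture this backlight adjustment is as a weighted blend of a reference white toward the measured environment white. This linear blend is an illustrative simplification, not the CIECAM02 transform itself; the illuminant values below are standard CIE whites used only as stand-ins for a neutral reference and an orange room, and the function name is hypothetical:

```python
def adapted_backlight_white(ref_white, env_white, degree_of_adaptation):
    """Blend the reference white toward the measured environment white.

    degree_of_adaptation D in [0, 1]: 0 means ignore the environment,
    1 means track it fully. Whites are XYZ tristimulus tuples.
    """
    d = max(0.0, min(1.0, degree_of_adaptation))
    return tuple((1.0 - d) * r + d * e for r, e in zip(ref_white, env_white))

# An orange-tinged room pulls the backlight white toward orange:
d65 = (0.9505, 1.0000, 1.0890)          # D65 reference white (XYZ, Y = 1)
orange_env = (1.0985, 1.0000, 0.3558)   # approx. illuminant A (warm/orange)
white = adapted_backlight_white(d65, orange_env, degree_of_adaptation=0.4)
```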
According to another embodiment, the method 300 may be modified as follows for use with a dual modulation display device (e.g., the display device 200b of
The control signal generator 402 generally corresponds to the control circuit 100 (see
The preference adjustment circuit 426 receives the reference white point information 424 (or the metadata that contains replacement white point information) and interfaces with the user color preference GUI 404 to adjust the reference white point (or the replacement white point) according to user preference. For example, if the user prefers a different white point than the reference white point, the user may select it using the user color preference GUI 404; the preference adjustment circuit 426 then provides the different white point (instead of the reference white point) to the color appearance model 428. As another example, if the user prefers a different white point than the metadata white point (via, e.g., the Exif header 442), the user may select it using the user color preference GUI 404; the preference adjustment circuit 426 then provides the different white point (instead of the metadata white point) to the color appearance model 428.
The color appearance model 428 receives the reference environment information 422 and the white point information (which may be modified by the content metadata or user preference). The color appearance model 428 also implements a selected CAM for the display device 400, for example, the CIECAM02 color appearance model. The color appearance model 428 interfaces with the local sensor 406 in a manner similar to that described above with reference to
The chromatic adaptation LUT 430 stores chromatic adaptation information. Modeling chromatic adaptation is useful because chromatic adaptation by the human visual system is not instantaneous; it takes some time to adapt to a change in environment lighting color. This change takes the form of a curve over time. For example, when a large change in lighting occurs, the human visual system quickly starts to adapt to the new color; however, the rate of adaptation slows as a state of full adaptation is approached. Based on the target white point 450, the adjustment circuit 432 selects the appropriate chromatic adaptation information (from the chromatic adaptation LUT 430) to generate the backlight control signals 452.
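The adaptation-over-time curve can be sketched as an exponential approach toward full adaptation: fast at first, slowing as full adaptation nears. The 20-second time constant is an assumption for illustration only; an actual device would tabulate the measured curve in the chromatic adaptation LUT 430:

```python
import math

def adaptation_progress(t_seconds, time_constant=20.0):
    """Fraction of full chromatic adaptation reached after t seconds
    (exponential approach; the time constant is an assumed value)."""
    return 1.0 - math.exp(-t_seconds / time_constant)

def adapted_white(start_white, target_white, t_seconds):
    """Interpolate each channel of the white point along the curve."""
    p = adaptation_progress(t_seconds)
    return tuple(s + p * (t - s) for s, t in zip(start_white, target_white))
```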
The BLU 414 receives the backlight control signals 452 and generates a backlight output. Generally, the backlight output corresponds to the target white point 450, which is based on the color of the environment (note the CAM 428). According to another embodiment (see, e.g.,
The threshold memory 408 stores minimum backlight threshold information. The backlight threshold evaluator circuit 410 compares the backlight control signals 452 and the minimum backlight threshold information. If the backlight control signals 452 are below the minimum backlight threshold, the threshold evaluator circuit 410 provides the minimum backlight threshold to the front modulator scaling circuit 412; otherwise the threshold evaluator circuit 410 provides the backlight control signals 452 to the front modulator scaling circuit 412.
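The comparison performed by the backlight threshold evaluator circuit 410 amounts to a per-element floor on the backlight drive levels. A minimal sketch, assuming the signals are scalar drive levels (the function name is hypothetical):

```python
def clamp_backlight(signals, minimum):
    """Never pass a backlight element below the minimum threshold,
    so the front modulator always has some light to shape."""
    return [max(s, minimum) for s in signals]
```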
The front modulator scaling circuit 412 receives the content 440 and the backlight information from the threshold evaluator circuit 410, and generates control signals for the front modulator 416 that scale the display of the content correctly given the backlight information.
The display device 500 includes two sensors 406a and 406b. The sensors 406a and 406b may be mounted on opposing sides of the display device 500. The sensor 406a provides its environment information to the adjustment circuit 504, and the sensor 406b provides its environment information to the adjustment circuit 506. The adjustment circuit 504 generates dampened target backlight information according to the environment detected by the sensor 406a, and the adjustment circuit 506 generates dampened target backlight information according to the environment detected by the sensor 406b. The adjustment circuits 504 and 506 may be further configured by the user color preference GUI 404 in a manner similar to that described above in
The interpolation circuit 510 receives the dampened target backlight information from the adjustment circuits 504 and 506, interpolates the appropriate backlight settings across the backlight according to the dampened target backlight information, and generates the appropriate backlight control signals for the BLU 414. For example, for regions of the BLU 414 closer to the sensor 406a, the dampened target backlight information from the adjustment circuit 504 may be weighted more heavily than the dampened target backlight information from the adjustment circuit 506. As another example, for regions of the BLU 414 closer to the sensor 406b, the dampened target backlight information from the adjustment circuit 506 may be weighted more heavily than the dampened target backlight information from the adjustment circuit 504. The weighting can be a linear weighting based on the distance from the region to the respective sensors. For example, if a region is 10 inches from the sensor 406a and 40 inches from the sensor 406b, the dampened target backlight information corresponding to the sensor 406a is weighted at 0.8 (⅘) and that corresponding to the sensor 406b is weighted at 0.2 (⅕). The weighting can be a geometric weighting based on the square of the distance from the region to the respective sensors. For example, if a region is 10 inches from the sensor 406a and 40 inches from the sensor 406b, the dampened target backlight information corresponding to the sensor 406a is weighted at 0.96 ( 24/25) and that corresponding to the sensor 406b is weighted at 0.04 ( 1/25).
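The distance-based weighting can be sketched as normalized inverse distances (linear) or inverse squared distances (geometric). For the 10-inch/40-inch example, the linear mode reproduces the 0.8/0.2 weights above; note, however, that an inverse-square normalization yields 16/17 and 1/17 rather than the 24/25 and 1/25 figures in the text, which appear to follow a different normalization. This sketch is therefore one plausible reading, not the exact scheme:

```python
def sensor_weights(d_a, d_b, mode="linear"):
    """Weights for sensors A and B given a region's distances to each.

    'linear' weights inversely to distance; 'squared' inversely to
    distance squared (a steeper falloff toward the nearer sensor).
    """
    if mode == "squared":
        d_a, d_b = d_a * d_a, d_b * d_b
    w_a = d_b / (d_a + d_b)  # the nearer sensor gets the larger weight
    return w_a, 1.0 - w_a

def interpolate_backlight(value_a, value_b, d_a, d_b, mode="linear"):
    """Blend the two dampened target backlight values for one region."""
    w_a, w_b = sensor_weights(d_a, d_b, mode)
    return w_a * value_a + w_b * value_b
```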
The averaging circuit 512 receives the dampened target backlight information from the adjustment circuits 504 and 506, averages the dampened target backlight information, and provides the average to the backlight threshold evaluator circuit 410. The front modulator scaling circuit 412 then generates the control signals for the front modulator 416 based on the information provided by the backlight threshold evaluator circuit 410 in a manner similar to that described above in
Environment Data Details
According to an embodiment, the environment data sensed corresponds to the whitepoint of the environment in absolute terms. The sensor (e.g., the sensor 206 of
CAM Details
According to an embodiment, the color appearance model implemented as the CAM 428 (see
The parameters shown in
In general, the source viewing conditions are very similar for the majority of content (e.g., color timing suites are for the most part very similar to each other); however, the source viewing environment information may be included with the content for more accurate rendition at the target viewing site. Target viewing conditions may be measured by a sensor as described above (see, e.g.,
According to an embodiment, the relative luminance (Yb, also referred to as the relative background luminance) and surround luminance (S) parameters have notably less impact than the other parameters on the CAM implemented (e.g., the modified CIECAM02 described above). In such an embodiment, the Yb and S parameters are not determined by the sensor. Instead, preset values are used, and the Yb and S parameters are kept static. According to another embodiment, the Yb and S parameters have more of an influence on the CAM implemented; in such case, the sensor may also be used to measure the Yb and S of the display environment in order to determine the Yb and S parameters.
The process flow for performing the calculations for the CAM is as follows (with reference to
At 608, the display device (e.g., the processor 106) processes the environment information into the various CAM parameters. This processing may implement the equations shown in
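Although the referenced figure is not reproduced here, the standard published CIECAM02 viewing-condition relations that such processing would typically implement can be sketched as follows (symbols La, Yb, Yw, F, Fl, n, z, Nbb, Ncb and D as in the CIECAM02 specification; whether the figure's equations match these exactly is an assumption):

```python
import math

def ciecam02_viewing_params(la, yb, yw=100.0, f=1.0):
    """Standard CIECAM02 viewing-condition parameters.

    la: adapting luminance (cd/m^2), yb: relative background luminance,
    yw: luminance factor of the reference white, f: surround factor F.
    """
    k = 1.0 / (5.0 * la + 1.0)
    # Luminance level adaptation factor Fl
    fl = (0.2 * k**4 * (5.0 * la)
          + 0.1 * (1.0 - k**4) ** 2 * (5.0 * la) ** (1.0 / 3.0))
    n = yb / yw                           # background induction factor
    z = 1.48 + math.sqrt(n)               # base exponential nonlinearity
    nbb = ncb = 0.725 * (1.0 / n) ** 0.2  # chromatic induction factors
    # Degree of adaptation D, clipped to [0, 1]
    d = f * (1.0 - (1.0 / 3.6) * math.exp(-(la + 42.0) / 92.0))
    return {"Fl": fl, "n": n, "z": z, "Nbb": nbb, "Ncb": ncb,
            "D": min(max(d, 0.0), 1.0)}
```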
At 610, the display device (e.g., the memory 104) stores the CAM parameters corresponding to the environment information (e.g., as the CAM 428). Note that some of these parameters (e.g., z, Fl, etc.) depend upon further processing in 612.
At 612, the display device (e.g., the processor 106) performs processing on some of the CAM parameters in 610 to generate additional CAM parameters. For example, the whitepoint in HPE space is converted to the whitepoint sigma. As discussed above regarding 608, some of the parameters in 612 depend upon other parameters (e.g., SigmaRp depends upon SigmaR, etc.). The specifics of the equations implemented by the processing of 612 are shown in
According to an embodiment, instead of using the environment information sensed by the sensor as inputs to equations, the environment information is used as an index to access pre-calculated parameters stored in memory (e.g., the memory 104). For example, sixteen sets of CAM parameters may be stored in memory, corresponding to sixteen different color measurements. For example, the sixteen sets can correspond to a red environment, a red-orange environment, an orange environment, etc. The display device (e.g., the processor 106) then uses the environment information to select the most appropriate set of CAM parameters.
For example, the sets of CAM parameters may be indexed according to a range of colors in RGB (or XYZ, etc.) color space. The sensor senses the color in the display environment and generates the environment information as a single RGB color (e.g., as an average of all the information sensed) corresponding to the display environment. The display device (e.g., the processor 106) then selects the set of CAM parameters whose index range includes that single color.
As another example, the sets of CAM parameters may be indexed according to a single index color. The display device (e.g., the processor 106) then selects the set of CAM parameters whose index color is closest to the sensed color. The closeness may be based on the linear distance between the sensed color and the index colors. In the case where each index color includes a number of components (e.g., an index color in the RGB color space includes R, G and B components), the closeness may be based on the cumulative distance between each component of the sensed color and the index colors.
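The closest-index selection can be sketched as a nearest-neighbor search over the stored sets, using the cumulative squared distance over the R, G and B components. The sixteen-set table is abbreviated here to three hypothetical entries:

```python
def nearest_cam_set(sensed_rgb, cam_sets):
    """Pick the pre-calculated CAM parameter set whose index color is
    closest to the sensed environment color (squared Euclidean distance
    summed over the R, G and B components)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cam_sets, key=lambda entry: sq_dist(entry["index_rgb"], sensed_rgb))

# Abbreviated, hypothetical table of pre-calculated sets:
cam_sets = [
    {"name": "red",      "index_rgb": (255, 0, 0)},
    {"name": "orange",   "index_rgb": (255, 165, 0)},
    {"name": "daylight", "index_rgb": (255, 255, 255)},
]
best = nearest_cam_set((250, 140, 30), cam_sets)  # an orange-ish reading
```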
As discussed above regarding other embodiments, the sensor 206 senses the light in the environment 220 where the display device 900 is located and provides the environment data to the CAM processor 904. The CAM processor 904 also receives the default source environment data from the memory 902. The CAM processor 904 may also receive source environment data from the video content 440, for example as metadata in the content. (The display device 900 may use the default source environment data when the content does not provide the source environment data.) The CAM processor 904 builds the dynamic CAM lookup tables based on the environment data and the source environment data, as discussed above.
The memory 906 stores the dynamic CAM lookup tables generated by the CAM processor 904, and the memory 908 stores the static CAM lookup tables. The dynamic CAM lookup tables depend upon the environment data, and the static CAM lookup tables do not. Thus, the content of the lookup tables may vary depending upon the environmental parameters that are sensed. For example, as discussed above with reference to
The memory 910 stores the original color information, which the display device 900 determines according to the video content 440. The original color information may be in the form of a whitepoint that corresponds to the video content 440.
The CAM 912 uses the lookup tables in the memories 906 and 908, and the original color information in the memory 910, to generate the CAM used by the display device 900. The process the CAM 912 performs may be as described above regarding
Implementation Details
An embodiment of the invention may be implemented in hardware, executable modules stored on a computer readable medium, or a combination of both (e.g., programmable logic arrays). Unless otherwise specified, the steps included as part of the invention need not inherently be related to any particular computer or other apparatus, although they may be in certain embodiments. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform the required method steps. Thus, the invention may be implemented in one or more computer programs executing on one or more programmable computer systems each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein. (Software per se and intangible signals are excluded to the extent that they are unpatentable subject matter.)
The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as defined by the claims.
This application claims priority to U.S. Provisional Patent Application No. 61/306,788, filed 22 Feb. 2010, which is hereby incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/US2011/025362 | 2/18/2011 | WO | 00 | 8/10/2012
Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2011/103377 | 8/25/2011 | WO | A
Number | Name | Date | Kind
---|---|---|---
3200193 | Biggs | Aug 1965 | A |
4451849 | Fuhrer | May 1984 | A |
4742387 | Oshima | May 1988 | A |
5270818 | Ottenstein | Dec 1993 | A |
5488434 | Jung | Jan 1996 | A |
5617112 | Yoshida | Apr 1997 | A |
5956015 | Hino | Sep 1999 | A |
6094185 | Shirriff | Jul 2000 | A |
6618045 | Lin | Sep 2003 | B1 |
6690351 | Wong | Feb 2004 | B1 |
6744416 | Mizutani et al. | Jun 2004 | B2 |
6870529 | Davis | Mar 2005 | B1 |
6947017 | Gettemy | Sep 2005 | B1 |
7019758 | Hendry et al. | Mar 2006 | B2 |
7046843 | Kanai | May 2006 | B2 |
7049575 | Hotelling | May 2006 | B2 |
7110002 | Wada | Sep 2006 | B2 |
7259769 | Diefenbaugh | Aug 2007 | B2 |
7301534 | Runolinna | Nov 2007 | B2 |
7312779 | Blevins | Dec 2007 | B1 |
7423705 | Len-Li | Sep 2008 | B2 |
7456829 | Fry | Nov 2008 | B2 |
7504612 | Yu | Mar 2009 | B2 |
20010020922 | Yamazaki | Sep 2001 | A1 |
20030117413 | Matsuda | Jun 2003 | A1 |
20050037815 | Besharat | Feb 2005 | A1 |
20060088275 | O'Dea | Apr 2006 | A1 |
20070139405 | Marcinkiewicz | Jun 2007 | A1 |
20070211049 | Kerofsky | Sep 2007 | A1 |
20080049005 | Okita | Feb 2008 | A1 |
20080165115 | Herz | Jul 2008 | A1 |
20080186260 | Lee | Aug 2008 | A1 |
20080186262 | Lee | Aug 2008 | A1 |
20080186707 | Ku | Aug 2008 | A1 |
20080266316 | Takahashi | Oct 2008 | A1 |
20080303687 | Sempel | Dec 2008 | A1 |
20080303918 | Keithley | Dec 2008 | A1 |
20090040205 | Scheibe | Feb 2009 | A1 |
20090109129 | Cheong | Apr 2009 | A1 |
20120293473 | Lee et al. | Nov 2012 | A1 |
20130222408 | Lee et al. | Aug 2013 | A1 |
Number | Date | Country
---|---|---
1719989 | Nov 2006 | EP |
2389730 | Dec 2003 | GB |
61-16691 | Jan 1986 | JP |
10-282474 | Oct 1998 | JP |
9701240 | Jan 1997 | WO |
2006003624 | Jan 2006 | WO |
Entry
---
Kim, Jong-Man, et al., "Illuminant-Adaptive Color Reproduction for Mobile Display," Proc. SPIE vol. 6058, Color Imaging XI, Jan. 15, 2006.
Yoshida, Y., et al., "Ambient Light Adapted Imaging System with Reflective-Type LCD," SID Conference Record of the International Display Research Conference, Society for Information Display, Dec. 2001.
Author unavailable, "Ambient Light Adaptation Method with Multi-Sensor," Kenneth Mason Publications Ltd., Dec. 2006.
Maeda, K., et al., "Late-News Poster: The System-LCD with Monolithic Ambient-Light Sensor System," SID Symposium Digest of Technical Papers, vol. 36, issue 1, pp. 356-359, May 2005.
Number | Date | Country
---|---|---
20120320014 A1 | Dec 2012 | US
Number | Date | Country
---|---|---
61306788 | Feb 2010 | US