The present invention relates generally to selecting a suitable brightness for a liquid crystal display.
Relatively low-contrast viewing conditions tend to negatively impact the viewing experience of a viewer of a liquid crystal display device. Examples of liquid crystal display devices include, for example, an LCD television, an LCD monitor, and an LCD mobile device, among other devices including a liquid crystal display. The negative impacts for the viewer may include, for example, eyestrain and fatigue.
Low-contrast viewing conditions tend to arise when a device is used in an aggressive power-reduction mode, where the backlight power level of the liquid crystal device (and thus the illumination provided by the backlight) is significantly reduced, making the image content (e.g., still image content and video image content) appear generally dark and its details difficult for the viewer to discern. The contrast of the image content may be vastly reduced, or in some cases pegged at black, causing many image features to fall below the visible threshold.
Low-contrast viewing conditions also tend to arise when an LCD is viewed under high ambient light, for example, direct sunlight. In these situations, the minimum display brightness that a viewer can perceive may be elevated due to the high ambient light in the surroundings. The image content may appear “washed out” where it is intended to be bright, and may appear generally featureless in darker regions of the image.
For either of the above-described low-contrast viewing conditions, and other low-contrast viewing conditions, the tonal dynamic range of the image content tends to be compressed and the image contrast is substantially reduced, thereby degrading the viewing experience of the user. Due to increasing consumer concern for reduced energy costs and demand for device mobility, it may be desirable to provide improved image content to enhance the viewing experience under low-contrast viewing conditions.
What is desired is a display system that provides a suitable enhancement for a particular image.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention, taken in conjunction with the accompanying drawings.
Referring to
A reference ambient value 120 is predetermined by the display or otherwise selected by the user based upon their preferences. The reference ambient value 120 provides a value to compare against the signal indicative of the ambient lighting level. A peak brightening selection 130 compares the reference ambient value 120 to the signal indicative of the ambient lighting level to determine the strength of the ambient lighting. For example, if the reference ambient value 120 is greater than the signal indicative of the ambient lighting level, then the lighting conditions are generally dim; conversely, if the reference ambient value 120 is less than the signal indicative of the ambient lighting level, then the lighting conditions are generally bright. The magnitude of the difference between the signals provides an indication of the amount of brightness change of the backlight of the liquid crystal display for a suitable viewing condition.
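The comparison performed by the peak brightening selection 130 can be sketched as follows; the function name and the `gain` constant are illustrative assumptions, not from the source:

```python
def brightening_strength(ambient_signal, reference_ambient, gain=0.5):
    """Sketch of the peak brightening selection 130: compare the
    (temporally filtered) ambient sensor signal to the reference ambient
    value 120 and map the signed difference to a brightness adjustment.
    `gain` is a hypothetical tuning constant."""
    delta = ambient_signal - reference_ambient
    # delta > 0: surroundings brighter than the reference -> brighten
    # delta < 0: surroundings dimmer than the reference  -> dim
    return gain * delta

assert brightening_strength(100.0, 80.0) > 0   # bright room: brighten
assert brightening_strength(40.0, 80.0) < 0    # dim room: dim
```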
The display includes a set of brightening candidates 140. The brightening candidates preferably include a set of different functions that may be applied to the image content. The brightening candidates may be in any suitable form, such as a single function, a plurality of functions, or a look-up table. Based upon the peak brightening selection 130 and the brightening candidates 140, a set of weight functions 150 is constructed. The weight construction 150 determines a set of errors, typically one set of errors for each of the brightening candidates. For example, an error measure may be determined for each pixel of the image that is above the maximum brightness of the display for each of the brightening candidates 140.
An input image content 160 is received by the display. A histogram 170, or any other characteristic of the image content, is determined based upon the image content 160. Each of the calculated weights 150 is separately applied 180 to the histogram 170 to determine a resulting error measure with respect to the particular input image. Since each input image (or series of images) 160 is different, the results of the weight construction, even for the same ambient brightness level, will be different. The lowest resulting error measure from the weight construction 150 and the histogram 170 is selected by an optimization process 190. A temporal filter 200 may be applied to the optimization process 190 to smooth out the results in time to reduce variability.
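The histogram weighting 180, optimization 190, and temporal filter 200 can be sketched as follows; the candidate slopes, the toy error vectors, and the first-order IIR blend factor `alpha` are illustrative assumptions:

```python
import numpy as np

def select_slope(image, candidate_slopes, error_vectors, prev_slope=None, alpha=0.7):
    """Sketch of steps 170-200: histogram the input image, weight each
    candidate's per-digital-count error vector by the histogram, pick the
    candidate with the lowest total error, and smooth the chosen slope
    over time with a simple first-order (IIR) filter."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    hist = hist / hist.sum()                 # normalize to a distribution
    totals = error_vectors @ hist            # one weighted error per candidate
    best = int(np.argmin(totals))            # optimization step 190
    slope = candidate_slopes[best]
    if prev_slope is not None:               # temporal filter 200
        slope = alpha * prev_slope + (1 - alpha) * slope
    return slope, best

# toy example: a dark image should favor the strongest brightening slope
img = np.full((8, 8), 20, dtype=np.uint8)
slopes = np.array([1.0, 1.5, 2.0])
errs = np.zeros((3, 256))                    # hypothetical error vectors
errs[0, :64] = 1.0                           # slope 1.0 under-brightens dark counts
errs[1, :64] = 0.5
s, idx = select_slope(img, slopes, errs)
assert idx == 2 and s == 2.0
```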
The output of the temporal filter 200 is a slope 210, which is representative of a scale factor, a curve, a graph, one or more functions, or otherwise, which should be applied to the input image 160 to brighten (or darken) the image for the particular ambient lighting conditions. In addition, a reflection suppression 220, based upon a reference minimum 230, may be applied to the temporally filtered 110 output of the ambient light sensor 100. This provides a lower limit 240 for the image.
A tone design 250 receives the slope 210, together with the lower limit 240, and determines a corresponding tone scale 260. The tone scale 260 is applied to the original image 160 by a color-preserving brightening process 270. In this manner, based upon the ambient lighting conditions and the particular image content, the system determines a suitably brightened image 280.
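One common way to realize a color-preserving brightening is to apply the tone scale to the luminance and scale all three channels by the same ratio. This is offered only as a hedged sketch: the Rec. 601 luma weights and the ratio-scaling approach are assumptions, not the source's stated method for process 270.

```python
import numpy as np

def color_preserving_brighten(rgb, tonescale):
    """Scale R, G, and B by the same tone-scale ratio computed on the
    luma, so hue and saturation are approximately preserved."""
    rgb = rgb.astype(np.float64)
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    idx = np.clip(luma, 0, 255).astype(np.int64)
    ratio = tonescale[idx] / np.maximum(luma, 1e-6)   # per-pixel gain
    out = rgb * ratio[..., None]                      # same gain on all channels
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# a 1.5x linear tonescale clipped at the display maximum
ts = np.clip(1.5 * np.arange(256.0), 0, 255)
img = np.tile(np.array([40, 80, 20], dtype=np.uint8), (2, 2, 1))
out = color_preserving_brighten(img, ts)
assert out[0, 0, 0] == 60 and out[0, 0, 1] == 120   # channels scaled equally
```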
An exemplary set of equations and graphs is described below to further illustrate the exemplary technique previously described. The ambient sensor 100 may use a model that is adaptive to the visual response of the human visual system, such as shown by equation 1.
The response to an input stimulus Y at two different ambient light levels may be represented as shown in
Analysis shows the adaptation model used above predicts the retinal response is a function of the ratio of stimulus luminance and the ratio of ambient level to a reference ambient light level.
The response depends on the ratio of the stimulus luminance and a power of the relative ambient level. As a consequence, the response will remain constant when the relative ambient level changes if the stimulus is brightened accordingly. A visual model based ambient adaptation may be used where the image is brightened in accordance with a visual adaptation model. Three examples of brightness versus ambient light level are shown in
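The constancy argument above can be illustrated with a small sketch: if the response depends on the stimulus Y divided by a power of the relative ambient level, then scaling the stimulus by that same power keeps the response constant. The exponent `p = 0.5` is an illustrative assumption, since equation 1 is not reproduced here.

```python
def required_brightening(ambient, reference_ambient, p=0.5):
    """If the visual response is a function of Y / (A / A_ref)**p, then
    multiplying the stimulus Y by (A / A_ref)**p leaves the response
    unchanged as the ambient level A varies. `p` is hypothetical."""
    return (ambient / reference_ambient) ** p

# doubling the relative ambient level calls for a 2**p brightening
b = required_brightening(200.0, 100.0, p=0.5)
assert abs(b - 2 ** 0.5) < 1e-12
# at the reference ambient, no brightening is needed
assert required_brightening(100.0, 100.0) == 1.0
```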
Brightening is achieved by a tonescale operation applied to the image prior to being displayed. In general, given a desired brightening level, a full brightening tonescale can be developed, which is limited by the LCD output. A set of candidate tone scales may consist of a linear brightening with clipping at the display maximum as illustrated in
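A candidate tonescale of the kind described, linear brightening clipped at the display maximum, can be generated as follows (the particular slope values are illustrative):

```python
import numpy as np

def linear_clipped_tonescale(slope, display_max=255, bits=8):
    """One candidate tonescale: linear brightening with clipping at the
    display maximum, defined over all input digital counts."""
    counts = np.arange(2 ** bits, dtype=np.float64)
    return np.minimum(slope * counts, display_max)

# a family of candidate brightening tonescales
candidates = [linear_clipped_tonescale(s) for s in (1.0, 1.25, 1.5, 2.0)]
ts = linear_clipped_tonescale(2.0)
assert ts[100] == 200          # linear region: doubled
assert ts[200] == 255          # clipped at the display maximum
```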
A content-dependent measure may be used to select from among the candidate brightening tonescales. One metric is based on the contrast achieved by the candidate tonescale and the contrast achieved by the full brightening tonescale.
The slope of each candidate tonescale may be computed, for example, as illustrated in
The difference between the slope of each candidate tone curve and the slope of the full brightening tone curve is calculated for each input digital count. This difference is used to calculate an error vector for each tone curve. For example, the square of the error at each digital count may be used to produce
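The per-digital-count squared error between candidate and full brightening slopes can be sketched as follows; using `np.gradient` as the numerical slope estimate is an assumption:

```python
import numpy as np

def error_vector(candidate, full):
    """Squared difference, at each input digital count, between the slope
    of a candidate tone curve and the slope of the full brightening tone
    curve. The slope is estimated by numerical differentiation."""
    slope_c = np.gradient(candidate)
    slope_f = np.gradient(full)
    return (slope_c - slope_f) ** 2

counts = np.arange(256.0)
full = 2.0 * counts                          # unclipped full brightening
cand = np.minimum(2.0 * counts, 255.0)       # clipped at the display maximum
err = error_vector(cand, full)
assert err[50] == 0.0                        # identical slopes below the clip point
assert err[200] == 4.0                       # contrast lost in the clipped region
```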
A histogram of digital counts of the input image is computed and each error vector is used to compute a weighted sum, such as illustrated by equation 2.
This may be computed for a range of brightening slopes tracing out a curve defining an objective function for each brightening level. Sample objective functions for several input images are shown in
While this process selects a suitable brightness level and image content modification, the result for many images includes aspects that are difficult to see. For example, thin edges of small parts are more difficult to discern or otherwise not readily observable. Thus a temporal edge based technique may be used to temporally align edge pixels with motion estimation and then smooth the edge pixels at the current frame with the support of their temporal correspondences to the other frames. This reduces temporal edge flickering and results in an improved viewing experience.
Referring to
Pixels that are part of an edge are identified 520. At the identified edge pixel locations of the current image from the edge point process 520, the current gray image 530 and previous images 540 are temporally aligned 550. Referring also to
A temporal smoothing process 560 temporally smoothes the edge pixels based upon the current image gradient 570 and previous image gradients 580. The temporal smoothing may use IIR filtering. At time t, the gradient magnitude of an edge pixel at (i, j, t) is a weighted combination with the corresponding pixel at (i+u(i,j,Δt), j+v(i,j,Δt), t−Δt) of the previous frame, which has already been temporally smoothed. The result is a temporally smooth gradient image 590.
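The IIR smoothing of edge-pixel gradient magnitudes can be sketched as follows; the blend weight `lam` is an illustrative assumption, and the motion-compensated lookup follows the (i+u, j+v, t−Δt) correspondence described above:

```python
import numpy as np

def smooth_edge_gradient(grad_t, prev_smoothed, u, v, lam=0.5):
    """Sketch of the temporal smoothing 560 in the gradient domain: blend
    each pixel's current gradient magnitude with the motion-compensated,
    already-smoothed gradient from the previous frame (IIR recursion)."""
    h, w = grad_t.shape
    out = grad_t.copy()
    for i in range(h):
        for j in range(w):
            # corresponding location in the previous frame: (i+u, j+v)
            pi = int(round(i + u[i, j]))
            pj = int(round(j + v[i, j]))
            if 0 <= pi < h and 0 <= pj < w:
                out[i, j] = (1 - lam) * grad_t[i, j] + lam * prev_smoothed[pi, pj]
    return out

g = np.full((4, 4), 10.0)          # current gradient magnitudes
prev = np.full((4, 4), 20.0)       # previously smoothed gradients
zero = np.zeros((4, 4))            # zero motion for the toy example
sm = smooth_edge_gradient(g, prev, zero, zero)
assert sm[1, 1] == 15.0            # equal blend of current (10) and previous (20)
```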
The temporal alignment process 550 reduces temporal edge flickering by temporally aligning the edge pixels, without the need to temporally align the entire image. The temporal alignment of edge pixels may be treated as a sparse feature tracking technique where the edge pixels are the sparse features, and are tracked from time t to time t−1 with Lucas-Kanade optical flow. The sparse feature tracking dramatically increases the computational efficiency.
where fx(n, m) and fy(n, m) are the spatial gradients at pixel (n, m) in window Ωi,j, ft(n, m) is the temporal gradient at pixel (n, m), and w(n, m) is the data-adaptive weight for pixel (n, m), computed as
w(n, m) = SIEVE(|f(i, j) − f(n, m)|)   Equation 6
where SIEVE represents a Sieve filter.
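The weighted Lucas-Kanade solve for a single edge pixel can be sketched as follows. The true data-adaptive weights use a sieve filter (equation 6); the simple intensity-difference threshold below is a hypothetical stand-in for it, and the window size is illustrative.

```python
import numpy as np

def lk_flow_at(f_prev, f_cur, i, j, r=2, thresh=16.0):
    """Weighted Lucas-Kanade flow at one pixel: spatial gradients fx, fy
    and temporal gradient ft over a (2r+1)x(2r+1) window around (i, j),
    weighted by w(n, m). Threshold weight stands in for the SIEVE filter.
    Returns the estimated (row, col) displacement between the frames."""
    win0 = f_prev[i - r:i + r + 1, j - r:j + r + 1].astype(np.float64)
    win1 = f_cur[i - r:i + r + 1, j - r:j + r + 1].astype(np.float64)
    fy, fx = np.gradient(win0)            # spatial gradients in the window
    ft = win1 - win0                      # temporal gradient
    w = (np.abs(win0 - f_prev[i, j]) < thresh).astype(np.float64)
    a11 = (w * fx * fx).sum()
    a12 = (w * fx * fy).sum()
    a22 = (w * fy * fy).sum()
    b1 = -(w * fx * ft).sum()
    b2 = -(w * fy * ft).sum()
    A = np.array([[a11, a12], [a12, a22]]) + 1e-6 * np.eye(2)  # regularized
    return np.linalg.solve(A, np.array([b1, b2]))

# a horizontal intensity ramp shifted one pixel to the right between frames
f_prev = np.tile(10.0 * np.arange(9), (9, 1))
f_cur = f_prev - 10.0                     # same ramp displaced by +1 column
dx, dy = lk_flow_at(f_prev, f_cur, 4, 4)
assert abs(dx - 1.0) < 1e-3 and abs(dy) < 1e-3
```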
The temporal smoothing of the edge pixels 560 may be based upon the temporal correspondences for edge pixel (i, j, t), which are used to perform temporal smoothing using equations 7, 8, 9, and 10:
In equations 7-10, G(i,j,t) represents the gradient magnitude at position (i,j,t). The temporal filtering takes place in the gradient domain rather than the gray-scale domain. However, the motion vector may be found in the gray-scale domain.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
Number | Date | Country
---|---|---
20120162245 A1 | Jun 2012 | US