Embodiments of the present invention relate generally to image enhancement and, in particular, to methods and systems for improving content visibility on a liquid crystal display (LCD) under low-contrast viewing conditions.
Low-contrast viewing conditions may negatively impact, for example, through eyestrain and fatigue, the viewing experience of a user of an LCD device, for example, an LCD television, an LCD mobile device and other devices comprising an LCD display.
Low-contrast viewing conditions may arise when a device is used in an aggressive power-reduction mode, wherein the LCD backlight power level may be dramatically reduced, making the image/video content appear dark and less visible to a viewer. The contrast of the image/video may be vastly reduced or, in some cases, pegged at black, and many image features that may convey important scene content may fall below the visibility threshold.
Low-contrast viewing conditions may also arise when an LCD display is viewed under high ambient light, for example, direct sunlight. In these situations, the minimum display brightness that a viewer may perceive may be elevated due to the high ambient light in the surroundings. The image/video may appear “washed out” where it is intended to be bright, and the image/video may appear featureless in darker regions.
For both of the above-described low-contrast viewing scenarios, and other low-contrast viewing scenarios, the tonal dynamic range of the image/video may be compressed and the image contrast may be greatly reduced, thereby degrading the viewing experience of the user. Due to increasing consumer concern over energy costs and increasing demand for device mobility, it may be desirable to provide improved digital image and video quality to enhance the viewing experience under low-contrast viewing conditions.
Some embodiments of the present invention comprise methods and systems for improving content visibility on a liquid crystal display (LCD) under low-contrast viewing conditions.
According to one aspect of the present invention, a key-feature estimator may estimate a key-feature image, also referred to as a key-feature map, associated with an input image; a brightness booster may generate a brightened image associated with the input image; and a combiner may combine the key-feature image and the brightened image to form an enhanced image that may exhibit improved content visibility when displayed on an LCD display and viewed under low-contrast viewing conditions. The key-feature image may identify pixels in the input image at which there is a large gradient and a well-defined object contour.
According to another aspect of the present invention, the key-feature estimator may estimate the gradient at pixels in a grayscale image associated with the input image using a large-spatial-support gradient calculator.
According to another aspect of the present invention, the brightness booster may determine a boosting factor based on at least one of a power level associated with the LCD display, an ambient-light level associated with the LCD display and a measure of the input-image content.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
Embodiments of the present invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The figures listed above are expressly incorporated as part of this detailed description.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the methods and systems of the present invention is not intended to limit the scope of the invention but is merely representative of the presently preferred embodiments of the invention.
Elements of embodiments of the present invention may be embodied in hardware, firmware and/or software. While exemplary embodiments revealed herein may describe only one of these forms, it is to be understood that one skilled in the art would be able to effectuate these elements in any of these forms while remaining within the scope of the present invention.
Some embodiments of the present invention may comprise an image-enhancement system 30 that may receive an input image 31 and may comprise a brightness booster 32, a key-feature estimator 34, a combiner 36 and a code-value mapper 38.
The brightness booster 32 may boost the brightness of the input image 31 using a brightness preservation technique, and the brightness booster 32 may generate a brightened image 33 that may be made available to the combiner 36. In some embodiments of the present invention, the brightness booster 32 may boost the brightness of the input image 31 based on information related to an LCD backlight associated with an LCD display on which the enhanced image may be displayed.
The key-feature estimator 34 may estimate a key-feature image 35, also referred to as a key-feature map, from the input image 31 and may make the key-feature image 35 available to the combiner 36.
The combiner 36 may blend the brightened image 33 and the key-feature image 35 to form a blended image 37, which may be made available to the code-value mapper 38. The code-value mapper 38 may form a key-feature-highlighted (KFH) image 39 by mapping the code values generated by the combiner 36 into code values appropriate for an LCD, for example, to the range of [0,255]. In some embodiments, the KFH image 39 may be made directly available to the LCD for display. The KFH image 39 may also be referred to as a non-photorealistic rendering (NPR) image.
In some embodiments of the present invention, the key-feature estimator 34 may first down-sample the input image 31, thereby producing a down-sampled image 43.
The down-sampled image 43 may be made available to a bilateral filter 44 which may smooth less-textured areas. Major contours of objects within an image may convey important image information, while less-textured areas may be perceptually less important to a viewer. Thus bilateral filtering may be used to remove unnecessary gradient information, while retaining key edge information corresponding to object contours.
The results 45 of the bilateral filtering may be converted to gray-scale values by a gray-scale converter 46, and gradient estimation may be performed on the gray-scale image 47 by a large-spatial-support gradient calculator 48. Commonly used edge detectors, for example, the Sobel operator, the Canny edge detector and the Laplacian operator, may not effectively detect edges associated with major contours. Use of these common edge detectors may result in broken lines on major object contours. Additionally, minor edges may be detected in less-textured image areas, which may not be desirable in KFH rendering. Further, object boundaries in a gradient map generated using one of the commonly used edge detectors may not be well defined. Embodiments of the present invention may compute image gradients using a large spatial support and may retain, as edge pixels, only pixels with a large gradient value.
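By way of illustration, the front-end processing described above may be sketched as follows. This is a minimal sketch, assuming OpenCV; the 4:1 down-sampling factor and the bilateral-filter parameters (diameter and sigma values) are illustrative assumptions, as specific values are not prescribed above.

    import cv2

    def key_feature_front_end(input_image):
        # Down-sample to reduce the cost of the subsequent stages; the
        # 4:1 factor here is an assumption, not a prescribed value.
        h, w = input_image.shape[:2]
        down = cv2.resize(input_image, (w // 4, h // 4),
                          interpolation=cv2.INTER_AREA)
        # Bilateral filtering smooths less-textured areas while retaining
        # key edge information along object contours.
        filtered = cv2.bilateralFilter(down, d=9, sigmaColor=75, sigmaSpace=75)
        # Convert to gray scale in preparation for gradient estimation.
        return cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)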
In some embodiments of the present invention, the large-spatial-support gradient calculator 48 may comprise a horizontal-gradient calculator and a vertical-gradient calculator. At each pixel in the gray-scale image 47, a horizontal-gradient value may be determined by the horizontal-gradient calculator and a vertical-gradient value may be determined by the vertical-gradient calculator. A gradient value may be assigned to a pixel based on the determined horizontal-gradient value and the determined vertical-gradient value associated with the pixel. In some embodiments, the gradient value assigned to a pixel may be the larger of the horizontal-gradient value and the vertical-gradient value associated with the pixel.
In some embodiments of the present invention, the horizontal-gradient value associated with a pixel may be determined by computing a first-order derivative at the pixel with respect to several horizontal neighbors in each direction, to the left and to the right of the pixel. The largest derivative value from each direction may be summed to form the horizontal-gradient value associated with the pixel. Similarly, the vertical-gradient value associated with a pixel may be determined by computing a first-order derivative at the pixel with respect to several vertical neighbors in each direction, above and below the pixel. The largest derivative value from each direction may be summed to form the vertical-gradient value associated with the pixel. In some embodiments of the present invention, the size of the one-dimensional search window associated with each direction (left, right, above and below) may be three pixels.
For a pixel p0 80, the horizontal-gradient value, gradH(p0), may be determined according to:
gradH(p0)=max[D1(p0,ph1),D1(p0,ph2),D1(p0,ph3)]+max[D1(p0,ph−1),D1(p0,ph−2),D1(p0,ph−3)]
and the vertical-gradient value, gradV(p0), may be determined according to:
gradV(p0)=max[D1(p0,pv1),D1(p0,pv2),D1(p0,pv3)]+max[D1(p0,pv−1),D1(p0,pv−2),D1(p0,pv−3)]
where D1(•, •) may denote the first-order derivative and ph1 81, ph2 82 and ph3 83 are the pixels in the one-dimensional search window to the right of the pixel p0 80, ph−1 84, ph−2 85 and ph−3 86 are the pixels in the one-dimensional search window to the left of the pixel p0 80, pv1 87, pv2 88 and pv3 89 are the pixels in the one-dimensional search window below the pixel p0 80 and pv−1 90, pv−2 91 and pv−3 92 are the pixels in the one-dimensional search window above the pixel p0 80. The final raw gradient value, grad(p0), associated with the pixel p0 80 may be determined according to:
grad(p0)=max[gradH(p0),gradV(p0)],
thereby producing a raw gradient map 49.
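A sketch of this gradient computation follows, assuming the gray-scale image is held in a NumPy array and assuming that D1(•, •) is realized as the absolute difference between the two pixel values; the three-pixel search windows match the description above.

    import numpy as np

    def raw_gradient_map(gray, window=3):
        # gradH, gradV and grad as defined above, with edge replication
        # so that every pixel has `window` neighbors in each direction.
        g = gray.astype(np.float64)
        h, w = g.shape
        p = np.pad(g, window, mode='edge')

        def neighbor(dy, dx):
            # The image shifted so neighbor(dy, dx)[y, x] == g[y+dy, x+dx].
            return p[window + dy:window + dy + h, window + dx:window + dx + w]

        # Largest first-order derivative toward each of the four directions.
        right = np.max([np.abs(g - neighbor(0, k)) for k in range(1, window + 1)], axis=0)
        left = np.max([np.abs(g - neighbor(0, -k)) for k in range(1, window + 1)], axis=0)
        below = np.max([np.abs(g - neighbor(k, 0)) for k in range(1, window + 1)], axis=0)
        above = np.max([np.abs(g - neighbor(-k, 0)) for k in range(1, window + 1)], axis=0)

        grad_h = right + left   # gradH(p0)
        grad_v = below + above  # gradV(p0)
        return np.maximum(grad_h, grad_v)  # grad(p0)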
The raw gradient map 49 may contain noisy details. Therefore, the raw gradient map 49 may be made available to a low-amplitude gradient suppressor 50 which may remove low-amplitude gradients. In some embodiments of the present invention, the low-amplitude gradient suppressor 50 may comprise a comparator that compares the gradient amplitude to a threshold according to:
gradsuppress(p0)=grad(p0), if grad(p0)≥T
gradsuppress(p0)=0, otherwise,
where T may denote a threshold and gradsuppress(p0) may denote the low-amplitude-gradient-suppressed gradient map. In some embodiments, the threshold may be set to T=5.0. In alternative embodiments, the low-amplitude gradient suppressor 50 may comprise a zero-crossing detector, and pixel locations associated with zero-crossings may be retained in the gradient map, while non-zero-crossings may be suppressed.
The low-amplitude-gradient-suppressed gradient map 51 may be made available to a gradient-map polarity reverser 52 that may reverse the gradient polarity according to:
gradrev(p0)=offset−gradsuppress(p0),
where offset may denote an offset parameter that may be associated with a white background and gradrev(p0) may denote the reversed gradient map. In some embodiments, the parameter offset may be determined empirically. In some embodiments, offset=120.
The reversed gradient map 53 may be made available to a gradient-contrast enhancer 54 that may improve the contrast of the reversed gradient map 53 and may map the gradient values to the range of 0 to 255. In some embodiments, the gradient-contrast enhancer 54 may map the reversed gradient values according to:
gradenhanced(p0)=255, if gradrev(p0)≥shift
gradenhanced(p0)=max[0,gradrev(p0)], otherwise,
where shift may denote a contrast shift and gradenhanced(p0) may denote the contrast-enhanced gradient map. In some embodiments of the present invention, the parameter shift may be determined empirically. In some embodiments, shift=120.
In some embodiments of the present invention, the gradient-contrast enhancer 54 may produce a binary gradient map according to:
gradenhanced(p0)=255, if gradrev(p0)≥shift
gradenhanced(p0)=0, otherwise.
The contrast-enhanced gradient map 55 may be made available to a gradient smoother 56 that may blur the boundary between foreground edges and the white background and may link broken lines. In some embodiments of the present invention, the gradient smoother 56 may comprise a Gaussian low-pass filter. In some embodiments, the kernel size of the Gaussian low-pass filter may be 3×3.
The smoothed gradient map 57 may be made available to an up-scaler 58 that may scale the smoothed gradient map 57 to the original input image resolution. The up-scaled gradient map 59 may be made available to a gradient-map shifter 60 that may shift the background of the gradient map to zero. In some embodiments, the gradient-map shifter 60 may subtract 255 from the up-scaled gradient values to shift the background to zero. The resulting key-feature map 61 may be made available from the key-feature estimator 34 to the combiner 36.
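The remaining stages of the key-feature estimator may be sketched as follows, assuming the exemplary values T=5.0, offset=120 and shift=120, the piecewise contrast-enhancement mapping given above and a 3×3 Gaussian kernel; the bilinear interpolation used for up-scaling is an assumption.

    import cv2
    import numpy as np

    def key_feature_map(raw_grad, out_size, T=5.0, offset=120.0, shift=120.0):
        # Low-amplitude gradient suppression: discard gradients below T.
        supp = np.where(raw_grad >= T, raw_grad, 0.0)
        # Polarity reversal: edges become dark values on a light background.
        rev = offset - supp
        # Contrast enhancement: background maps to white (255); edge
        # pixels keep their reversed values, clipped at zero.
        enh = np.where(rev >= shift, 255.0, np.maximum(rev, 0.0))
        # Gaussian smoothing blurs the foreground/background boundary
        # and links broken contour lines.
        smooth = cv2.GaussianBlur(enh, (3, 3), 0)
        # Up-scale to the original input resolution; out_size is (width, height).
        up = cv2.resize(smooth, out_size, interpolation=cv2.INTER_LINEAR)
        # Shift the white background to zero; edges become negative values.
        return up - 255.0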
In some embodiments of the present invention, the brightness booster 32 may brighten the input image 31 by a scaling factor that compensates, in the code-value domain, for a reduction in LCD backlight power. The scaling factor may be determined according to:
S=[1/(1−BLreduced)]^(1/γ),
where S may denote the scaling factor, BLreduced may denote the percentage of backlight dimming and γ may denote the LCD system gamma. In some embodiments, BLreduced may be a predetermined fixed percentage, for example, 15 percent. In alternative embodiments, the scaling factor, S, may be determined adaptively based on image content. In some of these embodiments, the scaling factor, S, may be computed using the color histogram of the input image. As will be appreciated by a person of ordinary skill in the art, the percentage of backlight dimming, BLreduced, may be determined by any of the methods and systems known in the art. For example, the percentage of backlight dimming, BLreduced, may be determined according to the methods and systems disclosed in U.S. patent application Ser. No. 11/465,436, entitled “Systems and Methods for Selecting a Display Source Light Illumination Level,” filed Aug. 17, 2006, which is hereby incorporated by reference herein in its entirety.
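A minimal sketch of this computation follows, assuming the fixed-percentage variant with BLreduced equal to 15 percent and a typical display gamma of 2.2 (the gamma value is an assumption):

    def boosting_factor(bl_reduced=0.15, gamma=2.2):
        # S = [1 / (1 - BLreduced)]^(1/gamma): the code-value scaling
        # that compensates for a backlight dimmed by bl_reduced.
        return (1.0 / (1.0 - bl_reduced)) ** (1.0 / gamma)

    S = boosting_factor()  # approximately 1.077 for 15% dimming, gamma 2.2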
In some embodiments of the present invention, to avoid a clipping problem, the brightness boosting may comprise per-pixel processing. For each pixel, the largest color-component value, V, of the red (R), green (G) and blue (B) color-component values may be determined according to:
V=max(max(R,G),B).
The largest color-component value, V, may be scaled by the boosting factor, S, and the scaled value may be compared 170 to the maximum code value. In some embodiments of the present invention, the maximum code value may be 255. If the scaled value is less than or equal to 171 the maximum code value, the color value associated with the current pixel may be brightness boosted using the boosting factor, S, and the brightness-boosted color value may be output 172 for the current pixel. A determination 162 may be made as to whether or not there are unprocessed pixels, and the process may continue. If the scaled value is greater than 173 the maximum code value, then the boosting factor may be re-computed according to:
S′=CVmax/V,
where S′ may denote the re-computed boosting factor and CVmax may denote the maximum code value. The color value associated with the current pixel may be brightness boosted using the re-computed boosting factor, S′, and the brightness-boosted color value may be output 176 for the current pixel. A determination 162 may be made as to whether or not there are unprocessed pixels, and the process may continue. In these embodiments, the color ratio across the three color channels is maintained when clipping occurs, and thus color fidelity is maintained.
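The per-pixel processing above may be sketched in vectorized form as follows, assuming an 8-bit RGB input; where S·V would exceed the maximum code value, the factor is re-computed as CVmax/V so that the ratio across the three color channels is preserved.

    import numpy as np

    def boost_without_clipping(rgb, S, cv_max=255.0):
        rgb = rgb.astype(np.float64)
        # V = max(max(R, G), B) for each pixel.
        V = rgb.max(axis=2)
        safe_V = np.maximum(V, 1e-6)  # guard against division by zero
        # Use S where S*V stays in range; otherwise re-compute the
        # boosting factor so the largest component lands exactly at cv_max.
        factor = np.where(S * V <= cv_max, S, cv_max / safe_V)
        return np.clip(rgb * factor[..., None], 0, cv_max).round().astype(np.uint8)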
In the above-described brightness-boosting methods and systems, a common brightening factor, S, may be used at each pixel, with the exception of pixels for which clipping occurs. In alternative embodiments of the present invention, the brightening factor, S, may be spatially varying according to image content. In some embodiments, the brightening factor, S, may be determined according to:
S(x,y)=1+(α−1)exp(−f(x,y)^2/(2σ^2)),
where f(x, y) may be the image brightness at location (x, y), α may be a parameter that controls the range of the brightening factor and σ may be a factor that controls the shape of the Gaussian weighting function. For f(x, y) with a range of [0,255], exemplary parameter values of α and σ are 1.6 and 100, respectively. In these embodiments, the Gaussian weighting function may produce a larger boosting factor, S(x, y), when the brightness f(x, y) is low. Therefore, a pixel with a low brightness value may be more heavily brightened than a pixel with a larger brightness value.
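A sketch of a Gaussian weighting of this form follows, using the exemplary values α=1.6 and σ=100; the functional form is assumed to be the one given above.

    import numpy as np

    def spatially_varying_factor(f, alpha=1.6, sigma=100.0):
        # f holds per-pixel brightness in [0, 255]. Dark pixels receive
        # a factor near alpha; bright pixels a factor near 1.
        f = np.asarray(f, dtype=np.float64)
        return 1.0 + (alpha - 1.0) * np.exp(-(f ** 2) / (2.0 * sigma ** 2))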
In alternative embodiments of the present invention, the image brightness values may be quantized into a plurality of brightness-value bins, and a brightening factor may be associated with each brightness-value bin. Pixels with brightness values within the same brightness-value bin may be brightened by the same factor, the brightening factor associated with the respective bin. In some embodiments, the quantization may be based on a histogram of the brightness values.
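One way to realize this binned variant is sketched below; the uniform bin edges and the per-bin factors are illustrative assumptions, and in practice the bins might instead be derived from a brightness histogram as noted above.

    import numpy as np

    def binned_brightening_factors(f, bin_edges=(64, 128, 192),
                                   bin_factors=(1.6, 1.4, 1.2, 1.0)):
        # Quantize brightness into len(bin_factors) bins; all pixels that
        # fall in the same bin share that bin's brightening factor.
        idx = np.digitize(np.asarray(f, dtype=np.float64), bin_edges)
        return np.asarray(bin_factors)[idx]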
In some embodiments of the present invention, RGB input values may be converted to an alternative color space, for example, a luminance-chrominance-chrominance color space. Exemplary luminance-chrominance-chrominance color spaces may include YCbCr, YUV, Lab and other luminance-chrominance-chrominance color spaces. In these embodiments, the luminance channel may be brightness boosted while the chrominance channels remain unchanged.
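A sketch of luminance-only boosting follows, using OpenCV's YCrCb conversion (OpenCV orders the chroma channels Cr, Cb); saturating the boosted luminance at 255 is an assumed clipping strategy.

    import cv2
    import numpy as np

    def boost_luminance(rgb, S):
        ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb).astype(np.float64)
        # Boost the luminance (Y) channel; chrominance is unchanged.
        ycrcb[..., 0] = np.clip(ycrcb[..., 0] * S, 0, 255)
        return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2RGB)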
The brightened image 33 generated by the brightness booster 32 and the key-feature image 35 generated by the key-feature estimator 34 may be combined by the combiner 36. In some embodiments of the present invention, the combiner 36 may combine the brightened image 33 and the key-feature image 35 by adding the two images. In alternative embodiments of the present invention, the combiner 36 may blend the images using a weighted average of the two images according to:
IKFH=βIboosted+(1−β)IKFM,
where β may denote a blending factor, also referred to as a blending parameter, IKFH may denote the blended image 37, Iboosted may denote the brightened image 33 generated by the brightness booster 32 and IKFM may denote the key-feature image 35 generated by the key-feature estimator 34. In some embodiments of the present invention, the blending factor, β, may be a user-selected parameter. In alternative embodiments of the present invention, the blending factor, β, may be a predefined value.
The blended image 37 values may be mapped by a code-value mapper 38 to the range of display code values. In some embodiments of the present invention, the range of display code values is [0,255]. In some embodiments, the resulting KFH image 39 may be made available from the image-enhancement system 30 to an LCD display.
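The combiner and the code-value mapper may be sketched together as follows, assuming the weighted-average blend above, a single-channel key-feature map with a zero background and negative edge values, and a simple clip to [0,255] as the code-value mapping; the default blending factor of 0.9 is an assumption.

    import numpy as np

    def combine_and_map(boosted, kf_map, beta=0.9):
        # I_KFH = beta * I_boosted + (1 - beta) * I_KFM; the negative
        # edge values of the key-feature map darken the key features.
        blended = (beta * boosted.astype(np.float64)
                   + (1.0 - beta) * kf_map[..., None])
        # Code-value mapper: constrain to the display range [0, 255].
        return np.clip(blended, 0, 255).round().astype(np.uint8)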
Some embodiments of the present invention may comprise an image-enhancement system 250 that may receive an input image 252, a backlight power level 254 and an ambient-light level 256. The image-enhancement system 250 may comprise a brightness booster 260, a key-feature estimator 262, a blending-parameter selector 264, a combiner 266 and a code-value mapper 268.
The key-feature estimator 262 may produce a key-feature image 263, also referred to as a key-feature map, associated with the input image 252. In some embodiments of the present invention, the key-feature estimator 262 may generate the key-feature map 263 according to previously described embodiments of the present invention.
The brightness booster 260 may generate a brightened image 261 based on the input image 252 content, the backlight power level 254 and the ambient-light level 256.
The blending-parameter selector 264 may determine the blending parameter 265 used by the combiner 266 to blend the brightened image 261 and the key-feature map 263. A user-selected blending parameter 270 may be provided to the blending-parameter selector 264. In some embodiments of the present invention, the user-selected blending parameter 270 may correspond directly to the blending parameter 265. In alternative embodiments, the user-selected blending parameter 270 may be an image-quality setting selected by a user and associated with a blending parameter 265 value by the blending-parameter selector 264. In some embodiments of the present invention, the blending-parameter selector 264 may select a default value for the blending parameter 265 when a user-selected blending parameter 270 is not available.
The combiner 266 may combine the key-feature image 263 and the brightened image 261 based on the blending parameter 265. In some embodiments of the present invention, the combiner 266 may linearly blend the key-feature image 263 and the brightened image 261 using the blending parameter 265 as a weighting factor according to:
IKFH=βIboosted+(1−β)IKFM,
where β may denote the blending parameter 265, IKFH may denote the blended image 267, Iboosted may denote the brightened image 261 generated by the brightness booster 260 and IKFM may denote the key-feature image 263 generated by the key-feature estimator 262. In alternative embodiments, the combiner 266 may combine the key-feature image 263 and the brightened image 261 according to:
IKFH=Iboosted+IKFM.
The blended image 267 values may be mapped by a code-value mapper 268 to the range of display code values. In some embodiments of the present invention, the range of display code values is [0,255]. In some embodiments, the resulting KFH image 269 may be made available from the image-enhancement system 250 to an LCD display.
Some embodiments of the present invention may comprise an LCD display. Some embodiments of the present invention may comprise an ambient-light sensor.
Some embodiments of the present invention may comprise a computer program product that is a computer-readable storage medium, and/or media, having instructions stored thereon, and/or therein, that may be used to program a computer to perform any of the features presented herein.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.