This disclosure describes a system and method by which arbitrary images produced by a high dynamic range camera are displayed on a dynamic range-limited display device. The system and method process image data in such a way as to maintain or enhance local contrast while limiting overall image dynamic range.
The output of sensors with a high dynamic range can be difficult to render on typical imaging displays. The need to render large differences in intensity and the need to achieve sufficient contrast in areas of relatively uniform intensity compete with each other, given the limited dynamic range of available display devices. An example is a thermal infrared image of a warm airport runway that includes areas of clear sky. The runway will be hot relative to non-runway ground areas, and very hot relative to the sky areas. In this case, reliably representing the relative thermal differences among these three areas along with minute thermal differences within each of these areas would be impossible without use of some sort of non-linear, spatially sensitive intensity transform.
In addition to dynamic range limitations, typical display devices have characteristic intensity curves that result in differences in perceived intensity with varying input intensity. A particular intensity difference may, for example, be more perceptible if the average intensity is in the middle, as opposed to the extreme low end or high end, of the output range of the display.
The problem to be solved is stabilizing the global intensity levels of the displayed image while optimizing local area detail. A number of approaches to this problem exist, many of them falling under the category of histogram equalization (HE), or “histogram stretching,” reviewed in Pizer, Stephen M.; Amburn, E. Philip; Cromartie, Robert; et al., Adaptive histogram equalization and its variations, Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355-368, September 1987, and in Reza, Ali M., Realization of the Contrast Limited Adaptive Histogram Equalization (CLAHE) for Real-Time Image Enhancement, Journal of VLSI Signal Processing Systems, vol. 38, no. 1, pp. 35-44, August 2004. In this technique, the brightness values of image pixels are reassigned based on a histogram of their input intensities. In the simplest “flattening” approach, the broad goal is to assign an equal number of pixels to each possible brightness level. In more sophisticated implementations, the process is “adaptive” in that the nonlinear transformations are applied on a local area or multi-scale basis. Other operations, such as minimum, maximum, and median filters, as well as clipping, may also be applied.
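A minimal sketch of the simple “flattening” form of histogram equalization is given below, assuming integer-valued grayscale input held in a NumPy array; the function and parameter names are illustrative only and do not correspond to the preferred embodiments.

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Global histogram 'flattening' (illustrative sketch only).

    Reassigns pixel intensities so that the cumulative distribution of
    output values is approximately uniform across the available levels.
    Assumes an integer-valued image with values in [0, levels - 1].
    """
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf_normalized = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    lut = np.round(cdf_normalized * (levels - 1)).astype(image.dtype)
    return lut[image]
```

Adaptive variants apply the same reassignment per local tile rather than globally, typically with interpolation between neighboring tiles to avoid block boundaries.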
Another approach is gradient domain dynamic range compression (GDDRC), described in Fattal, Raanan; Lischinski, Dani; Werman, Michael, Gradient domain high dynamic range compression, ACM Transactions on Graphics, vol. 21, no. 3, July 2002. The GDDRC technique works in the logarithmic domain, shrinking large intensity gradients more aggressively than small ones. This reduces the global contrast ratio while preserving local detail.
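The core of the GDDRC idea can be illustrated by the gradient-attenuation step alone. The sketch below follows the general form in Fattal et al. rather than the preferred embodiments; the parameter values are assumptions, and recovering a compressed image from the modified gradient field additionally requires solving a Poisson equation, which is omitted here.

```python
import numpy as np

def attenuate_gradients(image, alpha_scale=0.1, beta=0.85):
    """Attenuate large log-domain gradients more than small ones (sketch).

    Returns the attenuated gradient field; reconstructing an image from it
    requires an additional Poisson solve, not shown.
    """
    log_im = np.log(image.astype(np.float64) + 1e-6)
    gy, gx = np.gradient(log_im)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-6
    alpha = alpha_scale * mag.mean()
    # Scale factor is < 1 for gradients larger than alpha, > 1 for smaller ones.
    scale = (mag / alpha) ** (beta - 1.0)
    return gx * scale, gy * scale
```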
Histogram manipulations are effective for the particular problem of fine details in dark image regions; however, good-quality image details can be degraded by naïve implementations of these algorithms. Neither HE nor GDDRC constitutes a seamless, esthetically pleasing, and information-preserving solution to widely varying levels and contrasts over arbitrary image scenes.
The preferred embodiments disclosed achieve mapping of high dynamic range image data to render on a lower dynamic range display device a corresponding image characterized by stable global intensity levels and visually perceptible local area detail. The high dynamic range image data include representations of relatively low intensity contrast, high spatial frequency details and relatively low spatial frequency intensities. Data derived from the high dynamic range image data are applied to a nonlinear intensity transform. The nonlinear intensity transform preserves or enhances the low intensity contrast, high spatial frequency details and maintains a visually perceptible representation of the relatively low spatial frequency intensities to thereby provide visually perceptible local area detail. Saturation of image detail is avoided, as is the formation of artifacts such as “halos” around high spatial frequency image features. The computations are relatively simple and hence may be implemented on an economical processing platform. While the airborne application described above is of interest, the approach is appropriate across a wide range of thermal imaging systems in which mapping from a high dynamic range camera to a much lower dynamic range display is a challenge. The preferred embodiments implement an elegant and practical solution to the global/local dynamic range compression problem, while correcting for artifacts arising from spatial frequency manipulations.
Additional aspects and advantages will be apparent from the following detailed description of preferred embodiments, which proceeds with reference to the accompanying drawings.
The preferred embodiments include a number of modular processing units existing as computer algorithms implemented in a general processing unit or as hardware constructs in, for instance, a field programmable gate array (FPGA), as arranged in a system 10 shown in the accompanying drawings.
In a first embodiment, HDR waveform 14 is applied to the inputs of a blurring spatial filter 30, a summing unit 32, a statistics unit 34, and a clamping unit 36. In an alternative, second embodiment, HDR waveform 14 is applied to the inputs of blurring spatial filter 30, summing unit 32, and statistics unit 34; and the output of blurring spatial filter 30 is applied to the input of clamping unit 36 (as shown in dashed lines in the accompanying drawings).
Blurring spatial filter 30, signal inverting unit 40, and summing unit 32 combine to form a high pass filter to process the incoming high bandwidth data represented by HDR waveform 14. Summing unit 32 adds the raw image data of HDR waveform 14 and the blurred and inverted image data of waveforms 42 and 44 and divides the result by two to maintain the same dynamic range as that of the raw image data. The desired effective kernel size of the high pass filter is fixed and is dependent upon the high dynamic range imaging device.
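A minimal sketch of this blur/invert/sum arrangement follows, treating “inversion” as negation of the blurred data and using a simple box blur; the choice of blur, the kernel size shown, and the use of SciPy are assumptions, and in practice the fixed kernel size would be matched to the high dynamic range imaging device as noted above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_pass(hdr_frame, kernel_size=31):
    """Blur + invert + sum, halved to keep the raw dynamic range (sketch).

    Corresponds to blurring spatial filter 30, signal inverting unit 40,
    and summing unit 32: (raw + (-blurred)) / 2.
    """
    blurred = uniform_filter(hdr_frame.astype(np.float64), size=kernel_size)
    return (hdr_frame - blurred) / 2.0
```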
The output of summing unit 32 is delivered to a dynamic look-up table (LUT) 52, which applies an intensity transform to the high-pass filtered image data produced by summing unit 32. This transform is designed to minimize visible artifacts of the high pass filter, most specifically spatial “halos” around imaged objects of very high or low intensity relative to their surroundings. A typical transform curve is shown in the accompanying drawings.
The actual values of this transform depend upon the characteristics of the input image data of HDR waveform 14. LUT 52 has a control signal input 53 that determines, from a library, which transform curve to apply. This curve is chosen based on the dynamic range of the raw image input data of HDR waveform 14. If that dynamic range is low, then a curve or look-up table with a higher output-to-input ratio (gain) may be selected. The subjective goal is to produce an output image whose dynamic range covers at least one-fourth of the dynamic range of the output display device. The maximum output value of LUT 52 is preferably no more than one-half of the dynamic range of the output display device. The gain implicit in LUT 52 is partly determined by the characteristic response of the high dynamic range imaging device and is, therefore, determined experimentally. The transform curve selected from LUT 52 may be changed between successive images. Generally, the most common stimuli are represented by input values that fall below the asymptotic limit, which is approximately 255 for the example of LUT 52 shown in the accompanying drawings.
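One way to realize a transform of this kind, with gain selected by the control signal and a soft asymptotic limit near 255, is sketched below. The hyperbolic-tangent shape, the 12-bit input extent, and the specific gain values are illustrative assumptions, not the curves actually stored in the library.

```python
import numpy as np

def build_lut_curve(gain, output_limit=255.0, input_extent=4096):
    """Build a saturating intensity transform for high-pass data (sketch).

    Low-amplitude detail is amplified by 'gain'; large excursions that would
    otherwise produce halos are compressed toward 'output_limit'. Signed
    high-pass values would be handled symmetrically (apply to the magnitude
    and restore the sign), which is omitted here for brevity.
    """
    x = np.arange(input_extent, dtype=np.float64)
    return output_limit * np.tanh(gain * x / output_limit)

# A scene with low raw dynamic range might select a higher-gain curve:
lut_high_gain = build_lut_curve(gain=4.0)
lut_low_gain = build_lut_curve(gain=1.5)
```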
Statistics unit 34 calculates the mean of the high-dynamic range input image data and transmits that mean value to clamping unit 36. Clamping unit 36 limits the intensity extent of the high-dynamic range image data to a certain amount around the mean value calculated by statistics unit 34.
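A minimal sketch of statistics unit 34 and clamping unit 36 follows, assuming a symmetric clamp whose half-width is a configurable parameter; the actual extent about the mean is not specified above.

```python
import numpy as np

def clamp_about_mean(hdr_frame, half_width):
    """Limit intensity extent to +/- half_width about the frame mean (sketch)."""
    mean = hdr_frame.mean()  # statistics unit 34
    return np.clip(hdr_frame, mean - half_width, mean + half_width)  # clamping unit 36
```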
A dynamic gain and level unit 60 determines and applies a gain and level intensity transform to the clamped image data produced by clamping unit 36. This transform determines the minimum and maximum intensity extent of the incoming image data. These limits, along with the mean calculated by statistics unit 34, are used to calculate a gain that is then applied to the incoming image data. The gain is preferably determined as follows:
where ‘Gain’ is the gain applied to the incoming image data intensity values, ‘low-range’ is the number of possible low-dynamic range output intensities, ‘mean’ is the mean input intensity value calculated by statistics unit 34, ‘min’ is the minimum input intensity observed by dynamic gain and level unit 60, and ‘max’ is the maximum input intensity observed by dynamic gain and level unit 60.
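The preferred gain equation itself is not reproduced in the text above. The sketch below assumes a centered form consistent with the variable definitions just given and with the “centering” behavior attributed to the preferred method below; it should be read as one plausible realization, not the claimed formula.

```python
def preferred_gain(low_range, mean, lo, hi):
    """Assumed centered gain (sketch): scale so that the mean maps to the
    middle of the output range and the wider of the two excursions
    (mean - min, max - mean) just fits within half the output range."""
    half_extent = max(mean - lo, hi - mean, 1e-9)
    return (low_range / 2.0) / half_extent

def apply_gain_and_level(clamped, gain, mean, low_range):
    """Dynamic gain and level unit 60 (sketch): center the mean at mid-output."""
    return (clamped - mean) * gain + low_range / 2.0
```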
A variable summing unit 64 combines the high frequency data from LUT 52 with the low frequency data from gain and level unit 60. Variable summing unit 64 has a control signal input 66 that determines the ratio of high-frequency to low-frequency data. This is a subjective measure that may be determined by an observer. The outputs of LUT 52, dynamic gain and level unit 60, and variable summing unit 64 produce waveforms representing low dynamic range (LDR) image data.
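A minimal sketch of variable summing unit 64 follows, with the ratio on control signal input 66 represented as a single weight chosen by an observer; the exact combination rule is an assumption.

```python
def variable_sum(high_freq, low_freq, high_weight=0.5):
    """Blend LUT 52 output (high frequency) with gain/level output (low frequency).

    'high_weight' plays the role of control signal input 66: 0.0 keeps only
    the low-frequency data, 1.0 keeps only the high-frequency detail.
    """
    return high_weight * high_freq + (1.0 - high_weight) * low_freq
```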
An alternative determination of the gain is as follows:
Gain = low-range / (max − min).
The difference between the alternative method and the preferred method is that the former does not perform the “centering” of the output image intensity.
It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. The scope of the present invention should, therefore, be determined only by the following claims.