Digital image filters and related methods for image contrast enhancement

Information

  • Patent Grant
  • Patent Number
    10,055,827
  • Date Filed
    Wednesday, September 16, 2009
  • Date Issued
    Tuesday, August 21, 2018
Abstract
Digital image filters and related methods for image contrast enhancement are disclosed. According to one aspect of the method, an invariant brightness level is initially determined. For each pixel of an input image, the invariant brightness level is subtracted from the input brightness of the pixel. The resulting value is multiplied with a contrast adjustment constant. After that, the invariant brightness level is added. Further aspects of the method can involve histogram equalization.
Description
FIELD

The present disclosure relates to digital image filters. More particularly, it relates to digital image filters and related methods for image contrast enhancement.


SUMMARY

According to a first aspect, a method for image contrast enhancement of a digital image comprising a plurality of pixels is provided, each pixel having an input brightness, the method comprising: determining an invariant brightness level; for each pixel, subtracting the invariant brightness level from the input brightness of the pixel, thus obtaining a first intermediate brightness value for the pixel; for each pixel, multiplying the first intermediate brightness value with a contrast adjustment constant, thus obtaining a second intermediate brightness value for the pixel; and for each pixel, adding the invariant brightness level to the second intermediate brightness value, thus obtaining an output brightness value for that pixel.


According to a second aspect, an image contrast enhancement filter to filter an input digital image and produce an output digital image is provided, the filter processing each pixel of the input digital image through a transfer function Iout=(Iin−L)C+L, wherein Iin is the input pixel brightness, Iout is the output pixel brightness, C is a contrast adjustment constant, and L is an invariant brightness level.


According to a third aspect, a visual prosthesis comprising an image contrast enhancement filter is disclosed. The image contrast enhancement filter can be based on image histogram equalization, so that an output image output by the image contrast enhancement filter has an output image histogram which is an equalized version of the input image histogram.


Further embodiments are shown in the written specification, drawings and claims of the present application.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 shows a flow chart of an embodiment of the present disclosure in accordance with expression (1) below.



FIG. 2 shows a comparative diagram of histograms before and after the filtering of the present disclosure.



FIG. 3 shows a schematic representation of the main components of a visual prosthesis.



FIG. 4 schematically shows a possible location of the filter in accordance with the disclosure.



FIGS. 5A and 5B show a further embodiment of the present disclosure implementing histogram equalization. The cluster of luminances around mid-gray in FIG. 5A is redistributed more evenly across the entire range in FIG. 5B.



FIGS. 6A-6E show examples of images where histogram equalization has been implemented.





DETAILED DESCRIPTION

The present disclosure provides digital image filters and related methods for image contrast enhancement, especially in the field of visual prosthetic devices.


According to an embodiment of the present disclosure, the digital image filters and related methods for image contrast enhancement are based on detecting an invariant brightness level and stretching the image histogram. An invariant brightness level (L) is defined as the brightness level which does not change before and after the contrast enhancement process. The histogram stretching can be done by multiplying image pixel values with a contrast setting C.


This process can be formulated as:

Iout=(Iin−L)C+L  (1)

in which Iin is the input pixel brightness, Iout is the output pixel brightness, C is the contrast adjustment constant and L is the invariant brightness level. In other words, the following four operations are performed: 1) determining an invariant brightness level (L); 2) for each pixel, subtracting such invariant brightness level (L) from the input brightness (Iin) of the pixel; 3) multiplying the resulting brightness by a contrast setting (C); and 4) adding the brightness level (L) to such multiplied value. FIG. 1 shows a flow chart of this method.
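As an illustrative sketch, the four operations above can be expressed in a few lines of code; the clamping of results to the valid brightness range (0-255 for an 8-bit image) is an assumption added here, since values outside that range cannot be represented:

```python
def enhance_contrast(pixels, L, C, lo=0, hi=255):
    """Apply the transfer function Iout = (Iin - L) * C + L to a flat
    list of pixel brightness values, clamping results to [lo, hi]."""
    out = []
    for p in pixels:
        v = (p - L) * C + L
        out.append(max(lo, min(hi, int(round(v)))))
    return out
```

A pixel whose brightness equals L is left unchanged, while pixels brighter or darker than L are pushed further from L by the gain C.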


Using the above transformation (1), if Iin=L, the output will be Iout=L. This states that an input pixel of brightness L does not change its brightness level after the contrast enhancement. However, pixels brighter than L will be brighter (because Iin−L>0), while darker pixels will be darker (because Iin−L<0). In addition, the more different Iin is from L, the more Iin will be enhanced by the multiplication process.


The main difference between the method according to the present disclosure and the known contrast enhancement method through multiplication is the presence of the invariant level L, which helps to keep a certain brightness level invariant before and after the enhancement process.


Many methods can be used to detect the invariant brightness level L. A first method is that of finding the darkest pixel in the scene, so that a dark object will be kept dark after the transformation. Other brighter pixels, in this method, will be enhanced and become brighter.


In this way, the overall contrast of the images is enhanced without artificially increasing the brightness level of the dark objects.


Alternatively, instead of picking the darkest pixel, L can be determined by finding the Nth darkest pixel, where N is a small number, so that the choice is less affected by image noise. Such filter parameter N can be determined empirically. For example, N can be defined as the 10th percentile of the pixels in a scene. If such a formula is applied to a completely dark image (where, e.g., 90% of the pixels have a value of 10 while the remaining 10% of the pixels have a value of 12, on a scale of 0-255), since L is determined by going up the histogram and the brightness values are discrete, the algorithm will use L=10. Such a result is interesting, because it means that 90% of the pixels have a brightness less than or equal to 10. This is where the method according to the present disclosure will help reduce the noise artifacts in a completely dark image.
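The Nth-darkest-pixel rule can be sketched as a single walk up the image histogram; the function below, whose name and default percentile are chosen here for illustration, returns the brightness value at which the cumulative pixel count first reaches the requested percentile:

```python
def invariant_level_from_histogram(hist, percentile=0.10):
    """Walk the histogram from the darkest bin upward and return the
    brightness at which the cumulative count first reaches the given
    percentile of all pixels (e.g. the 10th percentile)."""
    total = sum(hist)
    target = total * percentile
    cumulative = 0
    for brightness, count in enumerate(hist):
        cumulative += count
        if cumulative >= target:
            return brightness
    return len(hist) - 1
```

For the dark-image example above (90% of pixels at value 10, 10% at value 12), the cumulative count already reaches the 10th percentile at brightness 10, so the function returns L=10.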


In other words, assuming that the camera generating the digital image is pointed at a uniformly bright scene, the L chosen according to the above method will be very close to the actual brightness level (not identical, due to possible noisy pixels in the scene). As a consequence, the output pixels will all be very close to their original brightness level, even when enhanced by the multiplying factor C, since the variations from L are very small.


FIG. 2 shows an example of the input image histogram (10) and output image histogram (20) of the method according to the present disclosure, where the invariant level L is determined using the method mentioned above, i.e. by choosing one of the darkest pixels of the input image. Starting with the solid line input histogram (10), L is found based on such histogram, and then formula (1) is used to compute the output. The output image will have the histogram (20) shown in the dashed line.


As also mentioned above, L can be chosen in different ways. Ideally, L represents what a dark object will look like in the image. Therefore, image segmentation of the scene can be performed and an average brightness be calculated for each of the objects. Once this is done, the darkest among the average brightness values can be used as L.


The contrast adjustment constant C can be either fixed, or determined based on scene information. If C is fixed, it can be determined empirically by going through a collection of representative images pertaining to the environment for which the filter is designed. If, on the other hand, C is based on scene information, the more objects there are in the image, the higher the contrast gain setting C that is likely to be used. There are many ways to estimate the number of objects in the presence of image noise. For example, a contour detection technique will find more contour pixels in an image with more objects. Thus, C can be determined as an increasing function of the number of contour pixels.
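One possible reading of this approach is sketched below; the gradient-threshold contour detector and the linear mapping from contour fraction to gain are illustrative assumptions, not details taken from the disclosure:

```python
def contour_pixel_count(image, threshold=30):
    """Count pixels whose horizontal or vertical brightness difference
    from a neighbor exceeds a threshold -- a crude contour detector.
    `image` is a list of equal-length rows of brightness values."""
    count = 0
    for y, row in enumerate(image):
        for x, p in enumerate(row):
            dx = abs(p - row[x - 1]) if x > 0 else 0
            dy = abs(p - image[y - 1][x]) if y > 0 else 0
            if max(dx, dy) > threshold:
                count += 1
    return count

def contrast_gain(image, c_min=1.0, c_max=3.0):
    """Map the fraction of contour pixels to a gain C in [c_min, c_max]:
    more contour pixels (suggesting more objects) yield a higher gain."""
    fraction = contour_pixel_count(image) / (len(image) * len(image[0]))
    return c_min + (c_max - c_min) * min(1.0, fraction)
```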


The method according to the present disclosure is easy to implement, easy to verify and takes very little time to run. In particular, according to one of the embodiments of the present disclosure, a standard histogram is built for every frame. Then L is computed by counting an N number of pixels starting from the darkest pixel value. A preset C is used for enhancement. All these operations are deterministic in nature. The total run time will be O(n) with two passes of each image frame.


The method according to the present disclosure can be used in combination with other filters. In particular, it should be noted that the output of the filter is in the same image domain as the input, i.e. it is of the same size and depth. Therefore, it can be stacked up with other processing methods.


In particular, FIG. 3 shows a schematic representation of the main components of a visual prosthesis taken from U.S. published patent application 2005/0288735, incorporated herein by reference in its entirety. In particular, the external portion (30) of the visual prosthesis comprises an imager (40), e.g., a video camera to capture video in real time, and a video data processing unit (50) comprising a Digital Signal Processor (DSP) to process video data and then send output commands (60) for a retinal stimulator (70) to be implanted on the retina (80) of a patient. The filter and filtering method according to the present disclosure can be contained in the DSP of the video data processing unit (50).



FIG. 4 shows a possible structure of the video data processing unit (50), which comprises a video input handler (120) to acquire a raw video input and a video filter processor (130), the latter performing filtering to produce a video output image, e.g. a 6×10 image, that can be used by a telemetry engine. As shown in FIG. 4, the filter (140) according to the present disclosure can be located between the video input handler (120) and the video filter processor (130).


However, use in a visual prosthesis is just one of the applications of the filter and method according to the disclosure. In particular, they can be used in any video-based medical device.


The method according to the present disclosure can work under a variety of lighting conditions, in particular low-light and low-contrast environments.


A further embodiment of the present disclosure allows the contrast to be changed in accordance with this equation:

Pout=(Pin−AvgBr)C+AvgBr

where:

C=contrast level

AvgBr=average luminance value

Pin=pixel value of input image

Pout=pixel value of output image


In other words, the invariant brightness level L of equation (1) is chosen to be the average luminance value of the image.


Another embodiment of the present disclosure adopts a contrast scaling (normalization) algorithm that sets a linear scaling function to utilize the full brightness range of the camera.

Pout=((Pin−c)/(d−c))×(brightness range)+minimum camera brightness


where ‘d’ can be the maximum intensity value from the input image intensity histogram (or the 95th percentile, etc.), ‘c’ can be the minimum intensity value from the input image intensity histogram (or the 5th percentile, etc.), and the brightness range can be either the full range (0 to 255 for a grayscale image) or a specified subset of that range, depending on the minimum camera brightness.
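A sketch of this linear scaling function, assuming a minimum camera brightness of 0 and clamping of out-of-range values, might read:

```python
def normalize_contrast(pixels, c, d, range_min=0, range_max=255):
    """Linearly rescale the interval [c, d] (e.g. the 5th and 95th
    percentile intensities of the input) onto [range_min, range_max]."""
    span = range_max - range_min
    out = []
    for p in pixels:
        v = (p - c) / (d - c) * span + range_min
        out.append(max(range_min, min(range_max, int(round(v)))))
    return out
```

Pixels below ‘c’ or above ‘d’ saturate at the ends of the output range, which is the usual behavior when scaling from percentile cutoffs rather than the true extremes.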


According to a further embodiment of the present disclosure, small differences in luminance can be systematically increased via histogram equalization. This embodiment can use a non-linear monotonic scaling function to make the brightness distribution function (the probability distribution of the luminance values) of the filtered image follow a uniform density function (luminance values are distributed more evenly). Filtering in the time domain can also be incorporated, in a similar fashion as mentioned above.
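The customary way to obtain such a non-linear monotonic scaling function is to map each brightness through the normalized cumulative distribution of the input image; a minimal sketch:

```python
def equalize_histogram(pixels, levels=256):
    """Remap each pixel through the normalized cumulative distribution
    of the input so that output luminances spread more evenly."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    total = cdf[-1]
    # The CDF is monotonic, so the mapping preserves brightness ordering.
    return [int(round(cdf[p] / total * (levels - 1))) for p in pixels]
```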


Each of the above discussed methods can be made settable by a personal computer, be applied automatically on every video frame or can be made adaptive depending on the luminance distribution of the images at any instant of time.


If desired, the method can incorporate the luminance “history” (a record of the luminance ranges) over several video frames.


Subjects cannot distinguish fine brightness differences (when asked to identify or rate brightness they can only reliably distinguish about 5 levels of brightness), so small differences in luminance will be imperceptible. A method of systematically increasing small differences in luminance can be performed via histogram equalization.


Reference will now be made to FIGS. 5A and 5B. FIG. 5A shows the actual frequency histogram for the electrode outputs (based on the unscaled mean) of luminances found in one scene; it can be seen that midrange gray luminances are very common, while light and dark values are relatively infrequent. FIG. 5B shows an example of a desired histogram, where electrode outputs can take 5 possible luminances, and the luminances are distributed more evenly across the range of luminance values. Examples of histogram equalization are shown in FIGS. 6A-6E. The full scene is shown in FIG. 6A, and a scaled image patch is shown in FIG. 6B. FIG. 6C shows electrode output based on a scaled mean. FIG. 6D shows electrode output based on local histogram equalization within the image patch.


Histogram equalization involves ordering each scaled mean electrode output in terms of its luminance value, and then reassigning it a luminance based on the new histogram: the darkest ⅕th of electrodes will be assigned a luminance of 25, the second darkest ⅕th a luminance of 75, and so on. Note that this histogram does not permit small differences in luminance. In FIG. 6D, where histogram equalization is carried out locally for each single image patch, some problems appear (e.g. in the second row) with over-scaling, as a consequence of the histogram equalization being carried out over a small number of data points. As a result, essentially identical input luminance values can end up being assigned very different output luminance values. To avoid these problems with inappropriate scaling, histogram equalization should be carried out globally, using the entire field of view of the camera and/or a luminance “history” over several seconds (as with second-stage luminance scaling, see above). Examples of global histogram equalization scaling are shown in FIG. 6E.
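The five-level reassignment described above can be sketched as follows, ranking electrode outputs by luminance and assigning each fifth of them one of five evenly spaced levels; the function name and tie-breaking by input order are illustrative choices:

```python
def five_level_equalize(values):
    """Reassign each electrode output one of five evenly spaced
    luminances (25, 75, 125, 175, 225) according to which fifth of
    the sorted luminance order it falls into."""
    levels = [25, 75, 125, 175, 225]
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for rank, i in enumerate(order):
        bucket = min(4, rank * 5 // len(values))  # which fifth of the ranking
        out[i] = levels[bucket]
    return out
```

Carried out globally over the full field of view, this ranking is computed over many data points, avoiding the over-scaling that arises when it is computed within a small image patch.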


It should be noted that histogram equalization should be carried out subsequent to the sub-setting for each electrode, and that, like second-stage luminance scaling, it will override the effects of first-stage luminance scaling.


In summary, histogram equalization may provide further benefits beyond those of simple second stage luminance scaling. In view of the small field of view of the image, at this stage normalization should be based on a temporal average over several seconds and as wide a field of view as possible.


Accordingly, what has been shown are digital image filters and related methods for image contrast enhancement. While these filters and methods have been described by means of specific embodiments and applications thereof, it is understood that numerous modifications and variations could be made thereto by those skilled in the art without departing from the spirit and scope of the disclosure. It is therefore to be understood that within the scope of the claims, the disclosure may be practiced otherwise than as specifically described herein.

Claims
  • 1. A visual prosthesis comprising: a video input device for providing video data; a video processing unit, including a digital signal processor, receiving the video data from the video input device and converting the video data into a video output image, including an array of pixels, the video processing unit including: an image contrast enhancement filter based on an invariant brightness level, and the formula Iout=(Iin−L) C+L, where Iout is output intensity, Iin is input intensity, L is the invariant brightness level, and C is a predetermined multiplier, and each pixel's difference from the invariant brightness level, where L is determined by segmenting the image to find objects, calculating an average brightness for each object, and setting L to be the darkest average brightness; a video filter processor producing the video output image; a telemetry engine transmitting the video output image; and
  • 2. The visual prosthesis of claim 1, wherein the image contrast enhancement filter is based on image histogram equalization, so that an output image output by the image contrast enhancement filter has an output image histogram which is an equalized version of the input image histogram.
  • 3. The visual prosthesis of claim 2, wherein image histogram equalization operates through a nonlinear monotonic scaling function.
  • 4. The visual prosthesis of claim 2, wherein the image contrast enhancement filter operates in the time domain.
  • 5. The visual prosthesis of claim 1, wherein filtering through the contrast enhancement filter is automatically set.
  • 6. The visual prosthesis of claim 2, wherein filtering through the contrast enhancement filter is applied automatically on every video frame of the video data.
  • 7. The visual prosthesis of claim 2, wherein filtering through the contrast enhancement filter is an adaptive filtering.
  • 8. The visual prosthesis of claim 7, wherein the adaptive filtering depends on a luminance distribution of input images at any instant of time.
  • 9. The visual prosthesis of claim 2, wherein the image histogram equalization is based on a temporal average.
CROSS REFERENCE TO RELATED APPLICATIONS

The present application claims priority to U.S. Provisional Application No. 61/097,481 filed on Sep. 16, 2008 and incorporated herein by reference in its entirety.

US Referenced Citations (44)
Number Name Date Kind
4573070 Cooper Feb 1986 A
5109844 de Juan et al. May 1992 A
5674263 Yamamoto et al. Oct 1997 A
5796874 Woolfe Aug 1998 A
5862254 Kim et al. Jan 1999 A
5930402 Kim Jul 1999 A
5935155 Humayun et al. Aug 1999 A
6400989 Eckmiller Jun 2002 B1
6458157 Suaning Oct 2002 B1
6580835 Gallagher et al. Jun 2003 B1
6873742 Schu Mar 2005 B2
6920358 Greenberg et al. Jul 2005 B2
7266413 Greenberg et al. Sep 2007 B2
7424148 Goh Sep 2008 B2
7457434 Azar Nov 2008 B2
7668599 Greenberg et al. Feb 2010 B2
7755598 Yang et al. Jul 2010 B2
7925354 Greenberg et al. Apr 2011 B2
8000554 Li et al. Aug 2011 B2
8019428 Greenberg et al. Sep 2011 B2
8160383 Manabe Apr 2012 B2
8417032 Mizuno et al. Apr 2013 B2
8428741 Greenberg et al. Apr 2013 B2
8606037 Ali Dec 2013 B2
20020010496 Greenberg et al. Jan 2002 A1
20020136464 Schu Sep 2002 A1
20030161549 Lei et al. Aug 2003 A1
20050036071 Kim Feb 2005 A1
20050104974 Watanabe May 2005 A1
20050288735 Greenberg et al. Dec 2005 A1
20060013503 Kim Jan 2006 A1
20060106432 Sawan et al. May 2006 A1
20060147105 Lee et al. Jul 2006 A1
20070024638 Hoppe et al. Feb 2007 A1
20070171310 Arici et al. Jul 2007 A1
20070216813 Arici et al. Sep 2007 A1
20070299484 Greenberg et al. Dec 2007 A1
20080021516 Greenberg et al. Jan 2008 A1
20090268973 Majewicz Oct 2009 A1
20100036457 Sarpeshkar et al. Feb 2010 A1
20100067825 Zhou et al. Mar 2010 A1
20110181787 Wang et al. Jul 2011 A1
20110288612 Greenberg et al. Nov 2011 A1
20120002890 Mathew Jan 2012 A1
Foreign Referenced Citations (1)
Number Date Country
10210324 Aug 1998 JP
Non-Patent Literature Citations (9)
Entry
Tae Keun Kim, Joon Ki Paik, and Bong Soon Kang, “Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering,” IEEE Transactions on Consumer Electronics, vol. 44, no. 1, pp. 82-87, Feb. 1998.
Wentai Liu et al., “Image Processing and Interface for retinal Visual Prostheses”, 2005, IEEE International Symposium on Circuits and Systems, p. 2927-2930.
Takasugi, Hiroshi, JP 10210324 Image Processing Unit(English Translation), 1998, p. 1-11.
Arici, T. et al, “Image Local Contrast Enhancement Using Adaptive Non-Linear Filters”, 2006, IEEE International Conference on Image Processing 2006, p. 1-9.
“Digital Image Enhancement and Noise Filtering by Use of Local Statistics”, J. Lee, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-2, no. 2, Mar. 1980; pp. 165-168.
“Natural Scenes Classification for Color Enhancement”, F. Naccari, S. Battiato, A. Bruna, A. Capra, and A. Castorina, IEEE Transactions on Consumer Electronics, vol. 51, no. 1, Feb. 2005; pp. 234-239.
“A Fast and Adaptive Method for Image Contrast Enhancement”, Z. Yu and C. Bajaj, Department of Computer Sciences, University of Texas at Austin, Austin, TX 78712-1188 USA; 2004 International Conference on Image Processing (ICIP), vol. 2, Oct. 2004; pp. 1001-1004.
“Contrast Stretching”, R. Fisher, S. Perkins, A. Walker, and E. Wolfart, http://homepages.inf.ed.ac.uk/rbf/HIPR2/stretch.htm, 2003; pp. 1-5.
“Single-Chip CMOS Image Sensors for a Retina Implant System”, M. Schwarz, R. Hauschild, B. Hosticka, J. Huppertz, T. Kneip, S. Kolnsberg, L. Ewe, and H. K. Trieu, IEEE Transactions on Circuits and Systems—II: Analog and Digital Signal Processing. vol. 46, No. 7. Jul. 1999; pp. 870-876.
Related Publications (1)
Number Date Country
20100067825 A1 Mar 2010 US
Provisional Applications (1)
Number Date Country
61097481 Sep 2008 US