This application relates to image processing in digital cameras and other electronic digital image acquisition devices, and particularly to techniques for improving noise reduction in such images.
Prior to the image data being received at the DRC block 117, it frequently undergoes a noise reduction process. The block labeled NR (Noise Reduction) 119 is interposed between the DRC element 117 and, here, the color interpolation unit 115. The NR block 119 receives the input pixel data Pin, subjects it to a noise reduction process, and outputs the result Pnr, which in turn is the input to DRC 117, whose pixel data output is labeled Pout. These algorithms are well known in the art and are well documented. When applying local Dynamic Range Compensation (DRC) to noisy images, that is, brightening dark areas of a noisy image or enhancing contrast in bright areas of a noisy image, a side effect is that the noise level varies across the image. This is because DRC enhances noise as well as details. In effect, areas that went through DRC can appear much noisier than areas that did not. Of course, if the NR unit 119 were to remove the noise completely, this would solve the problem; however, it would also remove much of the image detail, and, in order to preserve texture and granularity in the image, it is common practice to reduce the amount of noise in the image but not to remove it completely.
To provide additional background, a noise reduction operation of this sort typically first forms the average over a pixel's neighborhood:
Pavg(x,y) = (Σi=−1..1 Σj=−1..1 Pin(x+i, y+j)) / 9
where x and y are the column and row indices of the image, i and j index the 3-by-3 pixel neighborhood centered on the pixel at (x,y), and the sum in this embodiment is over that neighborhood. Finally, the current pixel is multiplied (at 205) by the blend factor, (1−α), and added (at 209) to the average of its neighbors multiplied (at 207) by α. This procedure is repeated for each pixel in the image, thereby reducing the noise of the image.
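As an illustration only (not part of the referenced prior-art implementation itself), a minimal sketch of this per-pixel blend, assuming image data stored as a list of rows and a fixed blend factor alpha, could look like:

```python
def simple_blend(Pin, x, y, alpha):
    """Prior-art style noise reduction for one pixel: blend the pixel with the
    mean of its 3-by-3 neighborhood using a fixed blend factor alpha."""
    neighborhood = [Pin[y + j][x + i] for j in (-1, 0, 1) for i in (-1, 0, 1)]
    Pavg = sum(neighborhood) / 9.0                    # Pavg(x, y) over the 3-by-3 window
    return (1.0 - alpha) * Pin[y][x] + alpha * Pavg   # Pnr(x, y)
```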
Subsequently, the dynamic range of the image is enhanced as shown in a DRC block, such as that of 117, as in
Pout(x,y)=G(x,y)*Pnr(x,y),
where in the above equation and following discussion ‘*’ stands for multiplication.
As shown in
l(x,y)=max(Rin(x,y),Gin(x,y),Bin(x,y)).
Other methods, such as those that use the luminance (Y), as in YCbCr or YUV, or some other combination (e.g., Y=0.3R+0.6G+0.1B), could also be used.
In block 303, the log of l(x,y) is formed by means of a look-up table or by other computational means:
L(x,y)=Log(l(x,y)).
L(x,y) then serves as the measure of the image strength or amplitude that is input into block 305 where low pass filtering and non-linear combining are performed to produce an image, B(x,y), where details of this process in the basic embodiment are given with respect to
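As a small, hedged illustration of the strength measure just described (the function name and the guard against log(0) are assumptions; the low-pass filtering and non-linear combining of block 305 that produce B(x,y), and the derivation of the gain G(x,y) from it, are omitted from this sketch):

```python
import math

def image_strength(Rin, Gin, Bin):
    """Per-pixel image strength: l(x,y) = max(R, G, B) followed by L(x,y) = log(l).
    In hardware the log would typically be taken from a look-up table; the small
    offset here only guards against log(0)."""
    l = max(Rin, Gin, Bin)
    return math.log(l + 1e-6)   # L(x, y), the input to the low-pass/non-linear stage
```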
These various prior art methods tend to have a number of shortcomings when it comes to implementation in digital cameras, video, and other imaging systems. Previous implementations of digital cameras often reduce noise in an image and then enhance the dynamic range of the image by increasing the gain in areas of shadow or reducing the gain in regions of brightness. However, the dynamic range compensation enhances noise as well as details. Further, given that sensors with higher pixel densities tend to have noisier pixels, and given the trend to push ISO levels higher while still allowing DRC and maintaining image quality, techniques that address these problems will likely grow in importance.
Methods and corresponding apparatus are presented for a processing of image data that includes both dynamic range compensation and noise reduction. A dynamic range compensation process and a noise reduction process are performed, where the noise reduction is responsive to the dynamic range compensation process. In an exemplary embodiment, the dynamic range compensation process and a noise reduction process are performed concurrently and on a pixel-by-pixel basis, the noise reduction factor used on a given pixel from the image data being responsive to a gain factor for the given pixel determined by the dynamic range compensation process. In other embodiments, the dynamic range compensation operation is performed prior to the noise reduction operation.
In an exemplary embodiment, a noise reduction module receives the values of a given pixel and the pixels in the neighborhood of this pixel. It also receives a gain or other factor determined for the given pixel by, for example, a dynamic range compensation process. From the values of the pixels of the neighborhood, an average image strength for the neighborhood is formed. The output signal is then formed from a combination of the value of the given pixel and the average image strength of its neighborhood, where the combination is responsive to the gain factor of the given pixel.
Various aspects, advantages, features and embodiments of the present invention are included in the following description of exemplary examples thereof, which description should be taken in conjunction with the accompanying drawings. All patents, patent applications, articles, other publications, documents and things referenced herein are hereby incorporated herein by this reference in their entirety for all purposes. To the extent of any inconsistency or conflict in the definition or use of terms between any of the incorporated publications, documents or things and the present application, those of the present application shall prevail.
The methods and corresponding apparatus presented below use techniques that perform dynamic range compensation (DRC) and noise reduction (NR) together on a pixel-by-pixel basis, adjusting the noise reduction parameters in response to the dynamic range compensation decisions. By such a modification of image noise reduction parameters based on the dynamic range compensation gain, these techniques make it possible to perform DRC on noisy images, achieving an image with low and, importantly, uniform noise levels. This allows camera manufacturers, for example, to apply DRC to high-ISO images (images containing high noise levels due to the high gain amplification between the sensor output and the input to the digital camera's A/D converter) and still meet desired image quality standards.
These techniques are fundamentally different from those usually found in the prior art in that the dynamic range enhancement and noise reduction are performed interactively, with the noise reduction responsive to the dynamic range compensation process. In the main exemplary embodiment, the dynamic range enhancement and the noise reduction are performed concurrently. In other examples, the DRC process can be performed before the noise reduction. Furthermore, the presented technology uses the uniformity or “flatness” of the given region being processed to reduce or disable noise reduction so that detail and texture are not lost in the noise reduction process.
Although the following description describes a process that continuously modifies the noise reduction amount based on the pixel-by-pixel gain to be applied in the dynamic range correction process, the process may be more widely employed for gains or other factors from other processes. More generally, the DRC operation is not necessarily implemented by applying a gain on the pixel components: other DRC implementations can change the pixel value using other transformations. An image processing unit may also perform other corrections on a set of image data that introduce gain factors, such as light balance, color mapping, or lens shading, where each pixel is multiplied by a gain that depends on the pixel coordinates according to the lens geometry in order to compensate for vignetting, for example. These processes may similarly amplify the noise of the image. The present techniques can similarly be applied in these cases by having these other operations provide a corresponding compensation factor. Additionally, although the discussion is presented in the context of a process performed on a pixel-by-pixel basis, more generally it can be performed on sets of pixels, such as on a block-by-block basis for N-by-N blocks, where N is some integer that could be larger than one, in addition to the exemplary N=1 embodiment. For all of the embodiments presented here, it will be appreciated that the various modules may be implemented in hardware, software, firmware, or a combination of these, and that image data may originate from digital video, camera, or other imaging systems. Additionally, the process is not limited to the RGB format. Most of the following discussion is based upon an exemplary embodiment where the dynamic range compensation and the noise reduction are performed concurrently to provide a corresponding compensation factor on a pixel-by-pixel basis.
Dynamic Range Compensation and Noise Reduction in Conjunction
In
A number of implementations are possible for the Noise Reduction block 419, with one advantageous embodiment being that presented in U.S. patent application Ser. No. 11/754,202, entitled “Advanced Noise Reduction in Digital Cameras” by Dudi Vakrat, Noam Korem, and Victor Pinto, filed May 25, 2007.
More specifically, the effective gain, G(x,y), or other such factor is utilized in the calculation of the blending factor, α′, in element 401, where the prime (′) is used to distinguish this blending factor from that of
In order to understand how the blending factor, α′, is calculated, and how the noise reduction filter operates as a whole, refer to the pseudo-code example below. In this example, the manufacturer can determine the constants “A” and “T” as a calibration step. The constant “Factor” can be determined by the manufacturer, or may be user controllable.
“T” is a threshold that indicates the noise levels of the camera's sensor and changes according to the ISO sensitivity for differing exposure (shutter speed and lens aperture size combination) settings. For a given camera and a given ISO setting, the camera manufacturer can, for example, calculate T by capturing a color chart and then determining the maximum value of abs(Pin[x,y]−Pin[x+n,y+m]) over all the uniform (or “flat”) areas in the image (areas without edges or transitions).
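As a rough illustration of that calibration step (the function, its arguments, and the assumption that the flat chart regions have already been identified by hand are all hypothetical):

```python
def estimate_T(Pin, flat_regions):
    """Estimate the noise threshold T for a given camera and ISO setting as the
    maximum absolute difference between a pixel and its 3x3 neighbors, taken
    over all identified flat (edge-free) regions of a captured color chart."""
    T = 0
    for (x0, y0, x1, y1) in flat_regions:       # inclusive bounds of each flat patch
        for y in range(y0 + 1, y1):             # stay one pixel inside the patch
            for x in range(x0 + 1, x1):
                for m in (-1, 0, 1):
                    for n in (-1, 0, 1):
                        T = max(T, abs(Pin[y][x] - Pin[y + m][x + n]))
    return T
```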
“A” is a threshold that is used to detect whether the current region of the image is flat and uniform, or whether it contains edges and details that should be preserved. It can be arbitrarily selected to be approximately, say, 0.75. If the variable Flatness[x,y] in the example below is above this threshold, the current region is deemed to be sufficiently uniform and flat to allow noise reduction to occur; otherwise, noise reduction is disabled in order to preserve detail.
The parameter “Factor” is in the range of [0,1] and determines the uniform noise level, relative to the uniform noise level of the original image before NR and DRC, that will remain on the output image after Noise Reduction and Dynamic Range Compensation. If Factor=0, then the noise will be removed almost completely; however, textures and fine details that are below the noise level (indicated by the threshold T) will be eliminated as well. On the other hand, if Factor=1, then all textures and details will be preserved, but the noise will not be reduced at all. In other words, Factor controls the trade-off between preserving fine details and removing noise.
Noise Reduction Filter Example
This section presents an exemplary implementation of the noise reduction filter 419. The exemplary filter operates on an environment of N×N pixels around the current pixel, where N is an odd number. As noted above, this illustrative example is just one of many possible embodiments by which the effective gain from the dynamic range compensation section can be incorporated into the noise reduction process. More detail on filtering methods that can be incorporated within the present invention is presented in U.S. patent application Ser. No. 11/754,202, entitled “Advanced Noise Reduction in Digital Cameras” by Dudi Vakrat, Noam Korem, and Victor Pinto, filed May 25, 2007. As noted above, the gain or other factor could come from another module (such as lens shading, light balance, color mapping, chromaticity, shading, spatial frequency, gradient, chromaticity variations, or some combination of these) that concurrently produces a compensation factor on a pixel-by-pixel basis, as well as, or in addition to, that from a dynamic range compensation module. As also noted above, although the pixel-by-pixel implementation is discussed in detail, a block-by-block or other multi-pixel implementation can also be used.
According to this particular embodiment, the following steps are performed for each input image pixel Pin[x,y] at coordinates (x,y).
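A minimal sketch of these steps follows. The names Sum, Count, Flatness, Alpha, and Average correspond to the discussion in the next paragraph; the exact form of Alpha[x,y] when Flatness[x,y] exceeds A is an assumption, chosen so that (1−α′) equals Factor/G(x,y) when Flatness[x,y]=1:

```python
def noise_reduce_pixel(Pin, x, y, gain, N, T, A, factor):
    """Gain-responsive noise reduction for one pixel, as a sketch.  Pin is the
    image as a list of rows, gain is the DRC gain G(x,y) for this pixel, N is
    an odd window size, T is the noise threshold, A is the flatness threshold,
    and factor is the Factor parameter described below."""
    half = (N - 1) // 2
    total = 0.0                     # Sum[x,y]: sum of all pixels in the N x N window
    count = 0                       # Count[x,y]: neighbors within the noise threshold T
    for n in range(-half, half + 1):
        for m in range(-half, half + 1):
            neighbor = Pin[y + m][x + n]
            total += neighbor
            if abs(Pin[y][x] - neighbor) < T:
                count += 1
    average = total / (N * N)       # Average[x,y]
    flatness = count / (N * N)      # Flatness[x,y], between 0 and 1
    if flatness > A:
        # Uniform region: the more uniform the region and the higher the DRC gain,
        # the stronger the blending toward the local average (assumed form).
        alpha = flatness * (1.0 - factor / gain)
    else:
        # Region with edges or detail: disable noise reduction to preserve it.
        alpha = 0.0
    return alpha * average + (1.0 - alpha) * Pin[y][x]   # Pnr[x,y]
```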
Note that in this embodiment, when Flatness[x,y]=1, then (1−α′)=Factor/G(x,y). In this example, the variable Sum[x,y] is the total of the values of all the pixels in the calculation neighborhood. Thus when it is divided by N², it is the average of all the pixels in the region. The variable Count[x,y] is the total number of occurrences in the region where the difference between the current pixel and one of its neighbors is below the noise threshold, T. When divided by N², it becomes a variable between 0 and 1, called Flatness[x,y], that indicates the relative uniformity or flatness of the region. If this variable is below the threshold A, no noise reduction is performed, in order to preserve detail. If Flatness[x,y] is above the threshold A, it is used to calculate the blending variable α′, Alpha[x,y], such that the more uniform the region, the more blending is allowed. Finally, the noise-reduced pixel data is calculated by blending the current pixel with the average of its neighbors using Alpha[x,y], as indicated. Although the exemplary embodiment forms a linear combination of Pin and the average strength for the neighborhood, other gain-dependent functions or combinations may also be employed.
In regions containing edges and fine detail, Flatness[x,y] will tend to 0 and therefore the exemplary filter effectively performs:
Pnr[x,y]=Pin[x,y]
This means that the filter will avoid averaging the current pixel with the neighboring pixels, thereby preserving the edges and detail.
In areas of great uniformity, Flatness[x,y] will tend to 1 and therefore the filter actually performs:
Pnr[x,y]=(1−Factor/Gain[x,y])*Average[x,y]+Factor/Gain[x,y]*Pin[x,y]
This means that the filter will perform a weighted average of the current pixel with the neighboring pixels in order to remove noise. The higher the gain, the higher the weight of the average relative to the weight of the current pixel, and noise filtering becomes more aggressive.
Assuming that N is large enough that the noise on Average[x,y] is negligible, and assuming the noise standard deviation on Pin[x,y] is σ, then the noise standard deviation of Pnr[x,y], σnr, will be:
σnr=Factor/Gain[x,y]*σ
The noise standard deviation on Pout[x,y], σPout, will be:
σPout=Gain[x,y]*Factor/Gain[x,y]*σ=Factor*σ
This means that the noise levels on the output image will be uniform, and controllable by changing the parameter Factor.
The camera manufacturer can choose a different Factor per camera and per ISO setting, according to the desired performance level. Alternatively, the camera manufacturer can let the user control Factor (through a control on the camera, for example) so that the user will be able to control the aggressiveness of noise reduction per camera mode or even per image. The following is one example of a calibration procedure that the camera manufacturer or the user can use to find the appropriate value of Factor to match the desired performance level:
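A minimal sketch of one such procedure, based directly on the relation σPout=Factor*σ derived above (the specific steps and the helper below are an assumption, not necessarily the procedure referred to in the text): estimate the input noise level σ from a flat patch of a test capture at the given ISO setting, choose an acceptable output noise level, and set Factor to their ratio.

```python
import statistics

def calibrate_factor(flat_patch_pixels, target_noise_level):
    """Illustrative Factor calibration using sigma_out = Factor * sigma: estimate
    the input noise level from a flat patch of a test capture and choose Factor
    so that the output noise matches the desired level."""
    sigma = statistics.pstdev(flat_patch_pixels)   # noise level of the original image
    if sigma == 0:
        return 1.0                                 # no measurable noise: keep all detail
    return max(0.0, min(1.0, target_noise_level / sigma))  # Factor stays in [0, 1]
```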
Consequently, the techniques described here continuously modify the noise reduction amount based on the pixel-by-pixel gain to be applied in the dynamic range correction process, using a technology in which dynamic range enhancement and noise reduction are performed simultaneously and interactively. Furthermore, they use the uniformity (or “flatness”) of the given region being processed to reduce or disable noise reduction so that detail and texture are not lost in the noise reduction process.
More generally, the noise reduction process and the dynamic range compensation need not be performed concurrently. As can be seen from the above, it is only needed that the result of the DRC gain calculation be made available to the noise reduction process. Consequently, the gain calculation can be performed prior to the noise reduction operation. Referring to
Although the exemplary embodiment implements the DRC operation by applying a gain on the pixel components, this may also be generalized. Other implementations of dynamic range compensation can change the pixel value using other transformations. For example, a look-up table can hold the RGB values of the output pixel according to the luma (Y) value of the input pixel. There can also be other image processing operations that apply a “local gain” to the pixel value, where the gain is not the same for all pixels in the image, as in local white balance or lens shading, where each pixel is multiplied by a gain dependent on the pixel coordinates according to the lens geometry.
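As a simple, hypothetical illustration of such a table-driven variant (one common arrangement, assumed here, indexes the table by the input pixel's luma and applies the resulting scale to the RGB components; the table contents are not specified in the text):

```python
def drc_via_lut(Rin, Gin, Bin, lut):
    """Table-driven DRC variant: the correction is looked up from the input
    pixel's luma rather than computed.  lut maps an 8-bit luma value to a scale
    applied to the RGB components; the luma weights are the combination
    mentioned earlier (Y = 0.3R + 0.6G + 0.1B)."""
    luma = int(0.3 * Rin + 0.6 * Gin + 0.1 * Bin)   # input luma, assumed 8-bit range
    scale = lut[min(max(luma, 0), len(lut) - 1)]    # clamp the index into the table
    return scale * Rin, scale * Gin, scale * Bin    # output RGB
```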
Although the various aspects of the present invention have been described with respect to exemplary embodiments thereof, it will be understood that the present invention is entitled to protection within the full scope of the appended claims.
This application is a continuation-in-part of application Ser. No. 11/754,170, filed on May 25, 2007, which application is incorporated herein in its entirety by this reference.
| Number | Name | Date | Kind |
|---|---|---|---|
| 5012333 | Lee et al. | Apr 1991 | A |
| 5442462 | Guissin | Aug 1995 | A |
| 5461655 | Vuylsteke et al. | Oct 1995 | A |
| 5923775 | Snyder et al. | Jul 1999 | A |
| 5978518 | Oliyide et al. | Nov 1999 | A |
| 6580835 | Gallagher et al. | Jun 2003 | B1 |
| 6625325 | Gindele et al. | Sep 2003 | B2 |
| 6681054 | Gindele | Jan 2004 | B1 |
| 6718068 | Gindele et al. | Apr 2004 | B1 |
| 6738494 | Savakis et al. | May 2004 | B1 |
| 6757442 | Avinash | Jun 2004 | B1 |
| 6804393 | Gindele et al. | Oct 2004 | B2 |
| 6807300 | Gindele et al. | Oct 2004 | B1 |
| 6813389 | Gindele et al. | Nov 2004 | B1 |
| 6856704 | Gallagher et al. | Feb 2005 | B1 |
| 6931160 | Gindele et al. | Aug 2005 | B2 |
| 6937772 | Gindele | Aug 2005 | B2 |
| 6937775 | Gindele et al. | Aug 2005 | B2 |
| 6973218 | Alderson et al. | Dec 2005 | B2 |
| 7054501 | Gindele et al. | May 2006 | B1 |
| 7065255 | Chen et al. | Jun 2006 | B2 |
| 7092579 | Serrano et al. | Aug 2006 | B2 |
| 7116838 | Gindele et al. | Oct 2006 | B2 |
| 7324701 | Nakami | Jan 2008 | B2 |
| 7596280 | Bilbrey et al. | Sep 2009 | B2 |
| 7876974 | Brajovic | Jan 2011 | B2 |
| 20020028025 | Hong | Mar 2002 | A1 |
| 20050089239 | Brajovic | Apr 2005 | A1 |
| 20050129330 | Shyshkin | Jun 2005 | A1 |
| 20060220935 | Hughes et al. | Oct 2006 | A1 |
| 20080292202 | Vakrat | Nov 2008 | A1 |
| Number | Date | Country |
|---|---|---|
| 20080292209 A1 | Nov 2008 | US |
| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | 11754170 | May 2007 | US |
| Child | 12121183 | | US |