The embodiments described herein relate generally to the field of solid state imager devices, and more particularly to methods and apparatuses for noise reduction in a solid state imager device.
Solid state imagers, including charge coupled devices (CCDs), CMOS imagers, and others, have been used in photo imaging applications. A solid state imager circuit includes a focal plane array of pixels, each of the pixels including a photosensor, which may be a photogate, a photoconductor, or a photodiode having a doped region for accumulating photo-generated charge.
One of the most challenging problems for solid state imagers is noise reduction, especially for imagers with a small pixel size. The effect of noise increases as pixel sizes continue to decrease and may severely degrade image quality; in particular, noise reduces the dynamic range of smaller pixels. One way of addressing this problem is by improving fabrication processes; the costs associated with such improvements, however, are high. Accordingly, engineers often focus on other methods of noise reduction, such as applying noise filters during image processing. Many sophisticated noise reduction algorithms reduce noise in the picture without blurring edges; however, they require substantial computational resources and cannot easily be implemented in a system-on-a-chip application. Most simple noise reduction algorithms that can be implemented in system-on-a-chip applications blur the edges of the image.
Two known methods that may be used for image denoising are now briefly discussed. The first method uses local smoothing filters, which apply a local low-pass filter to reduce the noise component of the image. Typical examples of such filters include averaging, median, and Gaussian filters. One problem associated with local smoothing filters is that they do not distinguish between high frequency components that are part of the image and those created by noise. As a result, these filters not only remove noise but also blur the edges of the image.
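By way of background illustration only, the following is a minimal sketch of such a local smoothing filter: a 3×3 averaging (box) filter applied to a single-channel image. The function name, the 8-bit pixel format, and the border handling are illustrative assumptions and not part of the described embodiments.

#include <stdint.h>

/* Illustrative 3x3 averaging (box) filter for a single-channel image.
 * Border pixels are copied unchanged for simplicity. */
void box_filter_3x3(const uint8_t *in, uint8_t *out, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                out[y * width + x] = in[y * width + x];   /* keep the border as-is */
                continue;
            }
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += in[(y + dy) * width + (x + dx)];
            /* Unconditional average of all nine samples: reduces noise but
             * also blurs edges, which is the drawback noted above. */
            out[y * width + x] = (uint8_t)(sum / 9);
        }
    }
}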
A second group of denoising methods works in the spatial frequency domain. These methods typically first convert the image data into a frequency space (forward transform), then filter the transformed image, and finally convert the filtered data back into image space (inverse transform). Typical examples of such filters include DFT filters and wavelet transform filters. The utilization of these filters for image denoising, however, is impeded by the large volume of calculations required to process the image data. Additionally, block artifacts and oscillations may result from the use of these filters to reduce noise. Further, these filters are best implemented in a YUV color space (Y is the luminance component and U and V are the chrominance components), which typically requires an additional color space conversion when processing raw image data. Accordingly, there is a need and desire for an efficient image denoising method and apparatus that does not significantly blur the edges of the image.
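For further background, the frequency-domain approach described above can be sketched in one dimension: forward transform, suppression of high-frequency bins, and inverse transform. The naive O(N²) DFT, the signal length, and the cutoff used below are illustrative assumptions; a practical implementation would use an FFT and operate on two-dimensional data, which is precisely the computational burden noted above.

#include <math.h>

#define N 16      /* illustrative signal length */
#define CUTOFF 4  /* illustrative cutoff: keep bins 0..CUTOFF-1 and their mirrors */

/* Frequency-domain denoising in one dimension:
 * forward DFT -> zero high-frequency bins -> inverse DFT. */
void dft_lowpass(const double in[N], double out[N])
{
    const double PI = 3.14159265358979323846;
    double re[N], im[N];

    /* Forward DFT (naive O(N^2)). */
    for (int k = 0; k < N; k++) {
        re[k] = 0.0;
        im[k] = 0.0;
        for (int n = 0; n < N; n++) {
            double a = -2.0 * PI * k * n / N;
            re[k] += in[n] * cos(a);
            im[k] += in[n] * sin(a);
        }
    }

    /* Zero the bins above the cutoff (and their conjugate mirrors). */
    for (int k = CUTOFF; k <= N - CUTOFF; k++) {
        re[k] = 0.0;
        im[k] = 0.0;
    }

    /* Inverse DFT (real part only, since the input is real). */
    for (int n = 0; n < N; n++) {
        double acc = 0.0;
        for (int k = 0; k < N; k++) {
            double a = 2.0 * PI * k * n / N;
            acc += re[k] * cos(a) - im[k] * sin(a);
        }
        out[n] = acc / N;
    }
}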
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof and show by way of illustration specific embodiments that may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to make and use them, and it is to be understood that other embodiments may be utilized, and that structural, logical, procedural, and electrical changes may be made to the specific embodiments disclosed. The progression of processing steps described is an example of the embodiments; however, the sequence of steps is not limited to that set forth herein and may be changed as is known in the art, with the exception of steps necessarily occurring in a certain order.
The term “pixel,” as used herein, refers to a photo-element unit cell containing a photosensor device and associated structures for converting photons to an electrical signal. For purposes of illustration, a small representative three-color pixel array is illustrated in the figures and description herein. However, the embodiments may be applied to monochromatic imagers as well as to imagers for sensing fewer than three or more than three color components in an array. Accordingly, the following detailed description is not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
In the illustrated examples, it is assumed that the pixel array 100 is associated with a Bayer pattern color filter array 82, in which red, green, and blue pixels are arranged such that like-colored pixels repeat every other row and every other column.
To denoise an identified target pixel 32a, 32b, the embodiments utilize the signal values of the nearest neighboring pixels of the identified target pixel 32a, 32b. The identified target pixel 32a, 32b is the pixel currently being processed. The neighboring pixels are collectively referred to herein as a correction kernel (e.g., correction kernels 101a, 101b discussed below).
As described in detail below, the embodiments described herein may be used to denoise images while preserving edges. Rather than outputting the actual signal value of the target pixel, the target pixel's signal value (“value”) is averaged with the signal values of pixels in the correction kernel. This averaging minimizes the effect noise has on an individual pixel. For example, in a flat-field image, an array of ideal pixels would output the same signal value for every pixel; because of noise, however, the actual outputs differ from pixel to pixel. Because the noise contributions of the individual pixels are largely uncorrelated, averaging the signal values of the surrounding pixels having the same color as the target pixel reduces the effect of noise on the target pixel.
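This effect can be seen with a few lines of code. The following is a minimal sketch, not part of the described embodiments, that simulates nine like-colored readings of a flat field and compares the noisy target reading with the nine-pixel average; the noise model, seed, and values are arbitrary assumptions used only for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Flat-field illustration: nine like-colored pixels should all read the same
 * value; noise makes them differ.  Averaging the readings pulls the result
 * back toward the true flat-field level. */
int main(void)
{
    const int true_level = 128;                 /* ideal flat-field value */
    int readings[9];
    int sum = 0;

    srand(42);                                  /* fixed seed: repeatable demo */
    for (int i = 0; i < 9; i++) {
        readings[i] = true_level + (rand() % 11) - 5;   /* +/- 5 counts of noise */
        sum += readings[i];
    }

    printf("noisy target pixel reading: %d\n", readings[4]);
    printf("nine-pixel average        : %d\n", sum / 9);  /* closer to 128 on average */
    return 0;
}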
In order to preserve edges, it is desirable to set a threshold such that averaging is performed only if the difference between the target pixel signal value and the signal value of a pixel in the correction kernel is below that threshold. Only noise having an amplitude of dispersion (the difference between the average maximum and minimum values) lower than a noise amplitude threshold (TH) will be averaged and reduced. The threshold should therefore be set such that noise is reduced while pixels along edges are subjected to less (or no) averaging, thereby preserving edges. An embodiment described herein sets a noise amplitude threshold TH that may be a function of the analog and digital gains applied to amplify the original signal. It should be appreciated that the threshold TH can also be varied based on, for example, pixel color. One embodiment described herein processes a central target pixel by averaging it with all of its like-colored neighbors that produce a signal difference less than the set threshold. Another embodiment processes a central target pixel by averaging it with a selected subset of its like-colored neighbors that produce a signal difference less than the set threshold. Further, the exemplary noise filter could be applied to each color channel separately in a Bayer, red/green/blue (RGB), cyan/magenta/yellow/key (CMYK), luminance/chrominance (YUV), or other color space.
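Because the description states only that TH may be a function of the applied analog and digital gains and may vary with pixel color, the following sketch is one possible (assumed) formulation: a per-color base threshold scaled linearly by both gains, using fixed-point arithmetic suitable for a system-on-a-chip. The base values, the 4.4 fixed-point format, and the linear scaling are assumptions made for the sketch.

#include <stdint.h>

enum color { RED, GREEN, BLUE };

/* Assumed per-color base thresholds, in the same units as the pixel values. */
static const uint16_t base_th[3] = { 8, 6, 8 };

/* Gains are passed in 4.4 fixed point (actual gain multiplied by 16) as a
 * hardware-friendly representation; the final shift by 8 removes both
 * fixed-point factors, so TH scales linearly with each gain. */
uint16_t noise_threshold(enum color c, uint16_t analog_gain_x16, uint16_t digital_gain_x16)
{
    uint32_t th = (uint32_t)base_th[c] * analog_gain_x16 * digital_gain_x16;
    return (uint16_t)(th >> 8);
}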
With reference to the target pixel 32a and the correction kernel 101a introduced above, a first denoising method 200 is now described.
It should be understood that each pixel has a value that represents the amount of light received at the pixel. The value is a digitized representation of the analog signal read out from the pixel. These values are represented in the following description as P(pixel), where “P” is the value and “(pixel)” is the pixel number used in the correction kernels described herein (e.g., pixels 10, 12, 14, 30, 32a, 34, 50, 52, 54 of kernel 101a).
Initially, at step 201, the target pixel 32a being processed is identified. Next, at step 202, the kernel 101a associated with the target pixel 32a is selected/identified. After the associated kernel 101a is selected, at step 203, the difference between the value P32a of the central (processed) pixel 32a and the value P(pixel) of each neighboring pixel 10, 12, 14, 30, 34, 50, 52, 54 in kernel 101a is compared with a threshold value TH. The threshold value TH may be preselected, for example, using noise levels from current gain settings, or using other appropriate methods. In the illustrated example, at step 203, the neighboring pixels that have a difference in value P(pixel) less than or equal to the threshold value TH are selected. Alternatively, at step 203, a subset of the neighboring pixels that have a difference in value P(pixel) less than or equal to the threshold value TH is selected. By way of example only, the compared values would be red values if the target pixel 32a is a red pixel.
Next, at step 204, the values P(pixel) of the kernel pixels located around the target pixel 32a that were selected in step 203 are added to the value of the target pixel 32a, and an average value A(pixel) is calculated. For example, if all eight neighboring pixels were selected in step 203, the average value for target pixel 32a is calculated as A32=(P10+P12+P14+P30+P32a+P34+P50+P52+P54)/9. If fewer than all eight neighboring pixels are selected, the sum includes only the selected values and the target pixel value, and the divisor equals the number of values summed. At step 205, the calculated value A(pixel), which is A32 in this example, replaces the original target pixel value P32a.
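The following is a minimal sketch of steps 201 through 205 for a single target pixel of a Bayer image stored as one plane. Because like-colored Bayer pixels repeat every other row and column, the correction kernel is taken at offsets of ±2 pixels, matching the kernel numbering used above (pixels 10, 12, 14, 30, 34, 50, 52, 54 around target 32). The function name, the data types, and the omission of border handling are assumptions of the sketch, not part of the described embodiments.

#include <stdint.h>
#include <stdlib.h>

/* Sketch of method 200 (steps 201-205) for one target pixel of a Bayer image
 * stored as a single plane.  Like-colored neighbors lie two rows and two
 * columns away from the target.  Border handling is omitted. */
uint16_t denoise_target(const uint16_t *img, int width, int x, int y, uint16_t th)
{
    const uint16_t p = img[y * width + x];            /* step 201: target value */
    uint32_t sum = p;                                 /* target is always included */
    uint32_t count = 1;

    /* Steps 202-203: walk the like-color correction kernel and select the
     * neighbors whose difference from the target is within the threshold TH. */
    for (int dy = -2; dy <= 2; dy += 2) {
        for (int dx = -2; dx <= 2; dx += 2) {
            if (dx == 0 && dy == 0)
                continue;                             /* skip the target itself */
            uint16_t n = img[(y + dy) * width + (x + dx)];
            if (abs((int)n - (int)p) <= th) {
                sum += n;                             /* step 204: accumulate */
                count++;
            }
        }
    }

    /* Steps 204-205: the average of the selected values replaces the target. */
    return (uint16_t)(sum / count);
}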
The methods described herein may be carried out on each pixel signal as it is processed. As pixel values are denoised, the values of previously denoised pixels may be used to denoise subsequent pixel values. When the values of previously denoised pixels are used in this way, the method and apparatus operate in a partially recursive manner (pixels are denoised using values from previously denoised pixels). The embodiments are not limited to this implementation, however, and may be implemented in a fully recursive manner (pixels are denoised using values from other denoised pixels) or a non-recursive manner (no denoised pixel values are used to denoise subsequent pixels).
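For illustration only, the difference between the partially recursive and non-recursive implementations amounts to whether the corrected values are written back into the buffer being read or into a separate output buffer. The sketch below assumes the hypothetical denoise_target() function shown earlier and leaves a two-pixel border untouched.

#include <stdint.h>
#include <string.h>

/* denoise_target() refers to the hypothetical sketch shown earlier. */
uint16_t denoise_target(const uint16_t *img, int width, int x, int y, uint16_t th);

/* Partially recursive: results are written back into the buffer being read,
 * so later pixels see already-denoised neighbors. */
void denoise_in_place(uint16_t *img, int width, int height, uint16_t th)
{
    for (int y = 2; y < height - 2; y++)
        for (int x = 2; x < width - 2; x++)
            img[y * width + x] = denoise_target(img, width, x, y, th);
}

/* Non-recursive: results go to a separate buffer, so every correction is
 * based only on the original (never-denoised) pixel values. */
void denoise_buffered(const uint16_t *img, uint16_t *out, int width, int height, uint16_t th)
{
    memcpy(out, img, (size_t)width * height * sizeof(uint16_t));   /* keep borders */
    for (int y = 2; y < height - 2; y++)
        for (int x = 2; x < width - 2; x++)
            out[y * width + x] = denoise_target(img, width, x, y, th);
}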
The method 200 described above may also be implemented and carried out, as discussed above, on target pixel 32b and its associated image correction kernel 101b.
The methods described above provide good denoising. It may be desirable, however, to limit the number of pixels used in averaging the target pixel signal value with the correction kernel signal values, in order to decrease implementation time and/or die size. A second method 2000, described below, limits the averaging in this manner.
Initially, at step 2010, a target pixel p having a signal value psig is selected/identified, for example, pixel 32a. Running values Pixelsum and Pixelcount are then initialized; for example, Pixelsum may be initialized to psig and Pixelcount to one so that the target pixel's own value is included in the average. The neighboring pixels of the correction kernel are organized into groups; a group g is selected (step 2050), and a pixel n having a signal value nsig is selected from the group g (step 2060).
In step 2070, a determination is made as to whether the absolute value of the difference between nsig and psig is less than a threshold TH. The threshold value TH may be preselected, for example, using noise levels from current gain settings, or using other appropriate methods. Additionally, the threshold value TH can be preselected based on the color of the target pixel p. If the determined difference is not less than the threshold TH (step 2070), nsig is not included in the averaging and the method 2000 then determines whether all of the pixels in group g have been assessed (step 2130). If, however, the determined difference is less than the threshold TH (step 2070), a new value for Pixelsum is determined by adding nsig to Pixelsum (step 2080) and a new value for Pixelcount is determined by incrementing Pixelcount (step 2090). The method 2000 then compares the value of Pixelcount to a set of at least one predetermined number (step 2100). For example, it may be desirable to compare the value of Pixelcount to a set of values comprised of integer powers of two. As described below in more detail, division by Pixelcount is required in step 2150, and when implementing division in hardware, division by a power of two can be accomplished with register shifts, thereby making the operation faster and implementable in a smaller die area. If Pixelcount is contained in the set of at least one predetermined number, for example, if Pixelcount is 4 and the set includes 1, 2, 4, and 8, the current values of Pixelcount and Pixelsum are stored for use in the averaging performed in step 2150.
The method 2000 then determines whether all pixels in group g have been assessed (step 2130). If not, the method returns to step 2060 and selects a next pixel n. If all of the pixels in group g have been assessed (step 2130), the method 2000 determines whether all groups g have been assessed (step 2140). If all groups g have not been assessed, the method 2000 continues at step 2050 and selects a next group g. If all groups g have been assessed, then psig is replaced by the stored Pixelsum divided by the stored Pixelcount (step 2150), producing the denoised value for the target pixel p.
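The following is a minimal sketch of the method 2000 for one target pixel. Because the grouping of the kernel pixels and the exact handling of the stored Pixelsum and Pixelcount are not spelled out above, the sketch assumes one group per kernel row and assumes that the values recorded whenever Pixelcount reaches a power of two (1, 2, 4, 8) are the ones used in the final shift-based division of step 2150; the function name and data types are likewise assumptions.

#include <stdint.h>
#include <stdlib.h>

/* One like-colored neighbor: row and column offsets from the target pixel. */
struct offset { int dy, dx; };

/* Kernel pixels organized into groups, here one group per kernel row,
 * i.e. {10, 12, 14}, {30, 34}, {50, 52, 54} in the numbering used above.
 * The (0,0) entry is a placeholder and is skipped below. */
static const struct offset group_rows[3][3] = {
    { {-2, -2}, {-2, 0}, {-2, 2} },
    { {  0, -2}, {  0, 2}, {  0, 0} },
    { {  2, -2}, {  2, 0}, {  2, 2} },
};

uint16_t denoise_target_2000(const uint16_t *img, int width, int x, int y, uint16_t th)
{
    const uint16_t psig = img[y * width + x];      /* step 2010: target value */
    uint32_t pixelsum = psig;                      /* running sum, target included */
    uint32_t pixelcount = 1;
    uint32_t snap_sum = psig;                      /* last sum taken at a power of two */
    uint32_t snap_shift = 0;                       /* log2 of the count at that point */

    for (int g = 0; g < 3; g++) {                  /* step 2050: select a group g */
        for (int i = 0; i < 3; i++) {              /* step 2060: select a pixel n */
            const struct offset o = group_rows[g][i];
            if (o.dy == 0 && o.dx == 0)
                continue;
            uint16_t nsig = img[(y + o.dy) * width + (x + o.dx)];
            if (abs((int)nsig - (int)psig) >= th)  /* step 2070: threshold test */
                continue;
            pixelsum += nsig;                      /* step 2080 */
            pixelcount++;                          /* step 2090 */
            /* Steps 2100 onward: when the count reaches a power of two
             * (1, 2, 4, 8), record the sum and the shift needed to divide. */
            if ((pixelcount & (pixelcount - 1)) == 0) {
                snap_sum = pixelsum;
                snap_shift++;
            }
        }
    }
    /* Step 2150: a register shift replaces the division by Pixelcount. */
    return (uint16_t)(snap_sum >> snap_shift);
}

Restricting the divisor to a power of two is what allows the final average to be produced by a shifter rather than a hardware divider, at the cost of ignoring any passing neighbors accumulated after the last power-of-two count.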
The method 2000 described above may also be implemented and carried out, as discussed above, on target pixel 32b.
The above described embodiments may not provide sufficient denoising to remove spurious noise (i.e., noise greater than 6 standard deviations). Accordingly, embodiments of the invention are better utilized when implemented after the image data has been processed by a filter which will remove spurious noise.
In addition to the above-described embodiments, a program embodying the methods and operating a processor may be stored on a carrier medium, which may include RAM, a floppy disk, a data transmission medium, a compact disk, etc., and executed by an associated processor. For example, the embodiments may be implemented as a plug-in for existing software applications or may be used on their own. The embodiments are not limited to the carrier mediums specified herein and may be implemented using any carrier medium known in the art or hereinafter developed.
A sample and hold circuit 261 associated with the column driver 260 reads a pixel reset signal Vrst and a pixel image signal Vsig for selected pixels of the array 240. A differential signal (Vrst−Vsig) is produced by differential amplifier 262 for each pixel and is digitized by analog-to-digital converter 275 (ADC). The analog-to-digital converter 275 supplies the digitized pixel signals to an image processor circuit 280, which forms and may output a digital image. The method 200 and the method 2000 described above may be performed by the image processor circuit 280 as part of this processing.
System 1100, for example a camera system, may comprise a central processing unit (CPU) 1102, such as a microprocessor, that communicates with one or more input/output (I/O) devices 1106 over a bus 1104. Imager 300 also communicates with the CPU 1102 over the bus 1104. The processor-based system 1100 also includes random access memory (RAM) 1110, and can include removable memory 1115, such as flash memory, which also communicates with the CPU 1102 over the bus 1104. The imager 300 may be combined with a processor, such as a CPU, digital signal processor, or microprocessor, with or without memory storage, on a single integrated circuit or on a different chip than the processor. As described above, raw image data from the pixel array 240 may be denoised in accordance with the methods described herein before being output as part of a digital image.
While the embodiments have been described in detail in connection with preferred embodiments known at the time, it should be readily understood that the claimed invention is not limited to the disclosed embodiments. Rather, the embodiments can be modified to incorporate any number of variations, alterations, substitutions, or equivalent arrangements not heretofore described. For example, the methods can be used with pixels in other patterns than the described Bayer pattern, and the correction kernels would be adjusted accordingly. While the embodiments are described in connection with a CMOS imager, they can be practiced with other types of imagers. Thus, the claimed invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims.
This application is a continuation-in-part of U.S. patent application Ser. No. 11/295,445, filed on Dec. 7, 2005, the subject matter of which is incorporated in its entirety by reference herein.
Related application data: Parent, U.S. application Ser. No. 11/295,445, filed December 2005 (US); Child, U.S. application Ser. No. 11/898,909, filed September 2007 (US).