Embodiments of the invention are directed to methods and apparatuses for noise reduction in digital images.
Digital images are now widely available from various types of imager devices that may use CCD, CMOS or other types of pixel arrays and associated readout circuits. Most digital images are constructed from the analog signals output by the pixel array, typically, though not exclusively, as Bayer pattern analog signals. The analog pixel signals, each of which represents one color, e.g., red, green or blue, are converted to digital signals that are processed to produce demosaiced pixel signals forming a digital image, which may be stored, transmitted, and/or further processed.
Noise is an image distorting feature that may be present in a stored, transmitted or processed digital image and may come from different sources. For example, there may be light noise produced by the pixel array and associated readout circuitry; such noise is intensified by continued decreases in pixel size and under low light conditions. In addition, quantization errors may occur in the analog-to-digital converter that digitizes the signals. Noise may also be introduced by a recording medium that stores the digital image or by a transmission medium.
As a consequence, digital image denoising has become an important part of digital image processing. In many instances, digital images are processed for noise content in the later stages of digital image processing after a demosaicing operation on the original pixel array signals. In such cases, noise present in the initial digitized image may be further intensified by early stage digital processing and as a result may become more difficult to remove, requiring a higher degree of noise processing, which may excessively distort or blur the digital image. In addition, performing a denoising operation after demosaicing requires a large line buffer memory, which increases the size of a chip containing an image processor.
What is desired, then, is a method and apparatus for denoising prior to or during demosaicing.
Embodiments described herein process the digitized pixel signals at an early stage of the processing chain, as they move from the pixel array to an image processor, to remove noise, which reduces the amount of noise present in the constructed digital image.
In first method and apparatus embodiments, noise reduction occurs before demosaicing, which, as known, constructs the digital image from the digitized Bayer (or other color) pattern pixel signals received from the pixel array. In other method and apparatus embodiments, noise reduction occurs as part of the demosaicing process.
Referring again to step 104, if not all of the differences are greater than threshold σ1, the next step is to count those differences that are less than threshold σ0 (step 106). Step 106 is performed because only those neighboring pixels that are close in value to the center pixel (i.e., difference < σ0) are used to determine a noise reduced value for the center pixel. Where a large difference exists between a surrounding pixel and the center pixel, an edge may exist, and thus, that surrounding pixel should not be used for noise reduction.
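The thresholding logic of steps 104-106 may be sketched as follows. The function name, and the representation of the surrounding pixels as a flat list of values, are illustrative assumptions; only the comparisons against σ1 and σ0 come from the description above.

```python
def classify_neighbors(center, neighbors, sigma0, sigma1):
    """Return (is_defective, close_pairs) for one center pixel.

    is_defective is True when every surrounding pixel differs from the
    center by more than sigma1 (steps 104/105, suggesting a defective
    center pixel); close_pairs holds the (difference, value) pairs whose
    difference is less than sigma0 (step 106), the only neighbors later
    used to compute the noise reduced value.
    """
    diffs = [abs(v - center) for v in neighbors]
    is_defective = all(d > sigma1 for d in diffs)                     # step 104
    close_pairs = [(d, v) for d, v in zip(diffs, neighbors)
                   if d < sigma0]                                      # step 106
    return is_defective, close_pairs
```

In this sketch an edge pixel (e.g., a neighbor differing by 150 gray levels) is simply excluded from `close_pairs`, which is how the description avoids blurring across edges.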
An optional step that sorts the differences that are less than threshold σ0 in ascending order is performed (step 107). By sorting the differences, those pixels closest in value to the center pixel (i.e., smallest difference) are given a greater weight in a later step.
At step 108, the pixel values are weighted according to the kth row in a weight matrix, where k is the number of differences counted (step 106) to be less than threshold σ0. To weight the pixel values, each surrounding pixel value is multiplied by an entry in the kth row of the weight matrix to create a weighted value for each surrounding pixel. The weight matrix has n+2 rows and n+1 columns, where n is the number of surrounding pixels being used to process the center pixel. The entries in each row sum to n and are in descending order. Each successive row of the weight matrix has one more entry equal to zero than the row before it, with the exception of the last row. The first entry in each row is the weight for the center pixel, which is used to determine the noise reduced value of the center pixel. The last row in the weight matrix has a first entry equal to zero, the next four entries each equal to n/4, and the remaining entries equal to zero. When a defective pixel is found at step 105, the last row of the weight matrix is used to average the values of the four immediately surrounding pixels.
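The structural constraints stated above can be checked mechanically. The sketch below validates an illustrative matrix for n = 4 surrounding pixels; the numeric weights in `W_EXAMPLE` are assumptions chosen only to satisfy those constraints, since the description fixes the structure of the matrix, not its values.

```python
def check_weight_matrix(W, n):
    """Verify the stated properties of the step-108 weight matrix."""
    assert len(W) == n + 2 and all(len(row) == n + 1 for row in W)
    for row in W:
        assert abs(sum(row) - n) < 1e-9              # each row sums to n
    for row in W[:-1]:                               # descending entries
        assert all(a >= b for a, b in zip(row, row[1:]))
    for prev, cur in zip(W[:-2], W[1:-1]):
        assert cur.count(0) == prev.count(0) + 1     # one more zero per row
    last = W[-1]                                     # defective-pixel row:
    assert last[0] == 0                              # zero center weight,
    assert last[1:5] == [n / 4] * 4                  # four neighbors at n/4,
    assert all(v == 0 for v in last[5:])             # rest zero
    return True

# Illustrative 6x5 matrix for n = 4 (first column = center-pixel weight).
W_EXAMPLE = [
    [1.2, 1.0, 0.8, 0.6, 0.4],
    [1.4, 1.2, 0.8, 0.6, 0.0],
    [1.6, 1.4, 1.0, 0.0, 0.0],
    [2.2, 1.8, 0.0, 0.0, 0.0],
    [4.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 1.0, 1.0],   # last row: average of four neighbors
]
```

Because every row sums to n, dividing the sum of the weighted values by n preserves overall brightness regardless of which row is selected.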
At step 110, it is determined whether all pixels in the row have been processed. If all pixels in the row have not been processed, the process continues at step 102 to pick another pixel to be the center pixel. If all pixels in the row have been processed, the next step is to input the next line of values (step 111) and repeat the process by returning to step 100.
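The overall scan of steps 100-111 amounts to a line-by-line, pixel-by-pixel traversal. A minimal skeleton follows, in which `read_line` and `process_center` are hypothetical stand-ins for the line-buffer input and the per-pixel steps 102-109; neither name appears in the description.

```python
def denoise_stream(read_line, process_center):
    """Process lines of pixel values until read_line returns None."""
    out_lines = []
    while True:
        line = read_line()                   # steps 100/111: input a line
        if line is None:
            break
        out = []
        for x in range(len(line)):           # step 110: until row is done
            out.append(process_center(line, x))  # steps 102-109 per pixel
        out_lines.append(out)
    return out_lines
```

A real implementation would hold several lines at once so each center pixel has its surrounding neighbors available; the skeleton only shows the control flow.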
One example of a demosaicing algorithm that may be used in steps 203, 205, and 207 is described in U.S. patent application Ser. No. 11/873,123, entitled Method and Apparatus for Anisotropic Demosaicing of Image Data. While steps 203-208 show green processed first, blue processed next, and red processed last, the colors can be processed in any order. In addition, the number of surrounding pixels n may be different at each step 203, 205, 207.
At step 110, it is determined whether all pixels in the row have been processed. If all pixels in the row have not been processed, the process continues at step 102 to pick another pixel to be the center pixel. If all pixels in the row have been processed, the next step is to input the next line of values (step 111) and repeat the process by returning to step 100.
Step 306 calculates the difference ΔG between the center pixel's green value after demosaicing at step 203, G203, and the center pixel's green value after noise reduction at step 305, G305, by using the following:

ΔG = G305 − G203
The next step estimates blue and red values, Best and Rest, for the center pixel using a demosaicing algorithm (step 307). One example of a demosaicing algorithm that may be used is described in U.S. patent application Ser. No. 11/873,123, entitled Method and Apparatus for Anisotropic Demosaicing of Image Data. The next step 308 applies the difference ΔG to the blue and red values of the center pixel using the following:

Bcenter = Best + ΔG

Rcenter = Rest + ΔG
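Steps 306-308 amount to a simple additive correction: whatever noise offset the green channel received is applied to the estimated blue and red values as well. A minimal sketch, assuming the step-203, step-305, and step-307 values are available as plain numbers (the function name is an assumption):

```python
def apply_green_delta(g203, g305, b_est, r_est):
    """Apply the green-channel noise correction to blue and red.

    g203/g305 are the center pixel's green values after demosaicing
    (step 203) and after noise reduction (step 305); b_est/r_est are
    the demosaiced blue and red estimates from step 307.
    """
    delta_g = g305 - g203                      # step 306
    return b_est + delta_g, r_est + delta_g    # step 308
```

Shifting all three channels by the same amount tends to preserve the pixel's hue while removing the luminance component of the noise.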
At step 110, it is determined whether all pixels in the row have been processed. If all pixels in the row have not been processed, the process continues at step 102 to pick another pixel to be the center pixel. If all pixels in the row have been processed, the next step is to input the next line of values (step 111) and repeat the process by returning to step 100.
System 800, for example a digital still or video camera system, generally comprises a central processing unit (CPU) 802, such as a control circuit or microprocessor for conducting camera functions, that communicates with one or more input/output (I/O) devices 806 over a bus 804. Imager 10/10′ also communicates with the CPU 802 over the bus 804. The system 800 also includes random access memory (RAM) 810, and can include removable memory 815, such as flash memory, which also communicates with the CPU 802 over the bus 804. The imager 10/10′ may be combined with the CPU, with or without memory storage, on a single integrated circuit, or may reside on a chip separate from the CPU. In a camera system, a lens 820 is used to focus light onto the pixel array 11 of the imager 10/10′ when a shutter release button 822 is pressed.
The above description and drawings are only to be considered illustrative of specific embodiments, which achieve the features and advantages described herein. Modifications and substitutions to specific structures can be made. Accordingly, the claimed invention is not to be considered as being limited by the foregoing description and drawings, but is only limited by the scope of the appended claims.
Number | Date | Country
---|---|---
20090214129 A1 | Aug 2009 | US