The embodiments described herein relate generally to the field of digital image processing, and more specifically to methods and apparatuses for noise reduction in digital image processing.
Solid state imaging devices, including charge coupled devices (CCD), complementary metal oxide semiconductor (CMOS) imaging devices, and others, have been used in photo imaging applications. A solid state imaging device circuit includes a focal plane array of pixel cells or pixels as an image sensor, each cell including a photosensor, which may be a photogate, a photoconductor, a photodiode, or another photosensor having a doped region for accumulating photo-generated charge. For CMOS imaging devices, each pixel has a charge storage region, formed on or in the substrate, which is connected to the gate of an output transistor that is part of a readout circuit. The charge storage region may be constructed as a floating diffusion region. In some CMOS imaging devices, each pixel may further include at least one electronic device such as a transistor for transferring charge from the photosensor to the storage region and one device, also typically a transistor, for resetting the storage region to a predetermined charge level prior to charge transfer.
In a CMOS imaging device, the active elements of a pixel perform the necessary functions of: (1) photon to charge conversion; (2) accumulation of image charge; (3) resetting the storage region to a known state; (4) transfer of charge to the storage region; (5) selection of a pixel for readout; and (6) output and amplification of a signal representing pixel charge. Photo charge may be amplified when it moves from the initial charge accumulation region to the storage region. The charge at the storage region is typically converted to a pixel output voltage by a source follower output transistor.
CMOS imaging devices of the type discussed above are generally known as discussed, for example, in U.S. Pat. Nos. 6,140,630, 6,376,868, 6,310,366, 6,326,652, 6,204,524, and 6,333,205, assigned to Micron Technology, Inc.
One issue in the field of solid state imaging devices is noise reduction, particularly for devices with a small pixel size. As pixel size decreases, the effect of noise on image quality increases. Noise reduction techniques are used to improve the appearance of captured images. One method of noise reduction is to improve the fabrication process; however, such improvements are often cost-prohibitive. An alternative solution is to apply noise filters during image processing. One noise reduction technique detects areas devoid of features (i.e., flat-field areas) and averages pixel signals in those areas of the image, as described in U.S. patent application Ser. No. 11/601,390, filed Nov. 17, 2006, which is incorporated herein by reference. This method can be implemented at a low cost in hardware logic, but can produce an unwanted side effect of creating zipper artifacts, which are often conspicuous to the viewer. Accordingly, there is a need for quality, artifact-free flat-field noise reduction at a low hardware logic cost.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments that may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to make and use them, and it is to be understood that structural, logical, or procedural changes may be made to the specific embodiments disclosed.
Raw imaging data from an imaging device that uses a red, green, blue (RGB) Bayer pattern color filter array (CFA) consists of a mosaic of red, green, and blue pixel values and is often referred to as Bayer RGB data.
The demosaiced outputs of the demosaic module 10 and the flat-field noise reduction module 30 are processed in the noise reduction module 40 to produce a noise reduced output RGBNR[i,j] (shown generally as signal RGBNR). The noise reduction module 40 may select the RGBD[i,j] signal when a sufficiently large edge value (as compared to a predetermined threshold) is output from the edge detect module 20; may select the RGBFFNR[i,j] signal when a sufficiently small edge value (as compared to a predetermined threshold) is output from the edge detect module 20 (e.g., in flat-field regions devoid of edges); or may blend the RGBD[i,j] and RGBFFNR[i,j] signals as a weighted sum of the two signals based on the output of the edge detect module 20 (e.g., with smaller edge values, RGBFFNR[i,j] would be given a larger weight than RGBD[i,j] and with larger edge values, RGBD[i,j] would be given a larger weight than RGBFFNR[i,j]). In a desired embodiment, the noise reduced signal, RGBNR[i,j], output from the noise reduction module 40 is derived using the following formulas:
$R_{NR}[i,j] = w_{FFNR}[i,j] \cdot R_D[i,j] + (1 - w_{FFNR}[i,j]) \cdot R_{FFNR}[i,j]$ (1)
$G_{NR}[i,j] = w_{FFNR}[i,j] \cdot G_D[i,j] + (1 - w_{FFNR}[i,j]) \cdot G_{FFNR}[i,j]$ (2)
$B_{NR}[i,j] = w_{FFNR}[i,j] \cdot B_D[i,j] + (1 - w_{FFNR}[i,j]) \cdot B_{FFNR}[i,j]$ (3)
If the weight $w_{FFNR}[i,j]$ is 0, then the noise reduction module 40 selects the output $RGB_{FFNR}[i,j]$ from the flat-field noise reduction module 30. If the weight $w_{FFNR}[i,j]$ is 1, then the noise reduction module 40 selects the output $RGB_D[i,j]$ from the demosaic module 10. If the weight $w_{FFNR}[i,j]$ is between 0 and 1, then the noise reduction module 40 outputs a signal $RGB_{NR}[i,j]$ derived from the outputs of both the flat-field noise reduction module 30 and the demosaic module 10. A desired implementation utilizes a pixel noise weight module 60 to calculate the weight $w_{FFNR}[i,j]$ by clamping an intermediate weight $v_{FFNR}[i,j]$ to the range [0, 1]:

$w_{FFNR}[i,j] = \max(\min(v_{FFNR}[i,j], 1), 0)$ (4)

where $v_{FFNR}[i,j]$ is a weight calculated according to:
$v_{FFNR}[i,j] = (E_{i,j} - T_{FFNR}[i,j]) \cdot G_{FFNR,r}[i,j]$ (5)
where $E_{i,j}$ is the edge value from the edge detect module 20, $G_{FFNR,r}[i,j]$ is a predetermined rate of transition from flat-field noise reduction averaging to no averaging as a function of pixel brightness (e.g., $G_{FFNR,r}[i,j] = 20$), and $T_{FFNR}[i,j]$ is a threshold value based on an offset $T_{FFNR,min}[i,j]$, a constant specifying a pixel signal value below which flat-field noise reduction averaging is always applied at full strength (e.g., $T_{FFNR,min}[i,j] = 0$). $T_{FFNR}[i,j]$ is calculated according to:
$T_{FFNR}[i,j] = T_{FFNR,min}[i,j] + Y_{FFNR}[i,j] \cdot G_{FFNR,p}[i,j]$ (6)
where $G_{FFNR,p}[i,j]$ is a pixel noise model coefficient subjecting brighter pixels to more or less averaging than darker ones (e.g., $G_{FFNR,p}[i,j] = 0.33$). The constants $T_{FFNR,min}$ and $G_{FFNR,p}$ are selected based on sensor response characterization such that image features having amplitudes approximately equal to or less than the sensor noise are subjected to noise reduction. $Y_{FFNR}[i,j]$ is the pixel luminance (i.e., a weighted sum of the pixel signal's red, green, and blue components) of the output of the flat-field noise reduction module 30. $Y_{FFNR}[i,j]$ can be calculated using any applicable luminance formula known in the art, or can be derived from sensor colorimetric properties using known techniques, for example those of the standard red, green, blue color space (sRGB) or of International Telecommunication Union Radiocommunication Sector (ITU-R) standard ITU-R BT.601. A desired implementation utilizes ITU-R BT.601, such that:
$Y_{FFNR}[i,j] = 0.299 \cdot R_{FFNR}[i,j] + 0.587 \cdot G_{FFNR}[i,j] + 0.114 \cdot B_{FFNR}[i,j]$ (7)
Equation (7) can be simplified (for example, for ease of implementation in hardware) to:
$Y_{FFNR}[i,j] = (5 \cdot R_{FFNR}[i,j] + 9 \cdot G_{FFNR}[i,j] + 2 \cdot B_{FFNR}[i,j]) / 16$ (8)
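By way of illustration only, the following Python/NumPy sketch implements the blending of Equations (1)-(8). The function and array names are invented for the example, the default parameter values are the example values given above, and the clamp follows Equation (4); this is a sketch of one possible software realization, not the claimed hardware implementation.

```python
import numpy as np

def blend_noise_reduced(rgb_d, rgb_ffnr, edge,
                        t_min=0.0, g_rate=20.0, g_pixel=0.33):
    """Blend the demosaiced (RGB_D) and flat-field noise-reduced (RGB_FFNR)
    signals per Equations (1)-(8).

    rgb_d, rgb_ffnr: float arrays of shape (H, W, 3).
    edge: float array of shape (H, W) holding the edge values E[i, j].
    t_min, g_rate, g_pixel: example values for T_FFNR,min, G_FFNR,r, G_FFNR,p.
    """
    # Eq. (8): hardware-friendly luminance of the flat-field-filtered output.
    y_ffnr = (5 * rgb_ffnr[..., 0] + 9 * rgb_ffnr[..., 1]
              + 2 * rgb_ffnr[..., 2]) / 16
    # Eq. (6): brightness-dependent threshold T_FFNR.
    t_ffnr = t_min + y_ffnr * g_pixel
    # Eq. (5): intermediate weight v_FFNR from the edge value.
    v = (edge - t_ffnr) * g_rate
    # Eq. (4): clamp to [0, 1]; 0 selects RGB_FFNR, 1 selects RGB_D.
    w = np.clip(v, 0.0, 1.0)[..., None]
    # Eqs. (1)-(3): per-channel weighted sum.
    return w * rgb_d + (1.0 - w) * rgb_ffnr
```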
An edge detection method used by module 20 examines pairs of neighboring pixels and outputs, as the edge value $E_{i,j}$, the maximum absolute difference between the pixel signal values of the pairs. Referring now to
$d_1[i,j] = |p_{i+1,j+1} - p_{i-1,j-1}|$ (9)
$d_2[i,j] = |p_{i-1,j} - p_{i+1,j+2}|$ (10)
$d_3[i,j] = |p_{i,j-1} - p_{i+2,j+1}|$ (11)
$d_4[i,j] = |p_{i+1,j} - p_{i-1,j+2}|$ (12)
$d_5[i,j] = |p_{i+2,j-1} - p_{i,j+1}|$ (13)
$d_6[i,j] = |p_{i,j-1} - p_{i-2,j+1}|$ (14)
$d_7[i,j] = |p_{i,j+1} - p_{i-2,j-1}|$ (15)
$d_8[i,j] = |p_{i+1,j+2} - p_{i-1,j}|$ (16)
$d_9[i,j] = |p_{i+1,j} - p_{i-1,j-2}|$ (17)
$E_{i,j} = \max(d_1[i,j], d_2[i,j], d_3[i,j], d_4[i,j], d_5[i,j], d_6[i,j], d_7[i,j], d_8[i,j], d_9[i,j])$ (18)
Referring to
$d_1[i,j] = |p_{i+1,j+1} - p_{i-1,j-1}|$ (19)
$d_2[i,j] = |p_{i,j+1} - p_{i,j-1}|$ (20)
$d_3[i,j] = |p_{i+1,j} - p_{i-1,j}|$ (21)
$d_4[i,j] = |p_{i,j} - p_{i,j-2}|$ (22)
$d_5[i,j] = |p_{i,j} - p_{i,j+2}|$ (23)
$d_6[i,j] = |p_{i,j} - p_{i+2,j}|$ (24)
$d_7[i,j] = |p_{i,j} - p_{i-2,j}|$ (25)
$E_{i,j} = \max(d_1[i,j], d_2[i,j], d_3[i,j], d_4[i,j], d_5[i,j], d_6[i,j], d_7[i,j])$ (26)
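A compact way to realize both edge-detection variants in software is to tabulate the neighbor-pair offsets of Equations (9)-(17) and (19)-(25) and take the maximum absolute difference, as in the following sketch. The names are invented for illustration, and which offset table applies to which Bayer phase depends on the kernel figures, which are not reproduced here.

```python
# Neighbor-pair offsets ((dy1, dx1), (dy2, dx2)); each pair contributes
# |p[i+dy1][j+dx1] - p[i+dy2][j+dx2]| to the edge value.
PAIRS_9 = [((+1, +1), (-1, -1)),  # Eq. (9)
           ((-1,  0), (+1, +2)),  # Eq. (10)
           (( 0, -1), (+2, +1)),  # Eq. (11)
           ((+1,  0), (-1, +2)),  # Eq. (12)
           ((+2, -1), ( 0, +1)),  # Eq. (13)
           (( 0, -1), (-2, +1)),  # Eq. (14)
           (( 0, +1), (-2, -1)),  # Eq. (15)
           ((+1, +2), (-1,  0)),  # Eq. (16)
           ((+1,  0), (-1, -2))]  # Eq. (17)

PAIRS_7 = [((+1, +1), (-1, -1)),  # Eq. (19)
           (( 0, +1), ( 0, -1)),  # Eq. (20)
           ((+1,  0), (-1,  0)),  # Eq. (21)
           (( 0,  0), ( 0, -2)),  # Eq. (22)
           (( 0,  0), ( 0, +2)),  # Eq. (23)
           (( 0,  0), (+2,  0)),  # Eq. (24)
           (( 0,  0), (-2,  0))]  # Eq. (25)

def edge_value(p, i, j, pairs):
    """Eq. (18)/(26): maximum absolute pair difference at pixel (i, j)."""
    return max(abs(int(p[i + a][j + b]) - int(p[i + c][j + d]))
               for (a, b), (c, d) in pairs)
```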
The output of the noise reduction module 40 may be sharpened to produce a sharpened final output $RGB_{i,j}$ (shown generally as signal RGB). Sharpening can be performed by a sharpening module 80 as a function of the output ($RGB_{NR}$) of the noise reduction module 40 and a signal ($\Delta Y$) from the sharpening signal generator module 50. The sharpening module 80 takes the output of the sharpening signal generator 50, a high-frequency sharpening signal $\Delta Y_{i,j}$ (functioning as a high-pass filter), multiplies it by a gain $G_A[i,j]$, and adds it to the output $RGB_{NR}[i,j]$ of the noise reduction module 40 to produce a sharpened, noise-reduced signal $RGB_{i,j}$. The sharpening module 80 emulates an unsharp mask operation without requiring the additional memory and logic needed by conventional unsharp mask operations to calculate luminance signals for the pixels surrounding the pixel being demosaiced (the pixel located at i,j). The sharpened final output $RGB_{i,j}$ of the sharpening module 80 is calculated according to:
$R_{i,j} = R_{NR}[i,j] + G_A[i,j] \cdot \Delta Y_{i,j}$ (27)
$G_{i,j} = G_{NR}[i,j] + G_A[i,j] \cdot \Delta Y_{i,j}$ (28)
$B_{i,j} = B_{NR}[i,j] + G_A[i,j] \cdot \Delta Y_{i,j}$ (29)
where $G_A[i,j]$ is the sharpening gain that controls the amount of sharpening applied to the image, and $\Delta Y_{i,j}$, an aperture correction value, is calculated by the sharpening signal generator 50 as follows:
$\Delta Y_{i,j} = \max(|Y_D[i,j] - Y_{FFNR}[i,j]| - T_A[i,j],\ 0) \cdot \operatorname{sgn}(Y_D[i,j] - Y_{FFNR}[i,j])$ (30)
where $\operatorname{sgn}(Y_D[i,j] - Y_{FFNR}[i,j])$ is the sign of $Y_D[i,j] - Y_{FFNR}[i,j]$ (i.e., −1 when the difference is negative, 1 when it is positive, and 0 when it is zero), and $Y_D[i,j]$ and $Y_{FFNR}[i,j]$ are the pixel luminance values output by the demosaic module 10 and the flat-field noise reduction module 30, respectively. $T_A[i,j]$ specifies the minimum luminance difference necessary to produce a sharpening effect for a pixel with luminance $Y_D[i,j]$ and is calculated by the pixel noise luminance module 70 as follows:
$T_A[i,j] = T_{A,min}[i,j] + G_{A,T}[i,j] \cdot Y_D[i,j]$ (31)
$T_{A,min}[i,j]$ is the minimum value of $T_A[i,j]$, and $G_{A,T}[i,j]$ is a pixel noise model coefficient that scales the threshold with pixel brightness. $T_{A,min}[i,j]$ and $G_{A,T}[i,j]$ are selected based on image sensor response characterization such that image features having amplitudes approximately equal to or less than the sensor noise are not amplified by sharpening. It should be appreciated that $Y_D[i,j]$ and $Y_{FFNR}[i,j]$ may be calculated using techniques known in the art, such as a formula appropriate to the color space corresponding to the pre-color-correction sensor responses. One implementation calculates $Y_D[i,j]$ using the same coefficients as $Y_{FFNR}[i,j]$ in Equations (7) and (8) above, such that:
$Y_D[i,j] = 0.299 \cdot R_D[i,j] + 0.587 \cdot G_D[i,j] + 0.114 \cdot B_D[i,j]$ (32)
Equation (32) can be simplified (for example, for ease of implementation in hardware) to:
$Y_D[i,j] = (5 \cdot R_D[i,j] + 9 \cdot G_D[i,j] + 2 \cdot B_D[i,j]) / 16$ (33)
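For purposes of illustration, a minimal Python/NumPy sketch of the sharpening path of Equations (27)-(31) follows. The function and parameter names are invented, and the default parameter values are placeholders rather than characterized sensor constants.

```python
import numpy as np

def sharpen(rgb_nr, y_d, y_ffnr, g_a=1.0, t_a_min=0.0, g_a_t=0.0):
    """Eqs. (27)-(31): add a gained, cored aperture-correction signal.

    rgb_nr: (H, W, 3) output of the noise reduction module.
    y_d, y_ffnr: (H, W) luminances of the demosaiced and flat-field outputs,
    e.g. per Eqs. (8) and (33).
    """
    diff = y_d - y_ffnr
    # Eq. (31): brightness-dependent coring threshold T_A.
    t_a = t_a_min + g_a_t * y_d
    # Eq. (30): zero out differences at or below the threshold, keep the sign.
    delta_y = np.maximum(np.abs(diff) - t_a, 0.0) * np.sign(diff)
    # Eqs. (27)-(29): the same correction is added to all three channels.
    return rgb_nr + (g_a * delta_y)[..., None]
```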
While a desired implementation allows all coefficients to vary as a function of pixel position (i,j) in the image sensor array (for example, to compensate for higher noise at the image sensor's periphery caused by digital gain applied by additional modules in the image sensor, such as a lens vignetting correction module), it should be appreciated that these values, for example $G_{A,T}$, $G_A$, $G_{FFNR,p}$, and $G_{FFNR,r}$, may instead be constants that do not vary with pixel position (i,j).
In one embodiment, the flat-field noise reduction module 30 (
Pixel weights may be selected such that the flat-field noise reduction module 30 (
To derive pixel weights for kernels of various shapes and colors to produce zipper-free noise reduction, viable kernel types may be considered; edges crossing those kernel types at various locations may be considered; restrictions and dependencies on pixel weights may be generated such that the output is not kernel shape dependent; restrictions and dependencies may be expressed as a system of equations; and the equations may be solved to minimize noise. Accordingly, for kernel type A, the green component GFFNR[i,j] can be expressed as:
$G_{FFNR}[i,j] = a \cdot (p_{i-1,j-1} + p_{i+1,j+1} + p_{i+1,j-1} + p_{i-1,j+1}) + b \cdot p_{i,j}$ (34)
and the green component GFFNR[i,j] for kernel type B can be expressed as:
$G_{FFNR}[i,j] = c \cdot (p_{i,j-1} + p_{i,j+1} + p_{i+1,j} + p_{i-1,j})$ (35)
For reasons of symmetry, pixels marked “a” in kernel type A must have equal pixel weights, therefore:
$4a + b = 1$ (36)
and pixels marked “c” in kernel type B must have equal pixel weights, therefore:
$4c = 1$; therefore, $c = 0.25$ (37)
Additional constraints can be derived when edges crossing kernel types A-D at various locations are considered. Since flat-field noise reduction module 30 (
The green kernel type remains identical along 45/135 degree diagonal edges (i.e., kernel type A is identical to kernel type D, and kernel type B is identical to kernel type C); thus, no additional constraints on kernel pixel weights are imposed for the diagonal movement cases. When considering movement between kernel types along horizontal and vertical edges, kernel types alternate as shown in
$a + a = c \;\Rightarrow\; 2a = 0.25 \;\Rightarrow\; a = 0.125$ (38)
Solving equation (36) for b gives $b = 1 - 4a = 1 - 4 \cdot 0.125 = 0.5$. Therefore, $a = 0.125$, $b = 0.5$, and $c = 0.25$. Equations (34)-(35) can be rewritten in terms of the calculated values for pixel weights a-c. Accordingly, for kernel type A, the green component $G_{FFNR}[i,j]$ can be expressed as:
$G_{FFNR}[i,j] = 0.125 \cdot (p_{i-1,j-1} + p_{i+1,j+1} + p_{i+1,j-1} + p_{i-1,j+1}) + 0.5 \cdot p_{i,j}$ (39)
and the green component GFFNR[i,j] for kernel type B can be expressed as:
$G_{FFNR}[i,j] = 0.25 \cdot (p_{i,j-1} + p_{i,j+1} + p_{i+1,j} + p_{i-1,j})$ (40)
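In software form, the two green kernels with the derived weights might look as follows; this is a sketch with invented names, and the mapping from Bayer phase to kernel type follows the figures, which are not reproduced here.

```python
def green_ffnr(p, i, j, kernel_type):
    """Green flat-field averaging with the zipper-free weights
    a = 0.125, b = 0.5 (Eq. (39)) and c = 0.25 (Eq. (40))."""
    if kernel_type == "A":
        # Eq. (39): center pixel plus its four diagonal green neighbors.
        return (0.125 * (p[i-1][j-1] + p[i+1][j+1] + p[i+1][j-1] + p[i-1][j+1])
                + 0.5 * p[i][j])
    if kernel_type == "B":
        # Eq. (40): average of the four axial green neighbors.
        return 0.25 * (p[i][j-1] + p[i][j+1] + p[i+1][j] + p[i-1][j])
    raise ValueError(f"unknown kernel type: {kernel_type}")
```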
There are three distinct red and blue kernel types, E, F, and H. Kernel type G is identical to kernel type F rotated 90 degrees. For kernel type E, the output of the flat-field noise reduction module 30 (
$R_{FFNR}[i,j] = h \cdot (p_{i-1,j-1} + p_{i-1,j+1} + p_{i+1,j-1} + p_{i+1,j+1})$ (41)
$B_{FFNR}[i,j] = h \cdot (p_{i-1,j-1} + p_{i-1,j+1} + p_{i+1,j-1} + p_{i+1,j+1})$ (42)
The output of kernel type F is calculated as:
$R_{FFNR}[i,j] = d \cdot (p_{i-1,j-2} + p_{i-1,j+2} + p_{i+1,j-2} + p_{i+1,j+2}) + e \cdot (p_{i-1,j} + p_{i+1,j})$ (43)
$B_{FFNR}[i,j] = d \cdot (p_{i-1,j-2} + p_{i-1,j+2} + p_{i+1,j-2} + p_{i+1,j+2}) + e \cdot (p_{i-1,j} + p_{i+1,j})$ (44)
The output of kernel type G is calculated as:
$R_{FFNR}[i,j] = d \cdot (p_{i-2,j-1} + p_{i-2,j+1} + p_{i+2,j-1} + p_{i+2,j+1}) + e \cdot (p_{i,j-1} + p_{i,j+1})$ (45)
$B_{FFNR}[i,j] = d \cdot (p_{i-2,j-1} + p_{i-2,j+1} + p_{i+2,j-1} + p_{i+2,j+1}) + e \cdot (p_{i,j-1} + p_{i,j+1})$ (46)
The output of kernel type H is:
$R_{FFNR}[i,j] = g \cdot (p_{i,j-2} + p_{i,j+2} + p_{i-2,j} + p_{i+2,j}) + f \cdot p_{i,j}$ (47)
$B_{FFNR}[i,j] = g \cdot (p_{i,j-2} + p_{i,j+2} + p_{i-2,j} + p_{i+2,j}) + f \cdot p_{i,j}$ (48)
For kernel type E:
$4h = 1 \;\Rightarrow\; h = 0.25$ (49)
For kernel types F and G:
$2e + 4d = 1$ (50)
For kernel type H:
$4g + f = 1$ (51)
As with the green kernel types A-D, additional constraints can be derived when edges crossing kernel types E-H at various locations are considered. For a horizontal edge passing between row i−1 and i of kernel types E and F, kernel types E and F will alternate as shown in
$2h = 2d + e$ (52)
For a horizontal edge passing between row i−2 and i−1 of kernel types E and F, kernel types E and F will alternate as shown in
$2d = g$ (53)
Solving equation (51) for f and substituting from equations (50) and (53) yields:
$f = 1 - 4g = 1 - 4(2d) = 1 - 8d = 1 - 2(1 - 2e) = 1 - 2 + 4e = 4e - 1$ (54)
Kernel types F and G alternate along a diagonal edge, as shown in
Kernel types E and H alternate along a diagonal edge as shown in
$h = 2g$; substituting $h$ from equation (49), $0.25 = 2g$, and therefore $g = 0.125$ (55)
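The red/blue weight derivation can be checked symbolically. The following sketch (using SymPy, an assumption of this example rather than part of the described hardware) solves the constraint system of Equations (49)-(53) and (55) and reproduces the values derived above, including $f = 4e - 1$ per Equation (54).

```python
import sympy as sp

d, e, f, g, h = sp.symbols("d e f g h")
solution = sp.solve(
    [sp.Eq(4*h, 1),        # Eq. (49): kernel E weights sum to one
     sp.Eq(2*e + 4*d, 1),  # Eq. (50): kernels F and G sum to one
     sp.Eq(4*g + f, 1),    # Eq. (51): kernel H weights sum to one
     sp.Eq(2*h, 2*d + e),  # Eq. (52): E/F horizontal edge constraint
     sp.Eq(2*d, g),        # Eq. (53): second E/F horizontal edge constraint
     sp.Eq(h, 2*g)],       # Eq. (55): E/H diagonal edge constraint
    [d, e, f, g, h], dict=True)[0]
print(solution)  # {d: 1/16, e: 3/8, f: 1/2, g: 1/8, h: 1/4}
```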
It should be appreciated that while the above pixel weights (pixel weights a-c for green kernels and pixel weights d-h for red and blue kernels) have been selected to reduce zipper effect, pixel weights can also be selected for noise reduction or for a combination of reduced zipper effect and improved noise reduction. In a desired embodiment, the output of kernel types F, G, and H (
$\alpha = f$ (64)
$1 - \alpha = 4g$ (65)
$\beta = 2e$ (66)
$1 - \beta = 4d$ (67)
$\alpha = 2\beta - 1$ (because $f = 4e - 1$) (68)
$\beta = (\alpha + 1)/2$ (69)
The output of kernel type F can be expressed in terms of alpha and beta as:
$R_{FFNR}[i,j] = \frac{1-\beta}{4} \cdot (p_{i-1,j-2} + p_{i-1,j+2} + p_{i+1,j-2} + p_{i+1,j+2}) + \frac{\beta}{2} \cdot (p_{i-1,j} + p_{i+1,j})$ (70)
$B_{FFNR}[i,j] = \frac{1-\beta}{4} \cdot (p_{i-1,j-2} + p_{i-1,j+2} + p_{i+1,j-2} + p_{i+1,j+2}) + \frac{\beta}{2} \cdot (p_{i-1,j} + p_{i+1,j})$ (71)
The output of kernel type G can be expressed in terms of alpha and beta as:
$R_{FFNR}[i,j] = \frac{1-\beta}{4} \cdot (p_{i-2,j-1} + p_{i-2,j+1} + p_{i+2,j-1} + p_{i+2,j+1}) + \frac{\beta}{2} \cdot (p_{i,j-1} + p_{i,j+1})$ (72)
$B_{FFNR}[i,j] = \frac{1-\beta}{4} \cdot (p_{i-2,j-1} + p_{i-2,j+1} + p_{i+2,j-1} + p_{i+2,j+1}) + \frac{\beta}{2} \cdot (p_{i,j-1} + p_{i,j+1})$ (73)
The output of kernel type H can be expressed in terms of alpha and beta as:
$R_{FFNR}[i,j] = \frac{1-\alpha}{4} \cdot (p_{i,j-2} + p_{i,j+2} + p_{i-2,j} + p_{i+2,j}) + \alpha \cdot p_{i,j}$ (74)
$B_{FFNR}[i,j] = \frac{1-\alpha}{4} \cdot (p_{i,j-2} + p_{i,j+2} + p_{i-2,j} + p_{i+2,j}) + \alpha \cdot p_{i,j}$ (75)
Alpha and beta have been defined in terms of pixel weights f and e which were derived to reduce the zipper effect in images. Accordingly, to maximize zipper effect reduction:
$\alpha = 0.5$ (76)
$\beta = 0.75$ (77)
However, it may be desirable to calculate alpha and beta for noise optimization. For example, assume that in a flat-field gray area (i.e., red equals green equals blue), all pixel responses have a standard deviation of one. Noise values for kernel type H and for kernel types F and G can then be derived as functions of their pixel weight mixing assignments:
$\sigma_\alpha = \sqrt{(\alpha \cdot 1)^2 + 4 \cdot ((1 - \alpha)/4 \cdot 1)^2}$ (78)
$\sigma_\beta = \sqrt{4 \cdot ((1 - \beta)/4 \cdot 1)^2 + 2 \cdot (\beta/2 \cdot 1)^2}$ (79)
$\sigma = \sqrt{\sigma_\alpha^2 + \sigma_\beta^2}$ (80)
Noise reduction is optimized at approximately:
$\alpha = 0.15$ (81)
$\beta = 0.575$ (82)
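This optimum can be reproduced numerically; the sketch below grid-searches $\sigma$ of Equation (80) along the constraint $\beta = (\alpha + 1)/2$ of Equation (69). The search itself is illustrative only, and the discrete minimum it finds lands near the approximate values of Equations (81)-(82).

```python
import numpy as np

alphas = np.linspace(0.0, 1.0, 2001)
betas = (alphas + 1.0) / 2.0                                      # Eq. (69)
sigma_a = np.sqrt(alphas**2 + 4 * ((1 - alphas) / 4)**2)          # Eq. (78)
sigma_b = np.sqrt(4 * ((1 - betas) / 4)**2 + 2 * (betas / 2)**2)  # Eq. (79)
sigma = np.sqrt(sigma_a**2 + sigma_b**2)                          # Eq. (80)
best = sigma.argmin()
print(alphas[best], betas[best])  # about 0.13 and 0.57, near Eqs. (81)-(82)
```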
Alpha and beta may then be selected to trade off noise reduction against zipper effect reduction ($0.15 \le \alpha \le 0.5$ and $0.575 \le \beta \le 0.75$), with a preferred embodiment selecting:
$\alpha = 84/256 = 0.328125$ (83)
$\beta = 172/256 = 0.671875$ (84)
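Collecting the red/blue kernels of Equations (41)-(42) and (70)-(75) with the preferred alpha and beta of Equations (83)-(84) gives the following sketch. The names are invented for illustration, and the kernel type for a given pixel follows the Bayer phase figures, which are not reproduced here.

```python
ALPHA = 84 / 256   # Eq. (83)
BETA = 172 / 256   # Eq. (84)

def red_blue_ffnr(p, i, j, kernel_type, alpha=ALPHA, beta=BETA):
    """Red/blue flat-field averaging, Eqs. (41)-(48) rewritten with
    f = alpha, 4g = 1 - alpha, 2e = beta, 4d = 1 - beta (Eqs. (64)-(67))."""
    if kernel_type == "E":
        # Eq. (41)/(42): four diagonal same-color neighbors, h = 0.25.
        return 0.25 * (p[i-1][j-1] + p[i-1][j+1] + p[i+1][j-1] + p[i+1][j+1])
    if kernel_type == "F":
        # Eq. (70)/(71).
        return ((1 - beta) / 4 * (p[i-1][j-2] + p[i-1][j+2]
                                  + p[i+1][j-2] + p[i+1][j+2])
                + beta / 2 * (p[i-1][j] + p[i+1][j]))
    if kernel_type == "G":
        # Eq. (72)/(73): kernel F rotated 90 degrees.
        return ((1 - beta) / 4 * (p[i-2][j-1] + p[i-2][j+1]
                                  + p[i+2][j-1] + p[i+2][j+1])
                + beta / 2 * (p[i][j-1] + p[i][j+1]))
    if kernel_type == "H":
        # Eq. (74)/(75): same-color center plus four neighbors two away.
        return ((1 - alpha) / 4 * (p[i][j-2] + p[i][j+2]
                                   + p[i-2][j] + p[i+2][j])
                + alpha * p[i][j])
    raise ValueError(f"unknown kernel type: {kernel_type}")
```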
A pixel signal p for a pixel at location i,j can be demosaiced in the flat-field noise reduction module 30 (
Connected to, or as part of, the imaging sensor 802 are row and column decoders 811, 809 and row and column driver circuitry 812, 810 that are controlled by a timing and control circuit 840. The timing and control circuit 840 uses control registers 842 to determine how the imaging sensor 802 and other components are controlled, for example, controlling the mode of operation of the imaging sensor 802. As set forth above, the PLL 844 serves as a clock for the components in the core 805.
The imaging sensor 802 comprises a plurality of pixel circuits arranged in a predetermined number of columns and rows. In operation, the pixel circuits of each row in the imaging sensor 802 are all turned on at the same time by a row select line, and the signals of the pixel circuits of each column are selectively output onto column output lines by a column select line. A plurality of row and column lines are provided for the entire imaging sensor 802. The row lines are selectively activated by the row driver circuitry 812 in response to the row address decoder 811, and the column select lines are selectively activated by the column driver 810 in response to the column address decoder 809. Thus, a row and column address is provided for each pixel circuit. The timing and control circuit 840 controls the address decoders 811, 809 for selecting the appropriate row and column lines for pixel readout, and the row and column driver circuitry 812, 810, which apply driving voltage to the drive transistors of the selected row and column lines.
Each column contains sampling capacitors and switches in the analog processing circuit 808 that read a pixel reset signal Vrst and a pixel image signal Vsig for selected pixel circuits. Because the core 805 uses a greenred/greenblue channel 804 and a separate red/blue channel 806, the circuitry 808 has the capacity to store Vrst and Vsig signals for greenred, greenblue, red, and blue pixel signals. A differential signal (Vrst − Vsig) is produced by differential amplifiers contained in the circuitry 808 for each pixel. Thus, the signals G1/G2 and R/B are differential signals that are then digitized by a respective analog-to-digital converter 814, 816. The analog-to-digital converters 814, 816 supply digitized G1/G2, R/B pixel signals to the digital processor 830, which forms a digital image output (e.g., a 10-bit digital output). The digital processor 830 performs pixel processing operations. The output is sent to the image flow processor 910 (
Although the sensor core 805 has been described with reference to use with a CMOS imaging sensor, this is merely one example sensor core that may be used. Embodiments of the invention may also be used with other sensor cores having a different readout architecture. While the imaging device 900 (
System 600, for example, a camera system, includes a lens 680 for focusing an image on the imaging device 900 when a shutter release button 682 is pressed. System 600 generally comprises a central processing unit (CPU) 610, such as a microprocessor that controls camera functions and image flow, and communicates with an input/output (I/O) device 640 over a bus 660. The imaging device 900 also communicates with the CPU 610 over the bus 660. The system 600 also includes random access memory (RAM) 620, and can include removable memory 650, such as flash memory, which also communicates with the CPU 610 over the bus 660. The imaging device 900 may be combined with the CPU 610, with or without memory storage on a single integrated circuit, such as, for example, a system-on-a-chip, or on a different chip than the CPU 610. As described above, raw RGB image data from the imaging sensor 802 (
Some of the advantages of the demosaicing and noise reduction methods disclosed herein include minimizing undesirable artifacts while maximizing noise suppression. Additionally, the disclosed demosaicing and noise reduction methods are simple to implement in hardware or software at a low cost. That is, the methods described above can be implemented in a pixel processing circuit, which can be part of the pixel processing pipeline 920 (
While the embodiments have been described in detail in connection with desired embodiments known at the time, it should be readily understood that the claimed invention is not limited to the disclosed embodiments. Rather, the embodiments can be modified to incorporate any number of variations, alterations, substitutions, or equivalent arrangements not heretofore described. For example, while the embodiments are described in connection with a CMOS imaging sensor, they can be practiced with image data from other types of imaging sensors, such as CCD imagers. Additionally, rather than four color channels, three, five, or any other number of channels may be used, and they may comprise colors/channels other than greenred, red, blue, and greenblue, such as cyan, magenta, yellow (CMY); cyan, magenta, yellow, black (CMYK); or red, green, blue, indigo (RGBI).
U.S. Patent Documents

| Number | Name | Date | Kind |
| --- | --- | --- | --- |
| 6140630 | Rhodes | Oct 2000 | A |
| 6204524 | Rhodes | Mar 2001 | B1 |
| 6304678 | Yang et al. | Oct 2001 | B1 |
| 6310366 | Rhodes et al. | Oct 2001 | B1 |
| 6326652 | Rhodes | Dec 2001 | B1 |
| 6333205 | Rhodes | Dec 2001 | B1 |
| 6376868 | Rhodes | Apr 2002 | B1 |
| 6681054 | Gindele | Jan 2004 | B1 |
| 7015962 | Acharya | Mar 2006 | B2 |
| 7079705 | Zhang et al. | Jul 2006 | B2 |
| 7164807 | Morton | Jan 2007 | B2 |
| 7173663 | Skow et al. | Feb 2007 | B2 |
| 7236190 | Yanof et al. | Jun 2007 | B2 |
| 20020156364 | Madore | Oct 2002 | A1 |
| 20050052541 | Kondo | Mar 2005 | A1 |
| 20050093982 | Kuroki | May 2005 | A1 |
| 20060050783 | Le Dinh et al. | Mar 2006 | A1 |

Foreign Patent Documents

| Number | Date | Country |
| --- | --- | --- |
| WO0070548 | Nov 2000 | WO |
| WO2005124689 | Dec 2005 | WO |
| WO2006010276 | Feb 2006 | WO |

Publication

| Number | Date | Country |
| --- | --- | --- |
| 20080298708 A1 | Dec 2008 | US |