The present disclosure generally relates to video signal processing, and in particular to processing video signals to remove artifacts caused by low-light noise.
Low-light images are especially susceptible to corruption from noise caused by light-detecting sensors (i.e., low-light artifacts). For example, a video or still camera may capture undesirable grains or discolorations in low-light conditions. This noise may lead to uncorrelated pixels and, as a result, reduced compression efficiency for video coding algorithms (e.g., MPEG4 and H.264). Many applications, such as security cameras, capture low-light images and require a large amount of storage space for retaining those images, and any decrease in the required storage space may lead to a more cost-effective application, an increase in the number of images or frames of video stored, or reduced network traffic for transporting the images. Thus, efforts have been made to detect and eliminate low-light noise.
Previous efforts, such as transform-domain methods (e.g., DCT- or wavelet-based approaches) and other statistical methods, suffer from drawbacks, however. These methods are computationally intensive and require a significant amount of computing resources, which may not be available on low-power, portable, or other devices. Furthermore, these methods are not adjustable based on available resources or the complexity of the source image, wasting resources on simple images or during high-load conditions in which the additional resources may not be necessary or available.
Various systems, methods, and non-transitory media for removing noise from an image are disclosed herein. An exemplary system includes an edge-detection-based adaptive filter that identifies edge pixels and non-edge pixels in the image and selects a filtering technique for at least one non-edge pixel based on a comparison of the at least one non-edge pixel to a neighboring pixel region, wherein such comparison indicates whether the at least one non-edge pixel is a result of low-light noise. The edge-detection-based adaptive filter can include an edge-difference filter that divides pixels of the image into the edge pixels and the non-edge pixels and/or a dilation-based filter that expands a region of edge pixels to include non-edge pixels. In various implementations, the edge-detection-based adaptive filter includes a dilation-based filter for modifying an output of the edge-difference filter by distributing results of edge detection to neighboring pixels.
In various implementations, the system can further include a Gaussian distribution engine that computes a mean and a variance of the Gaussian distribution of the neighboring pixel region. The Gaussian distribution engine can compare the at least one non-edge pixel to the neighboring pixel region by determining a difference between a value of the at least one non-edge pixel and the mean of the neighboring pixel region, and selecting the filtering technique based on a comparison of the difference and the variance of the neighboring pixel region. In various implementations, the system further includes a median filter that determines a median value for the neighboring pixel region and replaces an original value of the at least one non-edge pixel with the median value when the comparison is greater than a first threshold; and a low-pass filter that determines a low-pass filter value for the neighboring pixel region and replaces the original value of the at least one non-edge pixel with the low-pass filter value when the comparison is less than the first threshold and greater than a second threshold. The edge-detection-based adaptive filter can output the low-pass filter value, the median value, or the original value.
An exemplary method includes identifying edge pixels and non-edge pixels in the image; and selecting a filtering technique for at least one non-edge pixel based on a comparison of the at least one non-edge pixel to a neighboring pixel region, wherein such comparison indicates whether the at least one non-edge pixel is a result of low-light noise. Identifying the edge pixels and non-edge pixels can include expanding a region of edge pixels to include non-edge pixels.
In various implementations, selecting the filtering technique includes determining a mean of the neighboring pixel region; determining a variance of the neighboring pixel region; determining a difference between an original value of the at least one non-edge pixel and the mean of the neighboring pixel region; and determining an assigned value of the at least one non-edge pixel based on a comparison of the difference and the variance of the neighboring pixel region. Determining the assigned value of the at least one non-edge pixel can include when the comparison is greater than a first threshold, determining a median value for the neighboring pixel region and assigning the at least one non-edge pixel with the median value; when the comparison is less than the first threshold and greater than a second threshold, determining a low-pass filter value for the neighboring pixel region and assigning the at least one non-edge pixel with the low-pass filter value; and when the comparison is less than the second threshold, assigning the at least one non-edge pixel the original value. In various implementations, determining the mean and the variance can include determining a mean and a variance of a Gaussian distribution of the neighboring pixel region. In various implementations, the method further includes determining a median value for the neighboring pixel region and replacing the at least one non-edge pixel with the median value when the comparison is greater than the first threshold; and determining a low-pass filter value for the neighboring pixel region and replacing the at least one non-edge pixel with the low-pass filter value when the comparison is less than the first threshold and greater than the second threshold.
In various implementations, the method includes defining a first threshold (N), a second threshold (M), and a third threshold (P), wherein P≤M≤N; and outputting a value for the at least one non-edge pixel based on the comparison of the at least one non-edge pixel to the neighboring pixel region.
These and other objects, along with advantages and features herein disclosed, will become more apparent through reference to the following description, the accompanying drawings, and the claims. Furthermore, it is to be understood that the features of the various embodiments described herein are not mutually exclusive and may exist in various combinations and permutations.
In the drawings, like reference characters generally refer to the same parts throughout the different views. In the following description, various embodiments are described with reference to the following drawings, in which:
A network of switches 108 selects one of three filters 110, 112, 114 for the brightness component 104 of the image 102. The system 100 may include any number of brightness-component filters, however, including a single filter, and the present disclosure is not limited to any particular number or type of filter. In one embodiment, a low-pass averaging filter 110 may be selected by the switches 108 if the source image 102 is simple, if only a small degree of filtering is required, and/or if system resources are limited. The low-pass averaging filter 110 attenuates high-frequency signals in the brightness component 104, while allowing low-frequency signals to pass. In one embodiment, the low-pass averaging filter 110 performs a blur function on the brightness component 104.
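For illustration only (code is not part of the original disclosure), a 3×3 low-pass averaging filter of the kind performed by the filter 110 may be sketched in Python as follows; the function name and the clamped (edge-replicating) border handling are assumptions:

```python
def box_blur_3x3(luma):
    """Apply a 3x3 low-pass averaging (box blur) filter to a 2-D list
    of luma values, replicating edge pixels at the image border."""
    h, w = len(luma), len(luma[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # Clamp neighbor coordinates to stay inside the image.
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    total += luma[ny][nx]
            out[y][x] = total // 9   # integer average of the 3x3 window
    return out
```

Averaging each pixel with its neighbors attenuates high-frequency content (such as isolated noise grains) while passing low-frequency content, which is the blur behavior described above.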
A median filter 112 may be used to filter the brightness component 104 for images of medium complexity, if a medium amount of filtering is desired, and/or if an average amount of system resources is available. As one of skill in the art will understand, the median filter 112 processes the brightness component 104 pixel by pixel and replaces each pixel with the median of that pixel and its surrounding pixels. For example, the median filter 112 may consider a 3×3 window of pixels surrounding a pixel of interest (i.e., nine total pixels). The median filter 112 sorts the nine pixels by their brightness values, selects the value in the middle (i.e., fifth) position, and replaces the pixel of interest with the selected value. In one embodiment, the filter 112 is a rank or rank-median filter, and may select a pixel in any position in the sorted list of pixels (e.g., the third or sixth position). In one embodiment, if the absolute difference between the selected value and the original value is larger than a threshold, the original value is kept; if the difference is smaller than or equal to the threshold, the ranked value is assigned.
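The rank-median behavior described above may be sketched as follows (an illustrative Python sketch, not part of the original disclosure; the function name, the row-major window ordering, and the 0-based rank index are assumptions):

```python
def median_filter_pixel(window, rank=4, threshold=None):
    """Rank-median filter a 3x3 window given as a flat list of nine
    values in row-major order.  The values are sorted and the value at
    position `rank` is selected (rank 4, 0-based, is the true median;
    other ranks give a rank filter).  If `threshold` is given, the
    original centre value is kept when the selected value differs from
    it by more than the threshold; otherwise the ranked value is used."""
    original = window[4]              # centre pixel of the 3x3 window
    ranked = sorted(window)[rank]
    if threshold is not None and abs(ranked - original) > threshold:
        return original               # difference too large: keep original
    return ranked                     # otherwise assign the ranked value
```

For example, with a 200-valued noise spike at the centre of an otherwise 10-13-valued window, the plain median replaces the spike with 12; adding a small threshold instead preserves the original value, per the thresholding rule stated above.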
An adaptive filter 114 may be used to filter the brightness component 104 for images of high complexity, if a large amount of filtering is desired, and/or if a large amount of system resources is available. The adaptive filter 114 selects a filtering technique based on the dynamically determined characteristics of the brightness component 104, as explained in greater detail below.
A low-pass averaging filter 116 (e.g., a 5×5 low-pass averaging filter) may be used to filter the color component 106. In one embodiment, the color component 106 is less complex than the brightness component and/or is less affected by low-light noise and thus requires less filtering. The filter 116 may be a temporal-averaging filter with sum-of-absolute-differences or any other type of similar filter. The system 100 may include more than one color-component filter 116, and one of the plurality of color-component filters 116 may be selected based on the complexity of the color component 106, the availability of system resources, and/or a desired level of filtering quality.
A dilation-based filter 304 modifies the output of the edge-difference filter 302 by distributing the results of the edge detection to neighboring pixels. The dilation-based filter may be modified to ease implementation on, for example, embedded and/or DSP platforms. For example, if four pixels in a row are dilated, the four pixels may be shifted, depending on the pixel location, to align with a word boundary. In various embodiments, the dilation-based filter 304 is a morphology filter, a 3×4 dilation filter, or a 4×3 dilation filter. The dilation-based filter 304 may expand, or dilate, regions of pixels designated as edge pixels to incorporate other, nearby pixels. For example, a pixel having an intensity different from its neighbors may be the result of low-light noise; but, if the location of the pixel is near a detected edge, the pixel may instead be the result of a real physical feature of the captured image. The dilation-based filter 304, by correlating such pixels occurring near detected edges to edge pixels, prevents their erroneous designation as noise-produced pixels.
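The expansion of edge regions described above may be sketched as a simple binary dilation (an illustrative Python sketch, not part of the original disclosure; a 3×3 structuring element is assumed here, whereas the disclosure also mentions 3×4 and 4×3 variants):

```python
def dilate_edges(edge_map):
    """Dilate a binary edge map (2-D list of 0/1 values) so that any
    pixel adjacent to a detected edge pixel is also marked as an edge
    pixel, preventing near-edge pixels from being treated as noise."""
    h, w = len(edge_map), len(edge_map[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Mark the pixel if any 3x3 neighbor is an edge pixel.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and edge_map[ny][nx]:
                        out[y][x] = 1
    return out
```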
Each non-edge pixel in the dilated luma component 104 is then analyzed against a neighboring region of pixels (e.g., a neighboring 3×3 block of pixels). Depending on the differences between the analyzed pixel and its neighbors, as computed by a Gaussian distribution engine 306, the pixel is assigned a new value according to assignment units 308-312 and output by an output unit 314.
In greater detail, the Gaussian distribution engine 306 computes a mean and a variance of the Gaussian distribution of the block or window surrounding the analyzed pixel. The deviation of the pixel from the mean of the block is computed and compared with the variance. If the difference between the pixel and the mean is much greater than the variance (e.g., greater than three times the standard deviation), the pixel is likely the result of low-light noise. In this case, the median block 308 replaces the pixel with the median of the block of pixels. If the difference between the pixel and the mean is comparable to the variance, the low-pass filter 310 replaces the analyzed pixel with the result of low-pass filtering the block of pixels. If the difference between the pixel and the mean is much less than the variance, the pixel block 312 passes the analyzed pixel to the output block 314 unchanged.
In general, the algorithm utilized by the assignment units 308-312 may be generalized by the following equations:
If {(Analyzed Pixel)−(Mean of Block of Pixels)} > N×(Variance of Block of Pixels): Output = Median of Block of Pixels (1)
If {(Analyzed Pixel)−(Mean of Block of Pixels)} > M×(Variance of Block of Pixels): Output = Result of Low-Pass Filter of Block of Pixels (2)
If {(Analyzed Pixel)−(Mean of Block of Pixels)} < P×(Variance of Block of Pixels): Output = Original Analyzed Pixel (3)
wherein P≤M≤N. That is, the output 314 is assigned the median 308 for large differences, the low-pass filter 310 result for medium differences, and the original pixel 312 for small differences. In one embodiment, the operations performed by the above equations (1)-(3) are executed by specially allocated hardware. In another embodiment, the median operation is performed by the median filter 112 and low-pass filtering is performed by the low-pass averaging filter 110.
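Equations (1)-(3) may be sketched in Python as follows (an illustrative sketch, not part of the original disclosure; the threshold values N, M, and P are example assumptions, and the block mean is used as a stand-in for the low-pass filter result, since the mean of the block is the output of a box-averaging low-pass filter):

```python
def adaptive_assign(pixel, block, N=3.0, M=1.5, P=0.5):
    """Select the output value for a non-edge pixel per equations
    (1)-(3).  `block` is a list of the neighboring pixel values and
    N >= M >= P are the thresholds (example values; illustrative)."""
    mean = sum(block) / len(block)
    variance = sum((v - mean) ** 2 for v in block) / len(block)
    diff = pixel - mean
    if diff > N * variance:                    # eq. (1): large difference
        return sorted(block)[len(block) // 2]  # median of the block
    if diff > M * variance:                    # eq. (2): medium difference
        return mean                            # low-pass (box average) result
    return pixel                               # eq. (3): small difference
```

A pixel far above its neighborhood mean relative to the neighborhood variance is treated as likely noise and replaced by the median; a moderately deviating pixel is smoothed; a pixel close to its neighbors passes through unchanged.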
In another example, another pixel 414 is analyzed and compared to its surrounding pixels 416. Here, because the difference between the analyzed pixel 414 and the mean of the block of pixels 412 is less than the first threshold N but greater than a second threshold M when compared to the variance of the block of pixels 412, the pixel 414 is replaced with the result of low-pass filtering the block 416. Finally, because the difference between a third analyzed pixel 418 and the mean of its surrounding block of pixels 420 is much less than a threshold P when compared to the variance of the block of pixels 420, the pixel 418 remains unchanged.
In one embodiment, the above-described system 300 analyzes every pixel in the luma component 104. In other embodiments, the system 300 analyzes only a subset of the total pixels in the luma component 104. For example, the system 300 may analyze only even-numbered pixels (e.g., every second pixel) in the luma component 104. The result of analyzing an even-numbered pixel may be applied not only to that pixel itself, but also to a neighboring odd-numbered pixel (e.g., a pixel adjacent to the analyzed even-numbered pixel in the same row). Because the two pixels are neighbors, the result computed for one pixel is likely to be similar to the uncomputed result of the neighboring pixel, and applying the analyzed pixel's result to both pixels may produce only a small error. Other subsets of pixels may be chosen for analysis, such as odd pixels, every Nth pixel, diagonal pixels, or rows/columns of pixels. The analyzed pixels may constitute 50% of the total pixels, as in the example above, or any other percentage of total pixels.
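The even/odd subsampling described above may be sketched as follows (an illustrative Python sketch, not part of the original disclosure; the `denoise` per-pixel filtering function is a hypothetical placeholder for the adaptive filtering described above):

```python
def filter_subsampled(row, denoise):
    """Analyze only every second (even-numbered) pixel in a row and
    copy each computed result to the adjacent odd-numbered pixel,
    halving the number of `denoise` evaluations at the cost of a
    small error.  `denoise(row, x)` returns the filtered value for
    the pixel at index x."""
    out = list(row)
    for x in range(0, len(row), 2):
        value = denoise(row, x)
        out[x] = value
        if x + 1 < len(row):
            out[x + 1] = value   # reuse the result for the neighbor
    return out
```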
In one embodiment, the system 600 may be used to divide an image into a number of regions that corresponds to a number of available filter circuits 604. Each filter circuit 604 may include a system 100, as described above.
In another embodiment, only one filter circuit 604 is used to process each image region in series. In this embodiment, the size of the image region may be defined by an amount of memory or other storage space available and/or the capabilities of the filter circuit 604. The size of the region may be adjusted to consume more or fewer resources, depending on the constraints of a particular application. For example, an application having very limited memory may require a small region. History information for rows and columns of the regions or image may be stored and managed to ease data movement when switching and/or combining image regions.
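Serial, region-by-region processing through a single filter circuit may be sketched as follows (an illustrative Python sketch, not part of the original disclosure; the `filter_fn` argument is a hypothetical stand-in for the filter circuit 604, and strips are assumed to be filterable independently):

```python
def process_in_strips(image, strip_height, filter_fn):
    """Process an image (2-D list of rows) in horizontal strips of
    `strip_height` rows through one filter in series, so only one
    strip need be resident in limited memory at a time."""
    out = []
    for y0 in range(0, len(image), strip_height):
        strip = image[y0:y0 + strip_height]  # region sized to fit memory
        out.extend(filter_fn(strip))         # filter and recombine
    return out
```

Choosing a smaller `strip_height` consumes less memory per pass at the cost of more passes, matching the resource trade-off described above.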
Applying the first filter may include low-pass filtering the region, median filtering the region, and/or adaptively filtering the region, as described above.
Embodiments disclosed herein may be provided as hardware, software, and/or firmware. For example, the systems 100, 300, 600 may be implemented on an embedded device, such as an ASIC, FPGA, microcontroller, or other similar device, and included in a video or still camera. In other embodiments, elements of the systems 100, 300, 600 may be implemented in software and included on a desktop, notebook, netbook, or handheld computer. In these embodiments, a webcam, cellular-phone camera, or other similar device may capture images or video, and the systems 100, 300, 600 may remove low-light noise therefrom. The embodiments disclosed herein may further be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The article of manufacture may be any suitable hardware apparatus, such as, for example, a floppy disk, a hard disk, a CD-ROM disk, a DVD-ROM disk, a Blu-ray disk, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs may be implemented in any programming language; some examples of languages that may be used include C, C++, and Java. The software programs may be further translated into machine language or virtual machine instructions and stored in a program file in that form. The program file may then be stored on or in one or more of the articles of manufacture.
Certain embodiments are described above. It is, however, expressly noted that the present disclosure is not limited to those embodiments; rather, additions and modifications to what is expressly described herein are also included within the scope of the present disclosure. Moreover, it is to be understood that the features of the various embodiments described herein are not mutually exclusive and can exist in various combinations and permutations, even if such combinations or permutations are not made express herein, without departing from the spirit and scope of the present disclosure. In fact, variations, modifications, and other implementations of what is described herein will occur to those of ordinary skill in the art without departing from the spirit and the scope of the present disclosure. As such, the present disclosure is not to be defined only by the preceding illustrative description.
This application is a continuation application of U.S. patent application Ser. No. 12/950,664 entitled “COMPONENT FILTERING FOR LOW-LIGHT NOISE REDUCTION” filed Nov. 19, 2010, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5331442 | Sorimachi | Jul 1994 | A |
5561723 | DesJardins et al. | Oct 1996 | A |
5661823 | Yamauchi et al. | Aug 1997 | A |
5768440 | Campanelli et al. | Jun 1998 | A |
5771318 | Fang et al. | Jun 1998 | A |
5793885 | Kasson | Aug 1998 | A |
5959693 | Wu et al. | Sep 1999 | A |
6148103 | Nenonen | Nov 2000 | A |
6167164 | Lee | Dec 2000 | A |
6229578 | Acharya et al. | May 2001 | B1 |
6259489 | Flannaghan et al. | Jul 2001 | B1 |
6272497 | Mendenhall et al. | Aug 2001 | B1 |
6389176 | Hsu et al. | May 2002 | B1 |
6452639 | Wagner et al. | Sep 2002 | B1 |
6608910 | Srinivasa et al. | Aug 2003 | B1 |
6614474 | Malkin et al. | Sep 2003 | B1 |
6621595 | Fan et al. | Sep 2003 | B1 |
6668097 | Hu | Dec 2003 | B1 |
6721458 | Ancin | Apr 2004 | B1 |
6784944 | Zhang | Aug 2004 | B2 |
6798910 | Wilson | Sep 2004 | B1 |
6928196 | Bradley et al. | Aug 2005 | B1 |
6965395 | Neter | Nov 2005 | B1 |
7142729 | Wredenhagen et al. | Nov 2006 | B2 |
7151863 | Bradley et al. | Dec 2006 | B1 |
7155058 | Gaubatz et al. | Dec 2006 | B2 |
7167595 | Hiroshige et al. | Jan 2007 | B2 |
7170529 | Chang | Jan 2007 | B2 |
7313288 | Dierickx | Dec 2007 | B2 |
7336821 | Ciuc | Feb 2008 | B2 |
7362911 | Frank | Apr 2008 | B1 |
7397964 | Brunner et al. | Jul 2008 | B2 |
7471320 | Malkin et al. | Dec 2008 | B2 |
7627192 | Yokochi | Dec 2009 | B2 |
7724307 | Wan et al. | May 2010 | B2 |
RE41402 | Kim | Jun 2010 | E |
7860337 | Zimmer | Dec 2010 | B2 |
7868950 | Samadani et al. | Jan 2011 | B1 |
7876972 | Bosco | Jan 2011 | B2 |
8009210 | Matsushima | Aug 2011 | B2 |
8149336 | Mohanty et al. | Apr 2012 | B2 |
8189943 | Yea et al. | May 2012 | B2 |
8290061 | Sang et al. | Oct 2012 | B2 |
8457433 | Hong | Jun 2013 | B2 |
8488031 | Schwartz et al. | Jul 2013 | B2 |
8699813 | Singh | Apr 2014 | B2 |
8755625 | Singh et al. | Jun 2014 | B2 |
20010012397 | Kato | Aug 2001 | A1 |
20020159650 | Hiroshige et al. | Oct 2002 | A1 |
20020181024 | Morimoto et al. | Dec 2002 | A1 |
20030048951 | Rengakuji et al. | Mar 2003 | A1 |
20030185463 | Wredenhagen et al. | Oct 2003 | A1 |
20030189655 | Lim et al. | Oct 2003 | A1 |
20030190092 | Dyas et al. | Oct 2003 | A1 |
20050013363 | Cho et al. | Jan 2005 | A1 |
20050036062 | Kang et al. | Feb 2005 | A1 |
20050276505 | Raveendran | Dec 2005 | A1 |
20060023794 | Wan et al. | Feb 2006 | A1 |
20060039590 | Lachine et al. | Feb 2006 | A1 |
20060110062 | Chiang et al. | May 2006 | A1 |
20060146193 | Weerasinghe et al. | Jul 2006 | A1 |
20060181643 | De Haan | Aug 2006 | A1 |
20060232709 | Renner et al. | Oct 2006 | A1 |
20060294171 | Bossen et al. | Dec 2006 | A1 |
20070040914 | Katagiri et al. | Feb 2007 | A1 |
20070091187 | Lin | Apr 2007 | A1 |
20070140354 | Sun | Jun 2007 | A1 |
20070183684 | Bhattacharjua | Aug 2007 | A1 |
20080085061 | Arici et al. | Apr 2008 | A1 |
20080088719 | Jacob | Apr 2008 | A1 |
20080112640 | Park et al. | May 2008 | A1 |
20080123979 | Schoner | May 2008 | A1 |
20080199099 | Michel et al. | Aug 2008 | A1 |
20080205786 | Young | Aug 2008 | A1 |
20080239153 | Chiu | Oct 2008 | A1 |
20080240602 | Adams | Oct 2008 | A1 |
20080317377 | Saigo et al. | Dec 2008 | A1 |
20090016603 | Rossato et al. | Jan 2009 | A1 |
20090033773 | Kinoshita | Feb 2009 | A1 |
20090129695 | Aldrich | May 2009 | A1 |
20090147111 | Litvinov | Jun 2009 | A1 |
20090154800 | Kojima et al. | Jun 2009 | A1 |
20090175535 | Mattox | Jul 2009 | A1 |
20090208106 | Dunlop et al. | Aug 2009 | A1 |
20090219379 | Rossato et al. | Sep 2009 | A1 |
20090278961 | Mohanty | Nov 2009 | A1 |
20090290067 | Ishiga | Nov 2009 | A1 |
20100020208 | Barbu | Jan 2010 | A1 |
20100021075 | Majewicz | Jan 2010 | A1 |
20100142843 | Chen | Jun 2010 | A1 |
20100182968 | Ojala et al. | Jul 2010 | A1 |
20110090351 | Cote et al. | Apr 2011 | A1 |
20110090370 | Cote et al. | Apr 2011 | A1 |
20110317045 | Vakrat et al. | Dec 2011 | A1 |
20120127370 | Singh et al. | May 2012 | A1 |
20120128243 | Singh et al. | May 2012 | A1 |
20120128244 | Singh et al. | May 2012 | A1 |
20120154596 | Wajs | Jun 2012 | A1 |
Number | Date | Country |
---|---|---|
1150248 | Oct 2001 | EP |
10-0872253 | Dec 2008 | KR |
10-1537295 | Jul 2015 | KR |
2005065115 | Jul 2005 | WO |
Entry |
---|
Final Office Action for U.S. Appl. No. 12/950,671 mailed Oct. 24, 2014, 16 pages. |
R. Jha and M. E. Jernigan, “Edge adaptive filtering: How much and which direction?”, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 364-366, 1989. |
H. Adelmann, “An edge-sensitive noise reduction algorithm for image processing”, Computers in Biology and Medicine 29, 1999, p. 137-145. |
R. Wallis, “An approach to the space variant restoration and enhancement of images”, Proceedings, Symposium on Current Mathematical Problems in Image Science, 1976, p. 107-111. |
J. Lee, “Refined Filtering of Image Noise Using Local Statistics”, Computer Graphics and Image Processing 15, 1981, p. 380-389. |
Translation of Korean Office Action for KR Patent Application Serial No. 2013-7015695 mailed Oct. 13, 2014, 4 pages. |
Translation of Response to Korean Office Action for KR Patent Application Serial No. 2013-7015695 filed Dec. 15, 2014, 8 pages. |
Translation of Notice of Allowance in Korean Patent Application Serial No. 2013-7015695 mailed Apr. 13, 2015, 1 page. |
Office Action issued in Chinese Patent Application Serial No. 201180062954.X mailed Sep. 30, 2015, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 12/950,664 mailed Dec. 7, 2012. |
Response to Non-Final Office Action for U.S. Appl. No. 12/950,664 filed Jun. 7, 2013. |
Non-Final Office Action for U.S. Appl. No. 12/950,664 mailed Sep. 6, 2013. |
Response to Non-Final Office Action for U.S. Appl. No. 12/950,664 filed Dec. 6, 2013. |
Notice of Allowance for U.S. Appl. No. 12/950,664 mailed Feb. 3, 2014. |
Non-Final Office Action for U.S. Appl. No. 12/950,666 mailed Dec. 26, 2012. |
Response to Non-Final Office Action for U.S. Appl. No. 12/950,666 filed Apr. 26, 2013. |
Notice of Allowance for U.S. Appl. No. 12/950,666 mailed Aug. 5, 2013. |
Non-Final Office Action for U.S. Appl. No. 12/950,671 mailed Mar. 21, 2013. |
Response to Non-Final Office Action for U.S. Appl. No. 12/950,671 filed Jun. 20, 2013. |
Final Office Action for U.S. Appl. No. 12/950,671 mailed Aug. 26, 2013. |
Non-Final Office Action for U.S. Appl. No. 12/950,671 mailed Mar. 27, 2014. |
International Search Report and Written Opinion mailed on Jun. 11, 2012 for International Application No. PCT/US2011/060756. |
Kinabalu, Kota, “Impulse Detection Adaptive Fuzzy (IDAF) Filter,” http://www.computer.org/portal/web/scd1/doi/10.1109/ICCTD.2009.157. |
Justin Reschke, “Parallel Computing”, Sep. 14, 2004, http://www.cs.ucf.edu/courses/cot4810/fal104/presentations/Parallel—Computing.ppt, p. 1-28. |
Blaise, Barney, “Introduction to Parallel Computing”, May 27, 2010, http://web.archive.org/web/20100527181410/http://computing.llnl.gov/tutorials/parallel—comp/, p. 1-34. |
“Introduction to Parallel Programming”, Jun. 27, 2010, http://web.archive.org/web20100627070018/http://static.msi.umn.edu/tutorial/scicomp/general/intro—parallel—prog/content.html, p. 1-12. |
“Introduction to Parallel Programming Concepts”, date unknown, http://rcc.its.psu/education/workshops/pages/parwork/introctiontoParallelProgrammingConcepts.pdf, p. 1-124. |
Examination Report issued in EP Patent Application Serial No. 11799339.4 mailed Jan. 5, 2016, 6 pages. |
Reasons from the Summons to Attend Oral Proceedings issued in EP Patent Application Serial No. 11799339.4 mailed Sep. 21, 2016, 3 pages. |
Number | Date | Country | |
---|---|---|---|
20140241636 A1 | Aug 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12950664 | Nov 2010 | US |
Child | 14269979 | US |