Some digital cameras have interchangeable lenses. When changing a lens on such a digital camera, dust particles can enter the space between the lens and an optical low-pass filter positioned on top of a sensor. The dust particles inhibit light from reaching the sensor, resulting in artifacts that appear as dark spots on a digital image. A user usually does not realize that a digital image is contaminated by dust particles until sometime after the digital image is captured, by which time it is too late to do anything about the contamination in the already-captured digital image. Techniques are known for physically cleaning dust particles from the sensor. However, physically cleaning dust from the sensor is difficult and, if one is not careful, the sensor may become damaged.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first embodiment consistent with the subject matter of this disclosure, a user interface may be provided, such that a user may indicate an approximate location of an artifact appearing in a digital image. In some embodiments the user interface may permit the user to provide an approximate size of the artifact. After receiving the user input, parameters of a dust attenuation field may be estimated and an inverse transformation of the dust attenuation field may be applied to pixels in an approximate location of the artifact in order to photometrically adjust the pixels. To complement the above-mentioned method for photometrically adjusting the pixels, in some embodiments a target patch area, which includes pixels damaged by the artifact and undamaged pixels, may be selected and a pixel distance function may be applied to the selected target patch area and each of a number of candidate source patches. At least one candidate source patch may be selected based, at least in part, on a pixel distance determined by the pixel distance function. Damaged pixels of the target patch area may be restored based on corresponding pixels of the at least one selected candidate source patch.
In a second embodiment, a user interface may be provided to permit a user to provide an indication of an approximate location of an artifact in a digital image. An approximate size of the artifact may be estimated and a target patch area may be selected from a number of target patch areas. The selected target patch area includes at least some pixels damaged by the artifact and other undamaged pixels with known values. One or more candidate source patches may be selected based on a pixel distance function applied to the selected target patch area and each candidate source patch. Attenuation may be estimated based on values of pixels in the selected target patch area and corresponding pixels of the one or more candidate source patches. A median filter may be applied to improve the estimated attenuation. Applying an inverse of the improved estimated attenuation may recover at least some structure of the underlying digital image. When damaged pixels are mildly attenuated, application of the inverse of the improved estimated attenuation may be enough to recover the mildly attenuated pixels. Otherwise, another step may be performed, such as, for example, selecting a single candidate source patch and copying, or cloning, RGB values of pixels of the single candidate source patch to corresponding damaged pixels of the selected target patch area in order to restore the corresponding damaged pixels.
In a variation of the second embodiment, an attenuation field determined for another digital image captured by a same digital image capturing device used to capture the digital image may be used to restore values of damaged pixels.
In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description is provided below and will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered limiting of scope, implementations will be described and explained with additional specificity and detail through the use of the accompanying drawings.
Embodiments are discussed in detail below. While specific implementations are discussed, it is to be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the subject matter of this disclosure.
In embodiments consistent with the subject matter of this disclosure, methods and a processing device are disclosed that restore pixels of a digital image damaged by artifacts caused by dust or other particles entering a digital image capturing device such as, for example, a digital camera or other device.
In one embodiment, a user interface may be provided for a user to indicate an approximate location of an artifact appearing in a digital image. The artifact may be caused by dust or other particles that have entered a digital image capturing device. The user may indicate the approximate location of the artifact by using a pointing device to select a location at an approximate central portion of the artifact. The pointing device may be: a computer mouse; a trackball; a finger on a touchpad; a stylus, a finger or an electronic pen making contact with a touchscreen; keys on a keyboard; or other type of pointing device. In some embodiments, the user may indicate an approximate size of the artifact. For example, in an embodiment using a computer mouse, a scroll wheel of the mouse may be used to increase or decrease a user-indicated approximate size of the artifact. In other embodiments, other methods may be used to indicate an approximate size of the artifact.
After receiving user input indicating an approximate location of the artifact in the digital image and an approximate size of the artifact, parameters of a dust attenuation field may be estimated. An inverse transformation of the dust attenuation field may be applied to pixels at the approximate location of the artifact in order to recover at least a structure of the underlying digital image.
A target area may be segmented into a number of target patch areas. Each of the target patch areas may include undamaged pixels with known values and damaged pixels with unknown or corrupted values. One of the target patch areas may be selected and at least one candidate source patch may be selected based on a patch distance function applied to the selected one of the target patch areas and each of a number of candidate source patches. Values of the damaged pixels of the target patch area may be restored based on values of corresponding pixels of the at least one selected candidate source patch.
In a second embodiment, user input may be received indicating an approximate location of an artifact in a digital image. The artifact may be caused by dust or other particles that entered a digital image capturing device. An approximate size of the artifact may be estimated and a target patch area may be selected from a number of target patch areas, each of which includes at least some pixels damaged by the artifact and pixels with known values. At least one candidate source patch may be selected based on a patch distance function applied to the selected target patch area and each of a number of candidate source patches. Attenuation may be estimated based on values of pixels in the selected target patch area and corresponding pixels of the at least one selected candidate source patch.
After estimating attenuation of pixels in all target patch areas, a median filter may be applied to the pixels to improve the estimated attenuation. An inverse of the estimated attenuation may be applied to damaged pixels of the target patch areas in order to recover at least some structure of the underlying digital image.
Pixels of the target patch areas may further be recovered by determining a candidate source patch for each of the target patch areas. Unrecoverable pixels may be defined as pixels that have been attenuated beyond a given threshold. Values of corresponding pixels of the determined candidate source patches may be copied to corresponding unrecoverable pixels in order to restore the unrecoverable pixels.
In a variation of the second embodiment, an attenuation field of another digital image captured by a same digital image capturing device may be employed to restore values of damaged pixels.
Processor 120 may include one or more conventional processors that interpret and execute instructions, including, but not limited to a central processing unit (CPU) and a graphics processing unit (GPU). A memory may include RAM 130, ROM 140, and/or another type of dynamic or static storage device that stores information and instructions for execution by processor 120. RAM 130, or another type of dynamic storage device, may store instructions as well as temporary variables or other intermediate information used during execution of instructions by processor 120. ROM 140, or another type of static storage device, may store static information and instructions for processor 120.
Input device 150 may include a keyboard, a pointing device, an electronic pen, a touchscreen, or other device for providing input. Output device 160 may include a display, a printer, or other device for outputting information.
Processing device 100 may perform functions in response to processor 120 executing sequences of instructions contained in a tangible machine-readable medium, such as, for example, RAM 130, ROM 140 or other medium. Such instructions may be read into RAM 130 from another tangible machine-readable medium or from a separate device via a communication interface (not shown).
In some embodiments, a user may provide user input indicating only an approximate location of an artifact. The processing device may estimate a size of the artifact and may indicate the estimate of the size using an indication such as, for example, indication 202.
A distribution of red channel, green channel, and blue channel values of pixels within an area damaged by an artifact is different from a distribution of red channel, green channel, and blue channel values of pixels in an undamaged area, or band, surrounding the damaged area.
An image formation model is as follows:
I(x)=I′(x)+D(x),
where I is a final digital image, x=(x, y) indicates a position of a pixel, I′ indicates an underlying, unknown true image and D is dust. In an alternate embodiment, a multiplicative image formation model, such as, for example,
I(x)=I′(x)×D(x),
may be used.
In various embodiments, dust may be modeled as an isotropic two-dimensional Gaussian as follows:
D(x) = α exp(−(x−μ)²/(2σ²)),
where μ is an offset that reflects a spatial position of the Gaussian distribution, that is, a location of a dust center; σ is a standard deviation of the Gaussian distribution and reflects a size of an artifact caused by the dust; and α represents an intensity of dust attenuation. In some implementations, dust may instead be modeled as a two-dimensional non-isotropic Gaussian, which may not be axis-aligned.
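As an illustration, the isotropic Gaussian dust model above may be evaluated over a pixel grid as follows. This is a sketch only; the NumPy implementation and the parameter values are illustrative and are not part of the disclosed embodiments.

```python
import numpy as np

def dust_model(shape, alpha, mu, sigma):
    """Isotropic two-dimensional Gaussian dust model:
    D(x) = alpha * exp(-||x - mu||^2 / (2 * sigma^2))."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    sq_dist = (xs - mu[0]) ** 2 + (ys - mu[1]) ** 2
    return alpha * np.exp(-sq_dist / (2.0 * sigma ** 2))

# Illustrative parameters: a dust spot centered at pixel (16, 16),
# sigma of 4 pixels, peak attenuation intensity alpha of 0.8.
D = dust_model((32, 32), alpha=0.8, mu=(16.0, 16.0), sigma=4.0)
```

The field peaks at α at the dust center μ and decays toward zero with distance, with σ controlling the apparent size of the artifact.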
In embodiments in which artifacts are caused by other types of particles, including, but not limited to, water droplets on a lens, a model different from the dust model D(x) may be used.
Next, the processing device may estimate the parameters α, μ, and σ of a dust model, such as the dust model D(x) = α exp(−(x−μ)²/(2σ²)) previously described (act 606). Due to dust, the distributions of pixel values in the red channel, the green channel, and the blue channel of a damaged region and of a small surrounding region differ from one another, as previously discussed.
Next, the processing device may select a target patch area from among a number of target patch areas (act 608). A portion of the selected target patch area includes at least part of a damaged, or corrupted, region, ψ, and a remaining portion, Ω, of the selected target patch area may be outside of the damaged, or corrupted, region. In one implementation, a target patch area may be selected according to an onion peel ordering. That is, target patch areas located along an edge of the damaged, or corrupted, region may be selected before target patch areas located closer to a central portion of the damaged region. In another embodiment, a target patch area may be selected based, at least in part, on having a per pixel average confidence level higher than per pixel average confidence levels of other target patch areas. For example, initially, a pixel outside of the damaged, or corrupted, region may have a confidence level of 1 and a pixel inside the damaged, or corrupted, region may have a confidence level of 0. A priority function for selecting a target patch area may be modulated by a presence of strong image gradients. Thus, for example, pixels which lie on a continuation of strong edges may be assigned a higher priority value than other pixels.
The processing device may apply an inverse transformation of the dust model, D(x), to pixels within the damaged, or corrupted, region in order to photometrically recover an estimate of the underlying digital image (act 610).
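The inverse transformation of act 610 may be sketched as follows under the multiplicative formation model I(x) = I′(x) × D(x). The reading of D as producing a per-pixel attenuation factor (1 − D(x)), and the clamping constant, are assumptions made for illustration only.

```python
import numpy as np

def photometric_recover(observed, dust, floor=0.1):
    """Inverse transformation under the multiplicative model.

    Assumption (illustrative): each observed pixel equals the true
    pixel scaled by (1 - D(x)), so recovery divides by that factor.
    `floor` clamps the divisor so that heavily attenuated pixels are
    not amplified without bound."""
    atten = np.clip(1.0 - dust, floor, 1.0)
    return observed / atten

true_img = np.full((8, 8), 200.0)   # hypothetical underlying image
dust = np.zeros((8, 8))
dust[4, 4] = 0.3                    # one mildly attenuated pixel
observed = true_img * (1.0 - dust)
recovered = photometric_recover(observed, dust)
```

For mildly attenuated pixels, this division alone photometrically recovers the underlying values.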
Next, the processing device may calculate a pixel distance between the selected target patch area, T, and each candidate source patch, S, according to
with 0 < λ ≤ 1. In one embodiment, λ may have a value of 0.9 (act 612).
One or more candidate source patches may then be selected based on having a smallest pixel distance or pixel distances from among multiple candidate source patches (act 614). For example, if one candidate source patch is selected, the one candidate source patch may be selected based, at least in part, on having a pixel distance smaller than those of the other candidate source patches, with respect to the target patch area. If N candidate source patches are selected, the N candidate source patches, collectively, may be selected based on having the N smallest pixel distances from among the candidate source patches, with respect to the target patch area.
Next, the processing device may restore the selected target patch area based on the selected one or more candidate source patches (act 616). If only one candidate source patch is selected, then pixels of the target patch area, which are included in the damaged, or corrupted, region, may be restored by copying, or cloning, values of the red channel, the green channel, and the blue channel of corresponding pixels of the selected candidate source patch. In an embodiment in which N candidate source patches are selected, the pixels of the target patch area, which are included in the damaged region, may be restored according to
T′(x)=Σi Si(x)×ωi for each x ∈ ψ, with i=1, . . . , N
where T′(x) is a restored pixel and ωi is a patch-based weight function of a pixel distance d(T,Si) (the larger the pixel distance, the smaller the weight).
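The candidate selection and weighted restoration of acts 612-616 may be sketched as follows. A simple sum-of-squared-differences stands in for the disclosure's pixel distance function, and the inverse-distance weights ωi are one plausible realization of the weight function described above, not the definitive one.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-sized patches
    (a simplified stand-in for the pixel distance function)."""
    return float(np.sum((a - b) ** 2))

def restore_patch(target, mask, candidates, n=2):
    """Restore masked pixels of `target` from the n closest candidate
    source patches.  `mask` is True on damaged pixels; distances are
    computed only over the undamaged portion, and each candidate's
    weight is inversely proportional to its pixel distance."""
    dists = [ssd(target[~mask], c[~mask]) for c in candidates]
    order = np.argsort(dists)[:n]
    w = np.array([1.0 / (dists[i] + 1e-9) for i in order])  # avoid /0
    w /= w.sum()
    blend = sum(wi * candidates[i] for wi, i in zip(w, order))
    restored = target.copy()
    restored[mask] = blend[mask]
    return restored

tgt = np.array([[1.0, 2.0], [3.0, 0.0]])
msk = np.array([[False, False], [False, True]])   # one damaged pixel
cand_a = np.array([[1.0, 2.0], [3.0, 4.0]])       # matches undamaged pixels
cand_b = np.array([[9.0, 9.0], [9.0, 9.0]])
restored = restore_patch(tgt, msk, [cand_a, cand_b], n=1)
```

With n = 1 this degenerates to copying, or cloning, values from the single closest candidate source patch, as in act 616.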
The processing device may then estimate an approximate size of dust or a particle which caused the artifact (act 704).
The process may begin with the processing device labeling a large area as damaged, based on the user input which provides the approximate location of the artifact (act 802). For example, the large area may be a circular area having a 60 pixel radius or another suitable size. Next, the processing device may select a damaged pixel p from the damaged large area (act 804). The processing device may then interpolate the damaged pixel p according to
where k is a number of undamaged reference pixels on a boundary of the damaged area, PeripheralPixeli refers to an ith undamaged reference pixel, ri is a distance from the damaged pixel p to the ith undamaged reference pixel, and SD1(Damaged pixel p, PeripheralPixeli) is a pixel distance formula defined as
(Crg1−Crg2)² + (Crb1−Crb2)² + (Cgb1−Cgb2)² + λ(Dy1−Dy2)²
where a subscript of 1 refers to the damaged pixel p, a subscript of 2 refers to an undamaged reference pixel, λ is a weighting coefficient, Crg=(Rβ−Gβ)/(Rβ+Gβ), Cgb=(Gβ−Bβ)/(Gβ+Bβ), Crb=(Rβ−Bβ)/(Rβ+Bβ), R, G and B correspond to red, green and blue channels, respectively, β is either 1 or 2,
Y is luminance and equals √(R² + G² + B²), or Y may be calculated according to Y = 0.299R + 0.587G + 0.114B, and Ŷ is a local average of Y (act 806).
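The SD1 pixel distance may be sketched as follows. Treating the Dy term as luminance minus its local average Ŷ is an assumption made for illustration, since the text does not spell that term out; channel values are assumed positive so the chromaticity denominators are nonzero.

```python
def chroma(r, g, b):
    """Chromaticity coordinates Crg, Cgb, Crb used by SD1
    (channel values assumed positive)."""
    return ((r - g) / (r + g), (g - b) / (g + b), (r - b) / (r + b))

def sd1(p1, p2, y_hat1, y_hat2, lam=1.0):
    """SD1 pixel distance between two RGB pixels.

    Assumption (illustrative): Dy = Y - Y_hat, i.e. luminance minus
    its local average; lam is the weighting coefficient lambda."""
    def lum(p):
        r, g, b = p
        return 0.299 * r + 0.587 * g + 0.114 * b
    c1, c2 = chroma(*p1), chroma(*p2)
    dy1 = lum(p1) - y_hat1
    dy2 = lum(p2) - y_hat2
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) + lam * (dy1 - dy2) ** 2
```

Because the first three terms depend only on channel ratios, two pixels that differ mainly in brightness (as under dust attenuation) remain close under SD1, which is why color-consistent surrounding pixels dominate the interpolation.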
According to the interpolation formula, surrounding pixels that are color-consistent with dust-attenuation content receive a much higher weighting coefficient than other pixels.
The processing device may then estimate attenuation of the pixel p, α(p), according to
where Y(Damaged pixel p) is luminance of the damaged pixel p and Y(Interpolated pixel p) is luminance of the interpolated pixel p (act 808). The estimated attenuation for pixel p, α(p), may then be saved (act 810).
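Acts 806-808 may be sketched as follows. Because the interpolation weighting formula is not reproduced above, the weights are supplied by the caller here; in practice they would be derived from the distances ri and the SD1 color differences, as the surrounding text describes.

```python
def luminance(r, g, b):
    """Y = 0.299R + 0.587G + 0.114B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def interpolate_pixel(peripheral, weights):
    """Weighted average of undamaged boundary (peripheral) pixels
    (act 806).  `peripheral` is a list of (R, G, B) triples; `weights`
    stands in for the weighting formula, which is not reproduced."""
    total = sum(weights)
    return tuple(sum(w * p[c] for w, p in zip(weights, peripheral)) / total
                 for c in range(3))

def estimate_attenuation(damaged, interpolated):
    """alpha(p) = Y(damaged pixel p) / Y(interpolated pixel p)
    (act 808)."""
    return luminance(*damaged) / luminance(*interpolated)
```

For example, a damaged pixel at half the luminance of its interpolated estimate yields an attenuation of 0.5.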
The processing device may then determine whether there are any additional damaged pixels in the large area to interpolate (act 812). If there are any additional damaged pixels to interpolate, then the processing device may select a next damaged pixel p in the large area (act 814) and acts 806-810 may again be performed.
If, during act 812, the processing device determines that there are no additional damaged pixels to interpolate, then the attenuations may be integrated into a histogram according to each corresponding pixel's distance from a central portion of the large area (act 816). The processing device may estimate a dust size as a minimum radius, from a central portion of the large area, for which attenuation is greater than θ (act 818). In some embodiments, θ may be set to 0.99. In other embodiments, θ may be set to a different value.
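Acts 816-818 may be sketched as follows. Binning attenuations by integer radius stands in for the histogram described above; recall that an attenuation near 1 indicates an essentially undamaged pixel, so the estimated radius is the first ring whose mean attenuation exceeds θ.

```python
import numpy as np

def estimate_dust_radius(attenuation, center, theta=0.99):
    """Estimate dust size as the minimum radius from `center` at which
    the binned mean attenuation rises above theta (acts 816-818)."""
    ys, xs = np.indices(attenuation.shape)
    r = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2).astype(int)
    for radius in range(int(r.max()) + 1):
        ring = attenuation[r == radius]
        if ring.size and ring.mean() > theta:
            return radius
    return int(r.max())
```

On a synthetic field that is fully attenuated to 0.5 within 3 pixels of the center and clean outside, the estimate is 3.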
Returning to
Per pixel priority may be calculated according to C(p)D(p), where C(p) is a confidence level of a pixel p in a target patch area and D(p) is a gradient of the pixel p in the target patch area. D(p) is a function of a strength of isophotes hitting a target front (boundary fringe) at each iteration. D(p) is defined as a gradient of a signal in a direction parallel to the front. The confidence level C(p) is a measure of an amount of reliable information surrounding pixel p. The confidence level is an average of the confidence levels of all pixels in a considered patch neighborhood. Pixels outside the damaged region have an initial (and maximal) confidence level of 1, and pixels inside the damaged, "unrestored" region have an initial confidence level of 0. Confidence levels of restored pixels may then be iteratively deduced during patch application. The confidence levels of the restored pixels may be set equal to the average confidence levels of neighboring pixels within a considered target patch. See Criminisi et al., "Region Filling and Object Removal by Exemplar-Based Image Inpainting," IEEE Transactions on Image Processing, vol. 13, no. 9, pp. 1200-1212, September 2004. In some embodiments, when the dust attenuation is low (for example, less than a particular threshold), the priority of the pixel may be calculated according to C(p)D(p)α(p), where α(p) is an estimated dust attenuation of the pixel. Because D(p) may be zero, in some embodiments per pixel priority may be calculated according to C(p)(D(p)+ε), where ε may have a small value.
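The priority computation may be sketched as follows, using the C(p)(D(p)+ε) variant. The supplied gradient array stands in for the isophote-strength term D(p), whose full computation (following Criminisi et al.) is beyond this sketch.

```python
import numpy as np

def patch_priority(confidence, gradient, mask, patch=3, eps=1e-3):
    """Per-pixel priority C(p) * (D(p) + eps) for fill-front pixels.

    `confidence` starts at 1 outside the damaged region and 0 inside;
    C(p) is the mean confidence over a patch neighborhood.  `gradient`
    stands in for the isophote term D(p); `mask` is True on damaged,
    unrestored pixels."""
    h, w = confidence.shape
    half = patch // 2
    pri = np.zeros_like(confidence)
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue  # priority only needed for unfilled pixels
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            c = confidence[y0:y1, x0:x1].mean()
            pri[y, x] = c * (gradient[y, x] + eps)
    return pri
```

Target patch areas centered on the highest-priority pixels are then restored first, so strong edges are continued into the damaged region before flat areas are filled.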
After selecting the target patch area, the processing device may determine at least one candidate source patch from a number of candidate source patches having a closest or smallest pixel distance(s) with respect to the target patch area (act 708). Any of the following pixel distance formulas may be used to determine the pixel distance:
where δ is a normalization constant and a weighting coefficient and γ(pixel1) is a weighting coefficient that favors undamaged pixels. In some embodiments, γ(pixel1) may be a luminance of pixel1. SD0(pixel1, pixel2) may be determined according to
(R1−R2)² + (G1−G2)² + (B1−B2)²  (Dist3)
where R, G, B refer to values of the red channel, the green channel, and the blue channel, respectively. A subscript of 1 refers to pixel 1 and a subscript of 2 refers to pixel 2. SD1(pixel1, pixel2) may be determined according to the following formula:
luminance, Ŷ is a local average of Y, and λ is a weighting coefficient.
Next, the processing device may estimate attenuation for the damaged pixels in the target patch area and a dust size (act 710).
The process may begin with the processing device calculating a recalibration factor (RF) (act 902). The recalibration factor may be calculated according to
That is, the recalibration factor (RF) is equal to an average luminance of pixels in a non-dust-attenuated region of the at least one selected candidate source patch divided by an average luminance of pixels in a non-dust-attenuated region of the target patch area (act 902). The processing device may then calculate attenuation (Att) of a red channel according to
and
where Rtarget∈ψ refers to a red channel value of a pixel in a damaged target area, Gtarget∈ψ refers to a green channel value of a pixel in the damaged target area, Btarget∈ψ refers to a blue channel value of a pixel in the damaged target area, Rsource∈Ω refers to a red channel value of a pixel in an undamaged source patch, Gsource∈Ω refers to a green channel value of a pixel in the undamaged source patch, and Bsource∈Ω refers to a blue channel value of a pixel in the undamaged source patch (act 908). A worst-case attenuation may be calculated according to Att_wc = min(Attred, Attgreen, Attblue) (act 910). Ideally, the attenuation in each channel is equal to Att_wc; if this is not the case, the estimated attenuation may be considered suspect.
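Acts 902-910 may be sketched as follows. Because the per-channel formulas are not reproduced above, the sketch assumes that the recalibration factor normalizes the source patch's brightness to the target's before channel ratios are taken; that assumption is illustrative, not definitive.

```python
import numpy as np

LUM = np.array([0.299, 0.587, 0.114])  # luminance weights

def channel_attenuation(target, source, damaged):
    """Per-channel attenuation of a damaged target patch against a
    candidate source patch (acts 902-910).  `target` and `source` are
    (h, w, 3) RGB patches; `damaged` is True over the region psi.

    Assumption (illustrative): RF rescales the source to the target's
    brightness over the undamaged region before ratios are taken."""
    clean = ~damaged
    rf = (source[clean] @ LUM).mean() / (target[clean] @ LUM).mean()
    att = [rf * target[..., c][damaged].mean()
           / source[..., c][damaged].mean() for c in range(3)]
    att_wc = min(att)  # worst-case attenuation, Att_wc (act 910)
    return att, att_wc
```

When the three channel attenuations agree with Att_wc the estimate is trusted; large disagreement flags the estimate as suspect.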
The processing device may then roughly estimate a dust size by using traditional second-order statistics, such as, for example, the variance and standard deviation of a dust attenuation distribution (act 912).
When more than one candidate source patch is selected during act 708, such as, for example, N candidate source patches, where N is an integer greater than or equal to 2, N of the candidate source patches having N smallest pixel distances, with respect to the target patch area, may be selected. To estimate a damaged signal in the target patch area, an average of the selected N candidate source patches may be used. Each pixel of the N selected candidate source patches may contribute to a final attenuation estimate with respective weights inversely proportional to a pixel distance, as defined by distance formula Dist2, above, multiplied by a geometrical distance to the target patch area (the closer a candidate source patch is to the target patch area, the greater the weight).
Returning to
If, during act 712, the processing device determines that an attenuation has been estimated for all of the damaged pixels in the digital image, then the processing device may apply a median filter to improve the attenuation field by filtering out most high-pass signals while preserving possible sharp contours (act 716). A size of the median filter is proportional to the estimated dust size.
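The median filtering of act 716 may be sketched with a fixed 3×3 window; as stated above, the disclosure scales the window with the estimated dust size, so the fixed size here is an illustrative simplification.

```python
import numpy as np

def median_filter_3x3(field):
    """3x3 median filter over an attenuation field (act 716).
    Edge padding replicates border values; the median suppresses
    isolated high-pass outliers while preserving sharp contours."""
    padded = np.pad(field, 1, mode='edge')
    windows = [padded[dy:dy + field.shape[0], dx:dx + field.shape[1]]
               for dy in range(3) for dx in range(3)]
    return np.median(np.stack(windows), axis=0)
```

A single spurious attenuation estimate surrounded by consistent neighbors is replaced by the neighborhood median, improving the field before the inverse is applied.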
Next, the processing device may apply an inverse attenuation to regions having a low dust attenuation and a good signal-to-noise ratio to photometrically adjust the damaged pixels (act 718). In some embodiments, a low dust attenuation and a good signal-to-noise ratio may be assumed where the improved attenuation is greater than a threshold, such as, for example, 0.5, or another suitable value. The dust attenuation may be considered unrecoverable where attenuation α<threshold, or where luminance recovered by applying the inverse attenuation differs too much from the dust-attenuated luminance, such as, for example, where:
The damaged pixels may be considered unreliable when applying an inverse attenuation would result in an attenuation α that differs too much from luminance recovered using “Att_wc”. That is, inverse attenuation may not be applied to recover damaged pixels when
In the above formulas, Y may be calculated according to 0.299R+0.587G+0.114B.
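Act 718 and the unreliability labeling above may be sketched as follows, using the example threshold of 0.5 from the text; pixels below the threshold are left for restoration by candidate source patch cloning (acts 720-726).

```python
import numpy as np

def apply_inverse_attenuation(image, atten, threshold=0.5):
    """Apply inverse attenuation where it is reliable (act 718).

    Pixels whose attenuation falls below `threshold` are labeled
    unrecoverable and left unchanged for later patch cloning; 0.5 is
    the example threshold given in the text."""
    recoverable = atten >= threshold
    out = image.astype(float).copy()
    out[recoverable] = image[recoverable] / atten[recoverable]
    return out, ~recoverable
```

Mildly attenuated pixels are fully recovered by the division; the returned mask identifies the regions that still require cloning from a candidate source patch.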
The processing device may then select a target patch area, beginning initially with a first target patch area selected during act 706, and subsequently may select target patch areas in a priority order (act 720). The processing device may then determine whether the selected target patch area has a region labeled as unreliably recovered (act 722).
If the selected target patch area does include a region labeled as unreliably recovered, then the processing device may select a single candidate source patch based on having a shortest pixel distance, with respect to the selected target patch area, according to
(act 724). The processing device may then restore damaged pixels by copying, or cloning, the values of the red channel, the green channel, and the blue channel from pixels in the selected single candidate source patch to corresponding damaged pixels in the selected target patch area (act 726).
The processing device may then determine whether all damaged pixels of the digital image have been restored (act 728). If all of the damaged pixels of the digital image have not been restored, then act 720 may again be performed to select a next target patch area. Otherwise, the process may be completed.
If, during act 722, the processing device determines that the selected target patch area does not have a region labeled as unreliably recovered, then the applying of the inverse attenuation during act 718 has already restored the target patch area and the processing device may then determine whether all damaged pixels of the digital image have been restored (act 728), as previously described.
The process may begin with receiving, or obtaining, an attenuation field of the other digital image (act 1002). The processing device may then select a target patch area to restore in a same manner as was done during act 706 (act 1004). The processing device may then determine whether the attenuation field is a good attenuation field (act 1006). The processing device may determine that the attenuation field is a good attenuation field based on pixel recovery performance for the other digital image, or via another method.
Acts 1008-1028 may be identical to acts 708-728 of
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Other configurations of the described embodiments are part of the scope of this disclosure. For example, in other embodiments, an order of acts performed by a process, such as the processes illustrated in
Number | Date | Country
---|---|---
20100303380 A1 | Dec 2010 | US