The invention relates generally to the field of image processing, and in particular to an image processing method for removing streaks in multi-band digital images (images consisting of three or more spectral bands). The invention is particularly useful in removing streaking in multi-band digital images that are acquired by a linear image sensing array, but may also be used to remove streaks in conventional color film that are caused by the camera or processing equipment.
Multi-band imaging sensors typically are designed such that each band of the imaging system is sensitive to a pass-band of electromagnetic radiation. For example, a standard color imaging system consists of three bands (or arrays of detectors) sensitive to red, green, and blue light, respectively. Imaging systems such as multi-spectral or hyper-spectral systems contain many detector bands. These systems may contain spectral bands sensitive to non-visible parts of the electromagnetic spectrum, such as NIR (near-infrared), SWIR (short-wave infrared), or MWIR (mid-wave infrared), in addition to bands sensitive to red, green, and blue light. Color composite imagery is commonly formed from multi-spectral imagery and hyper-spectral imagery by mapping three selected bands to the red, green, and blue bands of an output display device such as a video monitor or a digital color printer.
Every detector of a given spectral band in an electronic image sensor, such as a CCD image sensor, may have a different response function that relates the input intensity of light (or other electromagnetic radiation) to a pixel value in the digital image. This response function can change with time or operating temperature. Image sensors are calibrated such that each detector in a given spectral band has the same response for the same input intensity (illumination radiance). The calibration is generally performed by illuminating each detector of the spectral band with a given radiance from a calibration lamp and then recording the signal from each detector to estimate the response function of the detector. The response function for each detector is used to equalize the output of all of the detectors such that a uniform illumination across all of the detectors will produce a uniform output. This calibration is typically performed separately for each band of a multi-band imaging system.
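The equalization step described above can be sketched as a per-detector linear fit. The following is a minimal Python sketch, not the patent's procedure; the function names and the two-level radiance data are illustrative assumptions:

```python
# Per-detector calibration sketch: fit the linear response i = a*I + b
# from known (radiance, count) pairs, then invert it so equal radiance
# yields equal output across detectors. Names are illustrative.

def calibrate_detector(radiances, counts):
    """Least-squares fit of gain a and bias b for one detector."""
    n = len(radiances)
    mean_r = sum(radiances) / n
    mean_c = sum(counts) / n
    cov = sum((r - mean_r) * (c - mean_c) for r, c in zip(radiances, counts))
    var = sum((r - mean_r) ** 2 for r in radiances)
    a = cov / var
    b = mean_c - a * mean_r
    return a, b

def equalize(count, a, b):
    """Invert the fitted response so a uniform scene gives a uniform output."""
    return (count - b) / a
```

For a detector with true gain 2 and bias 10, `calibrate_detector([0, 50, 100], [10, 110, 210])` recovers `(2.0, 10.0)`, and `equalize` then maps the raw count back to the radiance scale shared by all detectors.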
Even when the detectors are calibrated to minimize the streaking in the image, some errors from the calibration process are unavoidable. Typically, a spectral filter is placed on a given detector or sensor chip to create an imaging band sensitive to a specific region in the electromagnetic spectrum. Depending on the architecture of the sensor array, it may be necessary to have several spectral filters of the same bandpass to cover the entire array. Often, due to variations in the spectral filter manufacturing process, the filters that are placed over the detectors in a given band may be slightly different in spectral bandpass and spectral shape. In addition, material variations and the angle of incidence of light on a spectral filter cause additional spectral variations depending on the position of the spectral filter on the sensor array. As a result, each detector within a spectral band is sensitive to a slightly different spectrum of light, but they are all calibrated using the same calibration lamp with a broad, non-uniform spectrum. Since the scene spectrum is unknown, the calibration process assumes that the spectrum of the calibration lamp and the scene are identical. The spectrum of the calibration lamp will usually be somewhat different than the spectrum of the scene being imaged, hence calibration errors will occur. This calibration error is also referred to as spectral banding. Calibration errors also occur because the calibration process includes an incomplete model of the complete optical process and because the response function for each detector changes over time and operating temperature.
Streaking can be seen in uniform areas of an image acquired by a linear detector and becomes very apparent when the contrast of the image is enhanced. Calibration differences between the red, green, and blue detectors of color imaging systems (or any of the bands in a multi-spectral or hyper-spectral imaging system) produce streaks of varying colors in the composite color image. These streaks not only reduce the aesthetic quality of digital images but can impact the interpretability of features in the images. Streaking also severely degrades the performance of pattern recognition algorithms, feature extraction algorithms, image classification algorithms, and automated or semi-automated target recognition algorithms.
Streaks can be attenuated by reducing the contrast of each image band or by blurring each image band in a direction perpendicular to the streaking, but these methods degrade the quality of the overall image. Previously developed algorithms designed to remove streaks from digital imagery while preserving the sharpness and contrast of the image were designed to remove streaks from single-band imagery, not multi-band imagery. These algorithms take into account only the spatial information present in the image to remove streaks. No attempt is made to examine the additional color information or spectral correlation available in multi-spectral or hyper-spectral imagery. As a result, when applied to multi-band imagery, these algorithms do not completely remove all of the color streaks present in the original image and may introduce objectionable color streaks or bands as artifacts. They do not preserve and/or restore the overall color fidelity of the image. In addition, applying algorithms designed for single-band imagery to color or multi-spectral imagery is non-optimal, because these algorithms do not use all of the available information present in multi-band imagery during the streak removal process.
U.S. Pat. No. 5,065,444, issued Nov. 12, 1991, to Garber discloses a method of removing streaks from single band digital images by assuming that pixels in a predetermined region are strongly correlated, examining the pixels in the region, computing the difference between the pixels in the region, thresholding the pixel differences lower than a predetermined value, computing a gain and offset value from the distribution of differences, and using the gain and offset value to remove the streaking. Methods that assume a strong correlation between pixels that are near each other, such as the one disclosed by Garber, will interpret scene variations as streaks and produce additional streaking artifacts in the image as a result of attempting to remove existing streaks.
U.S. Pat. No. 5,881,182, issued Mar. 9, 1999, to Fiete et al., which is incorporated herein by reference, discloses a method of removing streaks by comparing the means between a local window region of two columns of data in the imagery to determine if a streak is present, and presents statistical methods to calculate a gain and offset to remove the streaks. To apply this method to spectral or color imagery (imagery that consists of more than one spectral band), the method would be applied independently to each band of the spectral imagery, and the bands would then be recombined to form a color composite image. The method of Fiete et al. looks only at the spatial and luminance information within each band independently; hence calibration differences between the bands may not be corrected. As a result, when the method is applied independently to each band of a multi-band image and the spectral bands are then combined to form a color composite image, all of the color streaks may not be completely removed from the imagery. Furthermore, when the three spectral bands from a multi-band spectral image, each containing some unremoved streaks and slight banding artifacts, are combined to form a composite three-band color image, these artifacts manifest as objectionable color streaks and color banding in the color composite imagery. The method of Fiete et al. does not adequately remove color streaks from multi-band imagery.
There is a need therefore for an improved digital image processing method for removing streaks in color or multi-band images. The method presented here is an improvement of the method of Fiete et al. to better remove color streaks and bands from multi-band imagery.
The object of the present invention is achieved in a method of removing columnar streaks from a multi-band digital image of the type in which the spectral bands are transformed to an advantageous spectral space, the streak removal operation is performed in the advantageous spectral space, and then the bands are transformed back into the original space. The method of removing columnar streaks is of the type in which it is assumed that pixels in a predetermined region near a given pixel within each transformed band are strongly related to each other, and in which gain and offset values are employed to compute streak removal information. A test is performed for a strong relationship between the pixels in a predetermined region near a given pixel, and streak removal information is computed only if such a strong relationship exists, whereby image content that does not extend the full length of the image in the columnar direction will not be interpreted as a streak.
The method of the present invention adaptively removes streaking, as well as banding, in multi-band digital images without reducing the sharpness, contrast, or color fidelity of the image. Streaking occurs in multi-band images output from linear scanners and is generally caused by differences in the responsivity of detectors or amplifiers or non-uniform spectral response of filters. The method disclosed detects pixel locations in the image where pixel-to-pixel differences (both within bands and between bands) caused by streaking can be distinguished from normal variations in the scene data. A linear regression is performed between each spatially and spectrally adjacent pixel in a direction perpendicular to the streaking at the detected locations. A statistical outlier analysis is performed on the predicted values to remove the pixels that are not from the streaking. A second linear regression is performed to calculate the slope and offset values. The slope is set to unity if it is not statistically different from unity, and the offset is set to zero if it is not statistically different from zero. The slope and offset values are then used to remove the streaking from the corresponding line of image data.
This invention adaptively removes streaking in multi-band digital images by using both the spatial and the spectral information within the image. By using the spectral information present in multi-band imagery, better determinations of streaks can be made than by using just the spatial information present in one band of the imagery.
The streak removal operation consists of transforming the multi-band data to an advantageous spectral space, testing for a strong correlation between the pixels in a predetermined region and computing streak removal information only if such a strong relationship exists, and then transforming back into the original spectral space. This process will remove the residual streaks that appear even after a calibration is performed on the imaging sensor. This method does not reduce the contrast or sharpness of the image, and preserves and/or improves its color fidelity.
FIGS. 2(a)-2(b) illustrate the artifacts produced by methods that assume that pixels in a predetermined region near a given pixel are strongly related to each other;
The streak removal process of the present invention can be employed in a typical image processing chain, such as the one shown in
If the original image was a photographic color image having streaks or scratches, for example the scratches seen in old movie film, the images may be scanned in a high quality scanner and the streaks or scratches removed by the method of the present invention.
The preferred embodiment of the multi-band streak removal process described below is presented in the context of removing streaks from a multi-band sensor system. A single-band sensor system collects a single image that represents a single spectral band of the scene. A multi-band sensor system collects a total of Nband images, each acquired at a different spectral band, denoted (λ1, λ2, λ3, . . . , λNband), as shown in FIG. 4. If the multi-band sensor system contains more than three spectral bands, then at the completion of the streak removal operation, any three of the spectral bands may be selected to create a color composite image for output display (i.e., mapped to the red, green, and blue channels of an output device), or a subset of the Nband images may be selected for input into an automated information extraction algorithm.
Presented below is the single-band streak removal operation 18. For the discussion of this invention it will be assumed that the streaks occur in the columnar direction of each band of the multi-band digital image 12. The pixel at column coordinate x, row coordinate y, and band location z has a digital count value i(x,y,z). If dx is the detector for column x, then the response curve for detector dx in the digital sensor 10 can be modeled as a linear function of the input illumination radiance, thus
i(x,y,z)=axI(x,y,z)+bx, (1)
where I(x,y,z) is the intensity of the illumination radiance at location (x,y,z) in the image, ax is the gain for detector dx, and bx is the bias for detector dx.
Streaks occur in the digital image 12 because adjacent detectors in the digital sensor 10 have different response curves. The difference Δ(x,y,z) between adjacent pixels within band z is given by
Δ(x,y,z)=i(x,y,z)−i(x+1,y,z)=axI(x,y,z)+bx−ax+1I(x+1,y,z)−bx+1, (2)
and is dependent on the detector response as well as the difference between the illumination radiance incident on the adjacent pixels. If the detectors dx and dx+1 have the same response curves, i.e. if ax=ax+1 and bx=bx+1, then
Δ(x,y,z)=i(x,y,z)−i(x+1,y,z)=ax[I(x,y,z)−I(x+1,y,z)], (3)
and the difference between i(x,y,z) and i(x+1,y,z) is proportional to the difference between the illumination radiance incident on the adjacent pixels, which is desired, and no streaks due to sensor calibration errors will be present.
If I(x,y,z)=I(x+1,y,z) in Eq. (2) then
Δ(x,y,z)=i(x,y,z)−i(x+1,y,z)=[ax−ax+1]I(x,y,z)+[bx−bx+1], (4)
and the difference between i(x,y,z) and i(x+1,y,z) is entirely from the different response curves between detectors dx and dx+1.
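Eqs. (1) and (4) can be illustrated with a small numerical sketch. The gain and bias values below are made-up numbers chosen only to show how a response mismatch produces a nonzero pixel difference under uniform illumination:

```python
# Detector response model of Eq. (1); the gain/bias values below are
# illustrative, not taken from any real sensor.

def count(a, b, I):
    """Eq. (1): digital count i = a*I + b for one detector."""
    return a * I + b

I = 100.0              # the same radiance falls on both detectors
a_x, b_x = 1.0, 0.0    # detector d_x
a_x1, b_x1 = 1.5, 2.0  # detector d_{x+1}, miscalibrated

# Eq. (4): with equal radiance, the pixel difference comes entirely from
# the response mismatch: (a_x - a_x1)*I + (b_x - b_x1) = -50 - 2 = -52.
delta = count(a_x, b_x, I) - count(a_x1, b_x1, I)
```

Even though both detectors see identical radiance, the mismatch in response curves yields a difference of −52 counts, which appears in the image as a streak.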
If Eq. (1) is solved for I(x+1,y,z) and the result is substituted for I(x,y,z), then

i(x,y,z)=ax[i(x+1,y,z)−bx+1]/ax+1+bx, (5)

which can be written as

i(x,y,z)=Δaxi(x+1,y,z)+Δbx, (6)

where Δax≡ax/ax+1 and Δbx≡bx−Δaxbx+1, and i(x,y,z) is just a linear transformation of i(x+1,y,z) with a slope Δax and offset Δbx. By determining Δax and Δbx, the streaking between columns x and x+1 can be removed if the pixel count values i(x+1,y,z) are replaced with î(x+1,y,z), where
î(x+1,y,z)≡Δaxi(x+1,y,z)+Δbx. (7)
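Applying Eq. (7) is a simple per-pixel linear correction. The sketch below shows that step in isolation (the slope and offset are assumed to be known here; their estimation is described later in the document):

```python
# Eq. (7) applied to one column: replace each count i(x+1,y) with
# î(x+1,y) = Δa*i(x+1,y) + Δb. A sketch of the correction step only.

def remove_streak(column, delta_a, delta_b):
    """Return the corrected column î(x+1,y) for all rows y."""
    return [delta_a * v + delta_b for v in column]
```

For example, with Δax = 2.0 and Δbx = 1.0, `remove_streak([10, 20], 2.0, 1.0)` yields `[21.0, 41.0]`, rescaling the streaked column onto the radiometric scale of its neighbor.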
The difference between adjacent pixels is now
Δ(x,y,z)=i(x,y,z)−î(x+1,y,z)=axI(x,y,z)+bx−{Δax[ax+1I(x+1,y,z)+bx+1]+Δbx}
=ax[I(x,y,z)−I(x+1,y,z)], (8)
which is the desired result from Eq. (3), hence no streaks due to sensor calibration error will be present.
Methods that determine Δax and Δbx by assuming that the illumination radiance is always approximately equal in a predetermined region near pixel i(x,y,z), e.g. I(x,y,z)≈I(x+1,y,z), such as the one disclosed in U.S. Pat. No. 5,065,444, will generate poor estimates of Δax and Δbx where I(x,y,z)≠I(x+1,y,z), and artifacts will occur. Methods that determine Δax and Δbx by testing for strong relationships in spatial features within a single band and computing Δax and Δbx only from pixels where I(x,y,z)≈I(x+1,y,z), such as the one disclosed in U.S. Pat. No. 5,881,182, do not use any available information present in the other bands. Spectral streaking will not be removed using these methods.
According to the present invention, spectral streaks will be removed if a spectral transformation is first performed on each imaging band as a pre-processing step to transform the data into a spectrally advantageous space. In general, the spectral transformation will take the form:
i′(x,y,z′)=θ[i(x,y,z)], (9)
where θ is a transformation operator, operating on each of the spectral bands of the input image, i(x,y,z). In the preferred embodiment, this transformation is a linear combination of the original bands, given by

i′(x,y,z′)=Σz=1Nband αz′,z i(x,y,z), (10)

where αz′,z are the linear transformation coefficients. In matrix notation this transform is given by
ĩ′(x,y,z′)=Ãĩ(x,y,z) (11)
where à is the Nband×Nband transformation matrix. The transformation combines the radiometric and spectral information from each band into new bands, such that when streaks are removed from the transformed data, the spectral information is included. The optimal transformation to use will be dependent on the number of bands of data, the spectral bandpass of each of the imaging bands, and other imaging band dependent sensor characteristics.
If the multi-band image contains three bands, or three pre-selected bands from the multi-band image are desired to form a color composite output, then the following transformation is used on the data in the preferred embodiment to minimize color streaking artifacts:
i′(x,y,1)=0.299i(x,y,1)+0.587i(x,y,2)+0.114i(x,y,3) (12)
i′(x,y,2)=−0.1687i(x,y,1)−0.3313i(x,y,2)+0.500i(x,y,3) (13)
i′(x,y,3)=0.500i(x,y,1)−0.4187i(x,y,2)−0.0813i(x,y,3). (14)
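Eqs. (12)-(14) are the familiar RGB-to-luma/chroma weights (0.299/0.587/0.114 for the luma band). A minimal sketch at a single pixel, using those standard coefficients:

```python
# Eqs. (12)-(14) applied to one pixel's three band values: band 1 of the
# transformed space carries luminance, bands 2 and 3 carry chrominance.

def forward_transform(r, g, b):
    y  =  0.299  * r + 0.587  * g + 0.114  * b   # Eq. (12)
    cb = -0.1687 * r - 0.3313 * g + 0.500  * b   # Eq. (13)
    cr =  0.500  * r - 0.4187 * g - 0.0813 * b   # Eq. (14)
    return y, cb, cr
```

A neutral pixel (equal values in all three bands) maps to a pure luminance value with both chrominance bands near zero, since each chrominance row of the matrix sums to zero.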
Once the spectral transformation is performed, a streak removal operation is performed on each of the spectrally transformed bands of i′(x,y,z′), one band at a time, using information from all of the other spectral bands in the streak removal process. Let i′(x,y,zref) refer to the band currently undergoing the streak removal operation, where zref is referred to as the reference band. Let i′(x,y,ztesti) refer to all other bands in the image, excluding the reference band zref. These bands shall be referred to as the test bands. In the streak removal operation performed on an individual band, a test is performed for a strong relationship in spatial features between spectrally and spatially correlated pixels, and Δax and Δbx are computed only from those pixels where i′(x,y,zref)≈i′(x+1,y,zref) and i′(x,y,ztesti)≈i′(x+1,y,ztesti), thus preventing artifacts due to the streak removal processing and allowing spatial information from other bands that are highly correlated with the current band to be used in the streak removal process. A schematic of the streak removal process 18 disclosed in this invention is shown in FIG. 5. First, two adjacent columns of image data are selected 30 from band zref. Next, a column of pixel value pairs representing the pixel values of the adjacent pixels of the two columns is formed 32. Next, a pair of columns of local mean values, representing the mean values of the pixels in an N-pixel window for each of the adjacent columns of image data, is formed 34. The local means μref(x,y,zref) and μref(x+1,y,zref) are calculated using

μref(x,y,zref)=(1/N)Σn i′(x,y+n,zref), (15)

μref(x+1,y,zref)=(1/N)Σn i′(x+1,y+n,zref), (16)

where the sums are taken over the N pixels of the local window and N is the window length. To determine if i′(x,y,zref)≈i′(x+1,y,zref), a mask, such as the mask 35 shown in
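The local-mean step can be sketched as a windowed average down one column. In this sketch the window is centered on each row and truncated at the image edges; the centering and the edge handling are assumptions for illustration, not details stated in the patent:

```python
# N-pixel windowed mean of a single column of data, one value per row.
# Window placement (centered) and edge truncation are assumptions.

def local_means(column, N):
    half = N // 2
    out = []
    for y in range(len(column)):
        window = column[max(0, y - half):min(len(column), y + half + 1)]
        out.append(sum(window) / len(window))
    return out
```

Running this on both adjacent columns gives the pair of local-mean columns that the mask test compares.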
Next, a test for similarity between bands in the local pixel regions is also performed. First, a bias, Bi, is added to the pixel values in each of the local windows in the bands ztesti used to calculate the mean in Eq. (17), such that μref=μtesti. Next, a correlation is calculated 36 over the local window region (x,y+n) between each test band ztesti and the reference band zref, given by
Next, a local difference metric Mref(x,y,zref) is calculated that measures the similarity between local pixel regions. A difference metric based on the difference between the mean-reduced values is given by
If the calculated correlation Corri>TC, where TC is the correlation threshold, for a given test band ztesti, then an additional difference metric Mtesti(x,y,ztesti) is calculated:
where ibias(x,y+n,ztesti) represents the bias adjusted pixels over the local window region. Next, the average local difference metric 37 is calculated:
where NbandsC is the number of test bands for which Corri>TC.
The local pixel regions are similar if M(x,y,z)<TM, where TM is the difference metric threshold. The optimal value for TM will depend on the characteristics of the digital sensor 10. A maximum difference threshold, TΔ, is defined by determining the largest magnitude difference of Δ(x,y,z) that is possible from calibration differences alone.
To determine the values of Δax and Δbx in Eq. (7), two columns of pixel values ix(n) and ix+1(n), where n is a counting index, are generated 38 for each column x, where only the k values of i(x,y,zref) and i(x+1,y,zref) that satisfy the conditions M(x,y,zref)<TM and |Δ(x,y,zref)|<TΔ are used.
Initial estimates of the slope and offset are determined by performing a linear regression between ix(n) and ix+1(n) to determine the regression line 39 in FIG. 7. The initial estimate of the slope, Δa′x, is calculated 40 by
where k is the total number of elements in ix(n). The initial estimate of the offset, Δb′x, is calculated 42 by
The slope Δax and offset Δbx for Eq. (7) are determined by performing a second linear regression between ix(n) and ix+1(n) after the statistical outliers 43 in FIG. 7 are removed. The outliers are identified from the residuals ix(n)−îx(n) of the initial regression, where

îx(n)=Δa′xix+1(n)+Δb′x. (25)

Values of ix(n) that satisfy the condition |ix(n)−îx(n)|<Ts are determined 56; these values are not statistical outliers. The outlier threshold Ts is proportional to the standard error of the estimate se and is typically set equal to 3se. Two new columns of pixel values ix(n) and ix+1(n) are generated 48 for each column x, where only the j≦k values that are not statistical outliers are used. The slope Δax and offset Δbx for Eq. (7) are now determined 60 by
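The two-pass fit described above can be sketched as follows: an initial least-squares regression, rejection of pairs whose residual exceeds 3se, and a second regression on the surviving pairs. This is a standard robustified-regression sketch under the stated 3se rule, not a verbatim implementation of the patent's formulas:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of ys against xs; returns (slope, offset)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def robust_fit(i_x, i_x1):
    """Fit i_x ~ Δa*i_x1 + Δb, refit after dropping residuals beyond 3*se."""
    a, b = fit_line(i_x1, i_x)                               # initial regression
    resid = [y - (a * x + b) for x, y in zip(i_x1, i_x)]
    s_e = math.sqrt(sum(r * r for r in resid) / (len(resid) - 2))
    kept = [(x, y) for x, y, r in zip(i_x1, i_x, resid) if abs(r) <= 3 * s_e]
    return fit_line([x for x, _ in kept], [y for _, y in kept])  # second regression
```

With enough well-behaved pairs, a single scene-driven outlier inflates the first fit but is excluded by the 3se test, and the second regression recovers the underlying slope and offset.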
The final statistical tests performed 52 determine whether the slope Δax is statistically different from unity and whether the offset Δbx is statistically different from zero. These tests ensure that the estimated response curves for detectors dx and dx+1 are statistically different. If they are not, then applying estimates with Δax≠1 and Δbx≠0 may add streaking to the image rather than remove it, degrading the quality of the image rather than improving it.
A statistical hypothesis test is used to determine if the slope Δax is statistically different from unity. The t statistic is given by
where
The t statistic is compared to the t distribution value tα/2 to determine if Δax is statistically different from unity. If tΔax<tα/2, then Δax is not statistically different from unity and is set equal to one.
To determine if the offset Δbx is statistically different from zero, the t statistic is given by
If tΔbx<tα/2, then Δbx is not statistically different from zero and is set equal to zero.
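The snap-to-unity and snap-to-zero guards can be sketched as below. The standard errors used here are the usual ordinary-least-squares formulas for a regression slope and intercept; the patent's exact t-statistic expressions are not reproduced, so treat the formulas as assumptions:

```python
import math

def snap_if_insignificant(slope, offset, xs, residuals, t_crit=1.96):
    """Snap slope to 1 and offset to 0 when their t statistics do not
    exceed the critical value t_crit (standard OLS standard errors assumed)."""
    n = len(xs)
    mx = sum(xs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    s_e = math.sqrt(sum(r * r for r in residuals) / (n - 2))
    se_slope = s_e / math.sqrt(sxx)                        # SE of the slope
    se_offset = s_e * math.sqrt(1.0 / n + mx * mx / sxx)   # SE of the intercept
    if se_slope == 0 or abs(slope - 1.0) / se_slope < t_crit:
        slope = 1.0    # not statistically different from unity
    if se_offset == 0 or abs(offset) / se_offset < t_crit:
        offset = 0.0   # not statistically different from zero
    return slope, offset
```

A slope of 1.0001 estimated from noisy data is left at exactly 1, so the correction of Eq. (7) becomes a no-op for that column instead of injecting a spurious streak.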
Finally, the pixels i(x+1,y,zref) in column x+1 are modified by Eq. (7) to remove the streaks 58. The procedure outlined above is repeated for the next column of image data. This process is continued until all columns of the image data have been processed in the reference band. This process is then repeated for each band of the spectrally transformed image. Once each band of the spectrally transformed image is streak-removed, the inverse spectral transformation 60 is applied
ĩ(x,y,z)=Ã−1ĩ′(x,y,z′) (32)
where Ã−1 is the Nband×Nband inverse transformation matrix, and the corrected digital image 24 is output. In the preferred embodiment for a three-band color image, the inverse spectral transform is:
i(x,y,1)=i′(x,y,1)+1.402i′(x,y,3) (33)
i(x,y,2)=i′(x,y,1)−0.34414i′(x,y,2)−0.71414i′(x,y,3) (34)
i(x,y,3)=i′(x,y,1)+1.772i′(x,y,2). (35)
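Pairing Eqs. (33)-(35) with the forward transform of Eqs. (12)-(14) shows the round trip back to the original spectral space. Because the published coefficients are truncated to four or five decimal places, the round trip recovers the bands only approximately (to well within a fraction of a count), which this sketch demonstrates:

```python
def forward(r, g, b):
    """Eqs. (12)-(14): original bands into the advantageous spectral space."""
    return (0.299 * r + 0.587 * g + 0.114 * b,
            -0.1687 * r - 0.3313 * g + 0.500 * b,
            0.500 * r - 0.4187 * g - 0.0813 * b)

def inverse(y, cb, cr):
    """Eqs. (33)-(35): transformed bands back to the original space."""
    return (y + 1.402 * cr,                   # Eq. (33)
            y - 0.34414 * cb - 0.71414 * cr,  # Eq. (34)
            y + 1.772 * cb)                   # Eq. (35)
```

In the full method the streak removal runs between these two calls, so the corrected luminance and chrominance bands are what get mapped back to the original three-band space.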
The invention has been described with reference to a preferred embodiment. However, it will be appreciated that variations and modifications can be effected by a person of ordinary skill in the art without departing from the scope of the invention.
Number | Name | Date | Kind
---|---|---|---
5065444 | Garber | Nov 1991 | A
5729631 | Wober et al. | Mar 1998 | A
5881182 | Fiete et al. | Mar 1999 | A
6496286 | Yamazaki | Dec 2002 | B1
Number | Date | Country
---|---|---
20020114533 A1 | Aug 2002 | US