This invention relates to systems and methods for pan sharpening digital imagery, including remotely-sensed multispectral imagery.
Panchromatic imagery has lower spectral resolution but higher spatial resolution than multispectral imagery; conversely, multispectral remotely-sensed imagery has higher spectral resolution but lower spatial resolution. While many methods have attempted to marry the two to achieve the best of both, they often result in the worst of both instead. For example, in part because the multispectral bands, specifically the red, green and blue bands, are not spectrally co-extensive with the full panchromatic band, colors are often distorted when attempting to pan sharpen the multispectral image with the panchromatic image.
One well-known color transform method in image processing is called the Hue Intensity Saturation (HIS) transform. The HIS transform is described in, for example, Gonzalez and Woods, Digital Image Processing, Section 6.2.3, pages 407-412, Pearson Education Inc., Upper Saddle River, N.J. 07458, which is incorporated herein by reference for all that it discloses. The HIS transform converts the red, green and blue (RGB) components of pixels in the multispectral image into values for hue (H), intensity (I) and saturation (S). When manipulating an image in the HIS color space, the values are independent, so manipulation of one will not affect the others. Thus, intensity I may be manipulated without affecting the hue H or saturation S. When the manipulations are complete, the HIS values are converted back to adjusted RGB values in a process known as reverse transformation. The multispectral image is thus “sharpened.”
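By way of non-limiting illustration only, the following Python sketch (using the numpy library) shows the general idea of manipulating intensity while leaving the band ratios, and hence hue and saturation, unchanged, here using the simple equal-weight intensity I=(r+g+b)/3; the function name, array shapes and random test data are illustrative assumptions and do not represent the claimed invention.

    import numpy as np

    def sharpen_rgb_by_intensity(r, g, b, new_intensity, eps=1e-6):
        # Simple HIS-style intensity: equal-weight mean of the three bands.
        intensity = (r + g + b) / 3.0
        # Scaling all three bands by the same factor changes intensity while
        # leaving the band ratios (hue and saturation) unchanged.
        scale = new_intensity / (intensity + eps)
        return r * scale, g * scale, b * scale

    rng = np.random.default_rng(0)
    r, g, b, pan = (rng.random((4, 4)) for _ in range(4))
    r2, g2, b2 = sharpen_rgb_by_intensity(r, g, b, pan)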
The following summary is provided as a brief overview of the claimed invention. It shall not limit the invention in any respect, with a detailed and fully enabling disclosure being set forth in the Detailed Description of the invention section. Likewise, the invention shall not be limited in any numerical parameters, hardware, software, platform, or other variables, unless otherwise stated herein.
An embodiment of the present invention comprises a method for pan sharpening a first multispectral image of an area. The method comprises: providing at least a second multispectral image of the area, the first and second multispectral images comprising a synoptic pair and each multispectral image comprising multiple spectral bands organized into M band groups, where M is the number of band groups; providing a panchromatic image of the geographical area, the panchromatic image comprising a p band; for each band group m in the range of 1 to M: (a) upsampling band group m; (b) pan sharpening band group m using the p band; (c) producing a pan sharpened result for band group m; and continuing steps (a)-(c) to produce pan sharpened results for each band group from 1 to M; and fusing together the pan sharpened results for band groups 1 to M to produce a master fused image.
Another embodiment comprises a method for pan sharpening a multispectral image of a location. The method comprises: providing a panchromatic image of the location, the panchromatic image having a p band; performing a forward transform of each of red, green and blue components for a pixel of the multispectral image to obtain hue, intensity and saturation bands, the intensity band having a level of spatial detail; smoothing the p band to produce a smoothed p band comprising substantially the same level of spatial detail as the intensity band; modifying the intensity band using the p band and the smoothed p band to produce a modified intensity band; obtaining a modified red component, a modified green component and a modified blue component using the modified intensity band; and using the modified red, green and blue components to sharpen the multispectral image, producing a pan sharpened multispectral image.
In yet another embodiment, the present invention comprises a method for pan sharpening a multispectral image of a location, the multispectral image having a plurality of pixels. The method comprises: providing a panchromatic image of the location, the panchromatic image having a p band; for each pixel, determining a vector x, the vector x having N spectral bands and comprising component xi where i is an index that runs from 1 to N; determining an intensity based on a length of vector x; determining a ratio of component xi to the intensity; determining an adjusted component xi′ using component xi; using the adjusted component xi′ to calculate an adjusted vector x′ to pan sharpen each pixel to produce a sharpened multispectral image.
The present invention also comprises a method for pan sharpening a multispectral image of a location, the multispectral image having N spectral bands. The method comprises: providing a panchromatic image of the location, the panchromatic image having a p band; performing a forward transform of each of red, green and blue components for a pixel of the multispectral image to obtain hue, intensity and saturation bands, the intensity band having a level of spatial detail; generating a synthetic p band from the N spectral bands using a coefficient for each of N spectral bands based on a spectral overlap between the p band and each of the spectral bands; matching an intensity of the p band with an intensity of the synthetic p band, producing an intensity matched synthetic p band; modifying the intensity band by a difference between the p band and the intensity matched synthetic p band; obtaining a modified red component, a modified green component and a modified blue component using the modified intensity band; and using the modified red, green and blue components to sharpen the multispectral image.
The present invention also comprises various embodiments of a computer-readable medium associated with an image processing system for pan sharpening a multispectral image of a location. The computer-readable medium includes program instructions that, when executed by one or more processors of the image processing system, cause the processors to carry out the steps of the various embodiments of the methods described.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee. Illustrative and presently preferred exemplary embodiments of the invention are shown in the drawings in which:
The present invention comprises methods, systems and image processing circuitry for sharpening remotely-sensed multispectral imagery of any resolution using associated panchromatic imagery, such as panchromatic image 14. Remotely-sensed imagery as used herein includes imagery of the Earth or space and should not be viewed as being limited to geospatial imagery. In addition to panchromatic image 14, remotely-sensed multispectral image 10 may be collected using a remote satellite or aerial sensor. Multispectral image 10 comprises pixels in N spectral bands and is associated with other data, such as location data (e.g., geographic location). As used herein, “associated panchromatic imagery” or “panchromatic image” shall be understood as referring to an image in the panchromatic band, or p band, that corresponds to multispectral image 10 of the same or overlapping geographic location. Panchromatic image 14 may include the entire image or a portion 12 of panchromatic image 14, as well as its associated data. It should be understood that multispectral image 10 may refer to the entire image or a portion 12 of multispectral image 10 or a portion of the data associated with multispectral image 10, whether multispectral image 10 has been pre-processed or combined with other images to produce a fused image. Thus, the present invention should not be viewed as being limited to pan sharpening of an entire image or all the data associated with the remotely-sensed image or related images.
The problems addressed by the present invention may be somewhat different based on the number and type of sensors that are used to collect the remotely-sensed image. Method 100 of the present invention addresses problems created when fusing a plurality of synoptic multispectral images 10 and associated panchromatic images 14 (e.g., taken at nearly, but not precisely, the same time). Embodiments of method 200 of the present invention address shortcomings in prior art methods that employ both forward and reverse transformation processes. Finally, embodiments of method 300 allow for excellent quality pan sharpening of multispectral images 10 without the need for forward and reverse transforms at all.
Method 100 comprises pan sharpening of a plurality of synoptic multispectral images and associated panchromatic imagery. A particular problem arises when attempting to register a plurality of multispectral images and the associated panchromatic image of a geographic area collected nearly synoptically in time using different banks of multispectral sensors (e.g., pushbroom sensors). These sensors collect the plurality of multispectral images 10 and the associated panchromatic image 14 at nearly, but not exactly, the same time. Because the images are collected from different sensors at slightly varying times, pixel smear, lack of crispness and color distortions may result when fusing the images, especially in the case of moving objects.
In prior art pan sharpening methods, such as shown in the drawings, the distances traveled by a moving object between imaging events may be expressed in pixels as:

d1=V0(T1−T0)/GSD Equation 1

d2=V0(T2−T1)/GSD Equation 2

where V0 is the velocity of the object, and ground sample distance (GSD) is measured in units of meters and refers to a distance between adjacent pixel edges when projected through the camera of the sensor. The quantities d1 and d2 are measured in units of pixels since they are normalized by the ground sample distance. Assuming that the time difference between T0 and T1 is approximately the same as the time difference between T2 and T1, then the distances traveled between the two imaging events are equal:
T1−T0=T2−T1=δT Equation 3

d1=d2=d Equation 6
If velocity V0 is high, then the object will be at different spatial locations in the imagery between imaging events. At time T0, the object is at point P0. At later time T1, the object has traveled distance d, and at later time T2 the object has traveled distance d from the last location. In total, the object travels distance 2d. Since the multispectral images 10 and the associated panchromatic image 14 are acquired at different times (T0, T1, T2), the object is at different spatial locations in each image. The object moves a distance of 2d between times T0 and T2; thus, any direct mixture of the first and second set of multispectral bands as in prior art methods will exhibit a large amount of smearing, as shown in pan sharpened image 15 of the drawings.
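As a non-limiting numerical illustration, the following short Python calculation shows how far a moving object shifts, in pixels, between imaging events; the values chosen for object velocity V0, time difference δT and GSD are hypothetical and assumed for illustration only.

    V0 = 15.0      # object velocity in meters per second (assumed value)
    dT = 0.25      # time between imaging events, in seconds (assumed value)
    GSD = 0.5      # ground sample distance, meters per pixel (assumed value)

    d = V0 * dT / GSD        # pixels moved between consecutive imaging events
    total_shift = 2 * d      # pixels moved between times T0 and T2
    print(d, total_shift)    # 7.5 pixels per event, 15.0 pixels overall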
Method 100 of the present invention minimizes, and in some embodiments may eliminate, smear by separating the data fusion problem of the prior art into separate steps based on the number of multispectral bands, as shown in the drawings.
Once the group of bands m has been pan sharpened 106, a fused, pan sharpened image Pm 140 for the group of bands m is produced 108.
In another embodiment, method 100 further comprises fusing 110 the pan sharpened images for the groups of bands, Pm 140, P1 141 and P2 142, into master fusion file 144. While preferred in some applications, the fusing step 110 is not required.
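By way of non-limiting illustration, the per-band-group processing of method 100 may be sketched in Python as follows (numpy and scipy are merely exemplary); the helper names, the (bands, rows, columns) array layout and the interpolation order are assumptions, and sharpen_fn stands in for any of the pan sharpening methods described herein.

    import numpy as np
    from scipy.ndimage import zoom

    def pan_sharpen_band_group(band_group, pan, sharpen_fn):
        # band_group: (bands, H, W) at multispectral resolution; pan: (H2, W2).
        factor = pan.shape[0] / band_group.shape[1]       # assumes square scaling
        upsampled = np.stack([zoom(b, factor, order=1) for b in band_group])
        return sharpen_fn(upsampled, pan)                 # steps (a)-(c)

    def fuse_band_groups(groups, pan, sharpen_fn):
        # Sharpen each of the M band groups independently, then stack the
        # per-group results (P1 ... PM) into a master fused image.
        results = [pan_sharpen_band_group(g, pan, sharpen_fn) for g in groups]
        return np.concatenate(results, axis=0)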
Method 100 of the present invention may also be employed to rectify smears and discoloration caused by mis-registration between spectral bands of synoptic multispectral images 10 that occurs when the same object is visible in multiple multispectral images 10 but appears in slightly different places in the image planes.
Embodiments of method 200 of the present invention address shortcomings in prior art methods that employ both forward and reverse transformation processes. For example, the HIS pan sharpening model assumes that the panchromatic intensity is closely matched by a simple linear combination of the R, G and B multispectral bands, in which each multispectral component is given equal weight, as evidenced by the well-known calculation for determining intensity (I):

I=(r+g+b)/3 Equation 7
However, this assumption may be fundamentally untrue for many remote sensing optical platforms. In addition, the spectral bandwidth of the panchromatic band may be much larger than the combined spectral bandwidths of the visible R, G and B bands, leading to color distortions in pan sharpened image 15. Reflectance of healthy, non-senescent vegetation in wavelengths longer than 700 nm is higher than the reflectance in the R, G, or B bands due to the presence of chlorophyll. Since there is an abundance of radiometric energy present in the panchromatic band that is not present in the R, G, or B bands, color distortions from using HIS pan sharpening methods are most noticeable over vegetated areas, but can also be observed over all types of scene content, as shown in the drawings.
In one embodiment for pan sharpening multispectral image 10 of a location (e.g., geographic location), method 201 (which may be referred to as HIS-enhanced pan sharpening) comprises providing panchromatic image 14 of the same location, or one in which the location is overlapping. Multispectral image 10 is upsampled to the same spatial resolution as panchromatic image 14. A forward transform of each of red r, green g, and blue b components for a pixel of the multispectral image 10 is performed to obtain hue, intensity and saturation bands. The process for performing the forward transform is well-known and, therefore, is not repeated here or elsewhere in describing the various embodiments of methods 100, 200, 300 of the present invention. Since the intensity band (I) has a certain level of spatial detail, the p band is smoothed to produce a smoothed p band comprising substantially the same level of spatial detail as the intensity band. Smoothing of method 201 of the present invention comprises removing high spatial frequency details from the p band, and may be accomplished by a sliding window convolution filter, performed with a square window, in which the value of the middle output pixel is the mean of all pixels in the window. Other filters, such as a Fast Fourier Transform (FFT) filter, may also be used.
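By way of non-limiting illustration, a sliding-window mean (box) filter of the kind described above may be sketched in Python as follows; the use of scipy's uniform_filter and the window size of 5 are illustrative assumptions, the window ordinarily being chosen roughly on the order of the pan-to-multispectral resolution ratio.

    from scipy.ndimage import uniform_filter

    def smooth_pan(pan, window=5):
        # Square sliding-window mean filter; removes high spatial frequency
        # detail from the p band (window size is an assumed parameter).
        return uniform_filter(pan, size=window)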
In an alternative embodiment, method 201 may further comprise an optional statistical matching step in which the distributions of the p band and smoothed p band are matched to distributions of the intensity band. This may be done through histogram matching or other methods of statistical matching, including computing a mean and a standard deviation of the p band, computing a mean and a standard deviation of the intensity band and modifying the p band and the smoothed p band to match the mean and standard deviation of the intensity band.
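As a non-limiting illustration, one simple form of such statistical matching may be sketched as follows; the function name and the small eps guard are assumptions.

    import numpy as np

    def match_mean_std(band, reference, eps=1e-12):
        # Rescale `band` so its mean and standard deviation match `reference`.
        return (band - band.mean()) / (band.std() + eps) * reference.std() + reference.mean()

    # p_matched       = match_mean_std(p, intensity)
    # psmooth_matched = match_mean_std(p_smooth, intensity)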
The intensity band is then modified by a ratio of the p band and the smoothed p band, denoted as Psmooth, to produce a modified intensity band as given by the following equation for modified intensity, denoted as I′:

I′=I(p/(psmooth+ε)) Equation 8

where ε is a small number to prevent division by zero.
Method 201 further comprises performing a reverse transform, including obtaining a modified red component r′, a modified green component g′ and a modified blue component b′ using the modified intensity band I′, as given by the following equation:
r′=3rI′/(r+g+b)

g′=3gI′/(r+g+b)

b′=3bI′/(r+g+b) Equation 9
The process for performing the reverse transform is well-known and, therefore, is not repeated here or elsewhere in describing the various embodiments of methods 100, 200, 300 of the present invention. Multispectral image 10 is then changed back to the original color space as the modified red, green and blue components are used to sharpen the multispectral image 10, producing HIS-enhanced pan sharpened image 210, as can be seen in the drawings.
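By way of non-limiting illustration, the ratio modulation of Equation 8 and the reverse transform of Equation 9 may be sketched together in Python as follows, assuming the p band and smoothed p band have already been resampled to the multispectral grid and, optionally, statistically matched to the intensity band; the function name and eps value are assumptions.

    import numpy as np

    def his_enhanced_sharpen(r, g, b, pan, pan_smooth, eps=1e-6):
        intensity = (r + g + b) / 3.0                          # Equation 7
        intensity_mod = intensity * pan / (pan_smooth + eps)   # Equation 8
        scale = 3.0 * intensity_mod / (r + g + b + eps)        # Equation 9: r'=3rI'/(r+g+b)
        return r * scale, g * scale, b * scale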
In yet another embodiment of method 200 for pan sharpening multispectral image 10 of a location (e.g., geographic location), a difference modulation technique (DMT) is employed to achieve both high color quality and spatial quality in the final HIS-DMT pan sharpened image 212. Method 202 (which may be referred to as HIS-DMT pan sharpening) comprises providing panchromatic image 14 of the same location, or one in which the location is overlapping. Multispectral image 10 is upsampled to the same spatial resolution as panchromatic image 14. A forward transform of each of red r, green g and blue b components for pixels of multispectral image 10 is performed to obtain hue, intensity and saturation bands. Since the intensity band (I) has a certain level of spatial detail, the p band is smoothed to produce a smoothed p band comprising substantially the same level of spatial detail as the intensity band. Smoothing of method 202 of the present invention comprises removing high spatial frequency details from the p band, which, in one embodiment, may be accomplished by a sliding window convolution filter with a square window, although other filtering methods, such as FFT filtering, are also possible.
In an alternative embodiment, method 202 may further comprise an optional statistical matching step in which the distributions of the p band and smoothed p band are matched to distributions of the intensity band. This may be done through histogram matching or other methods of statistical matching such as have been previously described.
The intensity band is then modified by a difference between the p band and the smoothed p band Psmooth, thus modulating the original intensity band I to produce a modified intensity band I′ as given by the following equation:
I′=I+β(p−psmooth) Equation 10
The quantity β comprises a user-specified sharpening factor and may be set to any value equal to or greater than zero. Where β equals zero, then the original intensity is not modulated at all and no sharpening is performed. Where β is greater than zero, the original intensity I is modulated by the difference between the p band and the smoothed p band Psmooth to arrive at I′.
Method 202 further comprises obtaining a modified red component r′, a modified green component g′ and a modified blue component b′ using the modified intensity band I′ in Equation 9 above. Multispectral image 10 is then changed back to the original color space as the modified red, green and blue components are used to sharpen the multispectral image 10, producing HIS-DMT pan sharpened image 212, as can be seen in the drawings.
Since β is a user-specified sharpening factor, β may be set to any value equal to or greater than zero, depending on the level of sharpening desired. If more sharpening is desired as opposed to less, the sharpening factor β can be raised. In embodiments of method 200 of the present invention, β may be in a range from zero to 2.0. If β equals zero, no sharpening will occur. In embodiments where β equals 1.0, the original intensity I is modulated by the full difference between the p band and the smoothed p band and the resulting image will appear nearly as sharp as panchromatic image 14.
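By way of non-limiting illustration, the difference modulation of Equation 10 followed by the reverse transform of Equation 9 may be sketched as follows; the default β of 1.0, the function name and the eps guard are assumptions, and the p band and smoothed p band are assumed already matched to the intensity band.

    import numpy as np

    def his_dmt_sharpen(r, g, b, pan, pan_smooth, beta=1.0, eps=1e-6):
        intensity = (r + g + b) / 3.0
        intensity_mod = intensity + beta * (pan - pan_smooth)   # Equation 10
        scale = 3.0 * intensity_mod / (r + g + b + eps)         # Equation 9
        return r * scale, g * scale, b * scale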
In an alternative embodiment of method 202′, as is explained in greater detail below, instead of smoothing, a synthetic p band can be generated from all the N bands of multispectral image 10 (e.g., as shown in Equation 12). Method 202′ may further comprise matching an intensity of the synthetic p band to an intensity of the p band, using a coefficient for each of N spectral bands where each of the coefficients is based on a spectral overlap between the p band and each of the spectral bands, resulting in an intensity matched synthetic p band (e.g., as shown in Equations 12 and 13). An adjusted intensity may be calculated by multiplying sharpening factor β by the difference between the p band and the intensity matched synthetic p band (e.g., as shown in Equation 14).
In yet another embodiment for pan sharpening multispectral image 10 of a geographic location, method 200 may comprise another DMT pan sharpening technique. In the present embodiment, method 203 comprises using the synthetic p band to obtain DMT pan sharpened image 213, as more fully described below.
For pan sharpening multispectral image 10 (having N spectral bands), method 203 comprises providing panchromatic image 14 of the location (e.g., geographic location). A forward transform of each of red r, green g and blue b components for pixels of multispectral image 10 is performed to obtain hue, intensity and saturation bands, the intensity band (I) having a certain level of spatial detail.
Method 203 further comprises generating the synthetic p band from the N spectral bands using a coefficient for each band, as given in the following equation:

Pansynthetic=C1MS1+C2MS2+ . . . +CNMSN Equation 11

where MSi denotes the i′th multispectral band, and the coefficients Ci are the weights given to each multispectral component.
The coefficients Ci may be determined by calculating the spectral overlap between the given MS band and the p band using the data from the curves shown in the drawings, where Ki is the factor for the i′th MS band which converts digital numbers to units of radiance (in watts per steradian per meter squared, W·sr−1·m−2); KP is the factor for the p band which converts digital numbers to units of radiance; and Pspectral overlap,i is the spectral overlap between the p band and the i′th MS band.
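As a non-limiting illustration, the weighted sum of Equation 11 may be sketched as follows, with the coefficients Ci assumed to be supplied (e.g., precomputed from the spectral overlaps discussed above); the array layout is an assumption.

    import numpy as np

    def synthetic_pan(ms_bands, coefficients):
        # ms_bands: (N, H, W) multispectral cube; coefficients: length-N weights
        # (assumed to be derived from sensor spectral overlaps, as discussed above).
        c = np.asarray(coefficients, dtype=float).reshape(-1, 1, 1)
        return (c * ms_bands).sum(axis=0)                 # Equation 11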
Method 203 further comprises matching an intensity of the p band with an intensity of the synthetic p band, producing an intensity matched synthetic p band. It may be desirable to produce an intensity matched synthetic p band given that the model used to generate the synthetic p band may contain errors. In addition, since the bandwidth of the p band may be significantly wider than the combined bandwidths of the N spectral bands of multispectral image 10, the intensities may differ significantly between the p band and the synthetic p band. The intensity of synthetic p band is matched to that of the p band using a linear-gain offset correction as follows:
Pan=gain×Pansynthetic+offset Equation 13

where Pan is the p band, Pansynthetic is the synthetic p band, gain is the linear multiplier, and offset is the bias correction. The gain and offset terms above are determined by gathering spatially coincident pixels from both the p band and the synthetic p band and fitting them via a linear least squares fit, according to methods well-known in the art.
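By way of non-limiting illustration, the linear least squares fit for gain and offset may be sketched as follows, assuming the synthetic p band has already been resampled so that its pixels are spatially coincident with the p band; the function name is an assumption.

    import numpy as np

    def fit_gain_offset(pan, pan_synthetic):
        # Least-squares fit of pan ~= gain * pan_synthetic + offset (Equation 13),
        # using the flattened, spatially coincident pixels of both bands.
        gain, offset = np.polyfit(pan_synthetic.ravel(), pan.ravel(), deg=1)
        return gain, offset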
As in the case of other embodiments of method 200, method 203 comprises determining a modified intensity band I′, in this case, modulating original intensity band I based on a difference between the p band and the intensity matched synthetic p band modified by sharpening factor β, as follows:
I′=I+β(Pan−(gain×Pansynthetic+offset)) Equation 14
Gain, offset and sharpening factor β have been described previously. As in the case of methods 202 and 202′, β may be in a range from zero to 2.0. In embodiments where β equals 1.0, the original intensity I would be modulated by the full difference between the p band and the intensity matched synthetic p band and the resulting image would appear nearly as sharp as panchromatic image 14. As the value of β increases, DMT pan sharpened image 213 would become noticeably sharper. In yet other embodiments where β is greater than 1, the sharpening effect is amplified; however, some color distortion may occur. If β were set to a value above 2.0, over-sharpened images would be produced. By adjusting the value of sharpening factor β, the user may adjust the degree of pan sharpening on the fly in real time on either the entire multispectral image 10 or a portion 12 of it.
Method 203 further comprises obtaining a modified red component r′, a modified green component g′ and a modified blue component b′ using the modified intensity band I′ of Equation 14 above. Multispectral image 10 is then changed back to the original color space as the modified red, green and blue components are used to sharpen the multispectral image 10, producing DMT pan sharpened image 213, as can be seen in the drawings.
As would be apparent to one of ordinary skill in the art after becoming familiar with the teachings of the present invention, the embodiments of method 200 that have been specifically described herein could be easily adapted to pan sharpening methods that are similar to HIS pan sharpening, such as Hue Light Saturation (HLS) and Hue Value Saturation (HVS). Generally, the difference between such methods is the mathematical definition of intensity, light and value. The hue and saturation components of all of the foregoing methods are generally the same.
The present invention also includes embodiments of method 300, which is directed to a method of in-place pan sharpening that avoids explicit forward and reverse transformation steps to and from a different color space. Since the algorithms of method 300 are simpler than methods involving forward and reverse transformation steps, method 300 provides decided advantages in improved image processing without sacrificing image quality. In addition, since embodiments of method 300 are vector-based, all multispectral bands can be sharpened, not just with reference to red, green and blue components, as in other methods.
Since method 300 skips the forward transformation step of other pan sharpening methods, in favor of a vector transformation based on vector length, method 300 may begin with upsampling multispectral image 10. Method 300 further comprises determining a vector associated with an index of pixels in multispectral image 10, as follows:
x=[x1, x2, . . . , xN] Equation 15
where each component xi of vector x represents the value of the pixel in the i-th spectral band of upsampled multispectral image 10, and the band index i runs from 1 to N.
Method 300 further comprises determining intensity I by determining the L2 norm, or length, of vector x, as follows:

I=√(x1²+x2²+ . . . +xN²) Equation 16

A ratio Ri of xi, the i-th component of vector x, to intensity I may be determined, as follows:

Ri=xi/(I+ε) Equation 17

where ε is a small number to prevent division by zero. There can be N such ratios. The original color component xi is recovered when Ri is multiplied by I as follows:
xi=RiI Equation 18
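As a non-limiting illustration, Equations 16-18 may be verified for a single pixel as follows; the four band values shown are arbitrary example data and the eps value is an assumption.

    import numpy as np

    x = np.array([0.21, 0.35, 0.18, 0.44])   # one pixel with N = 4 band values (example data)
    eps = 1e-12

    intensity = np.sqrt((x ** 2).sum())      # Equation 16: L2 norm of x
    ratios = x / (intensity + eps)           # Equation 17: the N ratios Ri
    recovered = ratios * intensity           # Equation 18: recovers xi
    assert np.allclose(recovered, x)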
In specific embodiments described herein, method 300 comprises method 301 for direct substitution in-place pan sharpening, method 302 for enhanced substitution in-place pan sharpening and method 303 for DMT in-place pan sharpening. Embodiments of method 301 will now be described.
After upsampling multispectral image 10, method 301 comprises determining vector x for each pixel of multispectral image 10, as has been previously described in connection with Equation 15, which is incorporated here. As set forth above, I may be determined with reference to Equation 16 and Ri may be determined with reference to Equation 17, as has been previously explained, and which equations are incorporated here.
Panchromatic image 14 of the same or overlapping geographic location as multispectral image 10 is provided as part of method 301. In an alternative embodiment, method 301 may comprise matching a histogram of the p band with a histogram of the I band, as follows:
p=HISTOGRAMMATCH(p,I) Equation 19
Histogram matching is not required, however. Other types of statistical matching could be used as well, but this is not required either. If histogram or statistical matching is not desired, then the p band can be used unaltered.
Method 301 further comprises sharpening by determining adjusted component xi′ from component xi, either by multiplying the ratio Ri by the p band or, equivalently, by multiplying the component xi by a ratio of the p band to the intensity, as follows:

xi′=Ri·p=xi(p/(I+ε)) Equation 20

where ε is a small number to prevent division by zero. Equation 20 is applied to compute adjusted component xi′ for all N components of the original N multispectral bands. All adjusted components xi′ are assembled into adjusted vector x′, thereby sharpening each pixel to produce direct substitution in-place pan sharpened image 311, as shown in the drawings.
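By way of non-limiting illustration, Equation 20 may be applied to an entire upsampled multispectral cube at once as follows; the (N, H, W) array layout, function name and eps value are assumptions, and any histogram matching of the p band to the I band is assumed to have been performed beforehand if desired.

    import numpy as np

    def direct_inplace_sharpen(ms, pan, eps=1e-6):
        # ms: (N, H, W) upsampled multispectral cube; pan: (H, W) p band.
        intensity = np.sqrt((ms ** 2).sum(axis=0))        # Equation 16 per pixel
        return ms * pan / (intensity + eps)               # Equation 20 per band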
In another embodiment, method 300 comprises method 302 for enhanced substitution in-place pan sharpening. While method 301 may be more effective than prior art methods for pan sharpening, method 302 may be still more advantageous since method 302 generally performs better in replicating the original colors of multispectral image 10 with better spatial recovery than method 301.
After upsampling multispectral image 10 and providing panchromatic image 14, method 302 comprises smoothing the p band of panchromatic image 14, as follows:
psmooth=SMOOTH(p) Equation 21
In one embodiment, smoothing the p band to produce a smoothed p band comprises using a sliding window convolution filter, with a square window, in which the value of the middle output pixel is the mean of all pixels in the window. Other filtering methods, such as FFT filtering, are also possible.
In an alternative embodiment, method 302 may comprise statistical matching of the distributions of the p band and the smoothed p band to distributions of the I band through calculation of the mean and standard deviation, as has been previously described, or through histogram matching. Neither statistical matching nor histogram matching is required, however.
Method 302 further comprises sharpening by determining adjusted component xi′ from component xi, by multiplying the component xi of vector x by a ratio of the p band to the smoothed p band, as follows:

xi′=xi(p/(psmooth+ε)) Equation 22

where ε is a small number to prevent division by zero. Equation 22 is an alternative way of expressing Ri·p, in which the smoothed p band takes the place of the intensity. Equation 22 is applied to compute adjusted component xi′ for all N components of the original N multispectral bands. All adjusted components xi′ are assembled into adjusted vector x′, thereby sharpening each pixel to produce enhanced substitution in-place pan sharpened image 312, as shown in the drawings.
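As a non-limiting illustration, Equations 21 and 22 may be sketched together as follows; the window size, function name and eps value are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def enhanced_inplace_sharpen(ms, pan, window=5, eps=1e-6):
        # ms: (N, H, W) upsampled multispectral cube; pan: (H, W) p band.
        pan_smooth = uniform_filter(pan, size=window)     # Equation 21
        return ms * pan / (pan_smooth + eps)              # Equation 22 per band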
An embodiment of method 303 for DMT in-place pan sharpening will now be described. After upsampling multispectral image 10, method 303 comprises determining vector x for each pixel of multispectral image 10, as has been previously explained with reference to Equation 15, which is incorporated here. I may be determined with reference to Equation 16 and Ri may be determined with reference to Equation 17, as has been previously explained, and which are incorporated here.
Panchromatic image 14 of the same or overlapping location (e.g., geographic location) as multispectral image 10 is provided as part of method 303. Method 303 comprises smoothing the p band of panchromatic image 14 with reference to Equation 21, which is incorporated here. In one embodiment, smoothing the p band to produce a smoothed p band comprises using a sliding window convolution filter, with a square window, in which the value of the middle output pixel is the mean of all pixels in the window. Other filtering methods, such as FFT filtering, are also possible.
In an alternative embodiment, method 303 may comprise statistical matching of the distributions of the p band and the smoothed p band to distributions of the I band through calculation of the mean and standard deviation, as has been previously described, or through histogram matching with reference to Equation 19, or other types of statistical matching. Neither statistical matching nor histogram matching is required, however.
Method 303 further comprises modulating each of the color components xi, determining adjusted component xi′ from the difference between the p band and the smoothed p band, as follows:
xi′=xi+β(p−psmooth) Equation 23
Sharpening factor β has been previously described. As in the case of methods 202, 202′ and 203, β may be in a range from zero to 2.0. In embodiments where β equals 1.0, the color components xi would be modulated by the full difference between the p band and the smoothed p band and the resulting image would appear nearly as sharp as panchromatic image 14. As the value of β increases, DMT in-place pan sharpened image 313 would become noticeably sharper. In yet other embodiments where β is greater than 1, the sharpening effect is amplified; however, some color distortion may occur. If β were set to a value above 2.0, over-sharpened images would be produced. By adjusting the value of sharpening factor β, the user may adjust the degree of pan sharpening on the fly in real time on either the entire multispectral image 10 or a portion 12 of it.
Equation 23 is applied to compute adjusted component xi′ for all N components of the original N multispectral bands. All adjusted components xi′ are assembled into adjusted vector x′, thereby sharpening each pixel to produce DMT in-place pan sharpened image 313, as shown in the drawings.
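By way of non-limiting illustration, Equation 23 may be applied to an entire upsampled multispectral cube as follows; the array layout, window size, default β and function name are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def dmt_inplace_sharpen(ms, pan, beta=1.0, window=5):
        # ms: (N, H, W) upsampled multispectral cube; pan: (H, W) p band.
        # Adds the same scaled high-frequency detail to every band.
        detail = beta * (pan - uniform_filter(pan, size=window))
        return ms + detail                                # Equation 23 per band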
The various methods 100, 200, 300 of the present invention may be implemented using various image processing equipment and systems, including electronics, circuitry and special purpose processors for processing of remotely-sensed imagery, such as multispectral image 10 and panchromatic image 14. In one embodiment, a computer readable medium operatively associated with the image processing system of the present invention may be programmed with instructions that, when executed by one or more processors, cause the image processing system to automatically carry out the steps of any or all of methods 100, 200, 300 for pan sharpening multispectral image 10. The program instructions are stored in the computer readable medium. Multispectral image 10 and panchromatic image 14 may also be stored, saved to files in the image processing system. The various pan sharpened images 140, 141, 142, 210, 212, 213, 311, 312, 313 produced by methods 100, 200, 300 of the present invention may also be saved to output files; data generated from intermediate steps in methods 100, 200, 300 may also be saved to appropriate files, as would be familiar to one of ordinary skill in the art.
In one embodiment, the user executes the instructions with which the computer readable medium associated with a high performance computing system is programmed by selecting a command line tool for the desired pan sharpening method 100, 200, 300. For example, a graphic user interface (GUI) for use in conjunction with method 100 is shown in the drawings.
As would be apparent to one of ordinary skill in the art after becoming familiar with the teachings of the present invention, in another embodiment of the image processing system of the present invention, provision could be made for additional user input through the GUI or use of more general purpose processors equipped with software embedded with special instructions to follow various steps of any or all of methods 100, 200, 300 of the present invention.
Embodiments of method 200 and 300 were tested as explained in more detail below. While there appears to be no agreement in the literature about how to quantitatively evaluate performance of a pan sharpening method, one quality index is the Wang-Bovic quality index described in Wang Z., Bovic, A., “A Universal Image Quality Index,” IEEE Signal Processing Letters, Vol. 9, No. 3 (March 2002), which is incorporated here for all that it teaches. The utility of the Wang-Bovic quality index for evaluating pan sharpening performance has been demonstrated in Borel, C., Spencer, C. Ewald, K., Wamsley C., “Novel methods for panchromatic sharpening of multi/hyper-spectral image data,” SPIE conference paper (Jul. 22, 2009). The Wang-Bovic quality index for two images f and g is defined as:
QWB(f,g)=(σfg/(σfσg))·(2μfμg/(μf²+μg²))·(2σfσg/(σf²+σg²)) Equation 24

where σfg is the covariance between f and g, σf and σg are the standard deviations of f and g, and the means are represented as μf and μg. Following Wang-Bovic, the first term is the spatial cross correlation between f and g, the second term is a comparison of the means between f and g, and the third term is a comparison of the contrasts. The index goes between −1 and 1. When the image f is considered to be the original, unaltered image, and the image g is considered to be the altered image, then QWB is considered to measure the quality of g with respect to f.
The Wang-Bovic quality index requires a reference image; however, in pan sharpening methods, no reference image exists at the pan sharpened resolution. Rather, the pan sharpened image must be downsampled to the original multispectral resolution, which allows direct computation of the quality index. The QWB can be computed at a certain scale, or block size. A block size of approximately ¼ of the image can be used, according to Borel.
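As a non-limiting illustration, the index of Equation 24 may be computed for one band pair as follows; this sketch evaluates the index over a whole block at once rather than with the moving-window or quarter-image block sizes discussed above, and assumes f is the original multispectral band and g the pan sharpened band downsampled to the multispectral resolution.

    import numpy as np

    def wang_bovic_index(f, g, eps=1e-12):
        # Equation 24 in its compact form: 4*cov*mu_f*mu_g / ((var_f+var_g)*(mu_f^2+mu_g^2))
        mf, mg = f.mean(), g.mean()
        vf, vg = f.var(), g.var()
        cov = ((f - mf) * (g - mg)).mean()
        return 4.0 * cov * mf * mg / ((vf + vg) * (mf ** 2 + mg ** 2) + eps)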
The Wang-Bovic quality index can be computed for each band in the original multi-spectral image, producing a vector of values. The quantity Qλ can be defined as follows:
Qλ=[QWB(MS1,PS1),QWB(MS2,PS2), . . . , QWB(MSN,PSN)] Equation 25
where MS indicates the original multispectral band in the image, and PS indicates the pan sharpened band (downsampled to the multispectral resolution), and N is the number of bands in the image.
The computation of QWB alone is unsatisfactory to fully evaluate pan sharpening quality. Since the computation is carried out at the same spatial resolution as the multispectral image, the QWB index cannot evaluate the spatial quality of the image at the panchromatic resolution. For evaluating the spatial performance of a pan sharpening algorithm, simply computing the cross-correlation of the original pan band with each band of the pan sharpened image provides an effective measure of spatial quality. The cross-correlation of two signals A and B is defined as:
CC(A,B)=Σi(Ai−μA)(Bi−μB)/√(Σi(Ai−μA)²·Σi(Bi−μB)²) Equation 26

where μA and μB are the means of signals A and B, and the summations run over all elements of each signal. The CC metric goes from −1 to 1. The cross-correlation can be computed between the pan band and every band in the pan sharpened image, producing a vector of values. We define the quantity CCλ as:
CCλ=[CC(Pan,PS1),CC(Pan,PS2), . . . , CC(Pan,PSN)] Equation 27
where Pan indicates the panchromatic band, and PS indicates the pan sharpened band, with the subscript indicating the band index, and N is the number of bands in the image. Both the spectral and spatial quality measures defined above are computed using a moving window approach, which is well known.
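As a non-limiting illustration, the cross-correlation of Equation 26 may be computed for one band pair as follows; this sketch operates globally rather than with the moving window described above, and the function name and eps guard are assumptions.

    import numpy as np

    def cross_correlation(a, b, eps=1e-12):
        # Equation 26: normalized cross-correlation of two bands.
        da, db = a - a.mean(), b - b.mean()
        return (da * db).sum() / (np.sqrt((da ** 2).sum() * (db ** 2).sum()) + eps)

    # cc_lambda = [cross_correlation(pan, ps_band) for ps_band in pan_sharpened]   # Equation 27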
The spectral and spatial quality for each band can be averaged as follows to produce spectral and spatial quality indicators for all bands:

Qavg=(1/N)Σi QWB(MSi,PSi)  CCavg=(1/N)Σi CC(Pan,PSi)
Table 1 compares spatial and spectral quality measurements for the HIS direct pan sharpening method of the prior art, HIS-DMT pan sharpening method 202, and HIS-enhanced pan sharpening method 201. Both the HIS-DMT pan sharpening method 202 and the HIS-enhanced pan sharpening method 201 greatly improved the spectral quality of multispectral image 10, at the cost of a small loss in spatial quality, compared to the HIS direct pan sharpening method.
Table 2 shows spatial and spectral quality measurements for the HIS-DMT pan sharpening method 202, with different values of sharpening factor β. Spatial quality increases as values of sharpening factor β approach 1.
In addition, the tradeoff between spectral quality and spatial quality as the value of sharpening factor β increases can be seen in the drawings.
Table 3 shows spectral and spatial quality measurements for direct in-place pan sharpening method 301, enhanced in-place pan sharpening method 302 and DMT in-place pan sharpening method 303.
Table 4 shows spectral and spatial quality measurements for the DMT in-place pan sharpening method 303 for various values of sharpening factor β.
Having herein set forth the various embodiments of the present invention, it is anticipated that suitable modifications can be made thereto which will nonetheless remain within the scope of the invention. The invention shall therefore only be construed in accordance with the following claims:
This application claims priority to both provisional U.S. Patent Application Ser. No. 61/478,377, filed Apr. 22, 2011, and provisional U.S. Patent Application No. 61/478,388, filed Apr. 22, 2011, both of which are incorporated by reference as though fully set forth herein.