Hyperspectral sensors are a class of optical sensors that collect a spectrum from each point in a scene. They differ from multispectral sensors in that the number of bands is much higher (twenty or more) and the spectral bands are contiguous. For remote sensing applications, they are typically deployed on either aircraft or satellites. The data product from a hyperspectral sensor is a three-dimensional array or “cube” of data, with the width and length of the array corresponding to spatial dimensions and the spectrum of each point as the third dimension. Hyperspectral sensors have a wide range of remote sensing applications including terrain classification, environmental monitoring, agricultural monitoring, geological exploration, and surveillance. They have also been used to create spectral images of biological material for the detection of disease and other applications. Known target detection algorithms have been derived from several models of hyperspectral imagery.
The Gaussian mixture model has served as a basis for detecting known targets in hyperspectral and multispectral imagery. This approach models each datum as a realization of a random vector having one of several possible multivariate Gaussian distributions. If each observation, y∈Rn, arises from one of d normal classes, then the data have a normal or Gaussian mixture probability density function:

p(y)=ω1p1(y)+ . . . +ωdpd(y), [Eqn. 1]

where ωk is the probability of class k and

pk(y)=N(y;μk,Γk)

is the normal probability density function having mean μk and covariance Γk. The parameters {(ωk,μk,Γk)|1≦k≦d} are typically estimated from the imagery using defined clusters, the expectation-maximization algorithm, or related algorithms such as the stochastic expectation-maximization algorithm. Known target detection algorithms are generally implemented using a bank, or a linear combination, of the likelihood ratio detection statistics for each class. The covariance of the observations under the target-present hypothesis is usually assumed to equal the covariance of the observations under the background-only hypothesis. Thus the test for the presence of a target against background class k is often formulated as the likelihood ratio for the hypotheses:
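By way of a non-limiting numerical illustration (not part of the original disclosure; the class parameters and band count below are hypothetical), the Gaussian mixture density above can be evaluated directly:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_mixture_pdf(y, weights, means, covs):
    """Evaluate p(y) = sum_k w_k N(y; mu_k, Gamma_k) for one observation y."""
    return sum(w * multivariate_normal.pdf(y, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

# Two hypothetical background classes in a 3-band space.
means = [np.zeros(3), np.full(3, 5.0)]
covs = [np.eye(3), 2.0 * np.eye(3)]
weights = [0.7, 0.3]
p = gaussian_mixture_pdf(np.array([0.1, -0.2, 0.0]), weights, means, covs)
```

With a single class of weight one, the mixture reduces to the plain normal density, which provides a simple consistency check.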
H0,k:y˜N(μk,Γk)
H1,k:y˜N(s,Γk),
where s∈Rn is the spectrum of the target. In this case, the log of the likelihood ratio is equivalent (up to an additive constant) to the spectral matched filter for a target against a background modeled by class k, i.e.,

Tk(y)=(s−μk)TΓk−1(y−μk). [Eqn. 2]
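A minimal sketch of one common form of the spectral matched filter statistic, (s−μk)TΓk−1(y−μk), follows; the function name and array shapes are illustrative only:

```python
import numpy as np

def spectral_matched_filter(y, s, mu_k, cov_k):
    """Matched filter for target spectrum s against background class k:
    (s - mu_k)^T Gamma_k^{-1} (y - mu_k)."""
    w = np.linalg.solve(cov_k, s - mu_k)   # Gamma_k^{-1}(s - mu_k)
    return float(w @ (y - mu_k))
```

The filter output is zero when the pixel equals the background mean and is maximized over unit-energy inputs in the direction of the whitened target signature.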
Linear and convex models have also served as the basis for formulating known target detection algorithms. In this approach the data are modeled as
H0:y=Wαb+η
H1:y=Sαt+Wαb+η, [Eqns. 3]
where: W is an n×P matrix such that the columns of W span an interference subspace of dimension P; S is an n×Q matrix such that the columns of S span a signal subspace of dimension Q; and η is additive noise such that η˜N(0,σ2Γ). W, S, and Γ are assumed known, and αt∈RQ and αb∈RP are assumed unknown. σ2 may be known or unknown. Additionally, constraints may be placed on the coefficient vectors αt and αb, e.g., nonnegativity (c.1) and sum-to-one (c.2) constraints.
General procedures have not been developed for simultaneously estimating W and Γ. However, if either 1) αb is locally constant or 2) the data may be segmented into regions such that αb is essentially constant on each region, the term Wαb may be absorbed into the noise which is then modeled by η˜N(μ,Γ), where the parameters μ and Γ are estimated locally or for each segment. With W=0, Γ may be estimated from background reference data, and if Γ=In×n, a basis for W may be estimated as the eigenvectors of a background data correlation matrix having eigenvalues greater than σ2, a threshold determined from the eigenspectrum of the data correlation matrix. Eqns. 3 apply a convex or linear model to the data if the constraints (c.1, c.2) are or are not imposed, respectively.
The linear models have been used by several practitioners in the art to derive likelihood ratio and generalized likelihood ratio detection statistics. See, for example, Scharf et al. [L. L. Scharf and B. Friedlander, “Matched Subspace Detectors,” IEEE Transactions on Signal Processing, Vol. 42, No. 8, August 1994, pp. 2146–2157], Kraut et al. [S. Kraut, L. L. Scharf, and L. T. McWhorter, “Adaptive Subspace Detectors,” IEEE Transactions on Signal Processing, Vol. 49, No. 1, January 2001, pp. 1–16], and Manolakis et al. [D. Manolakis, C. Siracusa, and G. Shaw, “Hyperspectral Subpixel Target Detection Using the Linear Mixing Model,” IEEE Transactions on Geoscience and Remote Sensing, Vol. 39, No. 7, July 2001, pp. 1392–1409]. Likelihood ratio and generalized likelihood ratio (GLR) techniques have also been applied to the convex model. For example, Manolakis et al. showed that the GLR test when Γ=In×n, σ2 is unknown, and W and S are known is
where PA is orthogonal projection with reference to the Euclidean inner product onto the subspace A, and A⊥⊂Rn is the subspace orthogonal to A.
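The projection-based test can be sketched numerically as a ratio of residual energies after removing the interference subspace alone versus the interference and signal subspaces together; this quantity is assumed here, for illustration, to be equivalent to the GLR up to a monotone transformation:

```python
import numpy as np

def proj_complement(A):
    """Orthogonal projector onto the subspace orthogonal to span(A)."""
    Q, _ = np.linalg.qr(A)
    return np.eye(A.shape[0]) - Q @ Q.T

def subspace_glr(y, S, W):
    """Residual energy with interference removed (numerator) over residual
    energy with interference and signal removed (denominator); large values
    favor the target-present hypothesis."""
    num = y @ proj_complement(W) @ y
    den = y @ proj_complement(np.hstack([W, S])) @ y
    return num / den
```

For a pixel orthogonal to both subspaces the ratio equals one, and any signal component inflates the numerator but not the denominator.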
Spectra from a class of material are often better modeled as random rather than as fixed vectors. This may be due to biochemical and biophysical variability of materials in a scene. For such data, neither the linear mixture model nor the normal mixture model is adequate, and better classification and detection results may accrue from using more accurate methods. Stocker et al. [A. D. Stocker and A. P. Schaum, “Application of stochastic mixing models to hyperspectral detection problems,” SPIE Proceedings 3071, Algorithms for Multispectral and Hyperspectral Imagery III, S. S. Shen and A. E. Iverson eds. August 1997] propose a stochastic mixture model in which each fundamental class is identified with a normally distributed random variable, i.e.
They estimate the parameters of the model by quantizing the set of allowed abundance values and fitting a discrete normal mixture density to the data. More precisely, let Δ=1/M denote the resolution of the quantization. Then the set of allowed coefficient sequences is

A={(α1, . . . ,αd)|αk∈{0,Δ,2Δ, . . . ,1}, α1+ . . . +αd=1}.

For each α=(α1, . . . ,αd)∈A, define
Then the observations are fit to the mixture model
The fitting is accomplished using a variation of the stochastic expectation-maximization algorithm such that Eqn. 6 is satisfied in a least-squares sense. Stocker et al. demonstrate improved classification in comparison with clustering methods using three classes, and they demonstrate detection algorithms using this model. They note, however, that the method is impractical if the data comprise a large number of classes or if Δ is small, as the number of elements of A, which is given by:

|A|=(M+d−1)!/(M!(d−1)!),

becomes very large. Furthermore, quantizing the allowed abundance values leads to modeling and estimation error.
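The combinatorial growth noted above can be checked numerically. The number of length-d sequences of nonnegative multiples of 1/M that sum to one is the stars-and-bars count C(M+d−1, d−1), an identity assumed here as consistent with the text's description:

```python
from math import comb

def num_quantized_abundances(M, d):
    """Count of length-d sequences of nonnegative multiples of 1/M that sum
    to one (stars-and-bars)."""
    return comb(M + d - 1, d - 1)

# Growth with resolution M and number of classes d (exact counts):
small = num_quantized_abundances(10, 3)    # a modest set
large = num_quantized_abundances(100, 10)  # impractically large
```

Even moderate resolutions and class counts make the quantized set far too large to enumerate.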
Stocker et al. used this model to develop a known target detection statistic: the finite target matched filter (FTMF). Observations of the target, t, and background, b, are represented as samples from the normal random variables t˜N(μ1,Γ1) and b˜N(μ0,Γ0), respectively. An observation that consists of a fraction (1−ƒ) of background material and a fraction ƒ of target material is then modeled as y˜N((1−ƒ)μ0+ƒμ1,(1−ƒ)²Γ0+ƒ²Γ1)=p(y|ƒ). Stocker et al. define the FTMF as the generalized likelihood ratio test:

TFTMF(y)=max0≦ƒ≦1 p(y|ƒ)/p(y|0),
and a detection algorithm is achieved by applying a threshold to the values of TFTMF. A bank of FTMFs may be applied to Gaussian mixture data given by Eqns. 1 or 7.
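A numerical sketch of the FTMF statistic follows; the grid search over the fill fraction ƒ is an implementation choice for illustration, not part of Stocker et al.'s formulation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def ftmf_statistic(y, mu0, cov0, mu1, cov1, n_grid=201):
    """Log generalized likelihood ratio max_f p(y|f) / p(y|0), with the fill
    fraction f maximized over a uniform grid on [0, 1]."""
    def loglik(f):
        mean = (1.0 - f) * mu0 + f * mu1
        cov = (1.0 - f) ** 2 * cov0 + f ** 2 * cov1
        return multivariate_normal.logpdf(y, mean=mean, cov=cov)
    best = max(loglik(f) for f in np.linspace(0.0, 1.0, n_grid))
    return best - loglik(0.0)
```

Because ƒ=0 lies on the grid, the statistic is nonnegative, and full-pixel targets score far higher than pure-background pixels.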
These unresolved problems and deficiencies are clearly felt in the art and are solved by this invention in the manner described below.
A method for detecting targets comprises: a) receiving spectral data; b) using a normal compositional model for estimating background parameters from the spectral data and target components; c) estimating abundance values of classes of the normal compositional model from the background parameters and the spectral data; d) estimating target class covariance values from the spectral data, the background parameters, and the target components; e) estimating target-plus-background abundance values from the target class covariance values, the background parameters, the spectral data, and the target components; f) employing a normal compositional model for determining a likelihood ratio detection statistic from the target class covariance values, target-plus-background abundance values, spectral data, target components, background parameters, and background abundance values; and g) generating a determination output signal that represents whether an observation includes a target from the likelihood ratio detection statistic.
For a more complete understanding of this invention, reference is now made to the following detailed description of the embodiments as illustrated in the accompanying drawing, in which like reference designations represent like features throughout the several views and wherein:
The invention is used to detect known signatures from spectral imagery. The invention is operated as shown in
1. Normal Compositional Model (NCM)
The normal compositional model (NCM) represents each observation yi∈Rn as:

yi=α1iε1+ . . . +αdiεd+cη,

subject to the constraint c.1 (nonnegative abundances) together with c.2.a (abundances summing to one) or c.2.b, or other constraints, where εk,η∈Rn are random vectors such that εk˜N(μk,Γk), η˜N(μ0,Γ0), and c=0,1. The number of classes used in the model of the background is d=d0, and the number of classes used in the background-plus-target model is d=d1=d0+ds, where ds is the number of target components. Assuming that Γk≠0 for all 1≦k≦d, we do not require the linear independence of the background and/or target mean vectors. Constraint c.2.b may be used in place of c.2.a to account for variations in scale or, as in remote sensing, scalar variations in illumination. Applied to remote sensing data, η models path radiance, additive sensor noise, and other additive terms. By choosing c=0 and constraints c.1 and c.2.a, the model reduces to the Schaum-Stocker model (Eqn. 5). Even with this choice of parameters and constraints, however, the present invention has advantages over the Schaum-Stocker approach because the estimation procedure does not confine the abundance values to preselected quantized values. Therefore, it is not restricted to a small number of classes, and it provides more accurate estimates of class parameters and abundance values. By choosing Γk=0 for all 1≦k≦d and c=1, the NCM reduces to the linear or convex mixture models (Eqn. 3) according as the constraints c.1 and c.2.a (or c.2.b) are not or are imposed, although the parameter estimation technique described below will not refine initial estimates of the μk in this case. It does, however, provide a maximum likelihood approach to estimating the parameters of the distribution of η. Furthermore, by imposing the constraint c.2.a together with αki∈{0,1} (so that for each 1≦i≦N exactly one αki equals 1), the model encompasses the Gaussian mixture model (Eqn. 1).
Whereas specialized constraints applied to the parameters of the NCM reduce it to the older models, in general, without imposing special constraints, the NCM provides a model having higher likelihood than these alternatives.
2. Parameter Estimation
The parameter estimation module is illustrated in
A. Initialization
The initialization module is depicted in
B. Updating Abundance Estimates (UA)
For given parameters (μk,Γk), 1≦k≦d, and given abundances αi=(α1i, . . . ,αdi), let

μ(αi)=α1iμ1+ . . . +αdiμd and Γ(αi)=α1i²Γ1+ . . . +αdi²Γd.

Then, yi˜N(μ(αi)+μ0,Γ(αi)+Γ0). Maximum likelihood abundance estimates are thus obtained by solving

αi=arg maxα log N(yi;μ(α)+μ0,Γ(α)+Γ0), [Eqn. 12]

subject to the constraints c.1, and c.2.a or c.2.b, or other constraints.
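This constrained maximum-likelihood estimate can be sketched with a general-purpose solver; the squared-abundance covariance combination follows the model above, and the use of SLSQP is an implementation assumption, not the invention's prescribed method:

```python
import numpy as np
from scipy.optimize import minimize

def ml_abundances(y, mus, covs, mu0, cov0):
    """Maximum-likelihood abundances for one pixel under c.1 (alpha_k >= 0)
    and c.2.a (abundances summing to one)."""
    d = len(mus)
    def nll(a):
        # Negative log-likelihood up to constants: 0.5*(log det + Mahalanobis).
        mean = sum(ak * mk for ak, mk in zip(a, mus)) + mu0
        cov = sum(ak ** 2 * ck for ak, ck in zip(a, covs)) + cov0
        resid = y - mean
        _, logdet = np.linalg.slogdet(cov)
        return 0.5 * (logdet + resid @ np.linalg.solve(cov, resid))
    result = minimize(nll, np.full(d, 1.0 / d),
                      bounds=[(0.0, 1.0)] * d,
                      constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0},
                      method="SLSQP")
    return result.x
```

A pixel drawn at a class mean should be assigned nearly all of its abundance to that class.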
C. Update Class Parameters (UP)
For given abundance estimates, the class parameters of the background,
Ω={(μk,Γk)|0≦k≦d},
may be estimated by applying the expectation-maximization (EM) algorithm. Let
Ωr={(μkr,Γkr)|0≦k≦d}
denote the estimate of the parameters after the rth iteration of the EM algorithm. Given the abundance values {αki|1≦i≦N,1≦k≦d}, define
The EM update equations are:
The class parameters are updated (UP) using the expectation-maximization equations (Eqn. 13) and the current abundance estimates {αki}. Likelihood increases with each iteration of UA or UP. Thus, a sequence of parameter estimates of increasing likelihood is obtained by applying the updates in alternation: UA, UP, UA, UP, . . . . The iteration is halted when a convergence criterion is satisfied.
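The alternation of UA and UP steps can be sketched generically as coordinate ascent; the update functions and the scalar toy state below are placeholders for illustration, not the invention's actual update equations:

```python
def alternate_ua_up(state, update_abundances, update_parameters, loglik,
                    tol=1e-6, max_iter=100):
    """Apply UA then UP repeatedly, halting when the likelihood improvement
    per round falls below tol."""
    prev = loglik(state)
    for _ in range(max_iter):
        state = update_parameters(update_abundances(state))
        cur = loglik(state)
        if cur - prev < tol:
            break
        prev = cur
    return state
```

Because each update is required to not decrease the likelihood, the stopping rule on the likelihood gain is well defined.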
3. Detection Algorithms
Given parameters of the background and the target classes, the generalized likelihood ratio test may be computed. Let αi0 and αi1 denote the abundance estimates obtained by solving Eqn. 12 using only the background classes and using the combination of background and target classes, respectively. The log of the likelihood function given the abundance and parameter values is

L(yi;α)=log N(yi;μ(α)+μ0,Γ(α)+Γ0),

and the log of the generalized likelihood ratio is

TK(yi)=L(yi;αi1)−L(yi;αi0).

An inference concerning the presence of a target in pixel i is made based on the value of TK(yi). A threshold τ is determined that corresponds to an attribute of the test, e.g., the probability of false alarm, by analyzing the probability distribution of TK(yi) applied to background data. The decision criterion is then to declare a target present in pixel i if TK(yi)≧τ, and absent otherwise.
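The calibration of τ against background data can be sketched empirically; the quantile rule below is one simple assumed realization of the false-alarm analysis described above:

```python
import numpy as np

def threshold_for_pfa(background_scores, pfa):
    """Choose tau so that roughly a fraction pfa of background-only detection
    statistics exceed it (empirical quantile of the background distribution)."""
    return float(np.quantile(background_scores, 1.0 - pfa))

def decide(score, tau):
    """Declare a target present when the detection statistic reaches tau."""
    return score >= tau
```

With a large background sample, the fraction of background pixels exceeding the returned threshold is close to the requested false-alarm probability.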
The class parameters may be updated using a segmented expectation-maximization algorithm in place of the expectation-maximization algorithm. In this approach a lower threshold, possibly zero, is placed on the abundance of a class, and only those pixels for which the abundance exceeds the threshold are utilized in the update of the associated class parameters. This approach saves computation and improves the speed of convergence of the parameter estimates.
Rather than solving for the maximum likelihood value of the abundance estimates in the parameter estimation phase of the operation, random samples of the abundance estimates may be generated and these may be used in place of the maximum likelihood estimates in the updating of the class parameters.
For parameter estimation, the image may be culled of spectra that are sufficiently close to the target spectra in order to further reduce the bias in the estimate of background parameters from data that may also contain targets.
The processing may be conceived of as applying two models to the data: 1) the background only model, and 2) the background and target model. In the description above, the background parameters were estimated only once. Alternatively, one could estimate parameters of the background only model and separately estimate parameters of the background-plus-target model.
Referring to
Referring to
Step 250 for determining converged class parameter candidates 252 further includes step 240 for creating updated background class parameters 242 from the current class parameters 222, updated abundance estimates 232, and spectral data 112, and step 260 for generating converged class parameter candidates 252 if the background class parameters 242 satisfy second convergence criteria. However, if the background class parameters 242 do not satisfy second convergence criteria, then step 260 generates a non-convergence signal that is provided to step 240, to which step 250 returns.
Clearly, other embodiments and modifications of this invention may occur readily to those of ordinary skill in the art in view of these teachings. Therefore, this invention is to be limited only by the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawing.
This application claims the benefit of U.S. Provisional Application No. 60/394,649, filed 9 Jul. 2002, and is related by common inventorship and subject matter to the commonly-assigned U.S. Provisional Patent Application No. 60/394,708 entitled “System and Method for Detecting Anomalies in Multispectral and Hyperspectral Imagery Employing the Normal Compositional Model” filed on 9 Jul. 2002.
Number | Name | Date | Kind |
---|---|---|---|
6079665 | Nella et al. | Jun 2000 | A |
Number | Date | Country | |
---|---|---|---|
60394649 | Jul 2002 | US | |
60394708 | Jul 2002 | US |