Method of determining alignment of images in high dimensional feature space

Information

  • Patent Grant
  • Patent Number
    7,653,264
  • Date Filed
    Friday, March 3, 2006
  • Date Issued
    Tuesday, January 26, 2010
Abstract
A method determines alignment of images in high dimensional feature space. The method comprises registering a source image of a reference modality to a target image of a second modality with an algorithm based upon a measure of information affinity present in both of the source and target images to create a registered image. Next, a plurality of feature vectors are extracted from the registered image for each of the source and target images, and attributes of the joint distribution of the feature vectors are captured using an entropic graph spanning the features. Edge lengths between proximal feature vectors are extracted from the entropic graph, and a similarity measure of one of an α-divergence estimate or an α-affinity estimate is constructed based upon these edge lengths to quantify whether the source and target images are sufficiently registered.
Description
Computer Code

Portions of the subject application are contained on a single compact disc and are incorporated herein by reference. The compact disc is titled “Computer Code,” is in IBM-PC machine format, compatible with MS-Windows, and contains seven files in ASCII file format having the following file names: 1mstkruskal.txt; 2mstprim.txt; 3hpmst.txt; 4knngkd.txt; 5miknng.txt; 6gaknng.txt; and 7nonlinknng.txt.


BACKGROUND OF THE INVENTION

1. Field of the Invention


The subject invention relates to a method of determining alignment of images in high dimensional feature space, and more specifically, to determining alignment of a sequence of images of different modalities from various types of applications.


2. Description of the Prior Art


Registering images aligns a target image onto a source, or reference, image using various registration algorithms to enhance image correlation, remove geometric distortion, and to facilitate various image processing tasks, such as image analysis, segmentation, understanding, visualization, and rendering. Image registration methods select a sequence of intensity preserving transformations, or algorithms, to maximize an image similarity measure between the reference image and the target image. Image registration has wide applications in medical imaging, DNA sequencing, video motion analysis, satellite imagery, remote sensing, security and surveillance. The accuracy of the registration algorithm critically depends on two factors: the selection of a highly discriminating image feature space and the choice of similarity measure to match these image features. These factors are especially important when some of the intensity differences are due to the sensor itself, as arises in registration with different types of imaging sensors or registration of speckle-limited images.


Multi-sensor images typically have intensity maps that are unique to the sensors used to acquire them and a direct linear correlation between intensity maps may not exist. Several other matching functions have been suggested in D. Hill, P. Batchelor, M. Holden, and D. Hawkes, “Medical image registration,” Phys. Med. Biol., vol. 26, pp. R1-R45, 2001; M. Jenkinson, P. Bannister, M. Brady, and S. Smith, “Improved methods for the registration and motion correction of brain images,” Oxford University, Tech. Rep., 2002; and C. Penney, J. Weese, J. Little, D. Hill, and D. Hawkes, “A comparison of similarity measures for use in 2-D-3-D medical image registration,” IEEE Trans. on Medical Imaging, vol. 17, no. 4, pp. 586-595, 1998.


Some of the most widespread techniques are: histogram matching (J. Huang, S. Kumar, M. Mitra, and W. Zhu, “Spatial color indexing and applications,” in Proc. of IEEE Int'l Conf. Computer Vision ICCV '98, pp. 602-608); texture matching (J. Ashley, R. Barber, M. Flickner, D. Lee, W. Niblack, and D. Petkovic, “Automatic and semiautomatic methods for image annotation and retrieval in QBIC,” in Proc. SPIE Storage and Retrieval for Image and Video Databases III, pp. 24-35); intensity cross correlation (J. B. Maintz and M. Viergever, “A survey of medical image registration,” Medical Image Analysis, vol. 2, no. 1, pp. 1-36, 1998); optical flow matching (M. Lefébure and L. Cohen, “Image registration, optical flow and local rigidity,” J. Mathematical Imaging and Vision, vol. 14, no. 2, pp. 131-147, 2001); kernel-based classification methods (N. Cristianini and J. Shawe-Taylor, Support Vector Machines and other kernel-based learning methods. Cambridge U. Press, 2000); and boosting classification methods (J. S. de Bonet and P. Viola, “Structure driven image database retrieval,” in Advances in neural information processing, vol. 10, 1997; K. Tieu and P. Viola, “Boosting image retrieval,” in IEEE Conference on Computer Vision and Pattern Recognition, 2000).


In such cases, it is well known that the standard linear cross correlation is a poor similarity measure. Previously, the images have been aligned based upon a single dimension feature, such as gray scale pixel intensity, using simple correlation techniques. These methods produce adequate results for images that are of a single modality. However, if the images are from different modalities or have high dimensional features, the single dimension feature approach does not produce adequate results.


Previous sequence alignment methods that can be applied to high dimensional data have been based on simple correlation measures that can handle the high computational load. When the sequence of images consists of only a few objects one can apply sophisticated methods that compute the entire empirical distribution of low dimensional features extracted from the sequence. These methods have superior performance when the features are well selected as they can adapt to non-linearities, spurious differences, and artifacts within the sequence. Examples include histogram equalization methods and methods based on minimizing joint entropy or maximizing mutual information (MI). However, these feature-distribution methods are difficult to apply as the feature dimension becomes high. Thus, these sequence alignment methods have been limited to low dimensional feature spaces, such as coincidences between pixel intensity levels.


The multi-modality images may arise from alignment of gene sequences over several genomes, across-modality registration of time sequences of patient scans acquired during cancer therapy, or simultaneous tracking of several objects in a video sequence. The previous sequence alignment methods fail when the objects in the sequence are high dimensional and are not exact replicates of each other, e.g., due to the presence of non intensity-preserving deformations or spurious noise and artifacts. In particular, computationally simple linear cross correlation methods fail to capture non-linear similarities between objects.


Entropic methods use a matching criterion based on different similarity measures defined as relative entropies between the feature densities. Entropic methods have been shown to be virtually unbeatable for some medical imaging image registration applications as discussed in C. R. Meyer, J. L. Boes, B. Kim, P. H. Bland, K. R. Zasadny, P. V. Kison, K. F. Koral, K. A. Frey, and R. L. Wahl, “Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations,” Medical Image Analysis, vol. 1, no. 3, pp. 195-206, April 1997 and D. Hill, P. Batchelor, M. Holden, and D. Hawkes, “Medical image registration,” Phys. Med. Biol., vol. 26, pp. R1-R45, 2001.


Several properties of entropic methods have contributed to their popularity for image registration: 1) because they are statistically based measures they easily accommodate combinations of texture based and edge based registration features; 2) relative entropies are easily defined that are invariant to invertible intensity transformations on the feature space; and 3) they are simple to compute and the number of local maxima can be controlled by suitably constraining the set of image transformations.


The difficulty in applying these entropic methods to a long sequence of images becomes almost insurmountable since the dimension of the feature space grows linearly in the length of the sequence. In order to apply these methods to long sequences, each pair of images has to be analyzed thereby increasing the likelihood of introducing computational errors. Further, such previous entropic methods have resulted in a computational bottleneck that is a major hurdle for information based sequence alignment algorithms.


These related art methods are characterized by one or more inadequacies. Specifically, the prior linear correlation methods do not extend to higher dimensional features, and the prior entropic methods are equally unworkable with regard to higher dimensional features. Accordingly, it would be advantageous to provide a method that overcomes these inadequacies.


SUMMARY OF THE INVENTION AND ADVANTAGES

The subject invention provides a method of determining alignment of images in high dimensional feature space. The method comprises registering a source image of a reference modality to a target image of a second modality with an algorithm based upon a measure of information affinity present in both of the source and target images to create a registered image. Next, a plurality of feature vectors are extracted from the registered image for each of the source and target images, and a distribution of the feature vectors is plotted on an entropic graph. Edge lengths are determined between proximal feature vectors from the entropic graph, and a similarity measure of one of an α-divergence estimate or an α-affinity estimate is determined based upon these edge lengths such that the α-divergence or α-affinity estimate indicates whether the source and target images are sufficiently registered.


The subject invention provides a method that overcomes the inadequacies that characterize the prior art methods. First, the subject invention is able to register images of multi-modalities based upon an information affinity such as mutual information (MI) through the use of the entropic graph and the edge lengths. Second, the subject invention is able to produce the information affinity without encountering any computational bottleneck. The subject invention is also able to analyze a long sequence of images in the same feature space simultaneously instead of having to perform multiple analyses on each pair of images.





BRIEF DESCRIPTION OF THE DRAWINGS

Other advantages of the present invention will be readily appreciated, as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings wherein:



FIG. 1 is a block diagram of an image registration system;



FIG. 2A is an MRI image of the brain with additive noise, T1 weighted I1;



FIG. 2B is an MRI image of the brain with additive noise, T2 weighted I2;



FIG. 3 is a joint gray-level pixel coincidence histogram from registering FIGS. 2A and 2B;



FIG. 4A is a visible image, I1, of Atlanta, Ga. acquired via satellite;



FIG. 4B is a thermal infrared image, I2, of Atlanta, Ga. acquired via satellite;



FIG. 4C is a thermal infrared image, T(I2), of Atlanta, Ga. acquired via satellite shown in FIG. 4B that has been rotated;



FIG. 5A is a joint gray-level pixel coincidence histogram from registering FIGS. 4A and 4B;



FIG. 5B is a joint gray-level pixel coincidence histogram from registering FIGS. 4A and 4C;



FIG. 6A is an ultrasound breast image;



FIG. 6B is an ultrasound breast image shown in FIG. 6A that has been rotated by about 8°;



FIG. 7A is a joint coincidence histogram from registering FIG. 6A with itself, i.e., perfect alignment;



FIG. 7B is a joint coincidence histogram from registering FIG. 6A with FIG. 6B;



FIG. 7C is a joint coincidence histogram from registering FIG. 6A with another ultrasound breast image separated by about 2 mm;



FIG. 7D is a joint coincidence histogram from registering FIG. 6A with another ultrasound breast image separated by about 2 mm and rotated about 8°;



FIG. 8 is a graphical illustration of a minimal spanning tree for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2 and Σ1=Σ2;



FIG. 9 is a graphical illustration of a minimal spanning tree for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2−3 and Σ1=Σ2;



FIG. 10 is a graphical illustration of a k-nearest neighbor graph (k-NNG) for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2 and Σ1=Σ2;



FIG. 11 is a graphical illustration of a k-NNG for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2−3 and Σ1=Σ2;



FIG. 12 is a graphical illustration of a Henze-Penrose (HP) affinity for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2 and Σ1=Σ2;



FIG. 13 is a graphical illustration of a HP affinity for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2−3 and Σ1=Σ2;



FIG. 14 is a composite of twenty 2-dimensional slices extracted from 3-dimensional volumetric ultrasound breast scans from twenty patients undergoing chemotherapy;



FIG. 15 is a representation of an 8×8 independent component analysis (ICA) set obtained from training on randomly selected 8×8 blocks in the ultrasound training database of breast scans. Features extracted from an image are the 66-dimensional vectors obtained by projecting all 8×8 neighborhoods of the image onto the ICA basis and including the spatial coordinate.



FIG. 16 is a graphical representation of a rotational root mean squared error obtained from registration of ultrasound breast images (FIGS. 14 and 15) using six different image similarity/dissimilarity criteria and standard error bars are as indicated. These plots were obtained by averaging 15 cases, each with 250 Monte Carlo trials adding noise to the images prior to registration, corresponding to a total of 3750 registration experiments;



FIG. 17A is a thermal infrared image, I1, of Atlanta, Ga. acquired via satellite;



FIG. 17B is a visible image, I2, of Atlanta, Ga. acquired via satellite that has been rotated;



FIG. 18 is a graphical representation of a rotational root mean squared error obtained from rotational registration of multisensor satellite imagery (FIGS. 17A and 17B) using six different image similarity/dissimilarity criteria and standard error bars are as indicated. These plots were obtained from Monte Carlo trials consisting of adding independent identically distributed Gaussian distributed noise to the images prior to registration;



FIG. 19 is a graphical representation of an average affinity and divergence, over all images, in the vicinity of zero rotation error by α-geometric arithmetic mean affinity, α-mutual information based upon k-NNG, and Henze-Penrose affinity; and



FIG. 20 is a graphical representation of an average affinity and divergence, over all images, in the vicinity of zero rotation error by α-Jensen based upon k-NNG and α-Jensen based upon MST.





DETAILED DESCRIPTION OF THE INVENTION

A method of determining alignment of images in high dimensional feature space is disclosed. The method is particularly useful for aligning images of different modalities. Examples of modalities include, but are not limited to, X-ray imaging, Magnetic Resonance Imaging (MRI), functional MRI (fMRI), computed tomography (CT) imaging, single photon emission computed tomography (SPECT), and positron emission tomography (PET). Other non-medical uses of the subject invention may include alignment of gene sequences over several genomes, simultaneous tracking of several objects in a video sequence, audio waveforms, and satellite imagery.


The subject invention registers a source image of a reference modality to a target image of a second modality. Preferably, the subject invention is well suited to register two or more images in the feature space. Specifically, the subject invention is robust enough to handle relatively long sequences of images. As one example, the subject invention may be used to register sequences of images obtained from a security camera for facial recognition. Various types of features may be utilized by the subject invention. Illustrative examples of features include, but are not limited to, pixel gray level, location of pixel intensity such as upper left, lower left, etc., edges in the images, measures of texture, orientation between 0° and 360°, and curviness/deviation from straightness. The subject invention is able to determine the alignment of the images based upon combinations of various features.


Referring to FIG. 1, the three main components of an image registration system are shown. The system includes (1) definition and extraction of features that discriminate between different image poses Iref and Itar; (2) adaptation of a matching criterion that quantifies feature similarity and is capable of resolving important differences between images, yet is robust to image artifacts; and (3) implementation of optimization techniques which allow fast search over possible transformations T. The subject invention is principally concerned with the second component of the system, which is the choice of matching criterion between the images and may also be called a similarity or dissimilarity measure. It is to be appreciated by those of ordinary skill in the art that the similarity measure between the source image and the target image is based upon an α-divergence or α-affinity estimate indicating whether the source and target images are sufficiently registered in the registered image.


The system is extendible to alignment of a sequence of images {I0, . . . , IM} by 1) identifying one member of this sequence, e.g., I0, as the source, or reference, image Iref, 2) redefining Itar as the sequence of target images {I0, . . . , IM}; and 3) redefining the transformation T as a transformation [T1; . . . ; TM] on the sequence Itar of target images. The rest of the elements of the system in FIG. 1 remain the same as discussed above. In particular, the image matching criterion is applied to the pair (Iref; Itar) as before.
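
For illustration only, the third component (the fast search over transformations T) can be sketched as a brute-force scan over candidate rotations, with the matching criterion left as a pluggable function. The names register_by_rotation and similarity below are hypothetical, and the sketch is not the patent's optimization technique:

```python
import numpy as np
from scipy.ndimage import rotate

def register_by_rotation(img_ref, img_tar, similarity,
                         angles=np.arange(-16.0, 17.0, 2.0)):
    # Score every candidate rotation T of the target image with the supplied
    # matching criterion and keep the angle that maximizes it.
    scores = [similarity(img_ref, rotate(img_tar, angle, reshape=False))
              for angle in angles]
    return angles[int(np.argmax(scores))]
```

In the experiments described later, the transformations are restricted to rotations over a similar range (about ±16°), so an exhaustive scan of this kind is a reasonable stand-in for the optimization block of FIG. 1.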


With reference to FIGS. 2A and 2B, FIG. 2A is an MRI image of the brain with additive noise, T1 weighted I1 and FIG. 2B is an MRI image of the brain with additive noise, T2 weighted I2. Although acquired by a single sensor, the time weighting renders different intensity maps to identical structures in the brain. Thus, a joint gray-level pixel coincidence histogram, shown in FIG. 3, of FIGS. 2A and 2B is clustered and does not exhibit a linear correlation between intensities. The subject invention is able to register such images with an algorithm based upon a measure of information affinity present in both of the source and target images to create the registered image. Preferably, the subject invention utilizes mutual information (MI), but other types of information affinities may be used in place of mutual information.


The MI can be interpreted as a similarity measure between the reference and target pixel intensities or as a dissimilarity measure between the joint density and the product of the marginals of these intensities. Let X0 be a reference image and consider a transformation of the target image (X1), defined as XT=T(X1). Assume that the images are sampled on a grid of M×N pixels and that, at each pixel, a pair of feature vectors, such as (scalar) gray levels, is extracted. The feature vectors are defined as (z0k, zTk) and are extracted from the k-th pixel location in the reference and target images, respectively.


The basic assumption underlying MI image registration is that {(z0k, zTk)}, k=1, . . . , MN, are independent identically distributed (i.i.d.) realizations of a pair (Z0, ZT), (ZT=T(Z1)), of random variables having joint density f0,1(z0, zT). If the reference and the target images were perfectly correlated, i.e. identical images, then Z0 and ZT would be dependent random variables. FIG. 4A is a visible image, I1, of Atlanta, Ga. acquired via satellite and FIG. 4B is a thermal infrared image, I2, of Atlanta, Ga. acquired via satellite. FIG. 4C is a thermal infrared image, T(I2), of Atlanta, Ga. acquired via satellite shown in FIG. 4B that has been rotated. Referring to FIGS. 4A, 4B, and 4C, the MI alignment procedure is illustrated through a multi-sensor remote sensing example. Aligned images acquired by visible and thermally sensitive satellite sensors generate a joint gray level pixel coincidence histogram f0,1(z0, z1). Note that in FIG. 5A the joint gray-level pixel coincidence histogram is not concentrated along the diagonal due to the multi-sensor acquisition of the images. When the thermal image is rotationally transformed, the corresponding joint gray-level pixel coincidence histogram f0,1(z0, zT), shown in FIG. 5B, is dispersed, thus yielding lower mutual information than before.


Yet another example utilizing MI is shown in FIGS. 6A and 6B. FIG. 6A is an ultrasound breast image and FIG. 6B is an ultrasound breast image shown in FIG. 6A that has been rotated by about 8°. Applying MI as discussed above, a joint coincidence histogram from registering FIG. 6A with itself, i.e., perfect alignment; is shown in FIG. 7A. FIG. 7B is a joint coincidence histogram from registering FIG. 6A with FIG. 6B. As can be seen, FIG. 7B yields lower mutual information. FIG. 7C is a joint coincidence histogram from registering FIG. 6A with another ultrasound breast image separated by about 2 mm and FIG. 7D is a joint coincidence histogram from registering FIG. 6A with another ultrasound breast image separated by about 2 mm and rotated about 8°. As can again be seen, lower mutual information is present in these histograms, thus illustrating one disadvantage of using MI to register such images.


On the other hand, if the images were statistically independent, the joint density of Z0 and ZT would factor into the product of the marginals f0,1(z0, zT)=f0(z0)f1(zT). As a result of this, the subject invention uses an α-divergence expressed as Dα(f0,1(z0, zT)∥f0(z0)f1(zT)) between f0,1(z0, zT) and f0(z0)f1(zT) as a similarity measure. For α ∈ (0, 1), the above α-divergence becomes α-mutual information (α-MI) between Z0 and ZT and has the general formula:







α-MI = Dα(f∥g) = 1/(α−1) log ∫∫ f^α(x, y) f^(1−α)(x) f^(1−α)(y) dx dy.









Utilizing α-MI, the subject invention is better suited to handle registration of FIGS. 4B and 4C and FIGS. 6A and 6B as will be discussed in further detail below.
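
As a rough, hedged illustration of the α-MI formula above, the following sketch discretizes the joint density with a joint gray-level coincidence histogram (as in FIGS. 3 and 5) and evaluates the sum directly; the function name alpha_mi and the bin count are illustrative choices, not part of the patent:

```python
import numpy as np

def alpha_mi(img_ref, img_tar, alpha=0.5, bins=64):
    # Joint gray-level coincidence histogram as a discrete stand-in for f(x, y).
    joint, _, _ = np.histogram2d(img_ref.ravel(), img_tar.ravel(), bins=bins)
    joint = joint / joint.sum()
    fx = joint.sum(axis=1, keepdims=True)   # marginal of the reference image
    fy = joint.sum(axis=0, keepdims=True)   # marginal of the target image
    prod = fx * fy                          # product of marginals f(x) f(y)
    mask = (joint > 0) & (prod > 0)
    s = np.sum(joint[mask] ** alpha * prod[mask] ** (1.0 - alpha))
    return np.log(s) / (alpha - 1.0)        # 1/(alpha-1) * log of the double sum
```

The α-MI computed this way is largest when the joint histogram is tightly clustered (good alignment) and shrinks toward zero as the histogram disperses, matching the behavior seen in FIGS. 5A and 5B.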


Yet another similarity measure that may be used with the subject invention is an α-geometric-arithmetic (α-GA) divergence. Given continuous distributions f and g, the α-GA divergence has the general formula:










αDGA(f, g) = Dα(pf+qg ∥ f^p g^q) = 1/(α−1) log ∫ (pf(z)+qg(z))^α (f^p(z) g^q(z))^(1−α) dz











The α-GA divergence is a measure of discrepancy between the arithmetic mean and the geometric mean of f and g, respectively, with respect to weights p and q=1−p, where p ∈ [0,1]. The α-GA divergence quantifies the dissimilarity between the weighted arithmetic mean pf(x)+qg(x) and the weighted geometric mean f^p(x)g^q(x).
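
A minimal discrete sketch of the α-GA divergence above, assuming f and g are already given as normalized histograms over a common set of bins (the function name is illustrative):

```python
import numpy as np

def alpha_ga_divergence(f, g, alpha=0.5, p=0.5):
    # alpha-divergence between the weighted arithmetic mean p*f + q*g and
    # the weighted geometric mean f**p * g**q of two discretized densities.
    q = 1.0 - p
    arith = p * f + q * g
    geom = f ** p * g ** q
    mask = (arith > 0) & (geom > 0)
    s = np.sum(arith[mask] ** alpha * geom[mask] ** (1.0 - alpha))
    return np.log(s) / (alpha - 1.0)
```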


Still another similarity measure that may be used is a Henze-Penrose (HP) divergence. While divergence measures dissimilarity between distributions, similarity between distributions can be measured by affinity measures. One measure of affinity between probability distributions f and g is expressed as the general formula:








AHP(f, g) = 2pq ∫ [f(z)g(z)/(pf(z)+qg(z))] dz,








with respect to weights p and q=1−p, where p ∈ [0,1]. This affinity was introduced in N. Henze and M. Penrose, “On the multivariate runs test,” Annals of Statistics, vol. 27, pp. 290-298, 1999 as the limit of the Friedman-Rafsky statistic disclosed in J. H. Friedman and L. C. Rafsky, “Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests,” Annals of Statistics, vol. 7, no. 4, pp. 697-717, 1979. The subject invention translates the affinity measure into a divergence measure, the HP divergence, expressed in the general formula below:








DHP(f∥g) = 1 − AHP(f, g) = ∫ [(p²f²(z)+q²g²(z))/(pf(z)+qg(z))] dz.








To date, no one has previously applied the HP divergence to image registration. Any of the above three similarity measures may be used with large feature dimensions according to the subject invention. In the prior art methods, such similarity measures were not workable: although high dimensional feature spaces can be more discriminatory, they created a barrier to performing robust high resolution histogram-based entropic registration.
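
For completeness, a discrete sketch of the HP affinity and divergence above, again assuming f and g are normalized histograms on common bins (illustrative function names):

```python
import numpy as np

def hp_affinity(f, g, p=0.5):
    # 2pq * sum of f(z) g(z) / (p f(z) + q g(z)) over the discretized support.
    q = 1.0 - p
    denom = p * f + q * g
    mask = denom > 0
    return 2.0 * p * q * np.sum(f[mask] * g[mask] / denom[mask])

def hp_divergence(f, g, p=0.5):
    # D_HP = 1 - A_HP, equivalently the sum of (p^2 f^2 + q^2 g^2)/(p f + q g).
    return 1.0 - hp_affinity(f, g, p)
```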


The subject invention overcomes this barrier by estimating the α-entropy via an entropic graph whose vertices are the locations of the feature vectors in feature space. The feature vectors are extracted from the registered image for each of the source and target images and plotted on the entropic graph. As the images are better aligned, the feature vectors will become more closely clustered. The α-entropy is based upon edge lengths between distributions of the feature vectors from the entropic graph. Said another way, the entropic graph is further defined as a minimal graph spanning the feature vectors that minimizes a function of the total edge length of the minimal graph and that approximates the α-affinity or α-divergence of the distribution of the feature vectors. Once the edge length is known, the α-divergence estimate or the α-affinity estimate is determined and the similarity measure is determined as discussed above. The entropic graph reveals which feature vectors are closest to each other, i.e. proximal in the sense of being connected by a (single) edge of the graph.


The entropic graph is preferably based upon one of a minimum spanning tree (MST), a k-nearest neighbor graph (k-NNG), a Steiner tree, a Delaunay triangulation, or a traveling salesman problem (TSP). More preferably, the entropic graph is based on either the MST or the k-NNG.


A spanning tree is a connected acyclic graph which passes through all n feature vectors in Zn. The MST connects these points with n-1 edges, denoted {ei}, in such a way as to minimize the total length as expressed in the general formula:









Lγ(Zn) = min(T) Σ(e∈T) ∥e∥^γ,




where T denotes the class of acyclic graphs (trees) that span Zn.
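
A short sketch of the MST length functional Lγ(Zn); it leans on SciPy's minimum spanning tree routine and assumes the feature vectors are distinct (the function name mst_length is illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(Z, gamma=1.0):
    # Pairwise Euclidean distances between the n feature vectors in Z.
    dist = squareform(pdist(np.asarray(Z)))
    # x -> x**gamma is monotone for gamma > 0, so the MST of the raw distances
    # also minimizes the power-weighted total length; zero distances (duplicate
    # points) would be treated as missing edges by the sparse routine.
    mst = minimum_spanning_tree(dist).tocoo()
    return np.sum(mst.data ** gamma)
```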


Referring to FIG. 8, a graphical illustration of an MST for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2) is shown. The ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2 and Σ1=Σ2. FIG. 9 is a graphical illustration of an MST for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2). The ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2−3 and Σ1=Σ2.


The MST approach to estimating an α-Jensen difference between feature densities of two images can be implemented as follows. Assume two sets of feature vectors







Z0 = {z0(i)}, i=1, . . . , n0, and Z1 = {z1(i)}, i=1, . . . , n1,








are extracted from images X0 and X1 and are i.i.d. realizations from multivariate densities f0 and f1, respectively. Define the set union Z=Z0∪Z1 containing n=n0+n1 unordered feature vectors. If n0, n1 increase at constant rate as a function of n then any consistent entropy estimator constructed from the vector







{Z(i)}, i=1, . . . , n0+n1,







will converge to Hα(pf0+qf1) as n→∞, where p = lim(n→∞) n0/n.





This motivates the following finite sample entropic graph estimator of α-Jensen difference:

ΔĤα(p,f0,f1)=Ĥα(Z0∪Z1)−[pĤα(Z0)+qĤα(Z1)],


where p=n0/n, Ĥα(Z0∪Z1) is the MST entropy estimator constructed on the n-point union of both sets of feature vectors, and the marginal entropies Ĥα(Z0) and Ĥα(Z1) are constructed on the individual sets of n0 and n1 feature vectors, respectively. The MST-based estimator can easily be implemented in high dimensions, and it bypasses the complications of choosing and fine-tuning parameters such as histogram bin size, density kernel width, complexity, and adaptation speed. Since the topology of the MST does not depend on the edge weight parameter γ, the MST α-entropy estimator can be generated for the entire range of α ∈ (0,1) once the MST for any given α is computed.
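
A hedged sketch of the estimator just described, assuming the usual entropic-graph form Ĥα(Z) = 1/(1−α)[log(Lγ(Z)/n^α) − log β] with γ = d(1−α); the bias constant β is left as a parameter since, as noted below for the k-NNG, such constants can be estimated off-line by Monte Carlo simulation (function names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_alpha_entropy(Z, alpha=0.5, beta_const=1.0):
    # Renyi alpha-entropy estimate from the power-weighted MST length with
    # gamma = d*(1-alpha); beta_const is the (assumed) bias constant.
    Z = np.asarray(Z)
    n, d = Z.shape
    gamma = d * (1.0 - alpha)
    mst = minimum_spanning_tree(squareform(pdist(Z))).tocoo()
    L = np.sum(mst.data ** gamma)
    return (np.log(L / n ** alpha) - np.log(beta_const)) / (1.0 - alpha)

def alpha_jensen_difference(Z0, Z1, alpha=0.5):
    # H_alpha(Z0 u Z1) - [p*H_alpha(Z0) + q*H_alpha(Z1)], with p = n0/n.
    Z0, Z1 = np.asarray(Z0), np.asarray(Z1)
    p = len(Z0) / float(len(Z0) + len(Z1))
    return (mst_alpha_entropy(np.vstack([Z0, Z1]), alpha)
            - p * mst_alpha_entropy(Z0, alpha)
            - (1.0 - p) * mst_alpha_entropy(Z1, alpha))
```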


With reference to FIG. 10, a graphical illustration of a k-NNG for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2) is shown. The ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2 and Σ1=Σ2. FIG. 11 is a graphical illustration of a k-NNG for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2−3 and Σ1=Σ2.


The k-NNG is a continuous quasi-additive power-weighted graph and is a computationally attractive alternative to the MST. Given i.i.d. vectors Zn in Rd, the 1-nearest neighbor of zi in Zn is given by








arg min(z∈Zn−{zi}) ∥z−zi∥,




where ∥z−zi∥ is the usual Euclidean (L2) distance in Rd. For general integer k≧1, the k-nearest neighbor of a point is defined in a similar way. The k-NNG puts a single edge between each point in Zn and each of its k nearest neighbors. Let Nk,i=Nk,i(Zn) be the set of k-nearest neighbors of zi in Zn. The k-NN problem consists of finding the set Nk,i for each point zi in the set Zn−{zi}.


This problem has exact solutions which run in linear-log-linear time, and the total graph length is:








Lγ,k(Zn) = Σ(i=1, . . . , n) Σ(e∈Nk,i) ∥e∥^γ.







In general, the k-NNG will count edges at least once, but sometimes count edges more than once. If two points, X1 and X2 are mutual k-nearest neighbors, then the same edge between X1 and X2 will be doubly counted.
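
A brief sketch of the k-NNG length functional, using a k-d tree for the neighbor queries; consistent with the double-counting noted above, an edge between mutual k-nearest neighbors contributes twice (function name illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def knng_length(Z, k=4, gamma=1.0):
    # Sum of power-weighted distances from every point to each of its k
    # nearest neighbors; edges between mutual neighbors are counted twice.
    Z = np.asarray(Z)
    tree = cKDTree(Z)
    dists, _ = tree.query(Z, k=k + 1)    # neighbor 0 is the point itself
    return np.sum(dists[:, 1:] ** gamma)
```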


Analogously to the MST, the log length of the k-NNG has limit










lim(n→∞) 1/(1−α) log(Lγ,k(Xn)/n^α) = Hα(f) + ckNNG, (a.s.).





Once again this suggests an estimator of the Rényi α-entropy:







Ĥα(Zn) = 1/(1−α) [log(Lγ,k(Zn)/n^α) − log βd,γ,k]






As in the MST estimate of entropy, the constant ckNNG=(1−α)−1 log βd,γ,k can be estimated off-line by Monte Carlo simulation of the k-NNG on random samples drawn from the unit cube. A related k-NNG is the graph where edges connecting two points are counted only once, referred to as a single-count k-NNG. Such a graph estimator eliminates one of the edges from each point pair that are mutual k-nearest neighbors. A k-NNG can be built by pruning such that every unique edge contributes only once to the total length. The resultant graph has an appearance identical to the initial unpruned k-NNG when plotted on the page. However, the cumulative length of the edges in the two graphs differs, and so does their β factor.
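
A hedged sketch of the single-count pruning just described: each unique edge {i, j} is kept once, so pairs of mutual k-nearest neighbors no longer contribute twice to the total length (function name illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def knng_length_single_count(Z, k=4, gamma=1.0):
    Z = np.asarray(Z)
    tree = cKDTree(Z)
    dists, idx = tree.query(Z, k=k + 1)          # neighbor 0 is the point itself
    edges = {}
    for i in range(len(Z)):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            edges[(min(i, int(j)), max(i, int(j)))] = d   # duplicates collapse
    return sum(d ** gamma for d in edges.values())
```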


Referring now to FIG. 12, a graphical illustration of an HP affinity for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2) is shown. The ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2 and Σ1=Σ2. FIG. 13 is a graphical illustration of an HP affinity for a Gaussian case of two bivariate normal distributions N(μ1, Σ1) and N(μ2, Σ2), the ‘x’ labeled points are samples from f1(x)=N(μ1, Σ1) and the ‘o’ labeled points are samples from f2(o)=N(μ2, Σ2), and μ1=μ2−3 and Σ1=Σ2.
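
To make the graph-based HP estimate concrete, here is a hedged sketch in the spirit of the Friedman-Rafsky statistic mentioned earlier: build an MST on the pooled feature vectors and take the fraction of edges joining points from different images, which the Henze-Penrose limit cited above ties to the affinity AHP. The function names are illustrative:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def hp_affinity_mst(Z0, Z1):
    # Friedman-Rafsky style estimate: fraction of MST edges on the pooled
    # sample that connect a point of Z0 to a point of Z1.
    Z0, Z1 = np.asarray(Z0), np.asarray(Z1)
    Z = np.vstack([Z0, Z1])
    labels = np.concatenate([np.zeros(len(Z0)), np.ones(len(Z1))])
    mst = minimum_spanning_tree(squareform(pdist(Z))).tocoo()
    cross = np.sum(labels[mst.row] != labels[mst.col])
    return cross / float(len(Z))

def hp_divergence_mst(Z0, Z1):
    return 1.0 - hp_affinity_mst(Z0, Z1)
```

When the two feature clouds overlap heavily (well registered images), many MST edges cross between samples and the affinity is large; when the clouds separate, as in the shifted case of FIG. 13, few edges cross and the affinity drops.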


Once the above has been conducted, the subject invention may re-register the source image and the target image with a different algorithm to shorten the edge lengths from the entropic graph thereby improving the similarity measure therebetween.


The methods according to the subject invention may be put to many practical applications. Illustrative examples of such practical applications include registering images from whole breast imaging, constructing probabilistic atlases with a larger degree of certainty, feature-based matching, and registering tagged MRI sequences. A more detailed example of one application to ultrasound breast imaging is below and with reference to FIGS. 14-16.


Ultrasound (US) imaging is an important medical imaging modality for whole breast imaging that can aid discrimination of malignant from benign lesions, can be used to detect multi-focal secondary masses, and can quantify response to chemotherapy or radiation therapy. Referring to FIG. 14, a set of twenty 2D slices extracted from a 3D volumetric US breast scanner is shown for twenty different patients (cases) receiving chemotherapy. The women were imaged on their backs with the transducer placed so as to image through the breast toward the chest wall. Some of the cases clearly exhibit tumors (delineated masses with shadows), others exhibit significant connective tissue structure (bright thin lines or edges), and all have significant speckle noise and distortions.


In registering ultrasound images of the breast, the reference and secondary images have genuine differences from each other due to biological changes and differences in imaging, such as positioning of the tissues during compression and angle dependence of scattering from tissue boundaries. The tissues are distorted out of a given image plane as well as within it. Speckle noise, elastic deformations, and shadows further complicate the registration process thus making ultrasound breast images notoriously difficult to register. It is for this reason that conventional registration methods tend to have problems with US breast images. Here, the subject invention has an advantage of matching on high dimensional feature spaces implemented with entropic similarity metrics.


To benchmark the various registration methods studied, the mean squared registration error for registering a slice of US breast image volume to an adjacent slice in the same image volume (case) was examined. For each case, differing amounts of spatially homogeneous and independent random noise were added to both slices in order to evaluate algorithm robustness. A training database of volumetric scans of 6 patients and a test database of 15 patient scans were created. Feature selection was performed using the training database and registration performance was evaluated over the test database. These databases were drawn from a larger database of 3D scans of the left or right breast of female subjects, aged 21-49 years, undergoing chemotherapy or going to biopsy for possible breast cancer. Each volumetric scan has a field of view of about 4 cm3 (voxel dimensions 0.1 mm2×0.5 mm) and encompasses the tumor, cyst or other structure of interest. The scans were acquired at 1 cm depth resolution yielding 90 cross-sectional images at 0.4 cm horizontal resolution. The patient data was collected with the intention to monitor therapy progress in the patients. Tumor/Cyst dimensions vary and can range from 5 mm3 to 1 cm3 or higher. This application was restricted to rotation transformations over ±16°.


The following results for vector valued features were constructed by projecting image patches onto a basis for the patch derived from independent component analysis (ICA). The ICA basis is especially well suited since it aims to obtain vector features which have statistically independent elements and can therefore facilitate estimation of α-MI and other entropic measures.


Specifically, in ICA, an optimal basis is found from a training set which decomposes images Xi in the training set into a small number of approximately statistically independent components {Sj} each supported on an 8×8 pixel block having the general formula:







Xi = Σ(j=1, . . . , p) aij Sj







The basis elements {Sj} were selected from an over-complete linearly dependent basis using randomized selection over the database. For image i the feature vectors zi are defined as the coefficients {aij} in the above formula obtained by projecting each of its 8×8 sub-image blocks onto the basis.



FIG. 15 illustrates the estimated 64 dimensional (8×8) ICA basis for the training database. The basis was extracted by training on over 100,000 randomly sampled 8×8 sub-images taken from the 6 volumetric breast ultrasound scans. The algorithm used for extraction was Hyvarinen and Oja's FastICA code, which uses a fixed-point algorithm to perform maximum likelihood estimation of the basis elements in the ICA data model above. Given this ICA basis and a pair of to-be-registered image slices, coefficient vectors are extracted by projecting each 8×8 neighborhood in the images onto the basis set. A single 66 dimensional feature sample is then created by concatenating the 64 dimensional coefficient vector with a two dimensional vector identifying the spatial coordinates of the 8×8 neighborhood. For example, {W(i,j), x(i,j), y(i,j)} represents the feature from the 8×8 image patch located at coordinate (i, j) in the image.
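
A hedged sketch of the feature-extraction step just described, using scikit-learn's FastICA as a stand-in for the FastICA code referenced above; for brevity it projects non-overlapping 8×8 blocks rather than every 8×8 neighborhood, and the function names are illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

def train_ica_basis(training_patches, n_components=64):
    # training_patches: array of shape (num_patches, 64), each row a
    # flattened 8x8 block sampled from the training database.
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(training_patches)
    return ica

def extract_ica_features(image, ica, patch=8):
    # Project 8x8 blocks onto the ICA basis and append the block's (x, y)
    # coordinates, giving 66-dimensional feature vectors as in the text.
    feats = []
    H, W = image.shape
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            block = image[i:i + patch, j:j + patch].reshape(1, -1)
            coeffs = ica.transform(block)[0]        # 64 ICA coefficients
            feats.append(np.concatenate([coeffs, [i, j]]))
    return np.asarray(feats)
```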


For each of the 15 scans in the test set, 2 image slices were extracted in the depth direction perpendicular to the skin, such that they showed the cross-section of the tumor. These two slices have a separation distance of about 5 mm. At this distance, the speckle decorrelates but the underlying anatomy remains approximately unchanged. The first cross sectional slice was picked such that it intersected with the ellipsoidal-shaped tumor through its center. The second slice was picked closer to the edge of the tumor. These images thus show a natural decline in tumor size, as would be expected in time sampled scans of tumors responding to therapy. Since view direction changes from one image scan to the next for the same patient over time, rotational deformation is often deployed to correct these changes during registration. This effect was simulated by registering a rotationally deformed image with its unrotated slice-separated counterpart, for each patient in the 15 test cases. Rotational deformation was in steps of 2 degrees such that the sequence of deformations was [−16 −8 −4 −2 0 (unchanged) 2 4 8 16] degrees. Further, the images were offset (relatively translated) by 0.5 mm (5 pixels) laterally to remove any residual noise correlation since it can bias the registration results. Since some displacement can be expected from the handheld US imaging process and the relative tissue motion of the compressible breast tissue, this is not unreasonable. For each deformation angle, divergence measures were calculated, where the ‘registered state’ is the one with 0 degrees of relative deformation.


For each extracted image slice, 250 noisy replicates were created by adding truncated Gaussian noise. Neighborhoods of the ultrasound image replicates were projected onto the 64 dimensional ICA basis and the spatially tagged projection coefficient vectors W(i,j) were extracted as features for registration. The root-mean-squared (rms) registration error is illustrated for six different algorithms in FIG. 16 as a function of the rms (truncated) Gaussian noise. Registration error was determined as the rms difference between the location of the peak in the matching criterion and the true rotation angle. FIG. 16, with the exception of the α-Jensen difference, illustrates that the standard single pixel MI underperforms relative to the other methods of the subject invention. It is believed, without intending to be bound, that this is due to the superiority of the 64 dimensional ICA features used by the methods of the subject invention. The α-Jensen difference implemented with the k-NNG versus the MST results in nearly identical performance, though poorer than the other metrics. Unlike the other metrics, the α-Jensen difference is not invariant to reparameterization, which explains its relatively poor performance for large rms noise. However, the α-GA divergence, α-MI, and HP affinity outperform the Shannon MI based upon an entropy estimate picked from a histogram. The Shannon MI error remained from slightly above four to about 5.5, whereas the α-GA divergence, α-MI, and HP affinity errors remained mostly below four.


Another detailed example of a different application of the subject invention to multi-sensor satellite image fusion is below and with reference to FIGS. 17-20. Numerous sensors gather information in distinct frequency bands in the electromagnetic spectrum. These images help predict daily weather patterns, environmental parameters influencing crop cycles such as soil composition, water and mineral levels deeper in the Earth's crust, and may also serve as surveillance sensors meant to monitor activity over hostile regions. A satellite may carry more than one sensor and may acquire images throughout a period of time. Changing weather conditions may interfere with the signal. Images captured in a multi-sensor satellite imaging environment show linear deformations due to the position of the sensors relative to the object. This transformation is often linear in nature and may manifest itself as relative translation, rotation, or scaling between images. This provides a good setting to observe different divergence measures as a function of the relative deformation between images.


Linear rotational deformation was simulated in order to reliably test the image registration algorithms presented above. FIG. 17A is a thermal infrared image, I1, of Atlanta, Ga. and FIG. 17B is a visible image, I2, of Atlanta, Ga. that has been rotated, both acquired as a part of the ‘Urban Heat Island’ project (Project Atlanta, NASA Marshall Space Flight Center, Huntsville, Ala.) that studies the creation of high heat spots in metropolitan areas across the USA. Pairs of visible light and thermal satellite images were also obtained from NASA's Visible Earth website (NASA Visible Earth internet site, http://visibleearth.nasa.gov/). The variability in imagery arises due to the different specialized satellites used for imaging. These include weather satellites wherein the imagery shows heavy occlusion due to clouds and other atmospheric disturbances. Other satellites focus on urban areas with roads, bridges and high rise buildings. Still other images show entire countries or continents, oceans and large geographic landmarks such as volcanoes and active geologic features. Lastly, images contain different landscapes such as deserts, mountains, and valleys with dense foliage.


Images are rotated through 0° to 32°, with a step size adjusted to allow a finer sampling of the objective function near 0°. The images are projected onto a Meyer wavelet basis, and the coefficients are used as features for registration. A feature sample from an image, I, in the database is represented as a tuple consisting of the coefficient vector and a two dimensional vector identifying the spatial coordinates of the origin of the image region it represents. For example, {W(i,j), x(i,j), y(i,j)} represents the tuple from position {i, j} in the image. Now,








W(i, j) ∈ {w(i, j)^Low-Low, w(i, j)^Low-High, w(i, j)^High-Low, w(i, j)^High-High},





where the superscript identifies the frequency band in the wavelet spectrum. Features from both the images {Z1, Z2} are pooled together to form a joint sample pool {Z1∪Z2}. The MST and k-NNG are individually constructed from this sample pool.
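
A hedged sketch of the wavelet feature tuples described above, using PyWavelets' discrete Meyer wavelet ('dmey') as an approximation of the Meyer basis; the mapping of PyWavelets' approximation/detail sub-bands onto the Low-Low, Low-High, High-Low and High-High labels is an assumption, and the function name is illustrative:

```python
import numpy as np
import pywt

def wavelet_features(image, wavelet="dmey"):
    # Single-level 2-D decomposition: the approximation plus three detail
    # sub-bands, taken here to stand in for the W(i, j) tuple in the text;
    # the sub-band coordinates (i, j) are appended as spatial tags.
    LL, (LH, HL, HH) = pywt.dwt2(image, wavelet)
    feats = []
    for i in range(LL.shape[0]):
        for j in range(LL.shape[1]):
            feats.append([LL[i, j], LH[i, j], HL[i, j], HH[i, j], i, j])
    return np.asarray(feats)
```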



FIG. 18 is a graphical representation of a rotational root mean squared error obtained from rotational registration of multisensor satellite imagery (FIGS. 17A and 17B) using six different image similarity/dissimilarity criteria and standard error bars are as indicated. These plots were obtained from Monte Carlo trials consisting of adding independent identically distributed Gaussian distributed noise to the images prior to registration.


From FIG. 18, best performance under the presence of noise can be seen through the use of the α-MI estimated using wavelet features and k-NNG. Comparable performances are seen through the use of HP affinity and α-GA divergences, both estimated using wavelet features. Interestingly, the single pixel Shannon MI has the poorest performance which may be attributed to its use of poorly discriminating scalar intensity features. Notice that the α-GA, HP affinity, and α-MI (Wavelet-kNN estimate), all implemented with wavelet features, have significantly lower MSE compared to the other methods.


Further insight into the performance of the method of the subject invention may be gained by considering the mean objective function over 750 independent trials. FIG. 19 shows the α-MI, HP affinity and the α-GA affinity and FIG. 20 shows the α-Jensen difference divergence calculated using the k-NNG and the MST. The sensitivity and robustness of the dissimilarity measures can be evaluated by observing the divergence function near zero rotational deformation.


The subject invention also contemplates other methods beyond the entropic graphs that may be used for registering a sequence of decorrelating images. The analysis of heterogeneous tumor response to therapy, including following macrocellular dynamics, e.g. displacement, growth, stasis, or regression, at the level of an MRI voxel, can be quantified by a highly nonlinear registration, i.e. warping, of all high spatial resolution MRI anatomical interval exams. In some cases not only does the morphology of tumor and normal tissues change, but also the intensity of tissues may change due to transient edema or ischemia with only little or no morphological changes. Morphological changes typically lag behind the occurrence of intensity changes. The necessary registration of interval exams could proceed automatically in a pair-wise manner using similarity measures that are relatively independent of tissue scan intensities, e.g. mutual information, normalized mutual information, joint entropy, etc., to optimize the accuracy of the registration. Due to the presence of tumors and the usually rapid temporal evolution of the tumor's morphology, pair-wise registration is important because of the large common, i.e. mutual, information present between adjacent intervals, which results from the shortest interval pair having the greatest morphological similarities. Because the short-range, pair-wise registration is uninformed about longer interval similarities, after several pair-wise registrations the “registered” images at the end of the sequence of acquisitions are not as well registered with the initial image of the sequence due to the accumulation of small registration errors at each interval.


An incremental and ad hoc solution to the latter problem might identify ways of including the long range interval information in the registration, e.g. by defining a “super” interval to be made up of more than one interval set, performing the long range registration, and redistributing the measured error across the short interval registrations that make up the super interval. This latter approach may include the fact that the long range interval registration primarily contains information about low temporal frequency deformations, i.e. structures that change slowly with time, while the short term intervals primarily contain information regarding higher temporal frequency deformations. Estimating such weightings in a meaningful manner to allow the optimal summation of these components is difficult and equally error prone. An alternative and better solution, because it includes the appropriate, intrinsic weighting of these components, is the simultaneous, joint registration to a common reference of all intervals using mutual information as an objective function, which can be accomplished with the subject invention. Because joint mutual information formed across all interval exams is used to optimize the registration of the temporally distributed data volumes, the relative weighting of short and long range intervals is accomplished automatically, i.e. the longer interval pairs have less mutual information due to decreased morphological similarity, and thus the joint mutual information-based registration is appropriately less sensitive to the long range interval pairings than to short interval pairings.


In the process described above the registration of the temporal data volumes via joint mutual information optimization must occur through methods that support higher degrees of freedom than the typical two dimensional techniques found in current registration literature. Note that the joint mutual information formulation for a pair of, i.e. two images, is currently a 2 dimensional problem for single voxel registration techniques. At a minimum the joint optimization of N intervals will require solution of an N+1 dimensional joint mutual information problem. By computing the warping of all interval exams to a common reference, the dynamics of all tumor voxels are followed over space and time contained in these intervals. The positions can be tracked and the local size changes can be measured where increases imply local cellular proliferation, while decreases imply local cell death. These size changes are quantified by the determinant of the local Jacobian, i.e. the matrix of local spatial derivatives, of the warping. In summary these tools provide methods for quantification of a tumor's heterogeneous response to therapy where the region of interest need be identified, i.e. manually segmented, only once in the reference interval exam.


The subject invention also provides a computer readable recording medium storing an executable control program for executing the methods described above. The computer code for executing the subject invention is set forth in the “Computer Code” compact disc, which is incorporated herein by reference. The “Computer Code” compact disc includes the executable programs as follows: 1) Program to construct MST using Kruskal algorithm (1mstkruskal.txt), 2) Program to construct MST using Prim algorithm (2mstprim.txt), 3) Program to estimate Henze-Penrose affinity using MST (3hpmst.txt), 4) Program to construct k-NNG using kd tree algorithm (4knngkd.txt), 5) Program to construct estimates of α-MI using k-NNG (5miknng.txt), 6) Program to construct estimates of α-GA mean divergence using k-NNG (6gaknng.txt), and 7) Program to construct estimates of Non-Linear Correlation Coefficient using k-NNG (7nonlinknng.txt).


While the invention has been described with reference to an exemplary embodiment, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims
  • 1. A method of determining alignment of decorrelating images in high dimensional feature space, said method comprising: simultaneously registering a source image of a reference modality to a plurality of target images of a second modality with an algorithm based upon a measure of information affinity present in both of the source and target images to create a registered image; extracting a plurality of feature vectors from the registered image for each of the source and target images; plotting a distribution of the feature vectors on an entropic graph; determining edge lengths between the feature vectors from the entropic graph; and determining a similarity measure of one of an α-divergence estimate or an α-affinity estimate based upon these edge lengths to indicate whether the source and target images are sufficiently registered.
  • 2. A method as set forth in claim 1 wherein the entropic graph is further defined as a minimal graph spanning the feature vectors that minimizes a function of the total edge length of the minimal graph and that approximates the α-affinity or α-divergence of the distribution of the feature vectors.
  • 3. A method as set forth in claim 1 wherein the entropic graph is further defined as based upon one of a minimum spanning tree (MST) or a k-nearest neighbor graph (k-NNG), a Steiner tree, a Delaunay triangulation, or a traveling salesman problem (TSP).
  • 4. A method as set forth in claim 1 further comprising the step of re-deforming the source image into the target images with a different algorithm to shorten the edge lengths from the entropic graph thereby improving the similarity measure therebetween.
  • 5. A method as set forth in claim 1 wherein the reference modality is further defined as different than the second modality.
  • 6. A method as set forth in claim 1 wherein each of the feature vectors represent at least two feature dimensions.
  • 7. A method as set forth in claim 1 wherein each of the feature vectors represent more than two feature dimensions.
  • 8. A method as set forth in claim 1 wherein the step of determining the similarity measure is further defined as utilizing at least one of an α-mutual information (α-MI), an α-geometric-arithmetic (α-GA) divergence, and a Henze-Penrose (HP) divergence.
  • 9. A method as set forth in claim 8 wherein the α-MI is further defined by the general formula:
  • 10. A method as set forth in claim 8 wherein the α-GA divergence is further defined by the general formula:
  • 11. A method as set forth in claim 8 wherein the HP divergence is further defined by the general formula:
  • 12. A method of determining alignment of decorrelating images in high dimensional feature space, said method comprising: simultaneously registering more than two images comprising at least a source image of a reference modality to target images of a second modality with an algorithm based upon a measure of mutual information present in both of the source and target images to create a registered image; extracting a plurality of feature vectors from the registered image for each of the source and target images; determining edge lengths between proximal feature vectors from an entropic graph; and determining a similarity measure of one of an α-divergence estimate or an α-affinity estimate based upon these edge lengths with at least one of an α-mutual information (α-MI), an α-geometric-arithmetic (α-GA) divergence, and a Henze-Penrose (HP) divergence.
  • 13. A method as set forth in claim 12 wherein the α-MI is further defined by the general formula:
  • 14. A method as set forth in claim 12 wherein the α-GA divergence is further defined by the general formula:
  • 15. A method as set forth in claim 12 wherein the HP divergence is further defined by the general formula:
  • 16. A method as set forth in claim 12 wherein the entropic graph is further defined as based upon one of a minimum spanning tree (MST) or a k-nearest neighbor graph (k-NNG).
  • 17. A computer readable recording medium storing an executable control program for executing a method of determining alignment of decorrelating images in high dimensional feature space, said method comprising: simultaneously registering more than two images with an algorithm based upon a measure of information affinity present in the images to create a registered image; extracting a plurality of feature vectors from the registered image for each of the source and target images; determining edge lengths between proximal feature vectors from an entropic graph; and determining a similarity measure of one of an α-divergence estimate or an α-affinity estimate based upon these edge lengths to indicate whether the source and target images are sufficiently registered.
  • 18. A method as set forth in claim 17 wherein the entropic graph is further defined as based upon one of a minimum spanning tree (MST), a k-nearest neighbor graph (k-NNG), a Steiner tree, a Delaunay triangulation, or a traveling salesman problem (TSP).
  • 19. A method as set forth in claim 17 wherein the step of determining the similarity measure is further defined as utilizing at least one of an α-mutual information (α-MI), an α-geometric-arithmetic (α-GA) divergence, and a Henze-Penrose (HP) divergence.
CROSS REFERENCE TO RELATED APPLICATION

This application claims priority to Provisional Patent Application Ser. No. 60/658,427, filed Mar. 4, 2005.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

This invention was made with government support under 1P01CA87634 awarded by National Institute of Health (NIH) and under DAAD19-02-1-0262 awarded by Army Research Office (ARO). The government has certain rights in the invention.

US Referenced Citations (64)
Number Name Date Kind
4590607 Kauth May 1986 A
4635293 Watanabe Jan 1987 A
4641352 Fenster et al. Feb 1987 A
4685146 Fenster et al. Aug 1987 A
5260871 Goldberg Nov 1993 A
5359513 Kano et al. Oct 1994 A
5531520 Grimson et al. Jul 1996 A
5581638 Givens et al. Dec 1996 A
5590261 Sclaroff et al. Dec 1996 A
5594807 Liu Jan 1997 A
5610993 Yamamoto Mar 1997 A
5611000 Szeliski et al. Mar 1997 A
5633951 Moshfeghi May 1997 A
5812691 Udupa et al. Sep 1998 A
5839440 Liou et al. Nov 1998 A
5850486 Maas, III et al. Dec 1998 A
5937083 Ostuni Aug 1999 A
5956435 Buzug et al. Sep 1999 A
5974165 Giger et al. Oct 1999 A
5999840 Grimson et al. Dec 1999 A
6035066 Michael Mar 2000 A
6061467 Michael May 2000 A
6075905 Herman et al. Jun 2000 A
6195659 Hyatt Feb 2001 B1
6236742 Handel May 2001 B1
6249595 Foxall et al. Jun 2001 B1
6249616 Hashimoto Jun 2001 B1
6266453 Hibbard et al. Jul 2001 B1
6343143 Guillemaud et al. Jan 2002 B1
6373998 Thirion et al. Apr 2002 B2
6463167 Feldman et al. Oct 2002 B1
6463168 Alyassin et al. Oct 2002 B1
6484047 Vilsmeier Nov 2002 B1
6611615 Christensen Aug 2003 B1
6628845 Stone et al. Sep 2003 B1
6640201 Hahlweg Oct 2003 B1
6647154 Hahlweg Nov 2003 B1
6671423 Fujimoto et al. Dec 2003 B1
6681057 Nair et al. Jan 2004 B1
6701006 Moore et al. Mar 2004 B2
6728424 Zhu et al. Apr 2004 B1
6760037 Kallay et al. Jul 2004 B2
6788827 Makram-Ebeid Sep 2004 B1
6909794 Caspi Jun 2005 B2
6937750 Natanzon et al. Aug 2005 B2
6950542 Roesch et al. Sep 2005 B2
6990255 Romanik et al. Jan 2006 B2
7020311 Breeuwer Mar 2006 B2
7062078 Weese et al. Jun 2006 B2
20010036302 Miller Nov 2001 A1
20020012478 Thirion et al. Jan 2002 A1
20030095696 Reeves et al. May 2003 A1
20030174872 Chalana et al. Sep 2003 A1
20030216631 Bloch et al. Nov 2003 A1
20050013471 Snoeren et al. Jan 2005 A1
20050147325 Chen et al. Jul 2005 A1
20050190189 Chefd'hotel et al. Sep 2005 A1
20050219558 Wang et al. Oct 2005 A1
20050249434 Xu et al. Nov 2005 A1
20060002631 Fu et al. Jan 2006 A1
20060002632 Fu et al. Jan 2006 A1
20060004275 Vija et al. Jan 2006 A1
20060029291 Sun et al. Feb 2006 A1
20060093240 Sabuneu et al. May 2006 A1
Foreign Referenced Citations (3)
Number Date Country
0057361 Sep 2000 WO
02056241 Jul 2002 WO
2004063990 Jul 2004 WO
Related Publications (1)
Number Date Country
20060257027 A1 Nov 2006 US
Provisional Applications (1)
Number Date Country
60658427 Mar 2005 US