Image processing using measures of similarity

Information

  • Patent Grant
    7260248
  • Patent Number
    7,260,248
  • Date Filed
    Friday, March 15, 2002
  • Date Issued
    Tuesday, August 21, 2007
Abstract
The invention provides methods of relating a plurality of images based on measures of similarity. The methods of the invention are useful in the segmentation of a sequence of colposcopic images of tissue, for example. The methods may be applied in the determination of tissue characteristics in acetowhitening testing of cervical tissue, for example.
Description
FIELD OF THE INVENTION

This invention relates generally to image processing. More particularly, in certain embodiments, the invention relates to segmentation of a sequence of colposcopic images based on measures of similarity.


BACKGROUND OF THE INVENTION

It is common in the medical field to perform visual examination to diagnose disease. For example, visual examination of the cervix can discern areas where there is a suspicion of pathology. However, direct visual observation alone is often inadequate for identification of abnormalities in a tissue.


In some instances, when tissues of the cervix are examined in vivo, chemical agents such as acetic acid are applied to enhance the differences in appearance between normal and pathological areas. Aceto-whitening techniques may aid a colposcopist in the determination of areas where there is a suspicion of pathology.


However, colposcopic techniques generally require analysis by a highly trained physician. Colposcopic images may contain complex and confusing patterns. In colposcopic techniques such as aceto-whitening, analysis of a still image does not capture the patterns of change in the appearance of tissue following application of a chemical agent. These patterns of change may be complex and difficult to analyze. Current automated image analysis methods do not allow the capture of the dynamic information available in various colposcopic techniques.


Traditional image analysis methods include segmentation of individual images. Segmentation is a morphological technique that splits an image into different regions according to one or more pre-defined criteria. For example, an image may be divided into regions of similar intensity. It may therefore be possible to determine which sections of a single image have an intensity within a given range. If a given range of intensity indicates suspicion of pathology, the segmentation may be used as part of a diagnostic technique to determine which regions of an image may indicate diseased tissue.


However, standard segmentation techniques do not take into account dynamic information, such as a change of intensity over time. This kind of dynamic information is important to consider in various diagnostic techniques such as aceto-whitening colposcopy. A critical factor in discriminating between healthy and diseased tissue may be the manner in which the tissue behaves throughout a diagnostic test, not just at a given time. For example, the rate at which a tissue whitens upon application of a chemical agent may be indicative of disease. Traditional segmentation techniques do not take into account time-dependent behavior, such as rate of whitening.


SUMMARY OF THE INVENTION

The invention provides methods for relating aspects of a plurality of images of a tissue in order to obtain diagnostic information about the tissue. In particular, the invention provides methods for image segmentation across a plurality of images instead of only one image at a time. In a sense, inventive methods enable the compression of a large amount of pertinent information from a sequence of images into a single frame. An important application of methods of the invention is the analysis of a sequence of images of biological tissue in which an agent has been applied to the tissue in order to change its optical properties in a way that is indicative of the physiological state of the tissue. Diagnostic tests which have traditionally required analysis by trained medical personnel may be automatically analyzed using these methods. The invention may be used, for example, in addition to or in place of traditional analysis.


The invention provides methods of performing image segmentation using information from a sequence of images, not just from one image at a time. This is important because it allows the incorporation of time effects in image segmentation. For example, according to an embodiment of the invention, an area depicted in a sequence of images is divided into regions based on a measure of similarity of the changes those regions undergo throughout the sequence. In this way, inventive segmentation methods incorporate more information and can be more helpful, for example, in determining a characteristic of a tissue, than segmentation performed using one image at a time. The phrases “segmentation of an image” and “segmenting an image,” as used herein, may apply, for example, to dividing an image into different regions, or dividing into different regions an area in space depicted in one or more images (an image plane).


Segmentation methods of this invention allow, for example, the automated analysis of a sequence of images using complex criteria for determining a disease state which may be difficult or impossible for a human analyst to perceive by simply viewing the sequence. The invention also allows the very development of these kinds of complex criteria for determining a disease state by permitting the relation of complex behaviors of tissue samples during dynamic diagnostic tests to the known disease states of the tissue samples. Criteria may be developed using the inventive methods described herein to analyze sequences of images for dynamic diagnostic tests that are not yet in existence.


One way to relate a plurality of images to each other according to the invention is to create or use a segmentation mask that represents an image plane divided into regions that exhibit similar behavior throughout a test sequence. Another way to relate images is by creating or using graphs or other means of data representation which show mean signal intensities throughout each of a plurality of segmented regions as a function of time. Relating images may also be performed by identifying any particular area represented in an image sequence which satisfies given criteria.


In one aspect, the invention is directed to a method of relating a plurality of images of a tissue. The method includes the steps of: obtaining a plurality of images of a tissue; determining a relationship between two or more regions in each of two or more of the images; segmenting at least a subset of the two or more images based on the relationship; and relating two or more images of the subset of images based at least in part on the segmentation.


According to one embodiment, the step of obtaining a plurality of images of a tissue includes collecting an optical signal. In one embodiment, the optical signal includes fluorescence illumination from the tissue. In one embodiment, the optical signal includes reflectance, or backscatter, illumination from the tissue. In one embodiment, the tissue is illuminated by a white light source, a UV light source, or both. According to one embodiment, the step of obtaining images includes recording visual images of the tissue.


According to one embodiment, the tissue is or includes cervical tissue. In another embodiment, the tissue is one of the group consisting of epithelial tissue, colorectal tissue, skin, and uterine tissue. In one embodiment, the plurality of images being related are sequential images. According to one embodiment, a chemical agent is applied to the tissue. In one embodiment, the chemical agent is applied to change the optical properties of the tissue in a way that is indicative of the physiological state of the tissue. In one embodiment, the chemical agent is selected from the group consisting of acetic acid, formic acid, propionic acid, butyric acid, Lugol's iodine, Schiller's iodine, methylene blue, toluidine blue, osmotic agents, ionic agents, and indigo carmine. In certain embodiments, the method includes filtering two or more of the images. In one embodiment, the method includes applying either or both of a temporal filter, such as a morphological filter, and a spatial filter, such as a diffusion filter.


In one embodiment, the step of determining a relationship between two or more regions in each of two or more of the images includes determining a measure of similarity between at least two of the two or more images. In one embodiment, determining the measure of similarity includes computing an N-dimensional dot product of the mean signal intensities of two of the two or more regions. In one embodiment, the two regions (of the two or more regions) are neighboring regions.


According to one embodiment, the step of relating images based on the segmentation includes determining a segmentation mask of an image plane, where two or more regions of the image plane are differentiated. In one embodiment, the step of relating images based on the segmentation includes defining one or more data series representing a characteristic of one or more associated segmented regions of the image plane. In one embodiment, this characteristic is mean signal intensity.


According to one embodiment, the step of relating images includes creating or using a segmentation mask that represents an image plane divided into regions that exhibit similar behavior throughout the plurality of images. In one embodiment, the step of relating images includes creating or using graphs or other means of data representation which show mean signal intensities throughout each of a plurality of segmented regions as a function of time. In one embodiment, the step of relating images includes identifying a particular area represented in the image sequence which satisfies given criteria.


In another aspect, the invention is directed to a method of relating a plurality of images of a tissue, where the method includes the steps of: obtaining a plurality of images of a tissue; determining a measure of similarity between two or more regions in each of two or more of the images; and relating at least a subset of the images based at least in part on the measure of similarity. In one embodiment, the step of determining a measure of similarity includes computing an N-dimensional dot product of the mean signal intensities of two of the two or more regions. In one embodiment, the two regions are neighboring regions.


In another aspect, the invention is directed to a method of determining a tissue characteristic. The method includes the steps of: obtaining a plurality of images of a tissue; determining a relationship between two or more regions in each of two or more of the images; segmenting at least a subset of the two or more images based at least in part on the relationship; and determining a characteristic of the tissue based at least in part on the segmentation.


According to one embodiment, the step of obtaining a plurality of images of a tissue includes collecting an optical signal. In one embodiment, the optical signal includes fluorescence illumination from the tissue. In one embodiment, the optical signal includes reflectance, or backscatter, illumination from the tissue. In one embodiment, the tissue is illuminated by a white light source, a UV light source, or both. According to one embodiment, the step of obtaining images includes recording visual images of the tissue.


According to one embodiment, the tissue is or includes cervical tissue. In another embodiment, the tissue is one of the group consisting of epithelial tissue, colorectal tissue, skin, and uterine tissue. In one embodiment, the plurality of images being related are sequential images. According to one embodiment, a chemical agent is applied to the tissue. In one embodiment, the chemical agent is applied to change the optical properties of the tissue in a way that is indicative of the physiological state of the tissue. In one embodiment, the chemical agent is selected from the group consisting of acetic acid, formic acid, propionic acid, butyric acid, Lugol's iodine, Schiller's iodine, methylene blue, toluidine blue, osmotic agents, ionic agents, and indigo carmine. In certain embodiments, the method includes filtering two or more of the images. In one embodiment, the method includes applying either or both of a temporal filter, such as a morphological filter, and a spatial filter, such as a diffusion filter. In one embodiment, the method includes processing two or more images to compensate for a relative motion between the tissue and a detection device.


According to one embodiment, the step of determining a relationship between two or more regions in each of two or more of the images includes determining a measure of similarity between at least two of the two or more images. In certain embodiments, determining this measure of similarity includes computing an N-dimensional dot product of the mean signal intensities of two of the two or more regions. In one embodiment, the two regions are neighboring regions.


According to one embodiment, the segmenting step includes analyzing an aceto-whitening signal. In one embodiment, the segmenting step includes analyzing a variance signal. In one embodiment, the segmenting step includes determining a gradient image.


According to one embodiment, the method includes processing one or more optical signals based on the segmentation. In one embodiment, the method includes filtering at least one image based at least in part on the segmentation.


In certain embodiments, the step of determining a characteristic of the tissue includes determining one or more regions of the tissue where there is suspicion of pathology. In certain embodiments, the step of determining a characteristic of the tissue includes classifying a region of tissue as one of the following: normal squamous tissue, metaplasia, Cervical Intraepithelial Neoplasia, Grade I (CIN I), and Cervical Intraepithelial Neoplasia, Grade II or Grade III (CIN II/CIN III).


In another aspect, the invention is directed to a method of determining a characteristic of a tissue. The method includes the steps of: (a) for each of a first plurality of reference sequences of images of tissue having a first known characteristic, quantifying one or more features of each of a first plurality of mean signal intensity data series corresponding to segmented regions represented in each of the first plurality of reference sequences of images; (b) for a test sequence of images, quantifying one or more features of each of one or more mean signal intensity data series corresponding to one or more segmented regions represented in the test sequence of images; and (c) determining a characteristic of a tissue represented in the test sequence of images based at least in part on a comparison between the one or more features quantified in step (a) and the one or more features quantified in step (b).


According to one embodiment, step (c) includes repeating step (a) for each of a second plurality of reference sequences of images of tissue having a second known characteristic. In one embodiment, step (c) includes applying a classification rule based at least in part on the first plurality of reference sequences and the second plurality of reference sequences. In one embodiment, step (c) includes performing a linear discriminant analysis to determine the classification rule. In one embodiment, one of the one or more features of step (a) includes the slope, at a given point, of a curve fitted to one of the plurality of mean signal intensity data series. According to one embodiment, the method includes determining the segmented regions of the test sequence of images by analyzing an acetowhitening signal. In one embodiment, the first known characteristic is CIN II/CIN III and the second known characteristic is absence of CIN II/CIN III.





BRIEF DESCRIPTION OF THE DRAWINGS

The objects and features of the invention can be better understood with reference to the drawings described below, and the claims. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the drawings, like numerals are used to indicate like parts throughout the various views.


The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the U.S. Patent and Trademark Office upon request and payment of the necessary fee.



FIG. 1 is a schematic flow diagram depicting steps in the analysis of a sequence of images of tissue according to an illustrative embodiment of the invention.


FIGS. 2A and 2A-1 depict human cervix tissue and show an area of which a sequence of images is to be obtained according to an illustrative embodiment of the invention.



FIG. 2B depicts the characterization of a discrete signal from a sequence of images of tissue according to an illustrative embodiment of the invention.



FIGS. 3A and 3B show a series of graphs depicting mean signal intensity of a region as a function of time, as determined from a sequence of images according to an illustrative embodiment of the invention.



FIG. 4A depicts a “maximum” RGB image representation used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention.



FIG. 4B depicts the image representation of FIG. 4A after applying a manual mask, used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention.



FIG. 4C depicts the image representation of FIG. 4B after accounting for glare, used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention.



FIG. 4D depicts the image representation of FIG. 4C after accounting for chromatic artifacts, used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention.



FIG. 5 shows a graph illustrating the determination of a measure of similarity of time series of mean signal intensity for each of two regions according to an illustrative embodiment of the invention.



FIG. 6 is a schematic flow diagram depicting a region merging approach of segmentation according to an illustrative embodiment of the invention.



FIG. 7A represents a segmentation mask produced using a region merging approach according to an illustrative embodiment of the invention.



FIG. 7B shows a graph depicting mean signal intensities of segmented regions represented in FIG. 7A as functions of time according to an illustrative embodiment of the invention.



FIG. 8 is a schematic flow diagram depicting a robust region merging approach of segmentation according to an illustrative embodiment of the invention.



FIG. 9A represents a segmentation mask produced using a robust region merging approach according to an illustrative embodiment of the invention.



FIG. 9B shows a graph depicting mean variance signals of segmented regions represented in FIG. 9A as functions of time according to an illustrative embodiment of the invention.



FIG. 10 is a schematic flow diagram depicting a clustering approach of segmentation according to an illustrative embodiment of the invention.



FIG. 11A represents a segmentation mask produced using a clustering approach according to an illustrative embodiment of the invention.



FIG. 11B shows a graph depicting mean signal intensities of segmented regions represented in FIG. 11A as functions of time according to an illustrative embodiment of the invention.



FIG. 11C represents a segmentation mask produced using a clustering approach according to an illustrative embodiment of the invention.



FIG. 11D shows a graph depicting mean signal intensities of segmented regions represented in FIG. 11C as functions of time according to an illustrative embodiment of the invention.



FIG. 12 is a schematic flow diagram depicting a watershed approach of segmentation according to an illustrative embodiment of the invention.



FIG. 13 represents a gradient image used in a watershed approach of segmentation according to an illustrative embodiment of the invention.



FIG. 14A represents a segmentation mask produced using a watershed approach according to an illustrative embodiment of the invention.



FIG. 14B represents a segmentation mask produced using a watershed approach according to an illustrative embodiment of the invention.



FIG. 15A represents a seed region superimposed on a reference image from a sequence of images, used in a region growing approach of segmentation according to an illustrative embodiment of the invention.



FIG. 15B represents the completed growth of the “seed region” of FIG. 15A using a region growing approach according to an illustrative embodiment of the invention.



FIG. 16A represents a segmentation mask produced using a combined clustering approach and robust region merging approach according to an illustrative embodiment of the invention.



FIG. 16B shows a graph depicting mean signal intensities of segmented regions represented in FIG. 16A as functions of time according to an illustrative embodiment of the invention.



FIG. 17A represents a segmentation mask produced using a combined clustering approach and watershed technique according to an illustrative embodiment of the invention.



FIG. 17B shows a graph depicting mean signal intensities of segmented regions represented in FIG. 17A as functions of time according to an illustrative embodiment of the invention.



FIG. 18A represents a segmentation mask produced using a two-part clustering approach according to an illustrative embodiment of the invention.



FIG. 18B shows a graph depicting mean signal intensities of segmented regions represented in FIG. 18A as functions of time according to an illustrative embodiment of the invention.



FIG. 19 depicts the human cervix tissue of FIG. 2A with an overlay of manual doctor annotations made after viewing an image sequence.



FIG. 20A is a representation of a segmentation mask produced using a combined clustering approach and robust region merging approach with a correspondingly-aligned overlay of the manual doctor annotations of FIG. 19, according to an embodiment of the invention.



FIG. 20B is a representation of a segmentation mask produced using a combined clustering approach and morphological technique with a correspondingly-aligned overlay of the manual doctor annotations of FIG. 19, according to an embodiment of the invention.



FIG. 21A depicts a reference image of cervical tissue of a patient from a sequence of images obtained during an acetowhitening test according to an illustrative embodiment of the invention.



FIG. 21B depicts an image from the sequence of FIG. 21A after applying a manual mask, accounting for glare, and accounting for chromatic artifacts according to an illustrative embodiment of the invention.



FIG. 21C shows a graph depicting mean signal intensities of segmented regions for the sequence of FIG. 21A determined using a clustering segmentation approach according to an illustrative embodiment of the invention.



FIG. 21D represents a map of regions of tissue as segmented in FIG. 21C classified as either high grade disease tissue or not high grade disease tissue using a classification algorithm according to an illustrative embodiment of the invention.





DESCRIPTION OF THE ILLUSTRATIVE EMBODIMENT

In general, the invention provides methods for image segmentation across a plurality of images. Segmentation across a plurality of images provides a much more robust analysis than segmentation in a single image. Segmentation across multiple images according to the invention allows incorporation of a temporal element (e.g., the change of tissue over time in a sequence of images) in optics-based disease diagnosis. The invention provides means to analyze changes in tissue over time in response to a treatment. It also provides the ability to increase the resolution of segmented imaging by increasing the number of images over time. This adds a dimension to image-based tissue analysis, which leads to increased sensitivity and specificity of analysis. The following is a detailed description of a preferred embodiment of the invention.


The schematic flow diagram 100 of FIG. 1 depicts steps in the analysis of a sequence of images of tissue according to an illustrative embodiment of the invention. FIG. 1 also serves as a general outline of the contents of this description. Each of the steps of FIG. 1 is discussed herein in detail. Briefly, the steps include obtaining a sequence of images of the tissue 102, preprocessing the images 104, determining a measure of similarity between regions in each of the images 108, segmenting the images 110, relating the images 112, and finally, determining a tissue characteristic 114. Though not pictured in FIG. 1, the steps may be preceded by application of a chemical agent onto the tissue, for example. In other embodiments, a chemical agent is applied during the performance of the steps of the schematic flow diagram 100 of FIG. 1.


Among the key steps of the inventive embodiments discussed here are determining a measure of similarity between regions of tissue represented in a sequence of images and segmenting the images based on the measure of similarity. Much of the mathematical complexity presented in this description regards various methods of performing these key steps. As will become evident, different segmentation methods have different advantages. The segmentation techniques of the inventive embodiments discussed herein include region merging, robust region merging, clustering, watershed, and region growing techniques, as well as combinations of these techniques.



FIGS. 2A, 2A-1, and 2B relate to step 102 of FIG. 1, obtaining a sequence of images of the tissue. Although embodiments of the invention are not limited to aceto-whitening tests, an exemplary sequence of images from an aceto-whitening test performed on a patient is used herein to illustrate certain embodiments of the invention. FIG. 2A depicts a full-frame image 202 of a human cervix after application of acetic acid, at the start of an aceto-whitening test. The inset image 204 depicts an area of interest to be analyzed herein using methods of embodiments of the invention. This area of interest may be determined by a technician or may be determined in a semi-automated fashion using a multi-step segmentation approach such as one of those discussed herein below.



FIG. 2B depicts the characterization 206 of a discrete signal w(i,j;t) from a sequence of images of tissue according to an illustrative embodiment of the invention. The signal could be any type of image signal of interest known in the art. In the illustrative embodiment, the signal is an intensity signal of an image.


In the illustrative embodiment, images of an area of interest are taken at N time steps {t0, t1, . . . , tN−1}. In one embodiment, time t0 corresponds to the moment of application of a chemical agent to the tissue, for instance, and time tN−1 corresponds to the end of the test. In another embodiment, time t0 corresponds to a moment following the application of a chemical agent to the tissue. For example, let 𝕀 = {0, . . . , r−1} × {0, . . . , c−1} and 𝕋 = {n0, . . . , nN−1} be the image and time domains, respectively, where r is the number of rows and c is the number of columns. Then, r×c discrete signals w(i,j;t) may be constructed describing the evolution of some optically-detectable phenomenon, such as aceto-whitening, with time. For an aceto-whitening example, the “whiteness” may be computed from RGB data of the images. There are any number of metrics which may be used to define “whiteness.” For instance, an illustrative embodiment employs an intensity component, CCIR 601, as a measure of “whiteness” of any particular pixel, defined in terms of red (R), green (G), and blue (B) intensities as follows:

I=0.299R+0.587G+0.114B.  (1)


The “whitening” data is then given by w(i,j;n)=I(i,j;n), for example. Alternatively, the signal w(i,j;t) is defined in any of a multiplicity of other ways. The characterization 206 of FIG. 2B shows that the intensity signal w(i,j;t) has a value corresponding to each discrete location (i,j) in each of the images taken at N discrete time steps. According to the illustrative embodiment, a location (i,j) in an image corresponds to a single pixel. In an aceto-whitening example, since it is the whitening of the cervix that is of interest and not the absolute intensity of the cervix surface, the whitening signals are background subtracted. In one example of background subtraction, each of the signals corresponding to a given location at a particular time step is transformed by subtracting the initial intensity signal at that location, as shown in Equation (2):

w(i,j;n) ← w(i,j;n) − w(i,j;n0), ∀n ∈ 𝕋.  (2)
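By way of illustration, Equations (1) and (2) reduce to a few array operations. The following is a minimal sketch, assuming the sequence is stored as a NumPy array of shape (N, rows, cols, 3) with float RGB values; the function name and data layout are illustrative choices, not part of the patent:

    import numpy as np

    def whitening_signal(rgb_seq):
        # rgb_seq: (N, rows, cols, 3) array of RGB frames of the test.
        r, g, b = rgb_seq[..., 0], rgb_seq[..., 1], rgb_seq[..., 2]
        intensity = 0.299 * r + 0.587 * g + 0.114 * b  # Equation (1)
        # Background subtraction, Equation (2): remove the initial frame.
        return intensity - intensity[0]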


Noise, glare, and sometimes chromatic artifacts may corrupt images in a sequence. Signal noise due to misaligned image pairs and local deformations of the tissue may be taken into account as well. Alignment functions and image restoration techniques often do not adequately reduce this type of noise. Therefore, it may be necessary to apply temporal and spatial filters.



FIGS. 3A and 3B relate to part of step 104 of FIG. 1, preprocessing the images. FIGS. 3A and 3B show a series of graphs depicting mean signal intensity 304 of a pixel as a function of time 306, as determined from a sequence of images according to an illustrative embodiment of the invention. The graphs depict application of a morphological filter, application of a diffusion filter, modification of intensity data to account for background intensity, and normalization of intensity data, according to an illustrative embodiment of the invention.


According to the illustrative embodiment, a first filter is applied to the time axis, individually for each pixel. The images are then spatially filtered. Graph 302 of FIG. 3A depicts the application of both a temporal filter and a spatial filter at a representative pixel. The original data is connected by a series of line segments 308. It is evident from graph 302 that noise makes the signal choppy and adversely affects further analysis if not removed.


For temporal filtering, the illustrative embodiment of the invention applies the morphological filter of Equation (3):

w(t)⊙b=½[(w∘b)•b+(w•b)∘b],  (3)

where b is the structuring element, ∘ is the opening operator, and • is the closing operator. According to the illustrative embodiment, the structuring element has a half circle shape. The temporally-filtered data is connected by a series of line segments 310 in the graph 302 of FIG. 3A. The noise is decreased from the series 308 to the series 310.
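A minimal sketch of this temporal filter, assuming SciPy's grayscale morphology and a half-circle structuring element of an assumed radius; the per-pixel time series occupy axis 0:

    import numpy as np
    from scipy import ndimage

    def temporal_morph_filter(w, radius=3):
        # w: (N, rows, cols) whitening signals; filter along the time axis.
        k = np.arange(-radius, radius + 1)
        b = np.sqrt(radius**2 - k**2).reshape(-1, 1, 1)  # half-circle element
        opened = ndimage.grey_opening(w, structure=b)    # w opened by b
        closed = ndimage.grey_closing(w, structure=b)    # w closed by b
        # Equation (3): average of close-of-open and open-of-close.
        return (ndimage.grey_closing(opened, structure=b)
                + ndimage.grey_opening(closed, structure=b)) / 2.0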


Illustratively, the images are then spatially filtered, for example, with either an isotropic or a Gaussian filter. A diffusion equation implemented by an illustrative isotropic filter may be expressed as Equation (4):















∂w(i,j)/∂τ = ∇·(k∇w) = kΔw,  (4)








where ∇ is the gradient operator, Δ is the Laplacian operator, and τ is the diffusion time (distinguished from the time component of the whitening signal itself). An isotropic filter is iterative, while a Gaussian filter is an infinite impulse response (IIR) filter. The iterative filter of Equation (4) is much faster than a Gaussian filter, since the iterative filter allows for increasing smoothness by performing successive iterations. The Gaussian filter requires re-applying a more complex filter to the original image for increasing degrees of filtration. According to the illustrative embodiment, the methods of the invention perform two iterations. However, in other embodiments, the method performs one iteration or three or more iterations. The spatially-filtered data for a representative pixel is connected by a series of line segments 312 in graph 302 of FIG. 3A. The noise is decreased from series 310 to series 312.
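As a rough sketch, the iterative isotropic filter of Equation (4) with a constant conduction coefficient amounts to repeated explicit Euler steps of linear diffusion; the step size k used here is an assumed value chosen for stability of the two-dimensional scheme:

    import numpy as np
    from scipy import ndimage

    def isotropic_diffusion(img, k=0.2, iterations=2):
        # Explicit discretization of dw/dtau = k * Laplacian(w), Equation (4).
        out = np.asarray(img, dtype=float).copy()
        for _ in range(iterations):
            out += k * ndimage.laplace(out)  # one diffusion step
        return out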


Graph 314 of FIG. 3B shows the application of Equation (2), background subtracting the intensity signal 304. Graph 318 of FIG. 3B shows the intensity signal data following normalization 320. In the illustrative embodiment, as explained below in further detail, normalization includes division of values of the intensity signal 304 by a reference value, such as the maximum intensity signal over the sequence of images. Glare and chromatic artifacts can affect selection of the maximum intensity signal; thus, in an illustrative embodiment, normalization is performed subsequent to correcting for glare and chromatic artifacts.


In the illustrative embodiment, the invention masks glare and chromatic artifacts from images prior to normalization. In the case of whitening data, glare may have a negative impact, since glare is visually similar to the tissue whitening that is the object of the analysis. Chromatic artifacts may have a more limited impact on the intensity of pixels and may be removed with the temporal and spatial filters described above.


Thresholding may be used to mask out glare and chromatic artifacts. In the illustrative embodiment thresholding is performed in the L*u*v* color space. Preferably, the invention also employs a correlate for hue, expressed as in Equation (5):










h* = arctan(v*/u*)  (5)








Since the hue h* is a periodic function, the illustrative methods of the invention rotate the u*-v* plane such that the typical reddish color of the cervix correlates to higher values of h*. This makes it possible to work with a single threshold for chromatic artifacts. The rotation is given by Equation (6):










(u*, v*) ← [ cos(−π/3)  sin(π/3) ; −sin(π/3)  cos(π/3) ] (u*, v*).  (6)








The masks for glare and chromatic artifacts are then respectively obtained using Equations (7) and (8):

maskglare = L* > 90  (7)
maskhue = h* < π/5,  (8)

where L* ∈ [0,100] and h* ∈ [0,2π]. According to the illustrative embodiment, the masks are dilated to create a safety margin, such that they are slightly larger than the corrupted areas.
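A sketch of the masking of Equations (5)-(8), assuming scikit-image's CIELUV conversion (color.rgb2luv expects float RGB in [0, 1] and returns L* in [0, 100]); the safety-margin dilation of the masks is omitted here:

    import numpy as np
    from skimage import color

    def artifact_masks(rgb):
        luv = color.rgb2luv(rgb)                    # L* in [0, 100]
        L, u, v = luv[..., 0], luv[..., 1], luv[..., 2]
        c, s = np.cos(-np.pi / 3), np.sin(np.pi / 3)
        u_r = c * u + s * v                         # rotation, Equation (6)
        v_r = -s * u + np.cos(np.pi / 3) * v
        h = np.arctan2(v_r, u_r) % (2 * np.pi)      # hue correlate, Equation (5)
        mask_glare = L > 90                         # Equation (7)
        mask_hue = h < np.pi / 5                    # Equation (8)
        return mask_glare, mask_hue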



FIGS. 4A and 4B relate to part of step 104 of FIG. 1, preprocessing the images. FIG. 4A depicts a “maximum” RGB image representation 402 used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention. In the illustrative embodiment, the maximum RGB image is computed by taking for each pixel the maximum RGB values in the whole sequence of images.



FIG. 4B depicts the image representation of FIG. 4A after applying a manual mask, used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention. According to the illustrative embodiment, the method applies the manual mask in addition to the masks for glare and chromatic effects in order to account for obstructions such as hair, foam from the chemical agent, or other obstructions, and/or to narrow analysis to an area of interest. Area 406 of the frame 404 of FIG. 4B has been manually masked in accordance with the methods of the embodiment.



FIG. 4C, which depicts the image representation of FIG. 4B after accounting for glare, is used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention. Note that the areas 408, 410, 412, 414, 416, and 418 of the frame 405 of FIG. 4C have been masked for glare using Equation (7).



FIG. 4D, which depicts the image representation of FIG. 4C after accounting for chromatic artifacts, is used in preprocessing a sequence of images of tissue according to an illustrative embodiment of the invention. The areas 420 and 422 of the frame 407 of FIG. 4D have been masked for chromatic artifacts using Equation (8).


To reduce the amount of data to process and to improve the signal-to-noise ratio of the signals used in the segmentation techniques discussed below, illustrative methods of the invention pre-segment the image plane into grains. Illustratively, the mean grain area is about 30 pixels. However, in other embodiments, it is between a few pixels and a few hundred pixels. The segmentation methods can be applied starting at either the pixel level or the grain level.


One way to “pre-segment” the image plane into grains is to segment each of the images in the sequence using a watershed transform. One goal of the watershed technique is to simplify a gray-level image by viewing it as a three-dimensional surface and by progressively “flooding” the surface from below through “holes” in the surface. In one embodiment, the third dimension is the gradient of an intensity signal over the image plane (further discussed herein below). A “hole” is located at each minimum of the surface, and areas are progressively flooded as the “water level” reaches each minimum. The flooded minima are called catchment basins, and the borders between neighboring catchment basins are called watersheds. The catchment basins determine the pre-segmented image.


Image segmentation with the watershed transform is preferably performed on the image gradient. If the watershed transform is performed on the image itself, and not the gradient, the watershed transform may obliterate important distinctions in the images. Determination of a gradient image is discussed herein below.
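A minimal sketch of this pre-segmentation, assuming scikit-image's watershed implementation; when called without markers it floods from the local minima of its input, which matches the description above:

    from skimage.segmentation import watershed

    def presegment_grains(gradient_img):
        # Flood the gradient image from its local minima; the returned
        # integer label image identifies the catchment basins ("grains").
        return watershed(gradient_img)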


Segmentation is a process by which an image is split into different regions according to one or more pre-defined criteria. In certain embodiments of the invention, segmentation methods are performed using information from an entire sequence of images, not just from one image at a time. The area depicted in the sequence of images is split into regions based on a measure of similarity of the detected changes those regions undergo throughout the sequence.


Segmentation is useful in the analysis of a sequence of images such as in aceto-whitening cervical testing. In an illustrative embodiment, segmentation is needed because the analysis of time-series data at one-pixel resolution is not possible unless motion, artifacts, and noise are absent or can be precisely identified. Often, filtering and masking procedures are insufficient to adequately relate regions of an image based on the similarity of the signals those regions produce over a sequence of images. Therefore, the illustrative methods of the invention average time-series data over regions made up of pixels whose signals display similar behavior over time.


In the illustrative embodiment, regions of an image are segmented based at least in part upon a measure of similarity of the detected changes those regions undergo. Since a measure of similarity between regions depends on the way regions are defined, and since regions are defined based upon criteria involving the measure of similarity, the illustrative embodiment of the invention employs an iterative process for segmentation of an area into regions. In some embodiments, segmentation begins by assuming each pixel or each grain (as determined above) represents a region. These individual pixels or grains are then grouped into regions according to criteria defined by the segmentation method. These regions are then merged together to form new, larger regions, again according to criteria defined by the segmentation method.


A problem that arises when processing raw image data is its high dimensionality. With a typical whitening signal for a single pixel described by, for example, a sixty-or-more-dimensional vector, it is often necessary to reduce data dimensionality prior to processing. In the illustrative embodiment, the invention obtains a scalar that quantifies a leading characteristic of two vectors. More particularly, illustrative methods of the invention take the N-dimensional inner (dot) product of two vectors corresponding to two pixel coordinates. A fitting function based on this dot product is shown in Equation (9). This fitting function quantifies the similarity between the signals at two locations.











φ(x1, x2) = ⟨w(x1;t), w(x2;t)⟩ / max(Ω(x1), Ω(x2)),  (9)








where x1 and x2 are two pixel coordinates, and Ω(x1) = ⟨w(x1;t), w(x1;t)⟩ is the energy of the signal at location x1.
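Equation (9) is a one-liner in practice. A sketch, for two 1-D signals of length N (background-subtracted, as above); the guard against zero energy is an added assumption of this sketch:

    import numpy as np

    def fitting_value(w1, w2):
        # Similarity per Equation (9): dot product over the higher energy.
        energy = max(np.dot(w1, w1), np.dot(w2, w2))
        return np.dot(w1, w2) / energy if energy > 0 else 0.0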



FIG. 5 relates to step 108 of FIG. 1, determining a measure of similarity between regions in each of a series of images. FIG. 5 shows a graph 502 illustrating the determination of a measure of similarity of a time series of mean signal intensity 504 for each of two regions k and l according to an illustrative embodiment of the invention. FIG. 5 represents one step in an illustrative segmentation method in which the similarity between the time-series signals of two neighboring regions is compared against criteria to determine whether those regions should be merged together. The type of measure of similarity chosen may vary depending on the segmentation method employed.


Curve 506 in FIG. 5 represents the mean signal intensity 504 of region k in each of a sequence of images and is graphed versus time 505. Mean signal intensity 504 of region k is expressed as in Equation (10):











w̄(k;t) = (1/Nk) Σ(i,j)∈𝕊k w(i,j;t),  (10)








where 𝕊k ⊂ 𝕀 is the set of all pixels that belong to the kth region and Nk is the size of 𝕊k.


Curve 508 of FIG. 5 represents the mean signal intensity 504 of region l and is graphed versus time 505. Mean signal intensity 504 of region l is expressed as in Equation (10), replacing “k” with “l” in appropriate locations. The shaded area 510 of FIG. 5 represents dissimilarity between mean signals over region k and region l. The chosen measure of similarity, φkl, also referred to herein as the fitting function, between regions k and l may depend on the segmentation method employed. For the region merging segmentation technique, discussed in more detail below, as well as for other segmentation techniques, the measure of similarity used is shown in Equation (11):











φkl = ⟨w̄(k;t), w̄(l;t)⟩ / max(Ω(k), Ω(l)),  (11)








where the numerator represents the N-dimensional dot product of the background-subtracted mean signal intensities 504 of region k and region l; the denominator represents the greater of the energies of the signals corresponding to regions k and l, Ω(k) and Ω(l); and −1 ≤ φkl ≤ 1. In this embodiment, the numerator of Equation (11) is normalized by the higher signal energy and not by the square root of the product of both energies.


In the case of whitening signals, for example, the fitting function defined by Equation (9) can be used to obtain a gradient image representing the variation of whitening values in x-y space. The gradient of an image made up of intensity signals is the approximation of the amplitude of the local gradient of signal intensity at every pixel location. The watershed transform is then applied to the gradient image. This may be done when pre-segmenting images into “grains” as discussed above, as well as when performing the hierarchical watershed segmentation approach and combined method segmentation approaches discussed below.


A gradient image representing a sequence of images is calculated for an individual image by computing the fitting value φ(i,j;i0,j0) between a pixel (i0,j0) and all its neighbors (i,j) ∈ 𝒩(i0,j0), where 𝒩(i0,j0) = {(i0−1,j0), (i0,j0−1), (i0+1,j0), (i0,j0+1)}.


Since the best fit corresponds to a null gradient, the derivative of the fitting value is computed as in Equation (12):












∂φ(i,j;i0,j0) = δ (1 − φ(i,j;i0,j0)) / (1 + φ(i,j;i0,j0)),  (12)

where ∂φ(i,j;i0,j0) ∈ (−∞, ∞). The sign δ of ∂φ(i,j;i0,j0) is given by Equation (13):

δ = +1 if Ω(i0,j0) ≥ Ω(i,j);  δ = −1 if Ω(i0,j0) < Ω(i,j).  (13)







Then, the derivatives of the signals are approximated as the mean of the forward and backward differences shown in Equations (14) and (15).



















∂w/∂x (i0,j0) = [∂φ(i0, j0−1; i0,j0) − ∂φ(i0, j0+1; i0,j0)] / 2  (14)

∂w/∂y (i0,j0) = [∂φ(i0−1, j0; i0,j0) − ∂φ(i0+1, j0; i0,j0)] / 2.  (15)








The norm of the gradient vector is then calculated from the approximations of Equations (14) and (15).


Since the fitting values include information from the entire sequence of images, one may obtain a gradient image which includes information from the entire sequence of images, and which, therefore, shows details not visible in all of the images. The gradient image may be used in the watershed pre-segmentation technique discussed herein above and the hierarchical watershed technique discussed herein below. Had the gradient image been obtained from a single reference image, less detail would be included, and the application of a watershed segmentation method to the gradient image would segment the image plane based on less data. However, by using a gradient image as determined from Equations (14) and (15), the invention enables a watershed segmentation technique to be applied which divides an image plane into regions based on an entire sequence of data, not just a single reference image.
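A sketch of this gradient computation, vectorized with NumPy; the wrap-around border handling (np.roll) and the small clamping constants are simplifications of this sketch, not part of the described method:

    import numpy as np

    def gradient_image(w):
        # w: (N, rows, cols) background-subtracted whitening signals.
        energy = np.sum(w * w, axis=0)

        def dphi(shift):
            # Fitting value (Equation (9)) between each pixel and the
            # neighbor reached by `shift`, then its signed derivative.
            wn = np.roll(w, shift, axis=(1, 2))
            en = np.roll(energy, shift, axis=(0, 1))
            phi = np.sum(w * wn, axis=0) / np.maximum(np.maximum(energy, en), 1e-12)
            delta = np.where(energy >= en, 1.0, -1.0)          # Equation (13)
            return delta * (1.0 - phi) / (1.0 + phi + 1e-12)   # Equation (12)

        dx = (dphi((0, 1)) - dphi((0, -1))) / 2.0              # Equation (14)
        dy = (dphi((1, 0)) - dphi((-1, 0))) / 2.0              # Equation (15)
        return np.hypot(dx, dy)                                # gradient norm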



FIG. 6 relates to step 110 of FIG. 1, segmenting the area represented in a sequence of images into regions based on measures of similarity between regions over the sequence. Various techniques may be used to perform the segmentation step 110 of FIG. 1. FIG. 6 shows a schematic flow diagram 602 depicting a region merging technique of segmentation according to an illustrative embodiment of the invention. In this technique, each grain or pixel is initially a region, and the method merges neighboring regions according to a predefined criterion in an iterative fashion. The criterion is based on a measure of similarity, also called a fitting function. The segmentation converges to the final result when no pair of neighboring regions satisfies the merging criterion. In the case of aceto-whitening data, for instance, it is desired to merge regions whose whitening data is similar. The fitting function will therefore quantify the similarity over the sequence between two signals corresponding to two neighboring regions.


Thus, the segmentation begins at step 604 of FIG. 6, computing the fitting function to obtain “fitting values” for all pairs of neighboring regions. In the region merging approach, this is the measure of similarity provided by Equation (11), where the mean signal intensity of a region k is defined by Equation (10). This fitting function is equivalent to the minimum normalized Euclidean distance, δ²kl, between the mean signal intensities of regions k and l shown in Equation (16):













δ²kl = ||w̄(k;t) − w̄(l;t)||²L2 / max(Ω(k), Ω(l))
     = 2 [ (1/2)(1 + min(Ω(k), Ω(l)) / max(Ω(k), Ω(l))) − ⟨w̄(k;t), w̄(l;t)⟩ / max(Ω(k), Ω(l)) ].  (16)








This notation reveals the effect of normalizing using the higher energy of the two signals instead of normalizing each signal by its L2 norm. The method using Equation (16) or Equation (11) applies an additional “penalty” when both signals have different energies, and therefore, fitting values are below 1.0 when the scaled versions of the two signals are the same, but their absolute values are different.


In step 606 of FIG. 6, the fitting values (measures of similarity) of pairs of neighboring regions are compared against a given threshold, and those larger than the threshold are sorted from greatest to least. In step 608, sorted pairs are merged according to best fit, keeping in mind that each region can only be merged once during one iteration. For instance, if neighboring regions k and l have a fitting value of 0.80 and neighboring regions k and m have a fitting value of 0.79, regions k and l are merged together, not k and m. However, region m may be merged with another of its neighboring regions during this iteration, depending on the fitting values computed.


In step 609, the method recalculates fitting values for all pairs of neighboring regions containing an updated (newly merged) region. In this embodiment, fitting values are not recalculated for pairs in which both regions are unchanged.


In step 610 of FIG. 6, it is determined whether the fitting values of all pairs of neighboring regions are below a given threshold. The fitting function is a measure of similarity between regions; thus, the higher the threshold, the more similar regions have to be in order to be merged, resulting in fewer iterations and, therefore, more regions that are ultimately defined. If the fitting values of all the regions are below the given threshold, the merging is complete 612. If not, the process beginning at step 606 repeats, and fitting values of the pairs of neighboring regions as newly defined are sorted.


Once merging is complete 612, a size rule is applied that forces each region whose size is below a given value to be merged with its best-fitting neighboring region, even if that fitting value is lower than the threshold. In this way, very small regions of no more than a few pixels are avoided.


The segmentation method of FIG. 6, or any of the other segmentation methods discussed herein, is performed, for example, where each pixel has up to four neighbors: above, below, left, and right. However, in other illustrative embodiments, segmentation is performed where each pixel can have up to eight or more neighbors, including diagonal pixels. It should also be noted that images in a given sequence may be sub-sampled to reduce computation time. For instance, a sequence of 100 images may be reduced to 50 images by eliminating every other image from the sequence to be segmented.
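The loop of FIG. 6 may be sketched as follows. This toy version makes several simplifying assumptions: integer region ids, one merge per pass rather than merging all disjoint best-fitting pairs, and no final size rule; signals are pooled as (pixel sum, pixel count) pairs so region means stay exact after merging:

    import numpy as np

    def region_merge(signals, neighbors, threshold=0.7):
        # signals: {region_id: (summed_signal, n_pixels)}
        # neighbors: {region_id: set of adjacent region ids}
        def fit(a, b):  # Equation (11) on the regions' mean signals
            wa = signals[a][0] / signals[a][1]
            wb = signals[b][0] / signals[b][1]
            energy = max(np.dot(wa, wa), np.dot(wb, wb))
            return np.dot(wa, wb) / energy if energy > 0 else 0.0

        while True:
            pairs = [(fit(a, b), a, b)
                     for a in neighbors for b in neighbors[a] if a < b]
            if not pairs:
                break
            best, a, b = max(pairs)
            if best < threshold:  # step 610: no pair fits well enough
                break
            # Merge b into a: pool pixel sums and rewire adjacency.
            signals[a] = (signals[a][0] + signals[b][0],
                          signals[a][1] + signals[b][1])
            del signals[b]
            neighbors[a] |= neighbors.pop(b) - {a, b}
            for c in neighbors:
                if b in neighbors[c]:
                    neighbors[c].discard(b)
                    if c != a:
                        neighbors[c].add(a)
        return signals, neighbors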



FIGS. 7A and 7B illustrate step 112 of FIG. 1, relating images after segmentation. FIG. 7A depicts a segmentation mask 702 produced using the region merging segmentation technique discussed above for an exemplary aceto-whitening sequence. In an illustrative embodiment, the threshold used in step 610 of FIG. 6 to produce this segmentation mask 702 is 0.7. Each region has a different label, and is represented by a different color in the mask 702 in order to improve the contrast between neighboring regions. Other illustrative embodiments use other kinds of display techniques known in the art, in order to relate images of the sequence based on segmentation and/or based on the measure of similarity.



FIG. 7B shows a graph 750 depicting mean signal intensities 752 of segmented regions represented in FIG. 7A as functions of a time index 754 according to an illustrative embodiment of the invention. The color of each data series in FIG. 7B corresponds to the same-colored segment depicted in FIG. 7A. This is one way to visually relate a sequence of images using the results of segmentation. For instance, according to the illustrative embodiment, regions having a high initial rate of increase of signal intensity 752 are identified by observing data series 756 and 758 in FIG. 7B, whose signal intensities 752 increase more quickly than the other data series. The location of the two regions corresponding to these two data series is found in FIG. 7A. In another example, kinetic rate constants are derived from each of the data series determined in FIG. 7B, and the regions having data series most closely matching kinetic rate constants of interest are identified. In another example, one or more data series are curve fit to obtain a characterization of the mean signal intensities 752 of each data series as functions of time.


Mean signal intensity may have a negative value after background subtraction. This is evident, for example, in the first part of data series 760 of FIG. 7B. In some examples, this is due to the choice of the reference frame for background subtraction. In other examples, it is due to negative intensities corresponding to regions corrupted by glare that is not completely masked from the analysis.



FIG. 8 relates to step 110 of FIG. 1, segmenting the area represented in a sequence of images into regions based on measures of similarity between regions over the sequence. FIG. 8 shows a schematic flow diagram 802 depicting a robust region merging approach of segmentation according to an illustrative embodiment of the invention. One objective of the robust region merging approach is to take into account the “homogeneity” of data inside the different regions. While the region merging approach outlined in FIG. 6 relies essentially on the mean signal of each region to decide subsequent merging, the robust region merging approach outlined in FIG. 8 controls the maximum variability allowed inside each region. More specifically, the variance signal, σ2w(k;t), associated with each region, k, is computed as in Equation (17):














σ²w(k;t) = (1/Nk) Σ(i,j)∈𝕊k (w(i,j;t) − w̄(k;t))²
         = (1/Nk) Σ(i,j)∈𝕊k w²(i,j;t) − [ (1/Nk) Σ(i,j)∈𝕊k w(i,j;t) ]²,  (17)








where w̄(k;t) is the mean signal intensity of region k as expressed in Equation (10). The merging criterion is then the energy of the standard deviation signal, computed as in Equation (18):













⟨σw, σw⟩ = Σt∈𝕋 σ²w(k;t).  (18)







Segmentation using the illustrative robust region merging approach begins at step 804 of the schematic flow diagram 802 of FIG. 8. In step 804 of FIG. 8, the variance signal, σ²w(k;t), of Equation (17) is computed for each region k. Then the variance signal energy, the energy of the standard deviation signal as shown in Equation (18), is calculated for each region k. In step 806 of FIG. 8, the values of variance signal energy that are larger than a given threshold are sorted. This determines which regions can be merged, but not in which order the regions may be merged. In step 808 of FIG. 8, the sorted pairs are merged according to the increase in variance each merged pair would create, Δσ(k,l), given by Equation (19):












Δσ(k,l) = Σt∈𝕋 ( σ²w(k∪l;t) − (1/2)[σ²w(k;t) + σ²w(l;t)] ),  (19)








where k and l represent two neighboring regions to be merged. Thus, if a region can merge with more than one of its neighbors, it merges with the one that least increases the variance of Equation (19). Another neighbor may merge with the region in the next iteration, provided that it still meets the fitting criterion with the updated region.
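The quantities of Equations (17)-(19) are straightforward to compute when each region's pixel signals are kept as rows of an array. A sketch (population variance, matching the 1/Nk normalization of Equation (17)):

    import numpy as np

    def variance_signal(pixel_signals):
        # pixel_signals: (n_pixels, N); Equation (17), per time step.
        return pixel_signals.var(axis=0)

    def variance_energy(pixel_signals):
        # Energy of the standard deviation signal, Equation (18).
        return variance_signal(pixel_signals).sum()

    def delta_sigma(region_k, region_l):
        # Increase in variance caused by merging k and l, Equation (19);
        # the candidate pair with the smallest value is merged first.
        merged = np.vstack([region_k, region_l])
        return (variance_signal(merged)
                - 0.5 * (variance_signal(region_k)
                         + variance_signal(region_l))).sum()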


According to the illustrative embodiment, it is possible that a large region neighboring a small region will absorb the small region even when the regions have different signals. This results from the illustrative merging criterion being size-dependent: the change in variance is small when the smaller region is merged into the larger region. According to a further embodiment, the methods of the invention apply an additional criterion, as shown in step 807 of FIG. 8, prior to merging sorted pairs in step 808. In step 807, fitting values corresponding to the pairs of neighboring regions are checked against a threshold. The fitting values are determined as shown in Equation (11), used in the region-merging approach. According to the illustrative embodiment, a candidate pair of regions is not merged if its fitting value is below the threshold (e.g., if the two regions are too dissimilar). In the illustrative embodiment, the invention employs a fixed similarity criterion threshold of about φkl = 0.7. This value is low enough not to become the main criterion, yet high enough to avoid the merging of regions with very different signals. However, other φkl values may be used without deviating from the scope of the invention.


In step 809 of FIG. 8, values of the variance signal are recalculated for pairs of neighboring regions containing an updated (newly-merged) region. According to an embodiment of the invention, variance signal values are not recalculated for pairs of neighboring regions whose regions are unchanged.


In step 810 of FIG. 8, the illustrative method of the invention determines whether the values of the variance signal energy for all regions are below a given variance threshold. If all values are below the threshold, the merging is complete 812. If not, the process beginning at step 806 is repeated, and values of variance signal energy of neighboring pairs of regions above the threshold are sorted.
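For illustration only, the robust region merging loop of FIG. 8 may be rendered as the following minimal Python sketch. It assumes the sequence is stored as a T×H×W array `w` of per-pixel signals and that `labels` is an integer label image from pre-segmentation; the helpers `neighbors` (adjacent-region lookup) and `fitting` (the similarity function of Equation (11)) are hypothetical, caller-supplied placeholders, not part of the patent.

```python
import numpy as np

def variance_signal(w, labels, k):
    """Eq. (17): variance of the signal inside region k at each time t."""
    pixels = w[:, labels == k]                      # shape (T, N_k)
    return pixels.var(axis=1)                       # sigma_w^2(k; t)

def energy(var_sig):
    """Eq. (18): energy of the standard deviation signal."""
    return var_sig.sum()

def delta_sigma(w, labels, k, l):
    """Eq. (19): increase in variance created by merging regions k and l."""
    merged = w[:, (labels == k) | (labels == l)].var(axis=1)
    halves = 0.5 * (variance_signal(w, labels, k) +
                    variance_signal(w, labels, l))
    return (merged - halves).sum()

def robust_region_merging(w, labels, neighbors, fitting,
                          var_threshold=70.0, phi_threshold=0.7):
    """One rendering of steps 804-812 of FIG. 8 (a sketch, not the
    patent's own code). `neighbors` and `fitting` are caller-supplied."""
    labels = labels.copy()
    while True:
        regions = np.unique(labels)
        # Steps 804-806: regions whose variance energy exceeds the threshold.
        over = [k for k in regions
                if energy(variance_signal(w, labels, k)) > var_threshold]
        if not over:
            return labels                           # step 812: merging complete
        # Step 807: drop candidate pairs whose fitting value is too low;
        # step 808: among the rest, merge the pair whose merge would
        # increase the variance the least.
        pairs = [(delta_sigma(w, labels, k, l), k, l)
                 for k in over for l in neighbors(labels, k)
                 if fitting(w, labels, k, l) >= phi_threshold]
        if not pairs:
            return labels
        _, k, l = min(pairs)
        labels[labels == l] = k                     # step 809: region updated
```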



FIGS. 9A and 9B illustrate step 112 of FIG. 1, relating images after segmentation. FIG. 9A depicts a segmentation mask 902 produced using the robust region merging segmentation technique discussed above for an exemplary aceto-whitening sequence. In the illustrative embodiment, the variance threshold used in step 810 of FIG. 8 to produce the segmentation mask 902 is 70. However, other variance thresholds may be employed without deviating from the scope of the invention. Each region has a different label, and is represented by a different color in the mask 902 to improve the contrast between neighboring regions. Other display techniques may be used to relate images of the sequence based on segmentation and/or based on the measure of similarity.



FIG. 9B shows a graph 950 depicting mean signal intensity 952 of segmented regions represented in FIG. 9A as functions of a time index 954 according to an illustrative embodiment of the invention. The color of each curve in FIG. 9B corresponds to the same-colored segment depicted in FIG. 9A. This is one way to visually relate a sequence of images using the results of segmentation.


According to the illustrative embodiment, the method observes data series 956, 958, 960, and 962 in FIG. 9B, whose signal intensities 952 increase more quickly than those of the other data series. The locations of the four regions corresponding to these four data series are shown in FIG. 9A. In another embodiment, the method derives kinetic rate constants from each of the data series determined in FIG. 9B, and the regions having data series most closely matching kinetic rate constants of interest are identified. In another example, the method curve fits one or more data series to obtain a characterization of the mean signal intensities 952 of each data series as functions of time.
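As a concrete (and purely illustrative) way to derive kinetic rate constants from such data series, one may fit an assumed saturating-exponential whitening model to each region's mean signal intensity curve; the model form, the SciPy-based fit, and the initial guesses below are assumptions of this sketch, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_signal_curves(w, labels):
    """Mean signal intensity of each segmented region at every time step."""
    return {k: w[:, labels == k].mean(axis=1) for k in np.unique(labels)}

def whitening_rate(t, curve):
    """Fit an assumed kinetic model, w(t) = A * (1 - exp(-kappa * t)),
    and return the rate constant kappa for the region's curve.
    (The model form is an assumption of this sketch.)"""
    model = lambda tt, A, kappa: A * (1.0 - np.exp(-kappa * tt))
    (_, kappa), _ = curve_fit(model, t, curve, p0=(curve.max(), 0.01))
    return kappa

# Usage: rank regions by how quickly they whiten (cf. series 956-962):
# t = np.arange(w.shape[0])
# rates = {k: whitening_rate(t, c)
#          for k, c in mean_signal_curves(w, labels).items()}
# fastest = sorted(rates, key=rates.get, reverse=True)[:4]
```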



FIG. 10 relates to step 110 of FIG. 1, segmenting an area represented in a sequence of images into regions based on measures of similarity between regions over the sequence, according to one embodiment of the invention. FIG. 10 shows a schematic flow diagram 1002 depicting a segmentation approach based on the clustering of data, or more precisely, a “fuzzy c-means” clustering segmentation approach, used in an embodiment of the invention. An objective of this clustering approach is to group pixels into clusters with similar values. Unlike the region merging and robust region merging approaches above, the method does not merge regions based on their spatial relation to each other. In other words, two non-neighboring regions may be merged, depending on the criterion used.


Let $X=\{x_1,\ldots,x_n\}\subset\mathbb{R}^d$ be a set of n d-dimensional vectors. An objective of clustering is to split $X$ into c subsets, called partitions, that minimize a given functional, $J_m$. In the case of the fuzzy c-means, this functional is given by Equation (20):

$$J_m \;=\; \sum_{k=1}^{n}\sum_{i=1}^{c}(u_{ik})^m\,\lVert x_k-v_i\rVert^2, \qquad (20)$$
where $v_i$ is the “center” of the ith cluster and $u_{ik}\in[0,1]$ is called the fuzzy membership of $x_k$ to $v_i$, with

$$\sum_{i=1}^{c}u_{ik}=1,$$
and $m\in[1,\infty)$ is a weighting exponent. The inventors have used m=2 in exemplary embodiments. The distance $\lVert\cdot\rVert$ is any inner-product-induced norm on $\mathbb{R}^d$. The minimization of $J_m$ as defined by Equation (20) leads to the following iterative system:

$$u_{ik} \;=\; \Biggl(\,\sum_{j=1}^{c}\biggl(\frac{\lVert x_k-v_i\rVert}{\lVert x_k-v_j\rVert}\biggr)^{\!\frac{2}{m-1}}\Biggr)^{\!-1}, \qquad (21)$$

$$v_i \;=\; \frac{\displaystyle\sum_{k=1}^{n}(u_{ik})^m\,x_k}{\displaystyle\sum_{k=1}^{n}(u_{ik})^m}. \qquad (22)$$
The distance, $\lVert x_k-v_i\rVert$, is based on the fitting function given in Equation (11). If it is assumed that similar signals are at a short distance from each other, then Equation (23) results:

$$\lVert x_k-v_i\rVert \;=\; \tfrac{1}{2}\,\bigl(1-\varphi_{ki}\bigr), \qquad (23)$$

where $\varphi_{ki}$ is given by Equation (11).


Thus, in this embodiment, the segmentation begins at step 1004 of FIG. 10, where the method initializes the values of $v_i$, for i=1 to c, where c is the total number of clusters. In one embodiment, the method sets the initial values of $v_i$ randomly. In step 1006 of FIG. 10, the method calculates values of $u_{ik}$, the fuzzy membership of $x_k$ to $v_i$, according to Equation (21). In step 1008 of FIG. 10, the method updates values of $v_i$ according to Equation (22), using the previously determined values of $u_{ik}$. The algorithm converges when the relative decrease of the functional $J_m$ as defined by Equation (20) is below a predefined threshold, for instance, 0.001. Thus, in step 1010 of FIG. 10, an embodiment of the method determines whether the relative decrease of $J_m$ is below the threshold. If not, the process beginning at step 1006 is repeated. If the relative decrease of $J_m$ is below the threshold, then the segmentation is completed by labeling each pixel according to its highest fuzzy membership, e.g.,

$$\operatorname*{maxind}_{i\in\{1,\ldots,c\}}\,\{u_{ik}\}.$$
Some embodiments have more regions than clusters, since pixels belonging to different regions with similar signals can contribute to the same cluster.
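For concreteness, the fuzzy c-means loop of FIG. 10 may be sketched as follows, with the per-pixel signals collected into an n×T array `x` and the fitting function of Equation (11) passed in as `fit`; the random initialization, the small floor on the distances, and the default thresholds are implementation assumptions of this sketch.

```python
import numpy as np

def fuzzy_c_means(x, fit, c=3, m=2.0, tol=1e-3, seed=None):
    """Sketch of steps 1004-1010 of FIG. 10 for signals x of shape (n, T).
    `fit(a, b)` is the fitting function of Eq. (11), returning a value
    in [0, 1]."""
    rng = np.random.default_rng(seed)
    n = len(x)
    v = x[rng.choice(n, size=c, replace=False)].copy()  # step 1004: random v_i
    j_prev = np.inf
    while True:
        # Distances per Eq. (23), floored to avoid division by zero.
        d = np.array([[max(0.5 * (1.0 - fit(x[k], v[i])), 1e-12)
                       for i in range(c)] for k in range(n)])
        # Step 1006: memberships u_ik per Eq. (21); rows sum to one.
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=2)
        # Step 1008: cluster centers v_i per Eq. (22).
        um = u ** m
        v = (um.T @ x) / um.sum(axis=0)[:, None]
        # Step 1010: stop when the relative decrease of J_m (Eq. (20))
        # falls below the threshold, e.g. 0.001.
        j = (um * d ** 2).sum()
        if np.isfinite(j_prev) and (j_prev - j) / j_prev < tol:
            break
        j_prev = j
    return u.argmax(axis=1), v   # label pixels by their highest membership
```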



FIGS. 11A and 11B show an illustrative embodiment of the method at step 112 of FIG. 1. FIG. 11A depicts a segmentation mask 1102 produced using the clustering technique discussed above for an exemplary aceto-whitening sequence, according to the embodiment. The number of clusters, c, chosen in this embodiment is 3. There are more regions than clusters, since portions of some clusters are non-contiguous. The threshold for the relative decrease of Jm in step 1010 of FIG. 10 is chosen as 0.001 in this embodiment. Each cluster has a different label and is represented by a different color in the mask 1102. Other embodiments employ other kinds of display techniques to relate images of the sequence based on segmentation and/or based on the measure of similarity.



FIG. 11B shows a graph 1120 depicting mean signal intensities 1122 of segmented regions represented in FIG. 11A as functions of a time index 1124 according to an illustrative embodiment of the invention. The color of each data series in FIG. 11B corresponds to the same-colored cluster depicted in FIG. 11A. The graph 1120 of FIG. 11B is one way to visually relate a sequence of images using the results of segmentation according to this embodiment. In this embodiment, the method identifies a cluster having a high initial rate of increase of signal intensity 1122 by observing data series 1126 in FIG. 11B, whose signal intensity increases more quickly than those of the other data series. Regions belonging to the same cluster have data series 1126 of the same color. In another embodiment, the method derives kinetic rate constants from each of the data series determined in FIG. 11B, and the regions having data series most closely matching kinetic rate constants of interest are identified. In another example, the method curve fits one or more data series to obtain a characterization of the mean signal intensities 1122 of each data series as functions of time.


Similarly, FIGS. 11C and 11D illustrate an embodiment of the method at step 112 of FIG. 1, relating images after segmentation. FIG. 11C depicts a segmentation mask 1140 produced using the clustering technique for the exemplary aceto-whitening sequence in FIG. 11A, according to the embodiment. In FIG. 11C, however, the number of clusters, c, chosen is 2. Again, there are more regions than clusters, since portions of the same clusters are non-contiguous. The threshold for the relative decrease of Jm in step 1010 of FIG. 10 is 0.001 for this embodiment.



FIG. 12 relates to step 110 of FIG. 1, segmenting the area represented in a sequence of images into regions based on measures of similarity between regions over the sequence, according to an illustrative embodiment of the invention. In this embodiment, morphological techniques of segmentation are continued beyond filtering and pre-segmentation. FIG. 12 shows a schematic flow diagram 1202 depicting a hierarchical watershed approach of segmentation according to the illustrative embodiment.


In step 1204 of FIG. 12, the method computes a gradient image from Equations (14) and (15) according to the embodiment, which incorporates data from the entire sequence of images, as discussed above. The method thus segments data based on information from the entire sequence of images, not just one image. A watershed transform is applied to this gradient image in step 1206 of FIG. 12. Some embodiments apply first-in-first-out (FIFO) queues, sorted data, and other techniques to speed the performance of the watershed transform. Also, in some embodiments, the method applies a sigmoidal scaling function to the gradient image prior to performing the watershed transform, to enhance the contrast between whitish and reddish (dark) regions; this is particularly useful when analyzing images of a cervix. The catchment basins resulting from application of the watershed transform represent the segmented regions in this embodiment.
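A minimal rendering of steps 1204-1206, assuming the gradient image has already been computed (e.g., per Equations (14) and (15)) and that scikit-image is available, might look as follows; the sigmoid gain and midpoint are illustrative choices, not values from the patent.

```python
import numpy as np
from skimage.segmentation import watershed

def sigmoid_scale(gradient, gain=10.0):
    """Sigmoidal scaling of the gradient image to enhance the contrast
    between whitish and reddish regions (gain and midpoint are
    illustrative assumptions of this sketch)."""
    g = gradient.astype(float)
    span = np.ptp(g) + 1e-12
    return 1.0 / (1.0 + np.exp(-gain * (g - g.mean()) / span))

def watershed_regions(gradient):
    """Step 1206: watershed transform of the scaled gradient image.
    With markers=None, scikit-image floods from the local minima, and
    the resulting catchment basins are the segmented regions."""
    return watershed(sigmoid_scale(gradient))
```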



FIG. 13 shows a gradient image 1302 calculated from Equations (5) and (6) for an exemplary sequence of images, according to an embodiment of the invention.


According to an embodiment, the method at step 1208 of FIG. 12 constructs a new gradient image using geodesic reconstruction. Two techniques of geodesic reconstruction that embodiments may employ are erosion and dilation. In step 1210 of FIG. 12, the method determines whether over-segmentation has been reduced sufficiently, according to this embodiment. If so, the segmentation may be considered complete, or one or more additional segmentation techniques may be applied, such as the region merging or robust region merging techniques discussed above. If over-segmentation has not been reduced sufficiently, the method calculates the watershed transform of the reconstructed gradient image as in step 1206, and the process continues.
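One common realization of step 1208 is an h-minima-style geodesic reconstruction by erosion, which fills gradient minima shallower than a contrast h so that the next watershed pass produces fewer catchment basins; the use of skimage.morphology.reconstruction and the value of h below are assumptions of this sketch.

```python
from skimage.morphology import reconstruction
from skimage.segmentation import watershed

def hierarchical_iteration(gradient, h=0.05):
    """Steps 1206-1208: rebuild the gradient by geodesic reconstruction
    (erosion of gradient + h above gradient), then re-apply the
    watershed. The contrast h is an illustrative choice."""
    rebuilt = reconstruction(gradient + h, gradient, method='erosion')
    return watershed(rebuilt), rebuilt

# Step 1210 becomes a loop, e.g. two iterations as in FIG. 14B:
# for _ in range(2):
#     labels, gradient = hierarchical_iteration(gradient)
```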


Certain embodiments use the hierarchical watershed to segment larger areas, such as large lesions or the background cervix. In some embodiments, fewer than about four iterations are performed so that regions do not become so large that they obscure real detail.


In some embodiments, the method performs one iteration of the hierarchical watershed, and continues merging regions using the robust region merging technique.



FIGS. 14A and 14B show an illustrative embodiment of the method at step 112 of FIG. 1, relating images after segmentation. FIG. 14A depicts a segmentation mask 1402 produced using one iteration of the hierarchical watershed technique discussed above for an exemplary aceto-whitening sequence, according to the embodiment. Each region has a different label, and is represented by a different color in the mask 1402. FIG. 14B depicts a segmentation mask 1430 produced using two iterations of the hierarchical watershed technique discussed above for the exemplary aceto-whitening sequence, according to the embodiment. The segmentation mask 1430 in FIG. 14B, produced using two iterations, has fewer regions and is more simplified than the segmentation mask 1402 in FIG. 14A, produced using one iteration.


Other embodiments employ a “region growing” technique to perform step 110 of FIG. 1, segmenting an area represented in a sequence of images into regions based on measures of similarity between regions over the sequence. The region growing technique differs from the region merging and robust region merging techniques in that one or more initial regions, called seed regions, grow by merging with neighboring regions. The region merging, robust region merging, and region growing methods are each iterative. The region merging and region growing techniques each use the same fitting function to evaluate the similarity between signals from neighboring regions. In some embodiments of the region growing algorithm, the user manually selects seed regions. In other embodiments, the seed regions are selected automatically. Hard criteria may be used to select areas that are of high interest and/or that behave in a certain way. In some embodiments, a user selects seed regions based on that user's experience. The region growing algorithm then proceeds by detecting similar regions and adding them to the set of seeds, as in the sketch below.
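A seeded region growing pass might look like the following sketch; the helpers `neighbors` and `fitting` (the similarity function of Equation (11)) are hypothetical, caller-supplied placeholders, and the 0.8 acceptance threshold is an assumption of this sketch.

```python
def region_growing(w, labels, seed_labels, neighbors, fitting, threshold=0.8):
    """Grow the seed set by absorbing any neighboring region whose fitting
    value against the current grown set exceeds the threshold; iterate
    until no further region qualifies."""
    grown = set(seed_labels)
    changed = True
    while changed:
        changed = False
        frontier = {l for k in grown for l in neighbors(labels, k)} - grown
        for l in frontier:
            if fitting(w, labels, grown, l) >= threshold:
                grown.add(l)              # similar region joins the seeds
                changed = True
    return grown
```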


One embodiment of the invention is a combined technique using the region growing algorithm: starting with a grain image, one iteration of the hierarchical watershed technique is performed, and the selected region is then grown according to the robust region merging algorithm.


In some embodiments, the segmentation techniques discussed herein are combined in various ways. In some embodiments, the method processes and analyzes data from a sequence of images in an aceto-whitening test, for instance, using a coarse-to-fine approach. In one embodiment, a first segmentation reveals large whitening regions, called background lesions, which are then considered as regions of interest and are masked for additional segmentation.


A second segmentation step of the embodiment may outline smaller regions, called foreground lesions. Segmentation steps subsequent to the second step may also be considered. As used here, the term “lesion” does not necessarily refer to any diagnosed area, but to an area of interest, such as an area displaying a certain whitening characteristic during the sequence. From the final segmentation, regions are selected for diagnosis, preliminary or otherwise; for further analysis; or for biopsy, for example. Additionally, the segmentation information may be combined with manually drawn biopsy locations for which a pathology report may be generated. In one illustrative embodiment, the method still applies the pre-processing procedures discussed herein above before performing the multi-step segmentation techniques.
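For illustration, the coarse-to-fine flow just described may be expressed as a short pipeline; the callables `coarse_segment`, `fine_segment`, and `select_lesion`, along with the reserved mask label, are hypothetical stand-ins for the techniques described herein, not named elements of the patent.

```python
import numpy as np

MASKED = -1  # label reserved for areas excluded from further analysis

def coarse_to_fine(w, coarse_segment, fine_segment, select_lesion):
    """A first (coarse) segmentation reveals a background lesion; the
    rest of the image plane is masked out, and a second segmentation
    outlines foreground lesions inside it (a sketch under the stated
    assumptions, not a definitive implementation)."""
    coarse = coarse_segment(w)              # e.g. fuzzy c-means, c = 3
    lesion = select_lesion(coarse)          # boolean mask of the lesion
    fine = fine_segment(w, lesion)          # e.g. robust region merging
    fine = np.where(lesion, fine, MASKED)   # keep non-lesion areas masked
    return fine
```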



FIG. 16A shows a segmentation mask produced using a combined clustering approach and robust region merging approach for an exemplary aceto-whitening sequence, according to an illustrative embodiment of the invention. In the embodiment, the method performs pre-processing steps, including pre-segmenting pixels into grains using a watershed transform as discussed herein above. Then, the method applies the clustering technique to the sequence as discussed in FIG. 10, using c=3 clusters and a threshold of 0.001 for the relative decrease of $J_m$. This produces a “coarse” segmentation. From this coarse segmentation, the method selects a boomerang-shaped background lesion, corresponding to a large whitening region. The method masks out, or eliminates from further analysis, the remaining areas of the image frame.


Then, a robust region merging procedure is applied to the background lesion, as shown in FIG. 8, according to the embodiment. Here, the method uses a similarity criterion of $\varphi_{kl}=0.7$ in step 807 of FIG. 8, and a variance threshold of 120 in step 810 of FIG. 8. In this and other embodiments, regions smaller than 16 pixels are removed. The resulting segmentation is shown in frame 1602 of FIG. 16A.



FIG. 16B shows a graph 1604 depicting mean signal intensity 1606 of segmented regions represented in FIG. 16A as functions of a time index 1608 according to an illustrative embodiment of the invention. The color of each curve in FIG. 16B corresponds to the same-colored segment depicted in FIG. 16A. Regions having a high initial rate of increase of signal intensity 1606 include regions 1610 and 1612, shown in FIG. 16A.



FIG. 17A represents a segmentation mask produced using a combined clustering approach and watershed approach for the exemplary aceto-whitening sequence of FIG. 16A, according to an illustrative embodiment of the invention. The method performs pre-processing steps, including the pre-segmenting of pixels into grains using a watershed transform as discussed herein above. Then, the method applies the clustering technique to the sequence as discussed in FIG. 10, using c=3 clusters and a threshold of 0.001 for the relative decrease of $J_m$. This produces a “coarse” segmentation. From this coarse segmentation, the method selects a boomerang-shaped background lesion, corresponding to a large whitening region. The method masks out remaining areas of the image frame.


Then, the method applies a hierarchical watershed segmentation procedure, as shown in FIG. 12. In this embodiment, the method computes one iteration of the watershed transform, then applies a region merging technique as per FIG. 6, using a fitting value threshold of 0.85 in step 610 of FIG. 6. Regions smaller than 16 pixels are removed. The resulting segmentation is shown in frame 1702 of FIG. 17A.



FIG. 17B shows a graph 1720 depicting mean signal intensity 1722 of segmented regions represented in FIG. 17A as functions of a time index 1724 according to an illustrative embodiment of the invention. The color of each curve in FIG. 17B corresponds to the same-colored segment depicted in FIG. 17A. Regions having a high initial rate of increase of signal intensity 1722 include regions 1726 and 1728, shown in FIG. 17A.



FIG. 18A represents a segmentation mask produced using a two-step clustering approach for the exemplary aceto-whitening sequence of FIG. 16A, according to an illustrative embodiment of the invention. The method performs pre-processing steps, including pre-segmenting pixels into grains using a watershed transform as discussed herein above. Then, the method applies a clustering technique to the sequence as discussed in FIG. 10, using c=3 clusters and a threshold of 0.001 for the relative decrease of $J_m$. This produces a “coarse” segmentation. From this coarse segmentation, the method selects a boomerang-shaped background lesion, corresponding to a large whitening region. The method masks out the remaining areas of the image frame from further analysis.


Then, the method applies a second clustering procedure, as shown in FIG. 10, to the background lesion. Here again, the method uses c=3 clusters and a threshold of 0.001 for the relative decrease of $J_m$. Regions smaller than 16 pixels are removed. This produces a foreground lesion, shown in frame 1802 of FIG. 18A.



FIG. 18B shows a graph 1820 depicting mean signal intensity 1822 of segmented regions represented in FIG. 18A as functions of a time index 1824 according to an illustrative embodiment of the invention. The color of each curve in FIG. 18B corresponds to the same-colored cluster depicted in FIG. 18A. Regions having a high initial rate of increase of signal intensity 1822 include regions 1828 and 1826, shown in FIG. 18A.



FIG. 19 depicts the human cervix tissue of FIG. 2A with an overlay of manual doctor annotations made after viewing the exemplary aceto-whitening image sequence discussed herein above. Based on her viewing of the sequence and on her experience with the aceto-whitening procedure, the doctor annotated regions with suspicion of pathology 1904, 1906, 1908, 1910, and 1912. The doctor did not examine results of any segmentation analysis prior to making the annotations. Regions 1910 and 1912 were singled out by the doctor as regions with the highest suspicion of pathology.



FIG. 20A is a representation of a segmentation mask produced using a combined clustering approach and robust region merging approach as discussed above and as shown in FIG. 16A, according to an illustrative embodiment of the invention. The segmentation mask in FIG. 20A, however, is shown with a correspondingly-aligned overlay of the manual doctor annotations of FIG. 19. FIG. 20B is a representation of a segmentation mask produced using a combined clustering approach and watershed technique as discussed above and as shown in FIG. 17A. The segmentation mask in FIG. 20B, however, is shown with a correspondingly-aligned overlay of the manual doctor annotations of FIG. 19, according to an embodiment of the invention.


Areas 1912 and 1910 in FIGS. 20A and 20B correspond to the doctor's annotations of areas of high suspicion of pathology. In the segmentation masks produced from the combined techniques of both FIGS. 20A and 20B, these areas (1912 and 1910) correspond to regions of rapid, intense whitening. In the doctor's experience, areas of rapid, intense whitening correspond to areas of suspicion of pathology. Thus, the techniques discussed herein provide a method of determining a tissue characteristic, namely, the presence or absence of a suspicion of pathology. Certain embodiments of the invention use the techniques in addition to a doctor's analysis or in place of a doctor's analysis. Certain embodiments use combinations of the methods described herein to produce similar results.


Embodiments of the invention also apply the various analysis techniques described herein to applications other than the analysis of acetowhitening tests of cervical tissue. A practitioner may customize elements of an analysis technique disclosed herein based on the attributes of her particular application, according to embodiments of the invention. For instance, the practitioner may choose among the segmentation techniques disclosed herein, depending on the application for which she intends to practice embodiments of the inventive methods. By using the techniques described herein, it is possible to visually capture all the frames of a sequence at once and relate regions according to their signals over a period of time.


Certain embodiments of the inventive methods analyze more complex behavior. Some embodiments segment the image plane of a sequence of images, then extract features from the resulting mean intensity signals to characterize the signals of each segmented region. Examples of feature extraction procedures include any number of curve fitting or functional analysis techniques used to mathematically and/or statistically describe characteristics of one or more data series. In some embodiments, these features are then used in a manual, automated, or semi-automated method for the classification of tissue.


For example, in certain embodiments, the method classifies a region of cervical tissue either as “high grade disease” tissue, which includes Cervical Intraepithelial Neoplasia II/III (CIN II/III), or as “not high grade disease” tissue, which includes normal squamous (NED—no evidence of disease), metaplasia, and CIN I tissue. The classification for a segmented region may be made, within a predicted degree of certainty, using features extracted from the mean signal intensity curve corresponding to the region. In one embodiment, this classification is performed for each segmented region in an image plane to produce a map of regions of tissue classified as high grade disease tissue. Other embodiments make more specific classifications and distinctions between tissue characteristics, such as the distinction between NED, metaplasia, and CIN I tissue.



FIG. 21A, FIG. 21B, FIG. 21C, and FIG. 21D depict steps in the classification of regions of tissue in a sequence of images obtained during an acetowhitening procedure performed on a patient with high grade disease according to an illustrative embodiment of the invention. FIG. 21A depicts a reference image 2102 of cervical tissue of the patient from a sequence of images obtained during the acetowhitening test. FIG. 21B is a representation 2106 of the reference image 2102 of FIG. 21A after applying a manual mask, accounting for glare, and accounting for chromatic artifacts as discussed herein according to an illustrative embodiment of the invention. For example, areas such as areas 2110 and 2112 of FIG. 21B have been masked for glare and chromatic effects, respectively, using techniques as discussed herein. FIG. 21C shows a graph 2120 depicting mean signal intensities 2122 of segmented regions for the sequence of FIG. 21A as functions of a time index 2124 and as determined using the clustering segmentation approach depicted in the schematic flow diagram 1002 of FIG. 10 and as discussed herein according to an illustrative embodiment of the invention.


It was desired to classify each of the segmented regions as either “indicative of high grade disease” or “not indicative of high grade disease.” Thus, an embodiment of the invention extracted specific features from each of the mean signal intensity data series depicted in the graph 2120 of FIG. 21C, and used these features in a classification algorithm.


A classification algorithm was designed using results of a clinical study. In the study, mean signal intensity curves were determined using sequences of images from acetowhitening tests performed on over 200 patients. The classification algorithm may be updated according to an embodiment of the invention upon conducting further or different clinical testing. The present algorithm was based upon two feature parameters extracted from the mean signal intensity data series corresponding to segmented regions of the image sequences for which biopsies were performed. These two feature parameters are as follows:

    • 1. X is the slope of a curve (here, a polynomial curve) fitted to the mean signal intensity data series of a segmented region at the time corresponding to 235 seconds after application of the acetic acid (M235); and
    • 2. Y is the slope of the polynomial curve at the time corresponding to an intensity that is −16 dB from the maximum intensity (about 45% of the maximum mean signal intensity) on the decaying side of the polynomial curve (−16 dB slope).


      The choice of the feature parameters X and Y above was made by conducting a discriminant function (DF) analysis of the data sets from the clinical study. A wide range of candidate feature parameters, including X and Y, were tested. X and Y provided the classification algorithm having the best accuracy.


A linear discriminant analysis with a jackknifed classification matrix was performed on the extracted features X and Y corresponding to certain of the mean signal intensity curves from each of the clinical tests. The curves used were those corresponding to regions for which tissue biopsies were performed. From the linear discriminant analysis, it was determined that a classification algorithm using the discriminant line shown in Equation (24) results in a diagnostic sensitivity of 88% and a specificity of 88% for separating CIN II/III (high grade disease) from the group consisting of normal squamous (NED), metaplasia, and CIN I tissue (not high grade disease):

$$Y \;=\; -0.9282\,X \;-\; 0.1348. \qquad (24)$$

Varying the classification model parameters by as much as 10% yields very similar model outcomes, suggesting the model features are highly stable.
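Putting the two feature parameters and Equation (24) together, a region's classification might be computed as in the sketch below; the polynomial degree, the dense evaluation grid, and the use of the quoted 45%-of-maximum level to locate the −16 dB point are assumptions of this sketch, not values specified by the patent.

```python
import numpy as np

def extract_features(t, curve, degree=5):
    """X: slope of a polynomial fit at t = 235 s (M235).
    Y: slope where the fitted curve, on its decaying side, falls to
    about 45% of its maximum (the -16 dB point quoted above).
    The degree-5 fit and the 1000-point grid are assumptions."""
    poly = np.polynomial.Polynomial.fit(t, curve, degree)
    slope = poly.deriv()
    x_feat = slope(235.0)
    grid = np.linspace(t[0], t[-1], 1000)
    vals = poly(grid)
    peak = vals.argmax()
    decay = grid[peak:]
    t16 = decay[np.argmin(np.abs(vals[peak:] - 0.45 * vals[peak]))]
    return x_feat, slope(t16)

def classify(x_feat, y_feat):
    """Eq. (24): a point below the discriminant line is classified as
    high grade disease; the margin below the line grades the call."""
    margin = (-0.9282 * x_feat - 0.1348) - y_feat
    label = 'high grade disease' if margin > 0 else 'not high grade disease'
    return label, margin
```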



FIG. 21D represents a map 2130 of regions of tissue as segmented in FIG. 21C classified as either high grade disease tissue or not high grade disease tissue using the classification algorithm of Equation (24). The embodiment determines this classification for each of the segmented regions by calculating X and Y for each region and determining whether the point (X,Y) falls below the line of Equation (24), in which case the region is classified as high grade disease, or above the line of Equation (24), in which case the region is classified as not high grade disease. The embodiment draws a further distinction depending on how far above or below the line of Equation (24) the point (X,Y) falls. The map 2130 of FIG. 21D indicates segmented regions of high grade disease as red, orange, and yellow, and regions not classifiable as high grade disease as blue. The index 2132 reflects how far the point (X,Y) of a given segment falls below the line of Equation (24). Other embodiments employ other segmentation techniques as described herein. Still other embodiments employ different classification algorithms, including those using feature parameters other than X and Y, extracted from mean signal data series corresponding to segmented regions.


EQUIVALENTS

While the invention has been particularly shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims
  • 1. A method of locating a portion of tissue with a characteristic of interest, the method comprising the steps of: (a) characterizing an acetowhitening signal from a temporal sequence of images of a tissue following application of a chemical agent to the tissue, wherein the chemical agent comprises acetic acid; (b) analyzing the acetowhitening signal to determine a measure of similarity between two selected regions of the tissue, the measure of similarity indicating how similarly tissue in each region responds to the chemical agent, and grouping the two selected regions if the measure of similarity is larger than a given threshold, wherein the step of determining the measure of similarity comprises, for each of the two selected regions, averaging data corresponding to pixels within the region at each of a plurality of time steps to obtain a mean signal for the region, then quantifying the similarity between the two resulting mean signals; (c) repeating step (b), thereby differentiating regions according to how tissue in each region responds to the chemical agent; and (d) locating a portion of the tissue with a characteristic of interest, the located portion corresponding to at least one of the differentiated regions.
  • 2. The method of claim 1, wherein step (b) further comprises merging the two selected regions into a single region if the measure of similarity satisfies a predetermined criterion.
  • 3. The method of claim 2, wherein steps (b) and (c) together comprise iteratively segmenting an area of the tissue into regions according to evolution of mean intensity of each region following application of the chemical agent.
  • 4. The method of claim 1, wherein step (c) comprises repeating step (b) for each of a plurality of pairs of selected regions.
  • 5. The method of claim 1, wherein the measure of similarity indicates a similarity in evolution of mean intensity of each region following application of the chemical agent.
  • 6. The method of claim 1, wherein step (b) comprises computing an N-dimensional dot product of mean signal intensities of the two selected regions.
  • 7. The method of claim 1, wherein the chemical agent further comprises a member selected from the group consisting of formic acid, propionic acid, butyric acid, Lugol's iodine, Shiller's iodine, methylene blue, toluidine blue, and indigo carmine.
  • 8. The method of claim 1, further comprising the step of: (e) determining a condition of the located portion.
  • 9. The method of claim 8, wherein the condition comprises a member selected from the group consisting of normal squamous tissue, metaplasia, CIN I, CIN II, CIN III, and CIN II/III.
  • 10. The method of claim 8, wherein step (e) comprises determining a condition of the located portion based at least in part on evolution of mean intensity of the located portion.
  • 11. The method of claim 8, wherein step (e) further comprises obtaining a biopsy specimen within the located portion prior to determining the condition of the located portion.
  • 12. The method of claim 8, wherein the condition comprises a member selected from the group consisting of CIN II, CIN III, and CIN II/III.
  • 13. The method of claim 1, wherein the characteristic of interest is a suspicion of pathology.
  • 14. The method of claim 1, wherein the tissue comprises cervical tissue.
  • 15. The method of claim 1, wherein the tissue comprises at least one member selected from the group consisting of epithelial tissue, colorectal tissue, skin, and uterine tissue.
  • 16. The method of claim 1, further comprising the step of illuminating the tissue using a white light source, a UV light source, or both.
  • 17. A method of differentiating regions of a tissue, the method comprising the steps of: (a) accessing a temporal sequence of images of a tissue following application of a chemical agent to the tissue; and (b) creating a segmentation mask that represents an image plane divided into regions according to how similarly tissue in each region responds to the chemical agent, wherein step (b) comprises analyzing an acetowhitening signal to determine a measure of similarity between two selected regions of the tissue, the measure of similarity indicating how similarly tissue in each region responds to the chemical agent, and grouping the two selected regions if the measure of similarity is larger than a given threshold, wherein the step of determining the measure of similarity comprises, for each of the two selected regions, averaging data corresponding to pixels within the region at each of a plurality of time steps to obtain a mean signal for the region, then quantifying the similarity between the two resulting mean signals.
  • 18. The method of claim 17, wherein the chemical agent comprises a member selected from the group consisting of acetic acid, formic acid, propionic acid, butyric acid, Lugol's iodine, Shiller's iodine, methylene blue, toluidine blue, and indigo carmine.
  • 19. The method of claim 17, wherein the chemical agent comprises acetic acid.
  • 20. The method of claim 17, further comprising the step of: (c) locating a portion of the tissue with a characteristic of interest, the located portion corresponding to at least one of the differentiated regions.
  • 21. The method of claim 20, further comprising the step of: (d) determining a condition of the located portion.
  • 22. The method of claim 21, wherein the condition comprises a member selected from the group consisting of normal squamous tissue, metaplasia, CIN I, CIN II, CIN III, and CIN II/III.
  • 23. The method of claim 21, wherein the condition comprises a member selected from the group consisting of CIN II, CIN III, and CIN II/III.
  • 24. The method of claim 17, wherein the tissue comprises cervical tissue.
  • 25. The method of claim 17, wherein the tissue comprises at least one member selected from the group consisting of epithelial tissue, colorectal tissue, skin, and uterine tissue.
  • 26. The method of claim 17, wherein the step of creating the segmentation mask comprises analyzing an acetowhitening signal characterized from the temporal sequence of images, wherein the chemical agent comprises acetic acid.
  • 27. A system for differentiating regions of a tissue, the system comprising: a light source that illuminates a tissue; a camera that obtains a temporal sequence of images of the tissue following application of a chemical agent to the tissue, the chemical agent comprising acetic acid; and software that performs the steps of: (i) characterizing an acetowhitening signal from the temporal sequence of images; (ii) analyzing the acetowhitening signal to determine a measure of similarity between two selected regions of the tissue, the measure of similarity indicating how similarly tissue in each region responds to the chemical agent, and grouping the two selected regions if the measure of similarity is larger than a given threshold, wherein the step of determining the measure of similarity comprises, for each of the two selected regions, averaging data corresponding to pixels within the region at each of a plurality of time steps to obtain a mean signal for the region, then quantifying the similarity between the two resulting mean signals; and (iii) repeating step (ii), thereby differentiating regions according to how tissue in each region responds to the chemical agent.
  • 28. The system of claim 27, wherein the chemical agent further comprises a member selected from the group consisting of formic acid, propionic acid, butyric acid, Lugol's iodine, Shiller's iodine, methylene blue, toluidine blue, and indigo carmine.
  • 29. The system of claim 27, wherein the software further performs the step of: (c) locating a portion of the tissue with a characteristic of interest, the located portion corresponding to at least one of the differentiated regions.
  • 30. The system of claim 29, further comprising the step of: (d) determining a condition of the located portion.
  • 31. The system of claim 30, wherein the condition comprises a member selected from the group consisting of normal squamous tissue, metaplasia, CIN I, CIN II, CIN III, and CIN II/III.
  • 32. The system of claim 30, wherein the condition comprises a member selected from the group consisting of CIN II, CIN III, and CIN II/III.
  • 33. The system of claim 27, wherein the tissue comprises cervical tissue.
  • 34. The system of claim 27, wherein the tissue comprises at least one member selected from the group consisting of epithelial tissue, colorectal tissue, skin, and uterine tissue.
PRIOR APPLICATIONS

The present application is a continuation-in-part of U.S. patent application Ser. No. 10/068,133, filed Feb. 5, 2002, which is a continuation of U.S. patent application Ser. No. 09/738,614, filed Dec. 15, 2000, which claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 60/170,972, filed Dec. 15, 1999; also, the present application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/353,978, filed Jan. 31, 2002. All of the above applications are assigned to the common assignee of this application and are hereby incorporated by reference.

GOVERNMENT RIGHTS

This invention was made with government support under Grant No. 1-R44-CA-91618-01 awarded by the U.S. Department of Health and Human Services. The government has certain rights in the invention.
