The present invention relates to a method and apparatus for unsupervised data segmentation which is suitable for assigning multi-dimensional data points of a data set amongst a plurality of classes. The invention is particularly applicable to automated image segmentation, for instance in the field of medical imaging, thus allowing different parts of imaged objects to be recognised and demarcated automatically.
In the field of automated data processing it is useful to be able to recognise automatically different groups of data points within the data set. This is known as segmentation and it involves assigning the data points in the data set to different groups or classes.
An example of a field in which segmentation is useful is the field of image processing. A typical imaged scene contains one or more objects and background, and it would be useful to be able to recognise reliably and automatically the different parts of the scene. Typically this may be done by segmenting the image on the basis of the different intensities or colours appearing in the image. Image segmentation is applicable in a wide variety of imaging applications such as security monitoring, photo interpretation, examination of industrial parts or assemblies, and medical imaging. In medical imaging, for instance, it is useful to be able to distinguish different types of tissue or organs or to distinguish abnormalities such as an aneurysm or tumour from normal tissue. Currently, particularly in medical imaging, segmentation involves considerable input from a clinician in an interactive method.
For example, there have been proposals for methods of demarcating an aneurysm in an image of vasculature. A brain aneurysm is a localised persistent dilation of the wall of a blood vessel. Visually, it appears that part of the vessel has ballooned out. If the ballooned vessel ruptures, the result is often the death of the patient. There are several possible treatments for an aneurysm, including surgery (clipping) or filling the aneurysm with coils. The type of treatment is dependent upon factors such as aneurysm volume, neck size and the location of the aneurysm in the brain. The methods proposed involve first identifying the aneurysm neck, then labelling all pixels on one side of the neck as forming the aneurysm, while pixels on the other side are identified as part of the adjoining vessel. Such techniques are described in R. van der Weide, K. Zuiderveld, W. Mali and M. Viergever, “CTA-based angle selection for diagnostic and interventional angiography of saccular intracranial aneurysms”, IEEE Transactions on Medical Imaging, Vol. 17, No. 5, pp831-841, 1998 and D. Wilson, D. Royston, J. Noble and J. Byrne, “Determining X-ray projections for coil treatments of intracranial aneurysms”, IEEE Transactions on Medical Imaging, Vol. 18, No. 10, pp973-980, 1999. However, these techniques also rely on manual intervention to start the segmentation.
Techniques of segmentation using region-splitting or region growing are well known, see for example: Rolf Adams and Leanne Bischof, “Seeded Region Growing”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 6, pp641-647, June, 1994. However, these techniques require that the number of regions into which the data set is to be segmented is known in advance. Thus the techniques are not generally applicable to fully automatic methods.
Segmentation techniques in which there is no initial assumption of the number of classes found in the data set are referred to as “unsupervised” segmentation techniques. An unsupervised segmentation algorithm has been proposed in Charles Kervrann and Fabrice Heitz, “A Markov Random Field model-based approach to unsupervised texture segmentation using local and global spatial statistics”, Technical Report No. 2062, INRIA, October, 1993. This utilises an augmented Markov Random Field, where an extra class label is defined for new regions, and a parameter is pre-set to define the probability assigned to this extra state. Any points in the data set which are modelled sufficiently badly (assigned a low probability by the existing classes) will be assigned to this new class. At each iteration of the algorithm, connected components of such points are collated into new classes.
However, typical problems with unsupervised techniques are under-segmentation (in which data points are added to inappropriate classes) and over-segmentation (in which the data is divided into too many classes).
One aspect of the present invention provides an unsupervised segmentation method which is generally applicable to multi-dimensional data sets. Thus, it allows for completely automatic segmentation of the data points into a plurality of classes, without any prior knowledge of the number of classes involved.
In more detail this aspect of the invention provides an unsupervised segmentation method for assigning multi-dimensional data points of a selected data set amongst a plurality of classes, the method comprising the steps of:
The probability calculations may comprise the steps of determining a probability distribution of a property of the data points in the initial class and determining a probability distribution of said property of the data points in the second class, and comparing the data point under test with the two probability distributions. The probability calculations may also comprise the step of multiplying the probability derived from the probability distribution by an a priori probability derived, for example, from the proportion of points in the neighbourhood belonging to the various classes.
The calculation of probability may be adapted as the method proceeds by recalculating the probability distributions as data points are assigned to the classes. The distributions will alter as the number of data points in the classes varies. This adaptation may take place every time a point is reassigned, or after a few points have been reassigned. The probability distributions may be calculated on the basis of histograms with bins of unequal width. The bin widths may be set by reference to the initial data set, e.g. to give a substantially equal number of counts in each bin.
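As an illustration, the prior-times-likelihood calculation described above might be sketched as follows. This is a minimal sketch in Python; the function names and the simple histogram likelihood are illustrative, not taken from the original text.

```python
# Illustrative sketch only: an unnormalised posterior for class membership,
# computed as (a priori neighbourhood proportion) x (histogram likelihood).

def prior_from_neighbourhood(neighbour_labels, cls):
    """A priori probability: proportion of neighbouring points already in `cls`."""
    if not neighbour_labels:
        return 0.0
    return sum(1 for lbl in neighbour_labels if lbl == cls) / len(neighbour_labels)

def likelihood_from_histogram(value, bin_edges, counts):
    """Probability of `value` under a histogram model of the class's property."""
    total = sum(counts)
    if total == 0:
        return 0.0
    for i in range(len(counts)):
        if bin_edges[i] <= value < bin_edges[i + 1]:
            return counts[i] / total
    return 0.0

def membership_score(value, neighbour_labels, cls, bin_edges, counts):
    """Unnormalised posterior: prior multiplied by histogram likelihood."""
    return (prior_from_neighbourhood(neighbour_labels, cls)
            * likelihood_from_histogram(value, bin_edges, counts))
```

In use, the score would be computed for each candidate class and the point assigned to the class with the highest score.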
Thus another aspect of the invention provides a method of histogram equalisation in which the bin sizes are set to give an initially substantially uniform number of counts in each bin. Thus the histogram sensitivity can be adapted to the specific application by an analysis of the entire data set.
In the segmentation method the classes continue to grow as more data points are assigned to them. Preferably the method continues until no more data points are added to the class, at which point another class may be defined and then grown by repeating the method steps.
The selection of the data point for initiating a class may be random, or it may be optimised, for example by ordering the remaining points based on the probability distribution.
Preferably classes are discarded (or “culled”) if they fail to grow, i.e. if they fail to have data points assigned to them when all necessary points have been tested. This is particularly useful in avoiding over-segmentation of the data set. Segmentation is concluded when all of the classes formed in turn on the basis of the data points remaining in the initial class have been discarded.
A predetermined neighbourhood of a data point d is an open set that contains at least the data point itself. One example is the open ball of radius r which contains all data points within a distance r of the data point d, though other shapes are possible and may be appropriate for different situations. In extreme cases, a neighbourhood may contain only the data point itself, or may contain the entire data set. The first and second predetermined neighbourhoods may be defined only on the spatial position of the data points, for instance in the application of the technique to an image where the aim is to segment the image into the different parts of the imaged object. However, in other data sets the neighbourhoods may be defined in a parameter space containing the data points.
Where the technique is applied to image segmentation, the data points may comprise a descriptor of at least a part of an object in the image and the spatial coordinates of that part. The descriptor may be representative of the shape, size, intensity (brightness), colour or any other detected property, of that part of the object.
Rather than taking the data points from the image itself, they may be taken from a spatial model fitted to the image, such as a 3-D mesh fitted to the image or its segmentation. This is particularly useful where the descriptor is a descriptor of the shape of the object.
The image may be a volumetric image or a non-invasive image, and for example may be an image in the medical field or industrial field (e.g. a part x-ray).
Another aspect of the invention provides a method of demarcating different parts of a structure in a representation of the structure, comprising the steps of calculating for each of a plurality of data points in the representation at least one shape descriptor of the structure at that point, and segmenting the representation on the basis of said at least one shape descriptor.
The representation may be an image of the structure, or may be a 3-D model of the structure (which could be derived by various imaging modalities). The results may be displayed in the form of a visual representation of the structure, with the parts distinguished, for instance by being shown in different colours.
The descriptor may comprise values representing cross-sectional size or shape of the structure at that point. The values may be lateral dimensions of the structure at that point, or a measure of the mean radius of rotation.
Another aspect of the invention provides a way of calculating a shape descriptor by defining a volume, e.g. a spherical volume, and changing the size of the volume, e.g. growing it, until a predefined proportion of it is filled by the structure.
The descriptors may be used to segment the representation automatically, for example using an unsupervised segmentation method such as the method in accordance with the first aspect of the invention.
The image may be a volumetric image or a non-invasive image, and for example may be an image in the medical field or industrial field (e.g. a part x-ray). In the medical field the method may be used to demarcate an aneurysm from vasculature, or to demarcate other protrusions.
The invention extends to a computer program comprising program code means for executing the methods on a suitably programmed computer. Further, the invention extends to a system and apparatus for processing and displaying data utilising the methods.
The invention will be further described by way of example, with reference to the accompanying drawings in which:
An embodiment of the invention applied to the shape based segmentation of an image of vasculature including an aneurysm and to the intensity based segmentation of a synthetic image will be described below. However, it will be appreciated that the segmentation technique is applicable to the segmentation of general data sets having data points in n-dimensions, where each data point has m numeric values. Thus it may be applied, for example, to intensity-based segmentation, for instance of ultrasound, MRI, CTA, 3-D angiography or colour/power Doppler data sets, to the segmentation of PC-MRA data where a scan provides information on the speed (intensity) and an estimated flow direction, and to unsupervised texture segmentation as well as object segmentation of parts based on geometry.
The segmented image can then be used to produce a 3-D model of the vessels and aneurysm. Given such a 3-D model, it is useful to demarcate the aneurysm, identifying where it connects to the major vessel. This allows the estimation of aneurysm volume and neck size and other geometry-related parameters, and hence aids the clinician to choose the appropriate treatment for a particular patient and possibly to use the information in the actual treatment (e.g. to select views of the aneurysm). In this embodiment the aneurysm is demarcated by first computing a triangular mesh over the 3-D model. Such a mesh can be computed using an established mesh method such as the marching cubes algorithm (see, for example, W. E. Lorensen and H. E. Cline, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm”, Computer Graphics, Vol. 21, No. 3, pp 163-169, July, 1987). An example of a 3-D model showing an aneurysm and the adjoining vessels, and its associated mesh is illustrated in
The aneurysm segmentation of step s3 will be carried out in this embodiment by computing and using a shape descriptor, i.e. a description of the shape of the vasculature at that point. Two methods for doing this will be described.
1) As a first example of a shape descriptor at each vertex in the triangular mesh, a local description of the vessel shape is computed in the form of two values representing the radius and diameter of the vessel at that point, as shown in
Using ri, the point pi is defined as an estimate of the vessel centre: pi=vi+ri·ni, where vi is the mesh vertex, ri the estimated vessel radius at that vertex, and ni the inward-pointing surface normal at vi.
The two directions of principal curvature on the mesh, that is the directions in which the curvature of the mesh at vi is a maximum and a minimum, can then be estimated. Denoting these directions as cmax and cmin, where the absolute value of the curvature along cmax is larger than that along cmin, a vector is extended from pi in the directions cmax and −cmax, measuring the distance in each direction to the vessel surface. Adding these two distances together gives an estimate of the vessel diameter di in a direction perpendicular to ni.
The two values (ri, di) form the shape descriptor which characterises the vessel at the point vi, and are computed for vertices of the mesh over the whole image or area of interest.
2) A problem with the method above is that error in the estimation of the surface normal could have a large effect on the ray that is extended through the vessel, and hence on the estimated value of diameter. An example of a shape measure which is more robust in the presence of noise will now be described with reference to
With this shape measure, only a single scalar value is computed for each point on the vessels. This will be an approximation of the mean radius of rotation of the vessel (i.e. the inverse of the mean curvature).
Thus, given a point p on the vessel, first estimate the normal vector n to the vessel, such that the normal is pointing inwards towards the centre of the vessel. There are several well-known methods to do this such as “Computer Graphics Using OpenGL”, F. S. Hill, Jr., Published by Prentice Hall, 2nd edition, 2001. Then define a spherical neighbourhood with radius r that is centred on the point p+rn, where r is some small scalar quantity. Note that, by definition, this spherical neighbourhood will include the point p on its boundary.
Now count the number of foreground voxels (i.e. vasculature and aneurysm) that lie in the neighbourhood and divide this by the total number of voxels in the neighbourhood. This ratio is an estimate of the proportion of the neighbourhood that lies within the vessel. Voxels that intersect the neighbourhood are considered to lie within the neighbourhood. However, excluding these voxels would have little effect upon the final results.
Then increase the size of the neighbourhood until it no longer lies within the vessel. Thus a sequence of neighbourhoods is defined, with increasingly larger values of r, each of which is centred on p+rn and each of which has a boundary that touches the point p. When the proportion of foreground voxels in the neighbourhood falls below some pre-defined threshold value, the method stops. In this implementation, 0.8 was used as the threshold value.
The radius of the final neighbourhood before exceeding the threshold is recorded, and taken to be indicative of the radius of the vessel. The process is then repeated at each point on the surface of the vessels.
In summary, at each surface point a spherical neighbourhood is grown until it has outgrown the vessel, and then the final radius is taken as indicative of the vessel radius.
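The sphere-growing procedure summarised above can be sketched as follows. This is an illustrative implementation on a voxel grid: the 0.8 threshold comes from the text, while the function names, the integer radius steps and the `is_foreground` callback are assumptions made for the sketch.

```python
import math

def foreground_fraction(is_foreground, centre, r):
    """Fraction of voxels inside the spherical neighbourhood of radius r
    centred at `centre` that are foreground (vessel/aneurysm)."""
    cx, cy, cz = centre
    rr = int(math.ceil(r))
    inside = fg = 0
    for x in range(int(cx) - rr, int(cx) + rr + 1):
        for y in range(int(cy) - rr, int(cy) + rr + 1):
            for z in range(int(cz) - rr, int(cz) + rr + 1):
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r:
                    inside += 1
                    if is_foreground(x, y, z):
                        fg += 1
    return fg / inside if inside else 0.0

def estimate_radius(is_foreground, p, n, threshold=0.8, r_max=30):
    """Grow a sphere tangent to the surface at p along the inward normal n;
    stop when the foreground fraction drops below the threshold (0.8 as in
    the text) and return the last radius that was still acceptable."""
    last_ok = 0
    for r in range(1, r_max + 1):
        centre = tuple(pi + r * ni for pi, ni in zip(p, n))
        if foreground_fraction(is_foreground, centre, r) < threshold:
            break
        last_ok = r
    return last_ok
```

For a solid spherical "vessel" of radius 10 voxels, starting from a surface point with the normal pointing inwards, the returned radius is close to the true radius of 10.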
The first shape measure above is very local in nature. Slight variations in the estimation of the surface normal could have a large effect on the estimates of diameter. The second shape measure is integral in nature. That is, the value computed is the result of a summation process of many voxels, making it less susceptible to noise in a small number of voxels.
In addition, the second shape measure is more robust when an aneurysm is somewhat ellipsoid in shape, rather than spherical. This is because the mean radius of curvature is estimated, rather than two estimates of the radius in perpendicular directions.
Recall that the neighbourhood size is increased until the proportion that lies within the aneurysm falls below some threshold value (0.8 in this implementation). If this threshold value is set to 1.0, then the process of increasing the size of the neighbourhood is terminated as soon as a boundary of the aneurysm is breached. With a threshold of 1.0, the estimated radius will be an estimate of the minimum radius. By choosing a smaller value for the threshold, some proportion of the neighbourhood is tolerated to lie outside of the aneurysm. For an aneurysm that is ellipsoid in nature (rather than spherical), this allows for a better estimate of the mean radius. Importantly, this means that a similar value will be computed at all points on the aneurysm. If the minimum radius is being estimated, then different values will be estimated at different points on the aneurysm.
It should be noted that it is not necessary to compute the shape descriptor at every vertex on the mesh (which typically has tens of thousands of vertices, probably at a much finer resolution than the image). Instead a subset can be taken, e.g. an arbitrary point for each voxel on the surface of the vessel (i.e. one which neighbours a background voxel). For example, the top, left-hand corner of each surface voxel could be used.
Whichever shape descriptor is used, the next task is to segment the data set to demarcate the aneurysm, i.e. to group together points that lie on the aneurysm and to distinguish these from points on the adjoining vessels. This will allow the aneurysm to be demarcated. Points lying along the single blood vessel will have similar values of shape descriptor. At the neck of the aneurysm, these values will change rapidly. Passing over the neck and onto the aneurysm itself, there will be a similarity in the values on the aneurysm.
Segmentation is achieved in this embodiment by using a region splitting algorithm. The algorithm separates the points on the triangular mesh into regions (sub-parts) that are similar. Each vessel should be identified as a sub-part, while the aneurysm will form a different sub-part.
Firstly, to illustrate the concepts used in the segmentation method it will be helpful to consider the simple set of points illustrated in
Taking the first of those probabilities, there are several ways of calculating it. One way is to set it as being directly proportional to the number of data points of each class within the radius rclassify. For example, referring to
The second term, based on the value of the property of interest of point d0 (such as intensity or shape descriptor) is, in this embodiment, obtained by comparing the value of the property for d0 to the distribution of such values in the three classes C1, C2, C3. This will be described below with reference to a specific intensity-based example illustrated in
The next step is to start or “seed” a new class. This is achieved by choosing a data point, defining a neighbourhood of radius rseed around it, and assigning all points within the neighbourhood to the new class C1. This is illustrated in
It was mentioned above that the smoothing is adaptive. In this embodiment this is achieved by making the variance of the Gaussian kernel function dependent upon the number of data points in the class. This greatly affects the probability distribution produced. When the histogram comprises only a small number of values, it is appropriate to use a large variance, resulting in heavy smoothing. If the histogram consists of a large number of values, it is more likely that the probability distribution accurately reflects the underlying distribution, and so a small variance is appropriate, resulting in less smoothing. The variance may be defined as a function of the number of data points in a class, such that as the number of data points in the class increases, the variance decreases. In this example, the variance is inversely proportional to the square of an affine function of the size of the class. Other functions are possible; for example, the variance may be inversely proportional to the natural logarithm of the number of data points in the class.
Note that functions other than a Gaussian can be used as the kernel function for the Parzen window estimate of the probability distribution. In this case, some property of the kernel function comparable to the Gaussian's variance will be adjusted as a class grows or shrinks.
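A minimal sketch of an adaptive Parzen-window estimate along these lines is given below. The Gaussian kernel and the inverse-square-of-an-affine-function variance rule follow the text; the constants a and b are purely illustrative assumptions.

```python
import math

def adaptive_variance(n_points, a=1.0, b=0.1):
    """Variance inversely proportional to the square of an affine function
    of the class size, as in the example in the text. The constants a and b
    are illustrative only."""
    return 1.0 / (a + b * n_points) ** 2

def parzen_density(x, samples):
    """Parzen-window estimate of the density at x, using a Gaussian kernel
    whose variance shrinks as the class accumulates more samples."""
    if not samples:
        return 0.0
    var = adaptive_variance(len(samples))
    norm = 1.0 / math.sqrt(2 * math.pi * var)
    return sum(norm * math.exp(-(x - s) ** 2 / (2 * var))
               for s in samples) / len(samples)
```

As intended, a small class is smoothed heavily (large variance) while a large class is modelled more sharply (small variance).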
The next step is to test data points near the class C1 to check whether they can be assigned to class C1 or not. In this embodiment all points dj are tested which lie within a radius rclassify of any point in the class C1. The testing involves selecting a point dj and computing the probabilities that this point belongs to class C0 or C1. For each class, this involves computing two values, which are multiplied together to compute the probability.
The first value is the a priori probability that dj belongs to each class. As mentioned above this probability is independent of the value of the property of interest. In this example it is taken as the proportion of points within a radius rseed of dj that are in the relevant class, as explained in relation to
The second value is computed by comparing the value of the property of interest (intensity or shape descriptor etc) with the probability distributions computed for the class. For classes C0 and C1 these probability distributions are shown in
Thus the class C1 grows with each point that is assigned to it. The testing is repeated recursively, choosing all points within a radius rclassify of each point added to class C1 and testing whether they should be reclassified to class C1. It should be noted that only points which are currently in class C0 are considered (in other words reclassified points are not subsequently reconsidered). It is important to note, though, that each time a point is reassigned, the probability distributions for the two classes are recalculated with a new variance for the Gaussian kernel set in accordance with the change in the number of points. Where there are a large number of data points such that the probability distribution does not vary much as a single point is reassigned, the recalculation of the probability distribution need not occur every time a point is reassigned, but after a preset number of points have been reassigned. This means that the probability distribution varies adaptively as the classification process proceeds.
The variance used when computing the probability that a point under test belongs to the initial class C0 will therefore increase as points are removed from the class, and the variance used to compute the probability that the point belongs to class C1 will decrease as that class grows. In this way, C1 will improve its model of the distribution of numeric values for the property of interest in the class, and this distribution will be removed gradually from the three distributions that together formed the distribution for class C0 illustrated in
The process of testing points for addition to class C1 is continued until no new points within a radius rclassify of the existing points in the class are added. This is the situation indicated in
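The seed-and-grow procedure described above can be sketched as follows. This is a simplified 2-D illustration: the `classify` callback stands in for the full prior-times-likelihood decision described in the text, and all names are illustrative.

```python
# Illustrative sketch of seeding a class in a small neighbourhood and then
# recursively testing nearby class-0 points for reassignment to it.

def grow_class(points, values, labels, seed_idx, r_seed, r_classify, classify):
    """points: list of (x, y); values: property of interest for each point;
    labels: 0 marks the initial class C0. Returns the set of indices
    assigned to the new class."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # Seed: all points within r_seed of the chosen point join the new class.
    new_class = {i for i, p in enumerate(points)
                 if dist2(p, points[seed_idx]) <= r_seed ** 2}
    frontier = set(new_class)
    while frontier:
        # Candidates: class-0 points within r_classify of a point just added.
        candidates = {j for i in frontier for j, p in enumerate(points)
                      if labels[j] == 0 and j not in new_class
                      and dist2(p, points[i]) <= r_classify ** 2}
        # Keep only candidates that the probability test assigns to the class.
        frontier = {j for j in candidates
                    if classify(values[j], [values[k] for k in new_class])}
        new_class |= frontier
    return new_class
```

On a toy data set of ten collinear points whose property values form two clusters, seeding at one end grows the class over the first cluster and stops at the boundary.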
Then the process is repeated by seeding a new class C2 on a point in class C0 and growing that class. Whilst growing the class C2, when testing whether to reassign some point dj from class C0 to class C2, it may be found that points from class C1 also lie within a neighbourhood of radius rclassify of dj. In this case, it is tested whether to assign data point dj to class C0, C1 or C2.
After this second class C2 has converged, the data will be classified into C0, C1 and C2 as shown in
Because this is an unsupervised algorithm, the process does not, of course, “know” that there are no more classes of points. Therefore the process will continue by seeding a new class C3 as shown in
In practice the algorithm continues to attempt to seed new classes on each of the points left in C0, but each new class will be culled. The final segmentation is shown in
It should be noted that the algorithm can be applied again within each of the classes C0, C1, C2 to check for segmentation within those classes. Thus each class is taken in turn, all its data points regarded as an initial class and a new class seeded within it, the method then proceeding as before.
The data set need not comprise all data points available (e.g. all pixels in the image or all points in the model). A subset of the data points may be selected to optimise the segmentation (e.g. by excluding obvious outliers). In addition, not all data points in a class may be used in the computation of the probability distribution. A subset of the data points may be selected (e.g. by excluding outliers according to some statistical test).
The algorithm therefore involves segmenting a data set by initially assigning all points to a single class and then randomly seeding and growing new classes. The probability distributions in the classes are adaptive and this, together with the culling of classes which do not grow, means that over-segmentation is avoided.
In the description above the histograms were computed in a fairly typical fashion by finding the minimum and maximum values to be included, and then separating the interval between these into equally sized bins. Each value will then be assigned to a bin, and the probability computed for a particular value will equal the number of points in that bin, divided by the total number of points in the histogram. This is illustrated in
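The equal-width histogram and the probability read from it might be sketched as follows (illustrative Python; the function names are assumptions):

```python
def equal_width_histogram(values, n_bins):
    """Equal-width bins between the minimum and maximum values, as described
    above. Returns (bin edges, counts)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against all-equal values
    counts = [0] * n_bins
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)  # maximum value -> last bin
        counts[i] += 1
    edges = [lo + i * width for i in range(n_bins + 1)]
    return edges, counts

def histogram_probability(value, edges, counts):
    """Probability of a value: count of its bin divided by the total count."""
    total = sum(counts)
    for i in range(len(counts)):
        if edges[i] <= value < edges[i + 1] or (
                i == len(counts) - 1 and value == edges[-1]):
            return counts[i] / total
    return 0.0
```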
This works well if there is a uniform prior probability of getting any particular numerical value. However, this is rarely the case in real applications.
Consider the example of a histogram of the radius of points on blood vessels. Imagine that the minimum sized vessel that can be detected has a radius of 1 mm, and that the largest vessel in the brain has a radius of 30 mm. This is quite a realistic value if the patient has a giant aneurysm. There will be many vessels with a radius in the range 3 mm-9 mm, but very few in the range 20 mm-30 mm.
The problem arises that when grouping the surface points on a vessel, if the radius changes from 6 mm to 9 mm, then this probably indicates that a new vessel has been reached. However, if in a large vessel the radius changes from 26 mm to 29 mm (again a difference of 3 mm), then this merely indicates variation in the vessel radius. The fundamental issue is that a small change in radius is important in the first instance, but not the second.
One solution is to try to normalise the change by dividing by the vessel radius, so as to measure a ratio of change in vessel diameter. However, this approach has a serious limitation.
In real data, there are likely to be few small vessels (in fact, there will be many small vessels, but the scan will detect very few of them because of its finite resolution, so for the purposes of processing the data that is scanned, there will be few small vessels) and few extremely large vessels, but many medium-sized vessels. Thus if vessel diameter changes from 1 mm to 2 mm or 25 mm to 30 mm, it is likely to be because of noise or natural variation. However, if vessel size changes from 10 mm to 13 mm, then this probably indicates a change of vessel. Simply normalising by dividing by vessel radius does not take this into account, and will result in an algorithm that is overly sensitive to variation in small vessels.
As an aside, mathematically the problem can be constructed as trying to define a metric space of ‘vessel radii’. This is a 1-D space, where each point is a possible vessel radius, and where the distance between two points in the space is indicative of how likely it is that the points lie on the same vessel. The metric for this space is non-linear. Two points with radii 26 mm and 29 mm would be considered very close in the metric space, but two points with radii 6 mm and 9 mm are not close (i.e. the difference likely indicates that they lie on different vessels). The earlier approach of dividing by the vessel radius was an attempt to make the metric linear by a simple process of normalisation. This does not work as it becomes overly sensitive to changes in small vessel radii. A further embodiment of the invention involves a solution to the problem of estimating the metric on this non-linear space, where the true metric is estimated from the data. It is assumed that, given the true metric for the space, the data would be uniformly spread over the space. Thus the metric can be estimated by examining the density of points under a linear metric, and warping the space so that these points are spread uniformly.
The method begins by computing the vessel radius at all surface points. A realistic histogram is shown in
This is then used to define a second histogram, where the bin sizes are not equal, but the data count in each bin is approximately equal. Let N be the total number of data points and let b be the number of bins desired for the histogram. The technique is to separate the histogram in
Examining the histogram of
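The equal-count binning step might be sketched as follows. This is a minimal illustration using order statistics; with many tied values adjacent edges can coincide, which a full implementation would need to handle.

```python
def equal_count_bin_edges(values, n_bins):
    """Compute bin edges so that each of the n_bins bins initially holds
    approximately N/n_bins of the data points (the histogram-equalisation
    step described in the text)."""
    vs = sorted(values)
    n = len(vs)
    edges = [vs[0]]
    for k in range(1, n_bins):
        edges.append(vs[(k * n) // n_bins])  # k-th quantile boundary
    edges.append(vs[-1])
    return edges
```

These edges are computed once from the whole data set as an initial stage, and the same unequal bins are then used for every histogram built during the segmentation.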
This method is applied to the segmentation technique above by performing the computation of these bin sizes as an initial stage of processing, performed before grouping the vessel surface points into different vessels. Thus the sequence of steps is as follows:
The important change is that when histograms are computed in the algorithm, it now uses the bins that were computed in Step 3 (shown in
As a side note, because of the way that the unequal histogram bins are computed, the initial histogram computed in Step 4i for G0 will have roughly an equal number of values in all bins. However, this will change once entries start being removed from G0 and assigned to groups G1, G2, G3, etc.
Thus this development adapts the sensitivity of the histogram to a specific application, from an initial analysis of the entire data set.
Incidentally, it is applicable to more than the immediate application above. It may be applied to the grouping of data representing scans of body parts other than the head. More generally, the data need not be medical in nature. For example, the points may indicate pixel coordinates in a satellite image, and the numerical value for each point indicate the intensity of that pixel. In this case, the grouping algorithm would separate the image into different objects. More generally still, this algorithm may be applied to any 2-D image in a similar way. It may also be applied to 3-D range data. In short, it is applicable in any application where there is a set of data points, provided that each point has some spatial location, and each point has a numeric value assigned to it. More generally, this histogram equalisation process may be coupled with other algorithms. That is, it need not only be applied in the context of the grouping algorithm proposed here. Instead, it may be used as part of any algorithm that requires the computation of a histogram.
Returning to applying the algorithms above to the problem of demarcation of an aneurysm, instead of intensity values, the shape descriptor is used. Thus, referring to
The method can, of course, be applied also to intensity-based segmentation, such as the segmentation of B-mode ultrasound follicle images where it has successfully demarcated regions indicating follicles. The method is also applicable to the segmentation of MRI, CTA, 3-D angiography and colour/power Doppler sets where blood can be distinguished from other tissue type by its intensity.
| Number | Date | Country | Kind |
|---|---|---|---|
| 0205000.3 | Mar 2002 | GB | national |
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/GB03/00891 | 3/4/2003 | WO |