The present invention relates to medical imaging and analysis. More particularly, the present invention relates to a method and apparatus for detecting abnormalities (lesions) in medical images. The present invention can be used to process medical images in order to detect potential lesions and/or to reject false positives when potential lesions have already been detected.
Medical imaging is generally recognised as key to better diagnosis and patient care. It has experienced explosive growth over the last few years due to imaging modalities such as x-ray, Computed Tomography (CT), ultrasound, Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). Conventionally, medical images have been inspected visually by highly-trained medical practitioners in order to identify anatomic structures of interest, such as lesions. However, the process can be tedious, time consuming and must be performed with great care.
Computed Tomography Colonography (CTC) is a form of medical imaging that has been used for the detection and screening of colonic cancers. The purpose of CTC is to locate and identify polyps on the colon wall by computationally processing CT scans. A polyp is an abnormal growth on a mucous membrane, and may be benign or malignant (cancerous). The effectiveness of CTC is hindered by the time taken to interpret the large amount of data that is generated by a CTC scan, variability among human experts and the complexity of the classification. A typical modern CTC scan produces around one thousand axial CT images (slices) for the supine and prone data sets. To address these problems, a number of Computer Aided Detection (CAD) schemes for detecting polyps with CTC have been proposed.
A conventional CAD system for polyp detection comprises two steps. In the first step, which is known as “candidate generation”, image processing techniques are used in order to identify initial polyp candidates (where a “polyp candidate” is the term used to describe an anatomical structure that might be a polyp, but which has not yet been confirmed to be a polyp). Generally, a large number of false positives are produced by the first step and significantly outweigh the number of true positives. The term “false positive” (FP) refers to a polyp candidate that is not actually a polyp, whereas the term “true positive” (TP) refers to a polyp candidate that can be confirmed to be a polyp. Hence, the second step of a CAD system involves the use of image processing techniques to reduce the number of FPs, preferably without also reducing the number of TPs. This second step is known as “false positive reduction”.
Typical approaches to CTC CAD can be classified as “shape-based” or “appearance-based”.
Shape-based methods calculate certain characteristics of the geometry of structures in the medical image, so as to detect structures having shapes that are commonly associated with lesions. Known shape-based methods typically rely on various “shape features” derived from first order differential geometric quantities (such as surface normal or gradient) or second order Hessian matrices (such as principal curvature, mean curvature, Gaussian curvature, or shape index). Such shape-based methods are useful in detecting spherical objects or objects with local spherical elements. However, in practice, lesions (such as polyps) are often abnormal growths that exhibit varying morphology, and many polyps cannot be adequately characterised using local differential geometric measures. Hence, shape-based methods may fail to detect lesions with sufficient reliability.
Appearance-based methods typically rely on non-geometric features derived from the image intensity, such as wavelet features. While potentially useful in detecting polyps of wider shape morphologies, appearance-based methods originate from research into face detection and so are not optimal for detecting lesions. Appearance-based methods may fail to detect lesions with sufficient reliability.
Thus, there is a need for improved methods for computer aided detection of lesions, abnormalities or other anatomical structures in medical images, particularly for use in CTC.
P. R. S. Mendonça et al., “Detection of Polyps via Shape and Appearance Modeling” (Proc. MICCAI 2008 Workshop: Computational and Visualization Challenges in the New Era of Virtual Colonoscopy, pp. 33-39) relates to a CAD system for the detection of colorectal polyps in CT. The CAD system is based on shape and appearance modelling of structures of the colon and rectum.
A first aspect of the invention provides a computer-implemented method of detecting an object in a three-dimensional medical image, comprising: determining the values of a plurality of features at each voxel in at least a portion of the medical image, wherein each feature characterises a respective property of the medical image at a particular voxel; calculating the likelihood probability distribution of each feature based on the values of the features and prior medical knowledge, wherein the prior medical knowledge comprises one or more parameter values derived from training data; generating a probability map by using Bayes' law to combine the likelihood probability distributions, wherein the probability map indicates the probability of each voxel in said at least a portion of the medical image containing an object to be detected; and analysing the probability map to detect an object.
The method allows different types of features, particularly appearance features and shape features, to be combined in order to detect objects in a medical image. The advantages of the Bayesian technique disclosed herein are as follows. Firstly, there is a large amount of uncertainty inherent in detection problems in medical imaging, which can best be accounted for through statistical techniques. Secondly, there is often useful medical knowledge (such as lesion density, size, shape, location, etc.) that can be utilised to constrain the solution of detection problems. This prior medical knowledge can be easily encoded into a Bayesian model. Finally, a Bayesian technique provides a unified framework to incorporate various—often disparate—features into a single statistical model. The probability distribution for each feature is preferably modelled using a parameterised functional form which is based on prior medical knowledge and a training step for the learning of function parameters.
To the inventors' knowledge, this is the first time that a learning-based Bayesian approach has been used in a CTC CAD system. By deriving the values of the parameters that are used to model the various features (such as the shape, appearance and/or anatomical features described below) from training data, the Bayesian framework disclosed herein provides robust and consistent performance. In particular, the Bayesian framework can have a high sensitivity for detecting objects, whilst generating relatively few false positives.
In contrast, the method disclosed by Mendonça et al. (ibid.) is not based upon training data. Instead, Mendonça et al. model polyps as simplistic geometric shapes. For example, polyps are modelled as ellipsoids. Such models have limited capability to model the complexity of actual polyps found in human anatomy, but do not require any training from example data. The method disclosed herein, however, can make use of more expressive shape models (such as a model based upon a second principal curvature flow, as described below) that are capable of modelling the variation in polyp shapes seen in practice. Also, in Mendonça et al.'s method, only medical knowledge is used in modelling the prior probability. In preferred examples of the method disclosed herein, however, the prior probability is constrained by spatial information. The method disclosed herein calculates the probability that each voxel contains an object to be detected and, therefore, an accurate probability map can be obtained by considering spatial information.
In addition, the flexibility of the Bayesian framework disclosed herein allows the inclusion of “anatomical features” that characterise a particular point in the three-dimensional medical image with respect to prior medical knowledge of the characteristics of the type of lesion that is to be detected. For example, the Bayesian framework can incorporate a model that takes advantage of the fact that colonic polyps appear more frequently in the lower extremities of the colon (i.e., rectum and sigmoid).
The values of the features may be determined at every voxel in the medical image or, more preferably, may be determined at just a portion of the image. The steps involved in determining the value of a feature will be dependent on the nature of the feature. For example, determining the value of an appearance feature may simply require an intensity value of a particular voxel to be read from the medical image data, whereas the values of shape features and anatomical features are typically determined by performing one or more calculations based upon the intensity value of the voxel (and, possibly, its neighbouring voxels).
The object to be detected is preferably an abnormal anatomical structure, such as a lesion. For example, the lesion may be a colonic polyp, lung nodule, mammographic mass, liver lesion or brain lesion.
Preferably, the training data comprises a plurality of three-dimensional medical images. Preferably, each three-dimensional medical image in the training data comprises a label indicating one or more voxels that contain the type of object (such as a lesion) that is to be detected. Preferably, the labels in the training data have been assigned by a clinician.
Preferably the parameter values are optimised to generate a predetermined number of false positives when the method is performed upon the training data. Thus, the sensitivity of the method can be increased without causing an unacceptably high number of false positives to be generated.
Preferably the features include one or more appearance features. Preferably the appearance features comprise image intensity information derived from the medical image. Preferably the appearance features comprise a wavelet feature derived from the medical image. Preferably the appearance features comprise a texture feature derived from the medical image.
Preferably the features include one or more shape features. Preferably the shape features include a first-order shape feature derived from the medical image. Preferably the first order shape feature is a first order differential geometric quantity. More preferably, the first order shape feature is based on surface normal overlap and/or gradient concentration. Preferably the shape features include one or more second-order shape features. Preferably at least one of the second-order shape features is determined from a Hessian matrix. Preferably said at least one of the second order shape features is calculated from the eigenvalues of the Hessian matrix. Preferably at least one of the second order shape features represents a volumetric shape index of the image at a particular voxel. Preferably at least one of the second order shape features represents a second principal curvature flow of the image at a particular voxel.
Preferably the method further comprises calculating a spatial prior probability. Preferably the step of generating a probability map further comprises using Bayes' law to combine the likelihood probability distributions and the spatial prior probability. Preferably the spatial prior probability comprises a spatial constraint. Preferably the spatial constraint is imposed by a Markov Random Field and a Gibbs Random Field.
Preferably calculating the likelihood probability distribution of a feature comprises calculating the value of a Gaussian function at a particular voxel, wherein the Gaussian function models the probability distribution of that feature. Preferably the Gaussian function models the probability distribution of a feature that characterises the intensity of the medical image, and wherein calculating the value of the Gaussian function includes: (i) treating a mean intensity in the Gaussian function as a fixed value; or (ii) calculating a mean intensity in the Gaussian function as a function of the size of a previously-detected object. Preferably calculating the mean intensity as a function of the size of a previously-detected object includes calculating an expression that is based upon the presence of the partial volume effect in the medical image.
Preferably the features include one or more anatomical features. Preferably determining the value of an anatomical feature involves calculating a colon centreline. Preferably determining the value of an anatomical feature involves calculating a colon boundary distance transform.
Preferably the step of analysing the probability map comprises thresholding on the probability map and labelling regions in the thresholded probability map.
Preferably the portion of the image is defined by a mask. The use of a mask can reduce the computational requirements of the method, by avoiding the need to perform the method upon all voxels in the image.
Preferably the detected object comprises a region within the image having an intensity and a size that are within an intensity range and a size range of a type of object being sought.
The method can be used in either or both of the two main steps of a computer aided detection algorithm, i.e. in the candidate generation step and/or in the false positive reduction step. When used in the false positive reduction step, preferably the step of analysing the probability map includes determining whether a previously-detected object is a false positive. The previously-detected object may have been detected using a candidate generation algorithm that is different from that disclosed herein. The experimental results demonstrate the method's high detection performance: it has a high sensitivity and generates few FP regions. In particular, the method disclosed herein is capable of detecting polyps of varying morphologies.
Preferably the method further comprises segmenting an image based upon the result of the step of analysing the image. Preferably the image is segmented to identify a region of the image that comprises a lesion.
A further aspect of the invention provides an apparatus operable to perform the method described herein. Preferably the apparatus comprises: means for determining the values of a plurality of features at each voxel in at least a portion of the medical image, wherein each feature characterises a respective property of the medical image at a particular voxel; means for calculating the likelihood probability distribution of each feature based on the values of the features and prior medical knowledge, wherein the prior medical knowledge comprises one or more parameter values derived from training data; means for generating a probability map by using Bayes' law to combine the likelihood probability distributions, wherein the probability map indicates the probability of each voxel in said at least a portion of the medical image containing an object to be detected; and means for analysing the probability map to detect an object.
A further aspect of the invention provides a computer-readable medium comprising instructions which, when executed by a suitable computer, cause the computer to perform the method described herein.
Further aspects of the invention provide a method or apparatus substantially as herein described with reference to and/or as illustrated in the accompanying drawings.
Preferred features of the invention will now be described, purely by way of example, with reference to the accompanying drawings in which:
The invention will now be described, purely by way of example, in the context of a method of identifying polyps in the colon. However, it is to be understood that the invention is not limited solely to the identification of colonic polyps. The invention can also be used to identify other anatomical features such as lung nodules, liver lesions, mammographic masses, brain lesions, any other suitable type of abnormal tissue or suitable types of healthy tissue.
The invention is directed to processing three-dimensional medical image data using a Bayesian framework. The term “Bayesian framework” as used herein refers to the use of Bayes' law to combine statistical information relating to a plurality of features that characterise properties of a medical image, in order to determine the probability that a particular voxel in the medical image represents a particular object. The three-dimensional medical image data can be generated by a computed tomography (CT) scan, or from a magnetic resonance imaging (MRI) scan, a positron emission tomography (PET) scan, an ultrasound scan or from an x-ray image. It will be appreciated that other suitable medical imaging methods can also be used.
For example, when the medical image data is generated by a CT scan, the three-dimensional (3D) medical image data comprises a series of CT image slices obtained from a CT scan of an area of a human or animal patient. Each slice is a two-dimensional digital greyscale image of the x-ray absorption of the scanned area. The properties of the slice depend on the CT scanner used. For example, a high-resolution multi-slice CT scanner may produce images with a resolution of 0.5-1.0 mm/pixel in the x and y directions (i.e. in the plane of the slice). Each pixel may have a 16-bit greyscale representation. The intensity value of each pixel may be expressed in Hounsfield units (HU). Sequential slices may be separated by a constant distance along the z direction (i.e. the scan separation axis); for example, by a distance of between 0.5-2.5 mm. Hence, the scan image formed by a plurality of slices may be a three-dimensional (3D) greyscale image, with an overall size depending on the area and number of slices scanned. Each pixel may then be considered to be a voxel (or volumetric pixel) in three-dimensional space.
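Purely by way of illustration, the following simplified Python sketch shows how such a stack of CT slices might be assembled into a three-dimensional array of voxels expressed in Hounsfield units. The use of the pydicom library, the file layout and the helper name load_ct_volume are assumptions made for the purpose of the example only and do not form part of the invention.

import numpy as np
import pydicom                      # assumed DICOM-reading library
from pathlib import Path

def load_ct_volume(slice_dir):
    """Stacks a directory of CT slices into a 3D array of voxel
    intensities in Hounsfield units and returns the voxel spacing in mm.
    File layout and attribute usage are illustrative only."""
    slices = [pydicom.dcmread(p) for p in Path(slice_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    # Stack the 2D greyscale slices along the scan separation (z) axis.
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])

    # Convert stored pixel values to Hounsfield units (HU).
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    volume = volume * slope + intercept

    # Voxel spacing: slice separation and in-plane resolution in mm.
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    dz = abs(float(slices[1].ImagePositionPatient[2]) -
             float(slices[0].ImagePositionPatient[2]))
    return volume, (dz, dy, dx)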
Overview of the Bayesian Framework
The 3D image data is considered to be a set of voxels, X, where X={xi, i=1, . . . , n}. X is a random variable that represents the spatial position of each voxel xi in the image data. Each voxel xi can be associated with a set of features, F, where F={Fj, j=1, . . . , m}. Each feature Fj characterises a particular property of the image data at a particular voxel, and each voxel is associated with one or more features Fj. For example, one feature might characterise the shape (i.e. the three-dimensional geometry) of the image at a particular voxel, whilst another feature might characterise the intensity of the image at that voxel. The aim of the arrangement disclosed herein is to assign a label to each individual voxel in the image, wherein the label identifies the particular type of tissue that is imaged in the voxel in question. Thus, it is assumed that there is a set of labels, Λ, where Λ={l0, . . . lK-1}. For example, label l0 could denote that the voxel represents normal mucosal tissue (such as a colonic fold), whilst label l1 could denote that the voxel represents abnormal tissue (such as a polyp). As will be discussed below, one of the labels in the set of labels Λ is assigned to each voxel based on a probability calculation using a Bayesian framework.
The features Fj can be any 2D or 3D feature calculated for each 3D voxel. In general, features that are used for lesion detection can be categorised into the following three sets:
Lesions exhibit different intensities and/or textures with respect to the surrounding healthy tissue. Appearance features aim to detect lesions by identifying differences in intensity and/or texture that distinguish lesions from surrounding healthy tissue. In the context of CTC CAD, appearance features take advantage of the fact that polyps typically exhibit a slightly elevated intensity and homogeneous textures relative to surrounding mucosal tissue.
Examples of shape features include first order differential geometric quantities (such as surface normal overlap [4] or gradient concentration [5, 6]) and second order features (such as principal curvature [7], mean curvature [7], Gaussian curvature [7] or shape index [6, 8]). Methods for calculating these particular examples of shape features are described in the references cited in the preceding sentence. It is noted that these are merely examples of shape features, and that the invention encompasses any other suitable shape feature.
In the context of CTC CAD, shape features take advantage of the fact that polyps tend to have rounded shapes or partially spherical portions, whereas colonic folds tend to have elongated shapes. Colonic folds are benign, but irregularly-shaped, regions in the colon wall. Care must be taken to avoid generating false positives by mistaking colonic folds for polyps.
For example, the first and second principal curvatures (i.e. k1(xi) and k2 (xi) as defined in Equations 11 and 12 below) both have a positive value for a polyp. In contrast, the first principal curvature for a colonic fold has a positive value, whilst the second principal curvature is close to zero for a colonic fold. Thus, the value of the second principal curvature can be used to detect polyps, whereas the first principal curvature cannot discriminate between polyps and folds. The mean curvature is the average of the first and second principal curvatures and so cannot discriminate between polyps and folds (although the first principal curvature and the mean curvature may be useful for detecting other types of anatomical structure). The Gaussian curvature is the product of the first and second principal curvatures, and so has a positive value for a polyp and a value close to zero for a fold. Thus, the value of the Gaussian curvature can be used to detect polyps.
For example, clinical observations may indicate that a particular type of lesion is more prevalent at a particular location in the anatomy. The probability that a particular voxel contains a particular type of lesion can, therefore, be characterised in terms of the distance of the voxel from the location in the anatomy at which that type of lesion is prevalent.
Examples of anatomical features that can be used in CTC CAD include:
The particular features Fj that are to be used are chosen in accordance with the properties of the particular type of anatomical structure that is to be detected. Thus, whilst the particular shape, appearance and anatomical features that are described in detail herein are particularly suitable for detecting colonic polyps, different features may be used to detect other types of lesion.
A Bayesian technique provides a generic framework to combine all of the features Fj into one statistical model. In a preferred example of a method of detecting colonic polyps, an intensity feature, I, a second-order shape feature, S, and an anatomical feature, L, are used. Hence, F1=I, F2=S and F3=L. In this example, the second-order shape feature F2 comprises a shape index feature and a second principal curvature flow feature (described in detail below).
Two labels are used in the preferred example of a method of detecting colonic polyps. Thus, the set of labels is Λ={l0, . . . , lK-1}, where K=2 and where l0 is a non-polyp label (i.e. l0 denotes that the voxel does not represent a polyp) and l1 is a polyp label (i.e. l1 denotes that the voxel does represent a polyp).
A family of random variables, R, is defined, where R={R1, . . . , Rn}. Ri takes a value ri∈Λ, where ri is the label for voxel xi. Thus, in the above-mentioned example, ri is a member of the set of labels Λ such that ri is equal to either l0 or l1, thereby labelling each voxel as either a polyp or a non-polyp. The symbol P(X|F) denotes the conditional probability of the random variable Ri taking the value ri=l1 at xi, namely, P(X|F)=P(ri=l1|F). Bayes' law can be used to calculate P(X|F) as:
The terms P(X|F) and P(F|X) are respectively called the posterior probability and likelihood. The term P(X) is called the prior probability of X, whilst P(F) is called the prior probability of F. The aim of the invention is to detect lesions by calculating the value of the posterior probability P(X|F) at each voxel in the three-dimensional medical image data, using values of the likelihood P(F|X) and prior probabilities P(X) and P(F) that are calculated based upon the medical image data and prior medical knowledge.
In the preferred example of a method for detecting colonic polyps, it is assumed that each feature F1, F2, F3 is conditionally independent, thereby allowing Equation 1 to be written as:
Given that F1=I, F2=S and F3=L, it follows that:
P(X|I,S,L)∝P(I|X)·P(S|X)·P(L|X)·P(X)
Equation 1 is used to model the probability of a polyp labelling existing at each voxel in the CT colonic volume. To put this another way, for each voxel, Equation 1 is used to calculate a value indicating the probability that the voxel represents a polyp.
The probability distribution of each feature in Equation 1 (i.e. P(F1|X), P(F2|X) and P(F3|X)) is modelled using a parameterised functional form based on prior medical knowledge and a training step. The way in which each term in Equation 1 is modelled will now be described.
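Purely by way of illustration, the following Python sketch shows how Equation 2 may be evaluated voxel-wise once the individual likelihood maps P(I|X), P(S|X) and P(L|X) and the spatial prior P(X) have been computed; the array and function names are illustrative assumptions only.

import numpy as np

def posterior_map(lik_intensity, lik_shape, lik_anatomy, prior):
    """Voxel-wise combination of the likelihood maps P(I|X), P(S|X) and
    P(L|X) with the prior P(X) according to Equation 2, under the
    conditional-independence assumption.  All inputs are 3D arrays of the
    same shape as the image.  P(F) merely scales the result, so the map
    is normalised to [0, 1] for subsequent thresholding."""
    unnormalised = lik_intensity * lik_shape * lik_anatomy * prior
    return unnormalised / (unnormalised.max() + 1e-12)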
Modelling the likelihood term, P(F|X)
In the Bayesian framework, based on Equation 2, the likelihood term P(F|X) indicates the joint density distributions of all features for class l1. The way in which each feature F1, F2, F3 is modelled is described in detail below under the subheadings “Intensity model”, “Shape model” and “Anatomical model” respectively.
(i) Intensity Model
The intensity model is based on a prior medical knowledge guided learning method. The parametric form of the intensity model is given by a Gaussian function as:
where μI and δI are the mean and standard deviation of the intensity of a lesion, respectively.
Equation 3 calculates the probability that a voxel xi represents a lesion based upon the intensity value I in the image data at voxel xi and predetermined values of μI and δI.
When the intensity model of Equation 3 is to be used for candidate generation, μI and δI have fixed values that are determined through a training step. The way in which these values are determined is discussed below, under the heading “Training”. Thus, the values of μI and δI are based upon prior medical knowledge of the intensity of lesions that have previously been detected.
When the intensity model of Equation 3 is to be used for false positive reduction, μI can be defined as a function of polyp size. More particularly, μI can be defined as a function of the radius r of the polyp, as described below.
It is well known that CT images exhibit the partial volume effect (PVE) due to limitations in scanning resolution. PVE occurs when a voxel contains a mixture of different materials (for example, when a voxel contains both tissue and air), such that the intensity value at that voxel is a function of the intensity value of each of the individual materials contained in that voxel. For tissues such as polyps that lie adjacent to air, the boundary of the polyp may appear darker than its central region as a result of the PVE. Since the PVE affects only the polyp's boundary, a larger polyp exhibits proportionally less PVE than a smaller polyp; conversely, a small polyp exhibits proportionally more PVE than a larger polyp. This information can be included in the intensity model by adjusting the mean μI to be a function of polyp size.
It is assumed that a polyp has a hemispherical shape and contains two parts: a core part (rc) and a PVE part (Δr), as illustrated in the schematic diagram of a polyp of
μI=f·mc+(1−f)·mp (4)
where f is the fraction of the core part's volume compared with the whole polyp's volume, namely,
Combining Equations 4 and 5 gives:
It is noted that when a polyp is very small there might be no core part, i.e. rc=0 and f=0, therefore, r=Δr. That is, a very small polyp only contains a PVE part, so the mean intensity μI is dominated by the mean intensity of the PVE part mP. This occurs because the size of a small polyp's interface with the surrounding air is comparable to (or even greater than) the size of the polyp itself, such that the PVE part mP is comparable to the core part mC. In contrast, when a polyp is very large (i.e. as r→∞) then f=1, so the mean intensity μI is dominated by the mean intensity of the core part mC.
When the intensity model of Equation 3 is to be used for false positive reduction, the radius r of the polyp can be determined by measuring the radius of the candidate region that was detected during the preceding step of candidate generation. The mean intensities of the core part mC and PVE part mP, and the thickness of the PVE part Δr, are determined from training data, as discussed below under the heading “Training”. Thus, the values of mC, mP and Δr are based upon prior medical knowledge of lesions that have previously been detected. These values of mC, mP, r and Δr can then be substituted into Equation 6 to determine the mean intensity μI for a lesion having the size of the region detected during the step of candidate generation. A value for δI can be determined based upon training data, as discussed below under the heading “Training”.
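By way of illustration, the following Python sketch evaluates the size-dependent mean intensity of Equation 6 and the Gaussian intensity likelihood of Equation 3. Because Equation 5 is not reproduced in this text, the sketch assumes the hemispherical core/PVE split described above, giving a core volume fraction f=((r−Δr)/r)³; the default numerical values of m_core, m_pve and delta_r are the training-derived examples quoted later in the description.

import numpy as np

def mean_intensity(r, m_core=140.0, m_pve=-175.0, delta_r=0.25):
    """Size-dependent mean intensity μI of Equation 6.  The default values
    of m_core, m_pve and delta_r are the training-derived examples quoted
    below; the volume fraction f assumes a hemispherical polyp whose core
    has radius r - delta_r."""
    r_core = max(r - delta_r, 0.0)
    f = (r_core / r) ** 3 if r > 0.0 else 0.0
    return f * m_core + (1.0 - f) * m_pve

def intensity_likelihood(intensity, mu_i, delta_i):
    """Gaussian intensity likelihood of Equation 3, evaluated element-wise
    over an array of voxel intensities expressed in Hounsfield units."""
    return (np.exp(-0.5 * ((intensity - mu_i) / delta_i) ** 2)
            / (np.sqrt(2.0 * np.pi) * delta_i))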
(ii) Shape Model
Most polyps are objects that protrude from the colon wall. In general, polyps have a round shape or contain local spherical elements, whilst colonic folds have an elongated shape. Operators from differential geometry, such as curvatures derived from the Hessian matrix or the structure tensor, have demonstrated the ability to discriminate between round and elongated structures in medical images.
In a preferred example of a method for detecting colonic polyps, two second-order shape features (F2=S) are computed: shape index (F2SI); and second principal curvature flow (F2k2). Both of these features are based upon principal curvatures calculated from the Hessian matrix. Each shape feature is modelled by a truncated Gaussian function, as described in more detail below.
The shape features can be incorporated into the Bayesian framework in different ways. In particular:
1) Polyps can be modelled through shape index (SI) only, such that:
P(F2|X)=P(F2SI|X) (7)
2) Polyps can be modelled through second principal curvature flow (k2) only, such that:
P(F2|X)=P(F2k2|X) (8)
3) Polyps can be modelled through both SI and k2, such that:
Other suitable features based on differential geometry can also be used in addition to, or instead of, the shape index feature and/or second principal curvature flow feature that are described here in detail.
The way in which shape index and second principal curvature flow are used to model polyps will now be described.
(a) Shape Index Model
The shape index provides a local shape feature at each voxel. Every distinct shape, except for a plane, corresponds to a unique shape index. Shape index values range from 1.0 to 0.0. For example, a shape index of 1.0 corresponds to a spherical cap (i.e. a convex hemisphere, such as the shape of a colonic polyp or lung nodule), a shape index of 0.75 corresponds to a cylindrical shape (such as the shape of a colonic fold or a blood vessel), a shape index of 0.5 corresponds to a saddle shape, and a shape index of 0.0 corresponds to a spherical cup (i.e. a concave hemisphere). The volumetric shape index directly characterises the topological shape of an iso-intensity surface in the vicinity of each voxel without explicitly extracting the iso-intensity surface.
The volumetric shape index at voxel xi is defined as:
where k1(xi) and k2(xi) are the maximum and minimum principal curvatures, which are defined as:
k1(xi)=H(xi)+√(H²(xi)−K(xi)) (11)
k2(xi)=H(xi)−√(H²(xi)−K(xi)) (12)
where K(xi) and H(xi) are the Gaussian and mean curvatures, respectively, and are defined as:
where E, F, G come from the first fundamental form of differential geometry, and L, N, M come from the second fundamental form of differential geometry as:
where I represents the intensity of the scan image,
and where x, y and z are mutually orthogonal directions in three-dimensional space.
The first and second order derivatives (Ix, Iy, Iz, Ixx, Ixy etc.) can be calculated from an explicit surface representation or can be calculated by numeric differentiation of intensity values in the medical image data. The present invention can be applied to derivatives calculated by either of these methods. Preferably, the first and second partial derivatives of the image are calculated at each voxel, and are substituted into Equations 10 to 22 to calculate the value of the shape index feature F2SI at each voxel. More specifically, Equation 10 is solved by: calculating the first and second order derivatives of the image (i.e. Ix, Iy, Iz, Ixx, Ixy etc.) at voxel xi; substituting the derivatives into Equations 15 to 22, to derive values of E, F, G, L, M and N; substituting the values of E, F, G, L, M and N into Equations 13 and 14 to derive values of H and K; substituting the values of H and K into Equations 11 and 12 to derive values of k1(xi) and k2(xi); and substituting the values of k1(xi) and k2(xi) into Equation 10 to derive a value for F2SI(xi).
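Purely by way of illustration, the following Python sketch computes the principal curvatures and the volumetric shape index from numeric derivatives of the image. Since Equations 10 and 13 to 22 are not reproduced in this text, the closed-form expressions used below for the Gaussian and mean curvatures of an iso-intensity surface are one standard formulation, and the final mapping of the shape index to the range [0, 1] is the common definition that is consistent with the value assignments described above; both are assumptions rather than a verbatim implementation of those equations.

import numpy as np

def principal_curvatures(volume, eps=1e-6):
    """Principal curvatures k1 >= k2 of the iso-intensity surfaces of a
    3D image, computed from first and second order derivatives obtained
    by numeric differentiation.  The expressions for the Gaussian
    curvature K and mean curvature H are one standard formulation; the
    sign of H (and hence of k1 and k2) may need to be flipped depending
    on the surface-normal convention, so that protruding polyps yield
    positive principal curvatures as described above."""
    I = volume.astype(np.float64)
    Ix, Iy, Iz = np.gradient(I)
    Ixx, Ixy, Ixz = np.gradient(Ix)
    _,   Iyy, Iyz = np.gradient(Iy)
    _,   _,   Izz = np.gradient(Iz)

    g2 = Ix**2 + Iy**2 + Iz**2 + eps            # squared gradient magnitude

    K = (Ix**2*(Iyy*Izz - Iyz**2) + Iy**2*(Ixx*Izz - Ixz**2) +
         Iz**2*(Ixx*Iyy - Ixy**2) +
         2*Ix*Iy*(Ixz*Iyz - Ixy*Izz) +
         2*Ix*Iz*(Ixy*Iyz - Iyy*Ixz) +
         2*Iy*Iz*(Ixy*Ixz - Ixx*Iyz)) / g2**2

    H = (Ix**2*(Iyy + Izz) + Iy**2*(Ixx + Izz) + Iz**2*(Ixx + Iyy) -
         2*(Ix*Iy*Ixy + Ix*Iz*Ixz + Iy*Iz*Iyz)) / (2.0 * g2**1.5)

    root = np.sqrt(np.maximum(H**2 - K, 0.0))   # cf. Equations 11 and 12
    return H + root, H - root, np.sqrt(g2)      # k1, k2, |grad I|

def shape_index(volume):
    """Volumetric shape index mapped to [0, 1]: 1.0 for a cap, 0.75 for a
    ridge, 0.5 for a saddle and 0.0 for a cup, consistent with the value
    assignments described above."""
    k1, k2, _ = principal_curvatures(volume)
    return 0.5 + (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)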
The shape index feature F2SI provides rich information that complements the image intensity feature F1. The advantage of defining a feature in terms of its shape index is that shape index can characterise any geometrical shape (except for a plane) as a single unique number. Furthermore, shape index is particularly responsive (i.e. has a value approaching 1.0) to shapes that are spherical and so is particularly well-suited to lesions that have a spherical or partly-spherical shape (such as colonic polyps and lung nodules). For example, given the fact that a polyp is generally either depicted as a hemisphere or has local spherical elements, whereas a fold is usually elongated, the volumetric shape index can be used to detect potential polyp candidates on the colon surface.
(b) Second Principal Curvature Flow Model
Polyps are raised objects protruding from the colon surface, which means that the first and second principal curvatures (i.e. k1(xi) and k2 (xi) as defined in Equations 11 and 12 above) have positive values. In contrast, colonic folds are elongated structures, bent only in one direction, and correspondingly this means that the first principal curvature has a positive value, whereas the second principal curvature is close to zero. Therefore, to detect polyps, a partial differential equation (PDE, or “flow”) based on the second principal curvature (F2k2) can be designed which affects only points with a positive second principal curvature and which affects those points in such a way that the second principal curvature decreases. Repeated application of the flow on an image will eventually deform the image such that the second principal curvature becomes less than or equal to zero over the entire image. In other words, the flow will cause the surface protrusion to be gradually removed, and the difference image (i.e. an image indicating the difference in intensity values between each voxel in the original image and the corresponding voxel in the resulting deformed image) indicates the protruding objects (such as potential polyps).
A PDE for updating the image intensity I at voxel xi can be defined as:
where k1(xi) and k2(xi) are the first and second principal curvatures respectively, |∇I| is the gradient magnitude of the input image intensity I, and g(·) is a curvature dependent function characterising the flow.
The aim is to have a small flow on colonic folds, while having a large response on protruding objects (such as polyps) so as to emphasise the protruding objects in the difference image. Therefore, a flow to remove protruding objects can be defined as:
Equation 24 can be solved by calculating the first and second order derivatives (Ix, Iy, Iz, Ixx, Ixy etc.) of an image from an explicit surface representation or by numeric differentiation of intensity values in the medical image data, and then substituting the resulting derivatives into Equations 12 to 22 to calculate the values of k2 and |∇I| for each voxel. More specifically, the value of k2 is calculated by solving Equation 12, which involves: calculating the first and second order derivatives of the image (i.e. Ix, Iy, Iz, Ixx, Ixy etc.) at voxel xi; substituting the derivatives into Equations 15 to 22, to derive values of E, F, G, L, M and N; substituting the values of E, F, G, L, M and N into Equations 13 and 14 to derive values of H and K; and substituting the values of H and K into Equation 12 to derive the value of k2(xi). The value of |∇I| is calculated by substituting the first order derivatives of the image (i.e. Ix, Iy and Iz) into Equation 22. The value of ∂I(xi)/∂t can then be calculated by substituting the values of k2(xi) and |∇I| into Equation 24.
The value of ∂I(xi)/∂t that is thereby calculated for a particular voxel is then added to the intensity I for that voxel. Since Equation 24 ensures that ∂I(xi)/∂t is always less than or equal to zero, adding the value of ∂I(xi)/∂t to the intensity of a voxel will result in the intensity decreasing towards zero if k2(xi)>0, or being unchanged if k2(xi)≤0. This process is then reiterated (i.e. the first and second order derivatives are re-calculated using the updated intensity values so as to calculate a new value of ∂I(xi)/∂t at each voxel, and the new value of ∂I(xi)/∂t is then added to the intensity value that was calculated during the previous iteration).
It is noted that k2 is close to zero on colonic folds, but has a larger positive value on protruding objects (such as polyps). Thus, based on Equation 24, during each iteration the image intensity is reduced only at the locations of protruding objects, by an amount proportional to the local second principal curvature k2. The iterative process runs until numerical convergence is achieved or, if convergence is not achieved, until a pre-set number of iterations has been performed. The result of the iterative process is a deformed image. A difference image D is then calculated by subtracting the intensity value at each voxel in the deformed image from the intensity value at the corresponding voxel in the original (input) image. The difference image D indicates the amount of protrusion. The difference image is then used as a feature in the Bayesian framework, i.e. F2k2=D.
The feature F2k2=D can be used to discriminate the polyps from colonic folds by calculating the amount of protrusion irrespective of the actual polyp shape.
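By way of illustration, the following Python sketch applies an iterative second principal curvature flow and returns the difference image D. The specific curvature-dependent function g(·) of Equation 24 is not reproduced in this text, so the sketch assumes the simple form ∂I/∂t=−max(k2,0)·|∇I|, which satisfies the stated properties of acting only where k2 is positive and reducing the intensity in proportion to k2; the step size, iteration count and convergence tolerance are likewise illustrative assumptions. The helper principal_curvatures() is the one defined in the shape index sketch above.

import numpy as np

def k2_flow_difference(volume, n_iter=50, dt=0.1, tol=1e-3):
    """Iteratively applies a second principal curvature flow and returns
    the difference image D (original minus deformed), which indicates the
    amount of protrusion at each voxel.  The update rule assumes the form
    dI/dt = -max(k2, 0) * |grad I|, since Equation 24 is not reproduced
    in the description."""
    deformed = volume.astype(np.float64).copy()
    for _ in range(n_iter):
        _, k2, grad_mag = principal_curvatures(deformed)
        dI = -np.maximum(k2, 0.0) * grad_mag      # always <= 0
        deformed += dt * dI
        if np.abs(dI).max() < tol:                # numerical convergence
            break
    return volume - deformed                      # difference image D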
The advantage of the second principal curvature flow feature is that it is a “multi-scale” detection algorithm, in the sense that it is capable of detecting potential lesions of a wide range of sizes. In contrast, the shape index feature is a “single scale” detection algorithm, in the sense that it can only detect potential lesions of the particular size that the parameters used to calculate the shape index are optimised to detect. However, the shape index feature is more descriptive of the shape of the lesion than the second principal curvature flow feature. When the second principal curvature flow feature is combined with the shape index feature by means of the Bayesian framework disclosed herein, the resulting combination is capable of detecting and characterising the shapes of a wide range of sizes of lesions.
(iii) Anatomical Model
As mentioned above, anatomical features (F3=L) are very useful for modelling the likelihood of a polyp appearing in different regions in the human anatomy. In polyp detection, two types of anatomical feature can be used:
(a) Distance based on the colon centreline (DC) (F3=F3C).
This feature, which will be referred to as the “DC feature”, is based on prior medical knowledge that polyps are typically more prevalent in the lower extremities of the colon (i.e., in the rectum and sigmoid) than further up the colon anatomy (towards the caecum).
To calculate the DC feature (F3C), a centreline of the colon is first extracted based on both the boundary distance transform and seed distance transform. These distance transforms are described in references [9] and [10]. For any point on the centreline, F3C is defined as the distance along the colon centreline between the point and the lower extremities of the colon. However, most points in the medical image will not be located exactly on the colon centreline, but will instead be offset from the centreline; for such points, F3C is defined as the distance along the centreline between the point on the centreline that is closest to the point in question and the lower extremities of the colon.
A truncated Gaussian function of the DC feature is used to model the probability of the polyp's location:
μDC and δDC are the mean and standard deviation, respectively, of the distance (measured along the colon centreline) of polyps in a training set from a common point in the lower extremity of the colon. The way in which values of μDC and δDC are determined from the training set is discussed below, under the heading “Training”. Thus, the values of μDC and δDC are based upon prior medical knowledge of the distance from the colon centreline of polyps that have previously been detected.
(b) Distance based on the colon mucosal surface (DS) (F3=F3S)
This feature, which will be referred to as the “DS feature”, is based on the observation that polyps are typically located on or near the mucosal surface. Voxels closer to the colonic surface have a higher probability of being labelled as polyp voxels, compared to voxels further away from the colonic surface.
The boundary distance transform is applied to the segmented colon (i.e. to an image of the colon for which a label has already been associated with each voxel during a candidate generation step) to calculate the DS feature (F3S). Similar to the DC feature, a truncated Gaussian function of the DS feature is used to model the polyp:
μDS and δDS are the mean and standard deviation, respectively, of the distance from the colonic surface of polyps in a training set. The way in which values of μDS and δDS are determined from the training set is discussed below, under the heading “Training”. Thus, the values of μDS and δDS are based upon prior medical knowledge of the distance from the colonic surface of polyps that have previously been detected.
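By way of illustration, the following Python sketch evaluates a truncated Gaussian likelihood for an anatomical distance feature such as the DC or DS feature. The precise truncation used in Equations 25 and 26 is not reproduced in this text; the sketch simply sets the likelihood to zero for negative (physically meaningless) distances, which is an assumption.

import numpy as np

def truncated_gaussian_likelihood(distance, mu, delta):
    """Truncated Gaussian likelihood of an anatomical distance feature
    (e.g. the DC or DS feature).  The exact truncation of Equations 25
    and 26 is not reproduced in the description; here the likelihood is
    simply set to zero for negative distances."""
    d = np.asarray(distance, dtype=np.float64)
    gauss = (np.exp(-0.5 * ((d - mu) / delta) ** 2)
             / (np.sqrt(2.0 * np.pi) * delta))
    return np.where(d >= 0.0, gauss, 0.0)

# Illustrative use with training-derived parameters (names hypothetical):
# P_DC = truncated_gaussian_likelihood(dc_map, mu_dc, delta_dc)
# P_DS = truncated_gaussian_likelihood(ds_map, mu_ds, delta_ds)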
Modelling the Prior Probability, P(X)
The prior probability P(X) plays an important role in Equation 1. The prior probability can be constrained by spatial information, which can be imposed by a Markov Random Field and Gibbs Random Field (MRF-GRF). The purpose of the MRF-GRF is to allow the Bayesian framework to take into account the neighbourhood of a particular voxel, where the term “neighbourhood” refers to voxels that are near to (and, more particularly, adjacent to) the voxel in question. Lesions have a finite size and tend to be represented by more than one voxel in the medical image data. Thus, if one voxel is labelled as being a polyp, there is a higher probability that each of its adjacent voxels should also be labelled as being a polyp.
The prior probability p(xi) at voxel xi is calculated as:
where N(i) is the neighbourhood of voxel i, and vc(xi) is a potential function associated with the clique c which is defined as:
vc(ri=l1)=−β·p(xi|F) (28)
and
vc(ri=l0)=−β·(1−p(xi|F)) (29)
The coefficient β controls the size of clustering, and a suitable value can be determined through trial and error.
From Equation 27, it can be seen that the prior probability of voxel i being a polyp voxel depends on its neighbourhood probability. If the neighbouring voxels are labelled as polyps, this voxel has a higher probability of being labelled as a polyp; otherwise if the neighbouring voxels are labelled as non-polyps, this voxel has a lower probability of being labelled as a polyp.
It is noted that substituting Equation 28 or 29's definitions for vc(xi) into Equation 27 leads to the prior probability, p(xi), being defined in terms of the posterior probability, p(xi|F). However, the posterior probability, p(xi|F), is unknown: indeed, the purpose of modelling the prior probability and the likelihood is to calculate a value for p(xi|F) by means of Equation 1 (or the re-written form of Equation 1 labelled as Equation 2). Thus, the definition for p(xi|F) given in Equation 2 is substituted into Equation 28 or Equation 29, and Equations 28 or 29 are in turn substituted into Equation 27, such that p(xi) is defined in terms of itself and the likelihood terms. Values for the likelihood terms, P(F1|X), P(F2|X) and P(F3|X), can be calculated in the manner explained above and, therefore, p(xi) can then be defined in terms of itself in Equation 27. Equation 27 is then solved iteratively in order to calculate a value for p(xi). Equation 27 can be solved iteratively using a known expectation-maximisation algorithm, or by using any other suitable algorithm.
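Purely by way of illustration, the following Python sketch shows one way in which this iterative calculation might be carried out. Since Equation 27 is not reproduced in this text, the sketch assumes the usual Gibbs form p(xi)=exp(−U1)/(exp(−U0)+exp(−U1)), where U1 and U0 sum the clique potentials of Equations 28 and 29 over a 3×3×3 neighbourhood; the neighbourhood definition (which here includes the centre voxel for simplicity), the value of β and the fixed iteration count are all illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def mrf_prior_posterior(likelihood_product, beta=0.5, n_iter=10):
    """Iterative estimation of the spatial prior P(X) and the posterior
    P(X|F).  likelihood_product is the voxel-wise product
    P(F1|X)*P(F2|X)*P(F3|X)."""
    posterior = likelihood_product / (likelihood_product.max() + 1e-12)
    prior = np.full_like(posterior, 0.5)
    for _ in range(n_iter):
        # Sum of the current posterior over the 3x3x3 neighbourhood.
        neigh = uniform_filter(posterior, size=3) * 27.0
        u1 = -beta * neigh              # clique potentials for label l1
        u0 = -beta * (27.0 - neigh)     # clique potentials for label l0
        prior = np.exp(-u1) / (np.exp(-u0) + np.exp(-u1))
        # Re-apply Bayes' law (Equation 2) with the updated spatial prior.
        posterior = likelihood_product * prior
        posterior /= posterior.max() + 1e-12
    return posterior, prior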
It is also noted that the only prior probability in Equation 2 that needs to be calculated is P(X), and there is no need to calculate the other prior probabilities in Equation 2 (i.e. P(F1), P(F2) and P(F3)). As noted previously, P(X|I,S,L)∝P(I|X)·P(S|X)·P(L|X)·P(X) and so the other prior terms merely affect the constant of proportionality, but do not affect the overall determination as to whether or not a voxel represents a lesion.
Training
Each feature distribution has been modelled in Equations 3 to 27 based on prior medical knowledge. The values of the parameters related to each model are estimated through a training step.
The input to the training step is a training set containing a plurality of 3D medical images in which lesions have been identified and labelled by a clinician. Preferably, the training set contains several hundred 3D scans of colons, containing several hundred polyps. The training set is used to optimise the values of parameters for the intensity model, shape model and anatomical model. An independent testing set may also be used to verify that appropriate parameters have been used and that the Bayesian framework operates satisfactorily.
Each model is associated with one or more probabilities for polyp detection. The parameter values for each model are chosen in order to provide a clinically acceptable cut-off in a receiver operating characteristic (ROC) curve. The parameters whose values can be derived from the training data include: intensity mean (μI) and standard deviation (δI) for the intensity model; shape mean (μSI and/or μk2) and standard deviation (δSI and/or δk2) for the shape model; and mean distance (μDC and/or μDS) and standard deviation of the distance (δDC and/or δDS) for the anatomical model. In this context, a ROC curve is a graph of sensitivity (i.e. the number of lesions detected) against the number of false positives that are detected in each medical image. Any of the models' parameters can be adjusted in order to increase the sensitivity of the model, but increasing the sensitivity also causes an increase in the number of false positives that are detected. Thus, there is a trade-off between sensitivity and the number of false positives: a clinician desires that the sensitivity of the model is sufficiently high to avoid true positives going undetected, but also has a conflicting desire to minimise the number of false positives so as to reduce the time spent manually reviewing the detected lesions. These conflicting desires are reconciled by adjusting the values of the models' parameters until a clinically acceptable number of false positives per scan is generated. For example, a clinically acceptable sensitivity may be achieved by adjusting the parameters' values until an average of five false positives per scan is generated when the Bayesian framework is used to detect lesions in the training set. The optimal values of the parameters can be found by trial-and-error or by an algorithm that adjusts the parameters' values in order to maximise the area under the ROC curve.
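Purely by way of illustration, the following Python sketch shows one possible way of selecting parameter values so as to maximise sensitivity subject to a clinically acceptable number of false positives per scan on the training set. The callables detect_lesions and overlaps are hypothetical placeholders for the detection pipeline and the lesion-matching criterion, and the exhaustive grid search is merely one of the trial-and-error strategies mentioned above.

import itertools

def choose_parameters(training_scans, ground_truth, param_grid,
                      detect_lesions, overlaps, max_fp_per_scan=5.0):
    """Illustrative grid search over candidate parameter settings.
    detect_lesions(scan, setting) and overlaps(detection, true_lesion)
    are hypothetical callables supplied by the surrounding CAD pipeline.
    A setting is accepted only if its average number of false positives
    per training scan does not exceed the clinically acceptable limit,
    and the accepted setting with the highest sensitivity is returned."""
    best_setting, best_sensitivity = None, -1.0
    for values in itertools.product(*param_grid.values()):
        setting = dict(zip(param_grid.keys(), values))
        tp = fp = total_lesions = 0
        for scan, truth in zip(training_scans, ground_truth):
            detections = detect_lesions(scan, setting)
            hits = [d for d in detections
                    if any(overlaps(d, t) for t in truth)]
            tp += len(hits)
            fp += len(detections) - len(hits)
            total_lesions += len(truth)
        sensitivity = tp / max(total_lesions, 1)
        if (fp / max(len(training_scans), 1) <= max_fp_per_scan
                and sensitivity > best_sensitivity):
            best_setting, best_sensitivity = setting, sensitivity
    return best_setting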
Based on the training set, examples of the optimal parameters for shape models (Equations 7 to 9) are: μSI=0.9, δSI=1.45, μk2=235.0, δk2=50.0.
To estimate the optimal mean intensity (μI) for the intensity model (Equation 6), polyps in the training set are divided into two groups based on size. Polyps in different groups have different intensity models, wherein the intensity model for the group of large polyps is given by the term mc in Equation 4, whilst the intensity model for the group of small polyps is given by the term mp in Equation 4. These intensity models allow estimation of the mean intensities of the core part mC and the PVE part mP.
For example, polyps in the inventors' training data were divided into two groups depending on whether the diameter of the polyp was greater or less than five millimetres, and values of Δr=0.25, mc=140, mp=−175 were calculated from the training data. These parameters provide optimal detection performance from the intensity model on the training data.
Candidate Generation
The Bayesian framework disclosed herein combines various features (for example, intensity, shape and anatomical features) into a single statistical expression (see Equation 2), which can be applied to detect lesions including colonic polyps in CTC volumes. The Bayesian framework can be used in either, or both, of the two main steps of a CAD system, i.e. initial candidate generation and/or false positive reduction.
The use of the Bayesian framework in a method of candidate generation will now be described. In this method, the Bayesian framework is used to identify structures in the medical image that might be polyps. This method can also be used to segment the image (i.e. to divide the image into different parts) and label each segment. For example, the different voxels within the image can be labelled as containing a polyp or as containing healthy tissue.
In step 310, a pre-processing algorithm is preferably applied to the three-dimensional medical image 302. The pre-processing algorithm allows the features F to be calculated more accurately. The pre-processing algorithm is preferably applied to the whole of the three-dimensional image 302 but, alternatively, could be applied only to the portion of the image 302 that the mask 304 indicates to be the region of interest. Any suitable pre-processing algorithm may be used. For example, the image data may be pre-processed using an algorithm that reduces noise in the image. In another example, a known smoothing algorithm can be used. The purpose of the smoothing algorithm is to smooth the spatial derivatives of the intensity values at each voxel in the 3D image, wherein a derivative is said to be “smooth” if it contains no discontinuities or points at which it is undefined.
In a preferred example, a single-scale Gaussian smoothing algorithm is used to remove noise. Single-scale Gaussian smoothing is described in reference [11]. Gaussian smoothing is advantageous because it is a particularly simple smoothing algorithm and, therefore, requires little computational effort. In other preferred examples, an isotropic diffusion algorithm or an anisotropic diffusion algorithm may be used. An anisotropic diffusion algorithm is particularly advantageous because it preserves the “edges” between high and low intensity regions.
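For example, a single-scale Gaussian smoothing of the three-dimensional image can be performed with a standard library call, as in the following Python sketch; the value of sigma (expressed in voxels) is an illustrative assumption only.

from scipy.ndimage import gaussian_filter

# Single-scale Gaussian smoothing of the 3D image prior to feature
# extraction.  'volume' is the 3D image array; sigma is given in voxels
# and its value here is illustrative only.
smoothed = gaussian_filter(volume.astype(float), sigma=1.0)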
In step 320, the value of each feature is determined for each voxel xi. Each voxel xi is within the portion of the image 302 that the mask 304 indicates to be the region of interest. In the example in which the method is used to detect colonic polyps, the values of the features are determined in the following manner:
In step 330, the likelihood probability distribution of each feature is calculated for each voxel xi. The likelihood probability distributions are calculated based on the values of the features calculated in step 320 and prior medical knowledge (such as training data). In the example in which the method is used to detect colonic polyps, the likelihood probability distributions of the features are determined in the following manner:
In step 340, a conditional probability map is generated. The conditional probability map is a three-dimensional array having a plurality of elements each corresponding to a respective voxel in the medical image, wherein each element of the array contains a value indicating the probability that the voxel to which it corresponds represents a polyp. The probability map is calculated by performing the following sub-steps (which are not individually illustrated in
In step 350, the probability map is analysed to identify candidates 352 (such as polyp candidates) in the three-dimensional image. In an example, step 350 comprises the following sub-steps, which are not individually illustrated in
The three-dimensional regions that remain after the small regions have been removed are the lesion candidates 352. The probability map may be analysed in step 350 using different sub-steps from those described above.
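By way of illustration, the sub-steps of thresholding the probability map, labelling connected three-dimensional regions and removing small regions may be implemented as in the following Python sketch; the probability threshold and the minimum region size are illustrative assumptions only.

import numpy as np
from scipy import ndimage

def extract_candidates(prob_map, prob_threshold=0.5, min_voxels=10):
    """Thresholds the probability map, labels connected 3D regions and
    discards regions smaller than min_voxels.  The threshold and minimum
    region size are illustrative values only."""
    binary = prob_map >= prob_threshold
    # Default 6-connectivity; pass structure=np.ones((3, 3, 3)) to
    # ndimage.label for 26-connectivity if required.
    labelled, n_regions = ndimage.label(binary)
    sizes = ndimage.sum(binary, labelled, index=range(1, n_regions + 1))
    keep = [i + 1 for i, size in enumerate(sizes) if size >= min_voxels]
    return np.where(np.isin(labelled, keep), labelled, 0)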
The method may comprise a further step (not shown in
Image Segmentation
The method for candidate generation described herein can also be used for image segmentation. The term “image segmentation” refers to a method of dividing an image into a number of different regions. It is useful to be able to segment a medical image into regions that contain lesions and regions that do not contain lesions. During the step of candidate generation, it is acceptable to detect only a few voxels within each region that contains a lesion. However, during image segmentation, it is necessary to detect substantially all of the voxels that contain a lesion, in order that the full extent of the lesion can be determined.
The method for candidate generation described herein is particularly suitable for image segmentation because the potential lesion candidates are calculated from the joint statistics of multi-features in the posterior probability (Equation 1), which provides rich information for proper lesion segmentation. The experimental results described under the heading “Experimental results” below (and illustrated in
False Positive Reduction
The Bayesian framework disclosed herein can also be used in a method of false positive reduction, in order to remove false positive regions from initial lesion candidates. Those initial candidates can be obtained from other methods for detecting lesion candidates (including known methods for detecting lesion candidates based on the shape of structures in a medical image) or they can be obtained using the Bayesian framework itself, as described above.
The method of false positive reduction is similar to that described above for candidate generation. However, instead of calculating a probability map within the whole colon mask as for candidate generation, in the method of false positive reduction the probability map is calculated within each region Rinii of the image that contains initial lesion candidates, where i=1 . . . N, and N is the total number of initial lesion candidates.
In step 410, a pre-processing algorithm is preferably applied to the three-dimensional medical image 302. This step is essentially the same as step 310 of the method for candidate generation, and so need not be described in detail.
In step 420, the value of each feature is determined for each voxel xi. Each voxel xi is within the region Rinii of the three-dimensional medical image 402 that contains the initial lesion candidates. The region Rinii is defined by a mask 404. In the example in which the method is used to detect colonic polyps, the values of the features are determined in the following manner:
In step 430, the likelihood probability distribution of each feature is calculated for each voxel xi. The likelihood probability distributions are calculated based on the values of the features calculated in step 420 and prior medical knowledge (such as training data). In the example in which the method is used to detect colonic polyps, the likelihood probability distributions of the features are determined in the following manner:
In step 440, a conditional probability map is generated for each voxel xi within the region Rinii. This step is essentially the same as step 340 of the method for candidate generation, and so need not be described in detail.
In step 450, the probability map is analysed to identify candidates 452 (such as polyp candidates) in the three-dimensional image. In an example, step 450 comprises the following sub-steps, which are not individually illustrated in
Hence, step 450 reduces false positives in the set of initial lesion candidates by discarding lesion candidates that have a low probability of being lesions. The probability map may be analysed in step 450 using different sub-steps from those described above.
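By way of illustration, the discarding of low-probability candidates may be implemented as in the following Python sketch. The use of the mean posterior probability over each candidate region as the acceptance criterion, and the threshold value, are assumptions made for the purpose of the example only.

import numpy as np
from scipy import ndimage

def reduce_false_positives(prob_map, candidate_labels, prob_threshold=0.5):
    """Retains only those initial candidate regions whose mean posterior
    probability reaches the (illustrative) threshold.  candidate_labels
    is a 3D array in which each initial candidate region carries a
    distinct integer label and 0 denotes background."""
    labels = [l for l in np.unique(candidate_labels) if l != 0]
    means = ndimage.mean(prob_map, labels=candidate_labels, index=labels)
    kept = [l for l, m in zip(labels, means) if m >= prob_threshold]
    return np.where(np.isin(candidate_labels, kept), candidate_labels, 0)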
The method may comprise a further step (not shown in
As mentioned above, when the Bayesian framework is used in a method of candidate generation, a comparatively large portion of the three-dimensional image (as defined by the mask 304, which typically includes an entire organ) is processed, which involves high memory usage. In contrast, when the Bayesian framework is used in a method of false positive reduction, only the comparatively small candidate regions of the three-dimensional image are processed, which requires less memory. Thus, the Bayesian framework is particularly advantageous when used in a method of false positive reduction. However, when the Bayesian framework is used only for false positive reduction, the overall sensitivity of lesion detection depends on the sensitivity of the initial candidate generation: the false positive reduction step can only remove false positive regions, so lesions missed by the initial candidate generation step cannot be recovered. In contrast, when the Bayesian framework is used for initial candidate generation, the overall sensitivity of lesion detection depends on the Bayesian candidate generation itself. This is one advantage of using the Bayesian framework for initial candidate generation.
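To give a sense of the scale of this difference, the following back-of-the-envelope calculation uses purely hypothetical figures (a 512 × 512 × 1000 volume, 50 candidate regions of 32³ voxels each, single-precision probability values); none of these numbers are taken from the present disclosure.

```python
import numpy as np

# Illustrative memory comparison (hypothetical dimensions and dtype).
VOXEL_BYTES = np.dtype(np.float32).itemsize        # 4 bytes per probability value

whole_organ_voxels = 512 * 512 * 1000               # e.g. a probability map over a whole-organ mask
candidate_voxels = 50 * 32 ** 3                     # e.g. 50 candidate regions of 32^3 voxels each

print(f"Whole-organ map: {whole_organ_voxels * VOXEL_BYTES / 1e9:.2f} GB")   # ~1.05 GB
print(f"Candidate maps:  {candidate_voxels * VOXEL_BYTES / 1e6:.2f} MB")     # ~6.55 MB
```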
It is noted that the intensity and anatomical models differ slightly depending on whether they are used for candidate generation or for false positive reduction. In the case of candidate generation, a fixed mean intensity μ_I is used in Equation 3 and the DS feature can be used in the anatomical modelling (Equation 26). In the case of false positive reduction, the mean intensity μ_I is a function of the region size (Equation 6) and the DC feature can be used in the anatomical modelling (Equation 25). However, the DC and DS features can be used in any appropriate combination. More particularly: the DC feature could be used during candidate generation; the DS feature could be used during false positive reduction; or both the DC and DS features could be used during candidate generation and/or false positive reduction.
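As a minimal sketch of how these mode-dependent choices could be recorded in an implementation, the following Python configuration pairs each stage with the default model variants described above. The class and field names are illustrative assumptions; the referenced Equations are those of the present disclosure and are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    """Model variants to use, following the combinations described above.

    intensity_mean: 'fixed' (Equation 3) or 'region_size' (Equation 6),
    selecting how the mean intensity mu_I is obtained.
    anatomical_features: any combination of 'DS' (Equation 26) and 'DC' (Equation 25).
    """
    intensity_mean: str
    anatomical_features: tuple

# Default pairings described in the text; other combinations are permitted.
CANDIDATE_GENERATION = ModelConfig(intensity_mean='fixed', anatomical_features=('DS',))
FALSE_POSITIVE_REDUCTION = ModelConfig(intensity_mean='region_size', anatomical_features=('DC',))
```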
Apparatus
An example of the apparatus used to implement the invention will now be described with reference to the accompanying drawings.
The computers described herein may be computer systems 600 as shown in those drawings.
Computer system 600 includes one or more processors, such as processor 604. Processor 604 may be any type of processor, including but not limited to a special purpose or a general-purpose digital signal processor. Processor 604 is connected to a communication infrastructure 606 (for example, a bus or network). Computer system 600 also includes a main memory 608, preferably random access memory (RAM), and may also include a secondary memory 610. Secondary memory 610 may include, for example, a hard disk drive 612 and/or a removable storage drive 614, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. Removable storage drive 614 reads from and/or writes to a removable storage unit 618 in a well-known manner. Removable storage unit 618 represents a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by removable storage drive 614. As will be appreciated, removable storage unit 618 includes a computer readable storage medium having stored therein computer software and/or data.
In alternative implementations, secondary memory 610 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 600. Such means may include, for example, a removable storage unit 622 and an interface 620. Examples of such means may include a program cartridge and cartridge interface (such as that previously found in video game devices), a removable memory chip (such as an EPROM, or PROM, or flash memory) and associated socket, and other removable storage units 622 and interfaces 620 which allow software and data to be transferred from removable storage unit 622 to computer system 600. Alternatively, the program may be executed and/or the data accessed from the removable storage unit 622, using the processor 604 of the computer system 600.
Computer system 600 may also include a communication interface 624. Communication interface 624 allows software and data to be transferred between computer system 600 and external devices. Examples of communication interface 624 may include a modem, a network interface (such as an Ethernet card), a communication port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, etc. Software and data transferred via communication interface 624 are in the form of signals 628, which may be electronic, electromagnetic, optical, or other signals capable of being received by communication interface 624. These signals 628 are provided to communication interface 624 via a communication path 626. Communication path 626 carries signals 628 and may be implemented using wire or cable, fibre optics, a phone line, a wireless link, a cellular phone link, a radio frequency link, or any other suitable communication channel. For instance, communication path 626 may be implemented using a combination of channels.
The terms “computer program medium” and “computer readable medium” are used generally to refer to media such as removable storage drive 614, a hard disk installed in hard disk drive 612, and signals 628. These computer program products are means for providing software to computer system 600. However, these terms may also include signals (such as electrical, optical or electromagnetic signals) that embody the computer program disclosed herein.
Computer programs (also called computer control logic) are stored in main memory 608 and/or secondary memory 610. Computer programs may also be received via communication interface 624. Such computer programs, when executed, enable computer system 600 to implement the present invention as discussed herein. Accordingly, such computer programs represent controllers of computer system 600. Where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 614, hard disk drive 612, or communication interface 624, to provide some examples.
In alternative embodiments, the invention can be implemented as control logic in hardware, firmware, or software or any combination thereof.
Experimental Results
The Bayesian framework disclosed herein has been evaluated on CT colon images. Experiments have been conducted to demonstrate the effectiveness of the algorithm applied to both the initial candidate generation step and the false positive reduction step.
In the first experiment, the Bayesian framework was used in a method of initial polyp candidate detection, as described above with reference to the corresponding flow chart.
In the second experiment, the Bayesian framework was used in a method of false positive reduction, as described above with reference to the corresponding flow chart.
The Bayesian framework may be used to detect other types of lesions in addition to colonic polyps including, but not limited to, lung nodules, liver lesions, mammographic masses and brain lesions. Features appropriate to the type of lesion to be detected can be used instead of the particular intensity, shape and anatomical features that are described in detail above. In particular, the anatomical feature may be replaced or supplemented by a different model that reflects prior medical knowledge of the type of lesion to be detected, such as prior medical knowledge regarding the probability of a lung nodule or brain lesion being located in a particular region. Whilst the use of an anatomical feature that is based upon the location of a lesion (such as the distance from the rectum or submucosal surface) is appropriate to detecting colonic polyps, other types of feature that are not based upon the location of a lesion may be used. For example, anatomical features may include features that are based on the size, shape or density of the particular type of lesion that is to be detected.
It will be understood that the invention has been described above purely by way of example, and that modifications of detail can be made within the scope of the invention.
References
The following publications, which may assist the reader in gaining an appreciation of certain examples disclosed herein, are herein incorporated by reference in their entirety:
G. Slabaugh, Xiaoyun Yang, Xujiong Ye, Richard Boyes and Gareth Beddoe, "A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions," Algorithms, vol. 3, pp. 21-43, 2010.
P. Viola and Michael J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, 57(2), pp. 137-154, 2004.
Robert M. Haralick, K. Shanmugam and I. Dinstein, "Textural Features for Image Classification," IEEE Transactions on Systems, Man, and Cybernetics, 3(6), 1973.
David S. Paik, Christopher F. Beaulieu, Geoffrey D. Rubin, Burak Acar, R. Brooke Jeffrey, Jr., Judy Yee, Joyoni Dey and Sandy Napel, "Surface Normal Overlap: A Computer-Aided Detection Algorithm With Application to Colonic Polyps and Lung Nodules in Helical CT," IEEE Transactions on Medical Imaging, 23(6), 2004.
Hidefumi Kobatake and Masayuki Murakami, "Adaptive Filter to Detect Rounded Convex Regions: Iris Filter," International Conference on Pattern Recognition, 1996.
H. Yoshida and J. Nappi, "Three-Dimensional Computer-Aided Diagnosis Scheme for Detection of Colonic Polyps," IEEE Transactions on Medical Imaging, 20(12), 2001.
Yong Zhou and Arthur W. Toga, "Efficient Skeletonization of Volumetric Objects," IEEE Transactions on Visualization and Computer Graphics, vol. 5, no. 3, pp. 197-209, 1999.
Paulo R. S. Mendonca et al., "Detection of Polyps via Shape and Appearance Modeling," Proc. MICCAI 2008 Workshop: Computational and Visualization Challenges in the New Era of Virtual Colonoscopy, pp. 33-39.
Ye et al., "Shape-Based CT Lung Nodule Segmentation Using Five-Dimensional Mean Shift Clustering and MEM with Shape Information," IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Jun. 2009, pp. 482-485.
Ye et al., "Shape-Based Computer-Aided Detection of Lung Nodules in Thoracic CT Images," IEEE Transactions on Biomedical Engineering, vol. 56, no. 7, Jul. 2009, pp. 1810-1820.
Ye et al., "A Bayesian Approach for False Positive Reduction in CTC CAD," Proceedings of MICCAI 2010 Workshop: Virtual Colonoscopy & Abdominal Imaging, Sep. 2010, pp. 49-54.