The present invention relates to medical imaging of a patient, and more particularly, to automatic classification of a contrast phase in computed tomography (CT) and magnetic resonance (MR) images.
In order to enhance the visibility of various anatomic structures and blood vessels in medical images, a contrast agent is often injected into a patient. Medical images of the patient can be obtained using various imaging modalities, such as CT or MR. However, the injection of the contrast agent is not typically tied to the image acquisition device used to obtain the medical images. Accordingly, medical images typically do not contain contrast phase information indicating the elapsed time between the contrast injection and the image acquisition.
In clinical routines, contrast phase information is typically added manually to image metadata (e.g., in a DICOM header) by a technician at the image scanner. For example, a verbal description is typically added to the series description or image comments DICOM fields. However, this information is not structured or standardized, and is usually only understandable by a human reader. Medical images are typically automatically stored with a timestamp representing the image acquisition time. Based on the image acquisition times of the images, the relative time delay between multiple scans can be determined automatically, but not the delay after the start of contrast injection. In order to effectively pre-process a medical image, it is crucial to determine the contrast phase of the image (i.e., when the image was obtained relative to the contrast injection). Accordingly, fully automatic identification of the contrast phase of an image is desirable.
The present invention provides a method and system for automatic classification of a contrast phase of a medical image. Embodiments of the present invention utilize a trained classifier to classify the contrast phase of a medical image into one of a predetermined set of phases. Embodiments of the present invention can classify the contrast phase of an image from a single image or from multiple images at different phases.
In one embodiment of the present invention, a plurality of anatomic landmarks are detected in a 3D medical image. A local volume of interest is estimated at each of the plurality of anatomic landmarks, and features are extracted from each local volume of interest. The contrast phase of the 3D volume is determined based on the extracted features using a trained classifier.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
The present invention is directed to a method and system for automatic classification of a contrast phase in medical images, such as computed tomography (CT) and magnetic resonance (MR) images. As used herein, the “contrast phase” of an image is an indication of when the image was acquired relative to a contrast injection. Embodiments of the present invention are described herein to give a visual understanding of the contrast phase classification method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
At step 104, a plurality of anatomic landmarks are detected in the medical image. The detected anatomic landmarks can include target landmarks and reference landmarks. Target landmarks are anatomic landmarks in crucial contrast enhancing regions. For example, the detected target landmarks can include various blood vessels (i.e., arteries and veins) that show contrast at various times after the contrast injection and various organs that light up with the contrast agent at specific contrast phases. Reference landmarks are landmarks in non-enhancing regions which are used to provide reference values for comparison with the target landmarks.
The plurality of detected landmarks can be detected in the 3D medical image using an automatic landmark and organ detection method. For example, a method for detecting anatomic landmarks and organs in a 3D volume is described in United States Published Patent Application No. 2010/0080434, which is incorporated herein by reference. Using the method described in United States Published Patent Application No. 2010/0080434, the anatomic landmarks may be detected as follows. One or more predetermined slices of the 3D medical image can be detected. The plurality of anatomic landmarks (e.g., representing various vessels) and organ centers can then be detected in the 3D medical image using trained landmark and organ center detectors connected in a discriminative anatomical network, each detected in a portion of the 3D medical image constrained by at least one of the detected slices.
As described above, various target landmarks in crucial contrast-enhancing regions are detected. According to a possible implementation, the target landmarks in the head and neck region (
In addition to the above described target landmarks located in crucial contrast-enhancing locations, reference landmarks can be detected in non-enhancing regions such as bone structures and fat regions. Instead of only relying on feature values of the vessel (target) landmarks, the classification method can additionally utilize non-enhancing landmark regions by considering differences or ratios between the two classes of landmarks. This may be particularly useful for MR images, where the absolute image intensities may vary depending on slight changes in the acquisition conditions and the different protocols used.
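The difference and ratio computation between target and reference landmarks can be sketched as follows (a minimal illustration, not from the source; the function name and example intensity values are hypothetical):

```python
import numpy as np

def relative_features(target_intensity, reference_intensities):
    """Compute difference and ratio features between a target landmark
    intensity and the mean intensity over a set of reference landmarks
    (e.g., several bone or fat positions)."""
    ref_mean = float(np.mean(reference_intensities))
    diff = target_intensity - ref_mean
    ratio = target_intensity / ref_mean if ref_mean != 0 else 0.0
    return diff, ratio

# Hypothetical example: an aortic landmark compared against bone references
diff, ratio = relative_features(310.0, [200.0, 190.0, 210.0])
```

Because the difference and ratio are invariant to certain global intensity shifts and scalings, respectively, they are less sensitive to protocol-dependent intensity variation than absolute values.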
As described above, the landmarks can be detected using the method described in United States Published Patent Application No. 2010/0080434. Based on the anatomic regions contained in the medical image, only a partial subset of detected landmarks may be returned. Although a specific set of landmarks is described above, it is to be understood that the present invention is not limited thereto.
Returning to
Returning to
At step 110, the contrast phase of the medical image is determined based on the extracted features using a trained classifier. According to an embodiment of the present invention, a multi-class machine-learning based classification algorithm can be used to estimate the contrast phase of the medical image from the features extracted at the detected landmark positions. A classifier is trained using features extracted from training data and the trained classifier is used to classify the medical image as one of a set of predetermined contrast phases. For example, for abdominal scans, typical phases xi to be estimated are:
The above list of contrast phases covers typical routine cases, but is not intended to limit the present invention. The contrast phases may be adapted to specific clinical settings by adding or removing phases; for example, in renal diagnostics, corticomedullary and nephrographic phases may be acquired. The ground truth phase information for each image data set used for training the classifier is provided by a clinical expert. Embodiments of the present invention can classify the contrast phase of an image from a single image or from multiple images at different phases.
Single Phase Classification. In the case of phase classification using a single 3D medical image, a multi-class Probabilistic Boosting Tree (PBT) framework can be used to estimate the contrast phase label xi from a given single-phase image zi. Accordingly, a multi-class PBT classifier is trained to estimate the contrast phase label xi based on the features extracted in VOIs surrounding the detected landmarks in the given single-phase image zi. In particular, the features fk input to the trained classifier can include the feature values (mean intensity, local gradient, etc.) extracted at each target landmark position, as well as the ratios and differences between each target landmark intensity and the reference landmark intensities. Reference landmark intensities may be calculated as the mean over several landmarks in the same structure, such as several positions of bone or several positions of fat. The use of these relative intensity feature values makes the system more robust against global intensity changes of images caused by different imaging conditions, especially in MR images. It is to be understood that the classifier is trained using the same types of features extracted from training data for which the contrast phase is known.
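The assembly of the classifier input from per-landmark VOI features can be sketched as follows (an illustrative sketch under the stated assumptions; the function names, cubic VOI size, and gradient-magnitude feature are assumed details, not the source's exact implementation):

```python
import numpy as np

def landmark_features(volume, landmark, voi_size=5):
    """Extract simple intensity features (mean intensity, mean gradient
    magnitude) from a cubic volume of interest centered at a landmark."""
    x, y, z = landmark
    h = voi_size // 2
    voi = volume[x - h:x + h + 1, y - h:y + h + 1, z - h:z + h + 1].astype(float)
    mean_intensity = float(voi.mean())
    grad = np.gradient(voi)  # list of 3 partial-derivative arrays
    grad_mag = float(np.mean(np.sqrt(sum(g ** 2 for g in grad))))
    return mean_intensity, grad_mag

def feature_vector(volume, targets, references):
    """Assemble the classifier input: per-target feature values plus
    differences and ratios against the mean reference intensity."""
    ref_mean = np.mean([landmark_features(volume, r)[0] for r in references])
    feats = []
    for t in targets:
        m, g = landmark_features(volume, t)
        feats += [m, g, m - ref_mean, m / ref_mean]
    return np.array(feats)
```

The resulting vector would then be passed to the trained multi-class classifier as the features fk.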
The PBT classifier utilizes a set of weak classifiers corresponding to the set of features fk to classify the contrast phase of the medical image. According to an advantageous implementation, multi-class response binning of the feature values can be used. During training, for each feature fk, a joint response histogram over all class labels is calculated using a bin width Δfk.
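The response binning described above can be sketched as follows (an illustrative sketch; the histogram layout and the smoothing constant in the per-bin posterior are assumptions, not the source's implementation):

```python
import numpy as np

def response_histogram(feature_values, class_labels, n_classes, bin_width):
    """Build a joint response histogram over all class labels for one
    feature fk: counts[bin, label] of training samples whose feature
    value falls into that bin of width bin_width."""
    bins = np.floor(np.asarray(feature_values) / bin_width).astype(int)
    bins -= bins.min()  # shift so bin indices start at 0
    hist = np.zeros((bins.max() + 1, n_classes))
    for b, c in zip(bins, class_labels):
        hist[b, c] += 1
    return hist

def weak_posterior(hist):
    """Per-bin class posterior used by a weak classifier: normalize each
    bin's counts over the labels (small epsilon guards empty bins)."""
    return (hist + 1e-9) / (hist.sum(axis=1, keepdims=True) + 1e-9 * hist.shape[1])
```

At test time, a weak classifier would look up the bin of the observed feature value and return that bin's class posterior.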
Multi-Phase Scans. In the case in which multi-phase scans (i.e., multiple 3D medical images of a patient taken sequentially at multiple contrast phases) are available and need to be classified, the classifier can be enhanced by using a Markov model to exploit the temporal relationship between the different phases. The time differences between different contrast phases are typically reproducible and therefore can add additional robustness to the classifier, as compared with relying only on independent phase-by-phase classifications for a set of multi-phase images.
A Markov network (undirected graph) can be used to model the relationship between phases.
where μij and σij denote the mean and standard deviation of the time differences Δtij learned from the training set of multi-phase images with given contrast labels xi and xj. The joint probability function of the observed images z and the corresponding contrast phase labels x can be expressed as:
Here, Z denotes a normalization constant such that p(x,z) yields a probability function.
At the inference stage (during classification of the multi-phase images), the goal is to estimate the most probable set of class labels x for a given set of multi-phase images z. This is given by the maximum a posteriori probability:
This inference problem can be solved efficiently using well-known methods, such as Belief Propagation.
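Because sequentially acquired scans form a chain, the MAP inference described above can be sketched with max-product (Viterbi-style) message passing (an illustrative sketch assuming a chain-structured Markov network, Gaussian pairwise potentials on the acquisition-time differences with learned means μij and standard deviations σij, and the single-phase classifier outputs as unary potentials; all names are hypothetical):

```python
import numpy as np

def gaussian_pairwise(dt, mu, sigma):
    """Pairwise compatibility of two phase labels: Gaussian likelihood of
    the observed acquisition-time difference dt under learned (mu, sigma),
    given as (n_phases, n_phases) matrices over label pairs."""
    return np.exp(-0.5 * ((dt - mu) / sigma) ** 2)

def map_labels(unaries, dts, mu, sigma):
    """Viterbi-style max-product inference on a chain of scans.
    unaries: (n_scans, n_phases) classifier probabilities p(x_i | z_i)
    dts:     acquisition-time differences between consecutive scans
    Returns the most probable label sequence."""
    n, k = unaries.shape
    logm = np.log(unaries[0] + 1e-12)  # running max-marginal in log space
    back = []                          # backpointers per transition
    for i in range(1, n):
        pair = np.log(gaussian_pairwise(dts[i - 1], mu, sigma) + 1e-12)
        scores = logm[:, None] + pair + np.log(unaries[i] + 1e-12)[None, :]
        back.append(scores.argmax(axis=0))
        logm = scores.max(axis=0)
    labels = [int(logm.argmax())]
    for bp in reversed(back):  # trace backpointers to recover the sequence
        labels.append(int(bp[labels[-1]]))
    return labels[::-1]
```

On a chain this is exact and equivalent to max-product Belief Propagation; for more general undirected graphs a full BP implementation would be used instead.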
As described above, the method of
In clinical applications, the following regions of the body are scanned most frequently: head/neck, thorax, abdomen, thorax/abdomen combined, head/neck/thorax/abdomen combined, runoffs, and whole body. According to embodiments of the present invention, separate contrast phase classifiers can be trained for each of the above body region combinations. Furthermore, the list of body regions may vary depending on clinical site specific requirements. The landmark detection method used in step 104 and described in United States Published Patent Application No. 2010/0080434, is capable of first determining the body region contained in the image data and then detecting a corresponding subset of landmarks. This ensures that the contrast phase classifier does not suffer from missing inputs.
Since in some cases the scan range of an image may differ from frequently used scan ranges, not all landmarks may be observed in each scan. However, the trained classifier may still expect the input features from all landmarks. Accordingly, the missing features may be imputed by modeling the relationship between missing features and observed features using a linear regression model. The missing features are replaced with the imputed values, and an updated linear regression model is estimated. This process can be applied iteratively until the feature values converge. After the missing feature values have been estimated, the classification method described above can proceed.
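The iterative imputation described above can be sketched as follows (an illustrative sketch; the initialization with column means and the least-squares regression update are assumed details, not the source's exact procedure):

```python
import numpy as np

def impute_missing(X, missing_mask, n_iter=20):
    """Iteratively impute missing feature values. Each column with missing
    entries is repeatedly re-estimated by least-squares regression on the
    remaining columns, starting from the observed column means.
    X: (n_samples, n_features); missing_mask: boolean, True where missing."""
    X = X.astype(float).copy()
    # initialize missing entries with the means of the observed entries
    col_means = np.nanmean(np.where(missing_mask, np.nan, X), axis=0)
    X[missing_mask] = np.take(col_means, np.where(missing_mask)[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            rows = missing_mask[:, j]
            if not rows.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.hstack([others, np.ones((len(X), 1))])  # add intercept
            # fit on rows where column j is observed, predict the rest
            coef, *_ = np.linalg.lstsq(A[~rows], X[~rows, j], rcond=None)
            X[rows, j] = A[rows] @ coef
    return X
```

The completed feature matrix can then be fed to the trained contrast phase classifier as if all landmarks had been observed.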
The above-described methods for phase classification of medical images may be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high level block diagram of such a computer is illustrated in
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 61/222,254, filed Jul. 1, 2009, the disclosure of which is herein incorporated by reference.