The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
This disclosure describes the workflow for the analysis of multiple lesions or other objects of interest. This can be applied to a single exam case with different series (CT, PET, MR, SPECT, US) and multiple lesions, as well as to a multiple examination scenario with multiple series and multiple lesions.
The following acronyms are used:
The specific case of measuring CT/PET Tumor response to treatment over time will be herein described, but it should be noted that the core innovations have applications to different modalities and many areas. Therefore, the herein described CT/PET embodiment is meant to be illustrative and not limiting to the CT/PET modality(ies).
The graphical representation and display of lesion parameters may be used for diagnosing and staging disease and, more importantly, for evaluating response to therapy over time and triggering actions for the best treatment. These parameters are displayed in a graphical representation that makes changes easy to decipher in a multi-modality quantitative analysis setup. The user can interact with this graphical presentation and access the relevant modality image data along with its analysis results; that is, the user can select the analytical volume at any time point and the application will immediately display the image data corresponding to the analytical values, see
As illustrated in
The application is capable of automatically detecting lesions in multi-modality series and tagging each finding with a descriptive name. The lesion name and classification are used for the coregistration of lesions between time points and between multi-modality series.
Innovative aspects include:
Auto visualization display of therapy response parameters over time.
Auto detection of lesions in multiple series (CT, PET, MR, SPECT, US) over time.
Auto labeling of lesions in multiple series (CT, PET, MR, SPECT, US) over time.
Auto coregistration of lesions between multi-modality exams over time.
Automatic and manual linking/unlinking of lesions over time.
Interactive navigation through multi-modality imaging.
Automatic and manual contour definition of multi-modality lesions over time.
Automatic and manual volume definition of multi-modality lesions over time.
The auto visualization of different parameters is illustrated in
Multiple lesions' parameters can be displayed by selecting the corresponding finding of interest. As shown in
The graphs presented are generated from the analysis of multiple lesions retrieved from one or more series loaded into the application. There are two scenarios: one exam with multi-modality series corresponding to a single time stamp (first exam or baseline), or multiple exams with multi-modality series corresponding to multiple time stamps (follow-up exams). Combinations thereof are also possible.
From each exam series loaded into the application, a given set of parameters is obtained for each automatically detected or manually detected lesion.
Each lesion is properly labeled and coregistered between time stamps and between multi-modality series. By providing this lesion coregistration, each individual lesion parameter is calculated over time and displayed to illustrate progress in therapy response, disease progression, etc.
In order to explain the detailed workflow for obtaining the therapy parameters over time for multiple lesions, the innovative concepts will be described step by step for single-exam and multi-exam scenarios. PET/CT exams will be used to illustrate the application (the process may also apply to MR and U/S exams).
Single Exam Workflow:
Loading any CT series and PET series, with and/or without Attenuation Correction (AC).
Display multi-modality layouts with different views, configurable by the user.
Computer Aided Detection (CAD) and lesion auto-bookmarking capabilities in both sets of images (PET and CT). See
Full CAD:
CAD VOI:
CAD findings are generated by one of the following algorithms:
Automatic detection of normal anatomy in PET images (heart, liver, etc.) is provided and propagated to CT images. This provides the ability to separate normal anatomy from real lesions, as seen in
Smart Review of CT images is provided with automatic window-level selection based on body anatomy. While acquiring the images at the CT scanner, technicians divide the scout scan into body areas (the number of areas is definable as a user preference), i.e., brain, head and neck, lungs, liver, abdomen. The division is automatic, based on HU number in one embodiment, as shown in
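The anatomy-driven window-level selection described above can be sketched as a simple lookup. The preset (width, level) values below are common CT display defaults assumed here for illustration, and the area names and function are hypothetical, not part of the described system.

```python
# Typical CT display window presets as (window width, window level) in HU.
# These values are conventional defaults assumed for illustration; the
# described system derives its areas automatically from the scout scan.
WINDOW_PRESETS = {
    'brain':     (80, 40),
    'head_neck': (350, 60),
    'lungs':     (1500, -600),
    'liver':     (150, 60),
    'abdomen':   (400, 40),
}

def window_for_area(area):
    """Return (window width, window level) for an anatomical body area,
    falling back to a generic soft-tissue window for unknown areas."""
    return WINDOW_PRESETS.get(area, (400, 40))
```

A paging review could then call `window_for_area` once per detected body area to re-window the display automatically as the reader scrolls.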
Automatic segmentation and display of Volume contours for both Functional (PET) volume and Anatomical (CT)
Smart Review Paging through both PET & CT images with capabilities to accept, reject, add, and/or delete bookmarks is provided. Also provided is the ability to easily classify findings based on TNM Classification of Malignant Tumors, which significantly reduces the time to categorize lesions.
Automatically detected lesions are denoted based on technique, location, and time as follows, for example:
C1_P: Computer-defined lesion #1 with a PET contour defined.
C2_CT: Computer-defined lesion #2 with a CT contour defined.
U3_P_CT: User-defined lesion #3 with both PET and CT contours defined.
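A minimal sketch of the labeling convention above, assuming a simple helper function (the name `lesion_label` and its argument structure are illustrative, not part of the described software):

```python
def lesion_label(origin, index, contours):
    """Build a lesion tag such as C1_P or U3_P_CT.

    origin:   'C' for computer-defined, 'U' for user-defined
    index:    sequential lesion number
    contours: modalities with a contour defined, e.g. ('P',) or ('P', 'CT')
    """
    assert origin in ('C', 'U')
    return f"{origin}{index}_" + "_".join(contours)
```

For example, `lesion_label('C', 1, ('P',))` reproduces the first tag above, and `lesion_label('U', 3, ('P', 'CT'))` the third.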
Current quantitative analytic data is displayed in a useable format that offers quick comparisons to previous quantitative analytic data for informed patient management.
At least three Interactive modes of operation exist:
The user is able to navigate between Review mode and IDA mode if desired. IDA will display all available parameters from both the PET and the CT series. The display is user-definable and can include SUVmax, SUVmin, SUVmean, the cc volume, and the TLG. For CT only, the HU units can be displayed.
Multi Exams Workflow:
The specific case of measuring CT/PET Tumor response to treatment over time using two Exams (Time A and Time B) will be described, but it should be noted that the core innovations have applications to different modalities and multiple exams. See
Innovative aspects include:
Selection of multi-modality exams and loading of multiple series including CT, PET (NAC) and PET (AC) for Time A and Time B.
Automatic coregistration between Time A and Time B scans based on anatomical data and lung segmentation.
Display multi-modality layouts with different views, configurable by the user for multiple exams in time “Time A” and “Time B”. Time A is assumed to be the baseline exam analyzed by the Single Exam Workflow described above.
Bookmark propagation from Time A exam into Time B exam, and CAD with auto-bookmarking of new lesion in both PET and CT images:
Full CAD
CAD MVOI
Auto-matching capability between propagated bookmarks (from Time A) and any new findings in Time B with descriptive labeling assigned by the software to indicate sequential progress. Auto-matching can be based on SUVmax and/or centroid coordinates positioned within two voxels in either x, y, or z direction.
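The centroid-based auto-matching rule can be sketched as follows; the dictionary structures and function names are assumptions for illustration, and SUVmax-based matching is omitted from this sketch:

```python
def centroids_match(c_a, c_b, tol=2):
    """True when two lesion centroids lie within `tol` voxels of each
    other along every axis (x, y, z)."""
    return all(abs(a - b) <= tol for a, b in zip(c_a, c_b))

def match_bookmarks(propagated, new_findings, tol=2):
    """Pair each bookmark propagated from Time A with the first Time B
    finding whose centroid falls within the voxel tolerance.

    propagated:   {label: (x, y, z)} bookmarks from Time A
    new_findings: {finding_id: (x, y, z)} candidates in Time B
    """
    matches = {}
    unmatched = list(new_findings.items())
    for name, c_a in propagated.items():
        for i, (fid, c_b) in enumerate(unmatched):
            if centroids_match(c_a, c_b, tol):
                matches[name] = fid
                unmatched.pop(i)  # each Time B finding is matched at most once
                break
    return matches
```

A real implementation would also compare SUVmax, as described above, before declaring a match.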
Smart Review Paging through both PET and CT images with capabilities to accept, reject, add, and delete bookmarks in Time B is provided. The ability to easily classify findings based on TNM Classification of Malignant Tumors as in the single workflow is also provided.
Automatic lesion detection based on technique, location, and time:
A contouring tool capable of tracking changes in user defined contours in Time B is provided as seen in
Also provided is the ability to display quantitative analytic data from Time A and Time B in a useable format that offers quick comparisons between exams. See
Interactive Data Analysis (IDA) Management will be incorporated in the clinician reading workflow to be positioned, in one embodiment, between analysis image review and structured patient reporting as seen in the workflow illustrated in
Herein provided is the capability to support multiple data points in time (not limited to Time A and B), to provide an evaluation of best overall response, defined as the best response recorded from the start of treatment until disease progression or recurrence. A Baseline-reset tool will be provided in the case of non-responsiveness.
The IDA summarizes objective information retrieved from image analysis, including results from multiple time exams.
Graphical presentation of therapy response parameters over time is provided: SUV Max, SUV average, Total Lesion Glycolysis (TLG), TLG/TLGo, Tumor Volume (anatomical, functional), HU, lesion measurements (long, short axis), etc. See
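TLG is conventionally computed as SUVmean multiplied by the functional (metabolic) tumor volume; a minimal sketch of the TLG and TLG/TLGo parameters under that convention (the function names are illustrative, not part of the described software):

```python
def tlg(suv_mean, volume_cc):
    """Total Lesion Glycolysis: SUVmean times functional tumor volume (cc)."""
    return suv_mean * volume_cc

def tlg_ratio(suv_mean_t, vol_t, suv_mean_0, vol_0):
    """Normalized response parameter TLG/TLGo: follow-up TLG over
    baseline TLG. Values below 1 indicate a metabolic response."""
    return tlg(suv_mean_t, vol_t) / tlg(suv_mean_0, vol_0)
```

For example, a lesion whose SUVmean drops from 5 to 4 while its functional volume shrinks from 10 cc to 5 cc has TLG/TLGo = 20/50 = 0.4.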
Current exam Image Data, Radiation Therapy Structure Sets, and Quantitative Analytical Data can be archived for immediate retrieval at a later date.
As in the single exam workflow, three Interactive modes of operations exist:
Review Mode
Contour Mode
IDA Mode
An interactive patient report summarizes the analysis performed on lesions over time, including IDA measurements and image selection. The report may be designed using criteria as defined by the WHO (World Health Organization) or RECIST (Response Evaluation Criteria in Solid Tumors) for lesion selection.
The herein described methods and apparatus enable clinicians to efficiently review data collected in multiple studies from different modalities and to assess tumor response to therapeutic treatment. They support the simplification of response evaluation through the display of therapy parameters over time, image comparison, interactive multidimensional measurements, and consistent analysis criteria.
The herein described methods and apparatus provide effective evaluation of tumor response and objective tumor response rate, as a guide for the clinician and patient in decisions about continuation of current therapy.
The herein described methods and apparatus provide an effective workflow for image analysis with automatic coregistration, bookmark detection and propagation, efficient image review, and automatic multi-modality segmentation.
The herein described methods and apparatus combine the results of multi-modality image exams and their analysis to provide an effective evaluation of tumor response over time and of therapeutic treatment. Leveraging the use of VCAR, the clinician is able to efficiently analyze individual lesions and track their specific response to treatment and overall disease recurrence.
When conducting a follow up, at least two patient imaging exams are accessed for analysis. Exams may be from any imaging modality including: CT, PET, X-ray, MRI, Nuclear, and Ultrasound.
Coregister Exams Automatically
Exams from multiple time stamps are automatically coregistered to ensure correct propagation of bookmarks, automatic labeling of lesions and analysis of lesions over time.
Review Image Data
Image series are reviewed to accept or reject automatically selected lesions and manually add bookmarks.
Multiple view ports are available (axial, coronal, sagittal, MIPs) and multiple window levels for thorough reading.
Analyze Image Data
Each image exam is analyzed according to a specified protocol. Exams may be analyzed independently or in the context of other exams (e.g., auto-segmenting PET data from a CT scan). Analysis may be performed manually, semi-automatically, or fully automatically.
Interactive Data Analysis
Some or all of the analysis from accessed image exams will be fused together and presented through the IDA mode.
Analysis may be in the form of measurements (depicted graphically or in text). Displayed analysis may be acquired from a single exam, multiple exams, or a combination of exams.
Therapy Parameter Display
Therapy Parameter Display is the novel idea that allows clinicians to interact with quantitative patient information, providing the ability to view the data analysis in graphical layouts and to interact with analysis review as part of the reading and assessment workflow simultaneously.
The analyzed data will be displayed in a useable format that compares disease or lesion response to treatment, as described in the above examples.
Patient Report
Also provided is a multifunctional report of data analysis with interactive capability that will allow clinicians to efficiently navigate between the patient report and the analysis and review modes. This tool will allow users to summarize the review of individual lesions and present results in a systematic format for other clinicians.
Of course, the methods herein described are not limited to practice in any particular diagnostic imaging system and can be utilized in connection with many other types and variations of imaging systems. In one embodiment, a computer is programmed to perform functions described herein. As used herein, the term computer is not limited to just those integrated circuits referred to in the art as computers, but broadly refers to computers, processors, microcontrollers, microcomputers, programmable logic controllers, application specific integrated circuits, and other programmable circuits. Although the herein described methods are described in a human patient setting, it is contemplated that the benefits of the invention accrue to non-human imaging systems such as those systems typically employed in small animal research.
Computer-Aided Processing (CAD): As described in the introduction, the medical practitioner can derive information regarding a specific disease using the temporal data. Proposed herein is a computer-assisted algorithm with temporal analysis capabilities for the analysis of various medical conditions using diagnostic medical equipment. Computed tomography can be used as an example, as detailed below, as can temporal mammography mass analysis. The mass identification can be in the form of detection alone (e.g., for the presence or absence of suspicious candidate lesions) or in the form of diagnosis (e.g., for the classification of detected lesions as either benign or malignant masses). For simplicity, one embodiment will be explained in terms of a CAD system to diagnose benign or malignant breast masses.
The CAD system has several parts: data sources, optimal feature selection, classification, training, and display of results (
Data source: Data from a combination of one or more of the following sources can be used: image acquisition system information from a tomographic data source, and/or diagnostic image data sets.
Segmentation: In the data, a region of interest can be defined to calculate features. The region of interest can be defined in several ways: using the entire data as-is, and/or using a part of the data, such as a candidate mass region in a specific region. The segmentation of the region of interest can be performed either manually or automatically. Manual segmentation involves displaying the data and having a user delineate the region using a mouse or any other suitable interface. An automated segmentation algorithm can use prior knowledge, such as the shape and size of a mass, to automatically delineate the area of interest. A semi-automated method, which is a combination of the above two methods, may also be used.
Optimal feature extraction: The feature extraction process involves performing computations on the data sources. For example, on the image-based data, statistics of the region of interest such as shape, size, density, and curvature can be computed. For acquisition-based and patient-based data, the data themselves may serve as the features.
Classification: Once the features are computed, a pre-trained classification algorithm can be used to classify the regions of interest into benign or malignant masses (See
Training phase: Prior to classification of masses using the CAD system, prior knowledge from training is incorporated, in one embodiment. The training phase involves the computation of several candidate features on known samples of benign and malignant masses. A feature selection algorithm is then employed to sort through the candidate features, select only the useful ones, and remove those that provide no information or redundant information. This decision is based on classification results with different combinations of candidate features. The feature selection algorithm is also used to reduce the dimensionality from a practical standpoint (the computation time would be enormous if the number of features to compute were large). Thus, a feature set is derived that can optimally discriminate benign masses from malignant masses. This optimal feature set is extracted on the regions of interest in the CAD system. Optimal feature selection can be performed using a well-known distance measure, including the divergence measure, Bhattacharyya distance, Mahalanobis distance, etc.
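As a sketch of the feature selection step, the Bhattacharyya distance between Gaussian class models of each candidate feature can be used to rank features by how well they separate benign from malignant training samples. The data layout and function names here are assumptions; the described system may use any of the distance measures listed above.

```python
import math

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between two 1-D Gaussian class models
    with means m1, m2 and variances v1, v2."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

def rank_features(benign, malignant):
    """Rank candidate features by class separability (largest first).

    benign/malignant: {feature name: list of sample values} per class.
    """
    def stats(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, max(v, 1e-12)  # guard against zero variance

    scores = {}
    for name in benign:
        m1, v1 = stats(benign[name])
        m2, v2 = stats(malignant[name])
        scores[name] = bhattacharyya_gauss(m1, v1, m2, v2)
    return sorted(scores, key=scores.get, reverse=True)
```

A training procedure would keep the top-ranked features and discard the rest, reducing dimensionality as described above.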
Display of Results: The herein described methods and apparatus enable the use of tomography image data for review by human or machine observers. CAD techniques could operate on one or all of the data, and display the results on each kind of data or synthesize the results for display onto a single data set. This would provide the benefit of improving CAD performance by simplifying the segmentation process, while not increasing the quantity or type of data to be reviewed.
Following identification and classification of a suspicious candidate lesion, its location and characteristics must be displayed to the reviewer of the data. In certain CAD applications, this is done through the superposition of a marker (for example, an arrow or circle) near or around the suspicious lesion. In other cases, CAD affords the ability to display computer-detected (and possibly diagnosed) markers on any of the multiple data. In this way, the reviewer may view only a single data set upon which results from an array of CAD operations can be superimposed, each defined by a unique segmentation (ROI), feature extraction, and classification procedure, and each resulting in a unique marker style.
Temporal Processing: A general temporal processing has the following general modules: acquisition storage module, segmentation module, registration module, comparison module, and reporting module (
Acquisition Storage Module: This module contains acquired or synthesized images. For temporal change analysis, means are provided to retrieve the data from storage corresponding to an earlier time point. To simplify notation in the subsequent discussion, described are only two images to be compared, even though the general approach can be extended for any number of images in the acquisition and temporal sequence. Let S1 and S2 be the two images to be registered and compared.
Segmentation Module: This module provides automated or manual means for isolating regions of interest. In many cases of practical interest, the entire image can be the region of interest.
Registration Module: This module provides methods of registration. If the regions of interest for temporal change analysis are small, rigid-body registration transformations including translation, rotation, magnification, and shearing may be sufficient to register a pair of images from S1 and S2. However, if the regions of interest are large, including almost the entire image, warped elastic transformations usually have to be applied. One way to implement the warped registration is to use a multi-scale, multi-region, pyramidal approach. In this approach, a different cost function highlighting changes may be optimized at every scale. An image is resampled at a given scale and then divided into multiple regions. Separate shift vectors are calculated for the different regions, then interpolated to produce a smooth shift transformation, which is applied to warp the image. The image is resampled and the warped registration process is repeated at the next higher scale until the pre-determined final scale is reached. Other methods of registration can be substituted here as well. Some of the well-known techniques involve registering based on mutual information histograms. These methods are robust enough to register anatomic and functional images. For single-modality anatomic registration, the method described above is preferred, whereas for single-modality functional registration, the use of mutual information histograms is preferred.
Comparison Module: For mono-modality temporal processing, the prior art methods obtain a difference image D=S1−S2. In this disclosure, described are methods and apparatus for adaptive image comparison between two images S1 and S2. A simple adaptive method can be obtained using the following equation: D1a=(S1*S2)/(S2*S2+Φ), where the scalar constant Φ>0. In the degenerate case of Φ=0, which is not included here, the above equation becomes a straightforward division, S1/S2.
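The adaptive comparison equation can be sketched voxelwise as follows. The images are flattened to plain lists for illustration; the described system would operate on full image volumes.

```python
def adaptive_compare(s1, s2, phi=1.0):
    """Adaptive voxelwise comparison D1a = (S1*S2) / (S2*S2 + phi).

    With the scalar constant phi > 0, the ratio stays bounded where S2
    approaches zero, unlike the straightforward division S1/S2 obtained
    in the (excluded) degenerate case phi = 0.
    """
    assert phi > 0
    return [(a * b) / (b * b + phi) for a, b in zip(s1, s2)]
```

For identical inputs S1 = S2 = 2 with phi = 1, the result is 4/5 = 0.8 rather than the plain ratio 1, illustrating the damping effect of phi.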
Report Module: The report module provides the display and quantification capabilities for the user to visualize and/or quantify the results of temporal comparison. In practice, one would use all the available temporal image pairs for the analysis. The comparison results could be displayed in many ways, including textual reporting of quantitative comparisons; simultaneous overlaid display with current or previous images using a logical operator based on some pre-specified criterion; color look-up tables used to quantitatively display the comparison; or two-dimensional or three-dimensional cine loops used to display the progression of change from image to image. The resultant image can also be coupled with an automated or manual pattern recognition technique to perform further qualitative and/or quantitative analysis of the comparative results. The results of this further analysis could be displayed alone or in conjunction with the acquired images using any of the methods described above.
CAD-Temporal Analysis: In this section, one embodiment is described. It involves essentially combining the computer-aided processing module (CAD) with the temporal analysis. This is shown in
The data collected at times tn-1 and tn can be processed in different ways. The first method involves performing independent CAD operations on each of the data sets and performing the final analysis on the combined result following classification. A second method might involve merging the results prior to the classification step. A third method might involve merging the results prior to the feature identification step. A fourth method proposed herein involves a combination of the above methods. Additionally, the proposed method also includes a step to register the images to the same coordinate system. Optionally, image comparison results following registration of two data sets can also serve as an additional input to the feature selection step. Thus, the proposed method leverages temporal differences and feature commonalities to arrive at a more synergistic analysis of temporal data from the same modality or from different modalities.
Note that in
VCAR/VCAD/DCA Definition: VCAD is herein defined as those component algorithms that are used to detect features of interest, where this feature may be shape and/or parametric texture based. Whereas CAD is defined as those component algorithms that are used to formally classify detected features of interest into a class of predefined categories. Additional information related to DCA and ALA can be seen in the following co-pending U.S. patent application Ser. No. 10/709,355 filed Apr. 29, 2004, Ser. No. 10/961,245 filed Oct. 8, 2004, and Ser. No. 11/096,139 filed Mar. 31, 2005.
An innovative method is described to reduce the overlap of the disparate responses by using a priori anatomical information. For the illustrative example of the lung, the 3D responses are determined using either the method described in Sato, Y., et al., "Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images", Medical Image Analysis, Vol. 2, pp. 143-168, 1998, or Li, Q., Sone, S., and Doi, K., "Selective enhancement filters for nodules, vessels, and airway walls in two- and three-dimensional CT scans", Med. Phys., Vol. 30, No. 8, pp. 2040-2051, 2003, with an optimized implementation (as described in co-pending application Ser. No. 10/709,355), or a new formulation using local curvature at implicit isosurfaces. The new method, termed the curvature tensor, determines the local curvatures Kmin and Kmax in the null space of the gradient. The respective curvatures can be determined using the following formulation:
where k is the curvature and v is a vector in the null space N of the gradient of image data I, with H being its Hessian. The solutions to equation 1 are the eigenvalues of the following equation:
The responses of the curvature tensor (Kmin and Kmax) are segregated into spherical and cylindrical responses based on thresholds on Kmin, Kmax, and the ratio Kmin/Kmax, derived from the size and aspect ratio of the sphericalness and cylindricalness of interest; in one exemplary formulation, an aspect ratio of 2:1 and a minimum spherical diameter of 1 mm with a maximum of 20 mm are used. It should be noted that a different combination would result in a different shape response characteristic applicable to a different anatomical object. It should also be noted that a structure tensor could be used as well. The structure tensor is used to determine the principal directions of the local distribution of gradients. Strengths (Smin and Smax) along the principal directions can be calculated, and the ratio of Smin to Smax can be examined to segregate local regions into a spherical response or a cylindrical response, similar to using Kmin and Kmax above.
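One possible reading of the threshold-based segregation is sketched below. The threshold derivation assumes that for a sphere of diameter d both principal curvatures are approximately 2/d; the exact thresholding logic of the described method is not specified, so this is an assumption for illustration.

```python
def classify_response(k_min, k_max, diam_mm=(1.0, 20.0), aspect=2.0):
    """Segregate a local curvature-tensor response into 'spherical',
    'cylindrical', or 'none' from thresholds on Kmin, Kmax, Kmin/Kmax.

    diam_mm: (min, max) spherical diameter of interest in mm.
    aspect:  maximum aspect ratio still considered spherical (e.g. 2:1).
    """
    # Curvature bounds implied by the diameter range (k ~ 2/d for a sphere).
    k_lo, k_hi = 2.0 / diam_mm[1], 2.0 / diam_mm[0]
    if k_max <= 0:
        return 'none'
    ratio = k_min / k_max
    # Spherical: both curvatures in range and nearly isotropic.
    if k_lo <= k_min and k_max <= k_hi and ratio >= 1.0 / aspect:
        return 'spherical'
    # Cylindrical: curved in one direction, nearly flat in the other.
    if abs(k_min) < k_lo and k_lo <= k_max <= k_hi:
        return 'cylindrical'
    return 'none'
```

With the exemplary 1-20 mm range, a 10 mm sphere (both curvatures near 0.2/mm) classifies as spherical, while a vessel-like response (Kmin near zero) classifies as cylindrical.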
The disparate responses so established do have overlapping regions that can be termed false responses. The differing acquisition parameters and reconstruction algorithms, and their noise characteristics, are a major source of these false responses. One method of removing the false responses would be to tweak the threshold values to compensate for the differing acquisitions; this would involve creating a mapping of the thresholds to all possible acquisitions, which is an intractable problem. One solution lies in utilizing anatomical information, in the form of the scale of the responses on large vessels (cylindrical responses), together with the intentional biasing of a response towards spherical vs. cylindrical: morphological closing of the cylindrical response volume is used to cull any spherical responses that lie in the intersection of the "closed" cylindrical responses and the spherical response.
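The closing-based culling step can be sketched on voxel coordinate sets with a 6-connected structuring element. This is a simplification for illustration; a real implementation would operate on image volumes with a structuring element matched to the vessel scale.

```python
from itertools import product

# 6-connected neighborhood offsets (face-adjacent voxels).
NEIGH = [(dx, dy, dz) for dx, dy, dz in product((-1, 0, 1), repeat=3)
         if abs(dx) + abs(dy) + abs(dz) == 1]

def dilate(vox):
    """Morphological dilation of a set of (x, y, z) voxel coordinates."""
    out = set(vox)
    for v in vox:
        for d in NEIGH:
            out.add((v[0] + d[0], v[1] + d[1], v[2] + d[2]))
    return out

def erode(vox):
    """Morphological erosion: keep voxels whose whole neighborhood is set."""
    return {v for v in vox
            if all((v[0] + d[0], v[1] + d[1], v[2] + d[2]) in vox
                   for d in NEIGH)}

def cull_false_spheres(spherical, cylindrical):
    """Close the cylindrical response volume, then drop any spherical
    responses that fall inside the closed cylindrical volume."""
    closed = erode(dilate(cylindrical))  # morphological closing
    return spherical - closed
```

A spherical response sitting on a vessel (inside the closed cylindrical volume) is culled as a false response, while one well away from the vessel survives.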
As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural said elements or steps, unless such exclusion is explicitly recited. Furthermore, references to "one embodiment" of the present invention are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
Technical effects include allowing users to summarize the review of individual lesions and present results in a systematic format for other clinicians. Another technical effect is allowing clinicians to interact with quantitative patient information, providing the ability to view the data analysis in graphical layouts and to interact with analysis review as part of the reading and assessment workflow simultaneously.
Exemplary embodiments are described above in detail. The assemblies and methods are not limited to the specific embodiments described herein, but rather, components of each assembly and/or method may be utilized independently and separately from other components described herein.
While the invention has been described in terms of various specific embodiments, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the claims.
This application claims the benefit of U.S. provisional application Ser. No. 60/810,199 filed Jun. 1, 2006.
Number | Date | Country
--- | --- | ---
60810199 | Jun 2006 | US