The present disclosure is generally related to systems and methods for the generation and display of three-dimensional visualizations of the pancreas for computer-aided diagnosis and treatment planning.
Pancreatic cancer (PC) is among the most aggressive cancers, with a survival rate of less than 10% over a 5-year period. This low rate is partially attributed to the asymptomatic nature of the disease, which leads to most cases remaining undetected until a late stage. Early detection and characterization of distinctive precursor lesions can significantly improve the prognosis of PC. However, accurate characterization of lesions on CT scans is challenging because the relevant morphological and shape features are not clearly visible in conventional 2D views. These features also overlap between lesion types, making manual assessment subject to human error and variability in diagnosis: the diagnostic accuracy of experienced radiologists is within the 67-70% range.
The conventional art provides inadequate solutions for imaging and diagnosing disease in the pancreas. Pancreatography, for example, requires an invasive endoscopic procedure to diagnose the pancreas. Conventional 2D imaging of the pancreas is also inadequate because, for example, the relationship between the lesions and the pancreatic ducts is difficult to establish. Technological progress in image acquisition, processing, and visualization has enabled fundamentally new forms of medical virtual diagnosis and virtual endoscopy. For instance, virtual colonoscopy (VC) is a technique for the evaluation of colorectal cancer that is highly effective when using advanced visualization methods, such as electronic biopsy, colon flattening, and synchronized display of two (e.g., prone and supine) scans. Similarly, virtual endoscopy has been applied to bronchoscopy, angioscopy, and examination of the sinus cavity. Prostate cancer visualization and mammography screening are examples where rendering and machine learning techniques have assisted clinicians in decision-making. Specialized visualizations have also been used in surgical and therapy planning. 3D visualization has been utilized for contrast-enhanced imaging of the rat pancreas, and MR microscopy has been used for micro-scale features in the mouse pancreas. However, there are currently no comprehensive visual diagnosis systems for non-invasive screening and analysis of pancreatic lesions in humans.
There is thus a need for a dedicated system with tools and 3D visualizations developed specifically for the detection, segmentation, and analysis of the pancreas and its lesions, which are inherently 3D, in order to, for example, provide adequate diagnostic insights.
These and many more advantages are provided in the exemplary embodiments of the present systems and methods.
Embodiments of the present systems and methods described herein provide a system and method for virtual pancreatography (VP). The system and method is used for pancreatic cancer screening in radiological images, including but not limited to Computed Tomography ("CT") and Magnetic Resonance Imaging ("MRI").
Specifically, the system and method provides a set of tools to perform automatic segmentation of the pancreas gland and pancreatic lesion(s), physician-guided segmentation of the pancreatic duct, automatic holistic analysis of the entire input image, automatic histopathological classification of the pancreatic lesions and the corresponding grade of dysplasia, and comprehensive 3D and 2D visualization. The system and method is a non-invasive diagnostic device intended for use in the diagnostic evaluation, by a physician, of patients with potential or confirmed pancreatic lesions.
The system and method described herein, an enhanced and expanded Virtual Pancreatography (VP) system, provides non-invasive pancreatic cancer screening in abdominal scans, such as conventional Computed Tomography (CT), spectral CT, also known as dual-energy CT (DECT), or Magnetic Resonance Imaging (MRI). The system and method provides functionality for automatic detection and segmentation of the pancreas gland, pancreatic lesions, and the pancreatic duct (and alternatively a physician-guided segmentation of the pancreatic duct), automatic holistic analysis of the entire abdominal scan and all of the present abnormalities, classification of the present pancreatic lesion(s) into histopathological types (e.g., intraductal papillary mucinous neoplasm (IPMN), mucinous cystic neoplasm (MCN), serous cystadenoma (SCA), solid pseudopapillary neoplasm (SPN), and others) and the grade of dysplasia (e.g., low, intermediate, high), and a comprehensive visualization interface with 3D rendering, tools, and measurement capabilities.
Specifically, the system and method includes the following components:
Pancreas, lesion and duct detection and segmentation: The system and method preferably includes a segmentation module to perform automatic detection and segmentation of the pancreas gland and pancreatic lesion(s). The segmentation module includes a neural network, which analyzes the input abdominal scan, detects the target structures, and generates a probability map for the voxels that constitute the target structures (e.g., pancreas gland, lesions), if present. Moreover, the segmentation module need not be limited to the pancreas and can be configured for the segmentation of other abdominal structures (e.g., liver, spleen, and others).
Interactive duct segmentation: the system and method preferably provides an extraction module that gives a radiologist (or other user, e.g., a physician) an interface to segment the duct structures internal to the pancreas gland in a semi-automatic manner. This includes the primary and secondary ducts, as well as the bile duct, if visible. The extraction module starts with a denoising filter, followed by extraction of local multi-scale cylindrical geometry and connected component analysis to select significant vessels.
Holistic analysis of the abdominal scan: the system and method analyzes the entire abdominal scan to obtain a holistic picture of all of the clinical findings (e.g., spread of cancer, hepatic lesions, and others).
Lesion classification and prediction of the grade of dysplasia: Leveraging the demographics of a patient and the obtained holistic picture of the abdominal region, the classification module of the system and method performs an automatic histopathological classification of the pancreatic lesion(s) (if present) and the corresponding grade of dysplasia. The classification module includes two principal parts: (1) a probabilistic random forest classifier, which analyzes the age and gender of the patient, the location of the lesion within the pancreas gland, and the lesion's shape and intensity characteristics, derived from the outlines and the intensities of the scan; and (2) a neural network, which analyzes the high-level imaging characteristics of the lesion(s). These two parts are combined via a Bayesian combination to encode the relationship between the estimated characteristics of the lesion(s) and its histopathological type and, thus, produce the final classification probabilities. In another embodiment, the machine learning for segmentation is combined with that for classification within the same deep learning network. Certain lesion types can be characterized, at least in part, by a proximity to the duct structure. Preferably, the system and method provides duct centerline information, which can be supplied to the lesion classification module as a further input.
Visualization interface: The visualization module of the system and method constructs and renders enhanced 3D visualizations and 2D reformations of the pancreas, lesions, ducts, and surrounding anatomy. It includes a user interface (UI) with multiple linked and synchronized 3D and 2D viewports to support interactive visual diagnosis of pancreatic lesions. It aids the physician in the visual inspection of structural features from the scan that are relevant to the classification and diagnosis of pancreatic lesions. This includes enhancement and 3D visualization of the entire scan and segmented structures, such as the pancreas gland, lesions, and duct structures, in different combinations; 3D visualization of internal lesion features such as septation, calcifications, and cystic and solid components; 3D visualization of the pancreatic duct structures and their relationship/communication with the lesion; and 2D multi-planar (axial, coronal, sagittal, and arbitrary planes) and curved-planar reformations (constructed through computed centerlines of the duct and the pancreas) for closer and time-efficient analysis of the duct-lesion relationship. The interface also provides automatic volume measurements of the segmented lesion, pancreas, and duct volumes; automatic extent measures of lesions; and arbitrary interactive linear measurements in the 3D and 2D viewports.
Multi-modal analysis: Multiple phases of the DECT, such as scans from multiple energy levels and algorithmically constructed single-energy and material decomposition images (such as iodine, fat, and water), are utilized to provide comparative visualizations (side-by-side and overlaid comparison) as well as to further enhance the 3D visualizations. Higher contrast between regions, as well as additional edge and boundary information from these phases, is algorithmically exploited for improved accuracy of segmentation, histopathological classification, and 3D/2D visualization of the pancreas, lesions, and ducts. In particular, DECT is utilized for better accuracy in the detection of smaller (early) lesions and in duct segmentation. When DECT is available, the pancreatic duct can be detected and segmented automatically.
The system and method assists physicians (typically radiologists) by improving the accuracy and speed of diagnosis and the objectivity of the differentiation of various pancreatic lesions identified in a radiological scan, such as CT (dual- or single-energy) or MRI. The system and method also supports early detection of pancreatic lesions, which substantially changes the survival rates for pancreatic cancer.
For a more complete understanding of the present systems and methods, and the objects and advantages thereof, reference is now made to the following descriptions taken in connection with the accompanying drawings, in which:
The following detailed description of embodiments of the present systems and methods will be made in reference to the accompanying drawings. In describing the invention, explanations of related functions or constructions known in the art are omitted for the sake of clarity, to avoid obscuring the invention with unnecessary detail.
Exemplary embodiments provide non-invasive computer-implemented systems and methods for virtual pancreatography (VP) using radiological images, including but not limited to Computed Tomography and Magnetic Resonance Imaging.
Exemplary embodiments employ a comprehensive visualization system that automatically or semi-automatically segments the pancreas and pancreatic lesions, and classifies the segmented lesions as being one of a plurality of lesion types. It incorporates tools for user-assisted extraction of the primary duct, and provides effective exploratory visualizations of the pancreas, lesions, and related features. VP combines 3D volume rendering and 2D visualizations constructed through multi-planar reformation (MPR) and curved planar reformation (CPR) to provide better mappings between the 3D visualization and raw CT intensities. Such mappings allow radiologists (and/or other users) to effortlessly inspect and verify 3D regions of interest using familiar 2D CT reconstructions.
The pancreas is an elongated abdominal organ oriented horizontally toward the upper left side of the abdomen. It is surrounded by the stomach, spleen, liver, and intestine. It secretes important digestive juices for food assimilation and produces hormones for regulating blood sugar levels. As depicted in an illustration of the pancreas in
The initial diagnosis of pancreatic lesions often starts with the age and gender of a patient, along with the lesion location within the pancreas gland. The diagnosis is reinforced with imaging characteristics identified on CT scans. Visible characteristics of lesions on CT images include features such as: (a) calcifications—calcium deposits that appear as bright high-intensity specks; (b) cystic components—small (microcystic) and large (macrocystic) sacs, typically filled with fluids, that appear as dark, approximately spherical regions; (c) septations—relatively brighter wall-like structures usually separating the cystic components; (d) solid components—solid light-gray regions within the lesions; and (e) duct dilation and communication—dilation of the primary duct and how it communicates with the lesion. The morphological appearance of lesions based on these visible features can help in characterizing them. However, making a correct diagnosis is challenging since some diagnostic features overlap between different lesion types. Examples of the appearance of these lesions in CT scans are depicted in
An IPMN is depicted in reference 207 in
A MCN is depicted in reference 209 in
A SCA is depicted in reference 209 in
A SPN is depicted in reference 210 in
The VP systems and methods described in exemplary embodiments are superior to conventional 2D pancreatic imaging in diagnosing pancreatic disease.
Interpretation of lesion morphology on 2D views is often challenging. For example, as shown in reference numbers 101, 103 and 105 in
The following non-exhaustive benefits and/or additional features are provided by exemplary embodiments herein. Employment of an automatic algorithm for pancreas and lesion segmentation. Provision of 3D and 2D visualizations of the pancreas and lesions with inter-linked views to explore and acquire information from different views. Provision of preset modes to visualize the lesion and enhance its characteristic internal features. Deployment of a system and method for semi-automatic extraction of the pancreatic duct, which supports 3D visualization and 2D reformed viewpoints for detailed analysis of the relationship between the duct and lesions. The reformed viewpoints include (a) an orthogonal cutting plane that slides along the duct centerline (centerline-guided re-sectioning) to improve visual coherence in tracking the footprint (cross-section) of the duct across its length, and (b) a CPR of the pancreas to visualize the entire duct and the lesion in a single 2D viewpoint. Employment of simpler local 1D transfer functions (TFs) by leveraging the already segmented structures, and provision of a simplified interface for transfer function manipulation to keep the visualizations intuitive and avoid misinterpretation of data; while being more extensive, this tool performs similarly to the grayscale window/level adjustment commonly used by radiologists. Incorporation of a module for the automatic classification of lesions, based on a detailed analysis of demographic and clinical data and radiological features derived from radiological images. Also, provision of a tool for automatic and manual assessment of the size of the lesions.
Exemplary embodiments provide comprehensive visualization tools for qualitative analysis using radiologists' expertise in identifying characteristic malignant features, while augmenting the diagnostic process with automated quantitative analysis, such as automatic classification and measurements.
At block 301, the exemplary VP system can receive as input radiological images of an abdominal region. This image data may be CT, MRI, multi-modal, or other suitable radiological image data. At block 302, the VP system can receive demographic and clinical data, which can be used by the classification module as described further herein. Clinical data can include radiological images of other patients and other signs or symptoms of such patients, and demographic data can include personal characteristics of the specific patient relevant to pancreatic disease diagnosis (including the patient's age, sex, health habits, geography, comorbidities, etc.).
At block 303, radiological images of the patient's abdominal region are first passed through the segmentation module, which segments the pancreas and pancreatic lesions, if present.
At block 304, segmentations created by the automatic segmentation module can be passed to the extraction module. The extraction module can include a graphical user interface with a duct extraction window that provides a radiologist or other technician, for example, with the ability to isolate the primary pancreatic duct interactively through a semi-automatic process. The step associated with block 304 is optional, and can be skipped if a duct is not visible in the radiological images.
At block 305, a radiologist or other technician, for example, can manually correct any errors in the segmentation masks through automatic invocation of 3D Slicer via the extraction module. Step 305 is also optional depending on any errors found during the segmentation process.
At block 306, once the segmentation results are finalized, the pancreas and duct centerlines can be computed by the centerline module.
At block 307, CPR of the pancreatic structures can be performed by the CPR module.
At block 308, the classification module performs automatic classification of the segmented lesions using the inputted demographic and clinical data (from block 302) and the image data and centerline data from the segmentation results (from block 305). Preferably, the automatic lesion classification block generates probabilities for a plurality of lesion types to assist in computer-aided diagnostics of identified lesions. The CT volume, segmented pancreas, lesions, duct, computed centerlines, and the classification probabilities are then loaded into the main visualization module at block 309 to display visualizations to the user via a graphical user interface.
Segmentation Module. Being deeply seated in the retroperitoneum, the pancreas is quite difficult to visualize. Its close proximity to obstructing surrounding organs (stomach, spleen, liver, intestine) further complicates its visual analysis and diagnosis. Segmentation of the important components (pancreas, lesions, duct) can provide better control over visualization design and interactions due to explicit knowledge of structure boundaries.
Exemplary embodiments employ an automatic segmentation process with high computational efficiency and universality via the segmentation module. The same architecture and training procedure can be utilized to perform segmentations for both the pancreas and lesions, demonstrating the generality of the embodiments disclosed herein. Additionally, the automatic segmentation process disclosed herein simplifies implementation because a common pipeline can be utilized for both types of radiological images. The architecture of the segmentation module is based on a multi-scale 3D ConvNet for processing 3D sub-volumes, which has two branches: (1) to detect whether or not the sub-volume contains the target structure for segmentation (healthy tissue of the pancreatic gland or tissue of a lesion), and (2) to predict the set of voxels that constitutes the target structure. In addition to improving convergence, such a two-branch model avoids predicting a full-resolution segmentation mask for a sub-volume that does not contain the target structure, speeding up the inference process, on average, by 34.8%.
The input to the segmentation module is a 3D sub-volume of the original radiological image data (e.g., a CT scan), and the outputs are a binary label and a binary mask of the target segmentation tissue. The segmentation module consists of encoder and decoder paths. However, unlike conventional systems, the encoder and decoder paths of the segmentation module do not mirror each other in exemplary embodiments described herein.
The encoder includes three consecutively connected convolutional layers, represented in blocks 401, 402, and 403, each of various kernel sizes and strides. Each convolutional layer 401, 402, 403 is followed by a Leaky ReLU activation function and batch normalization layers, and these are in turn followed by three ResNet blocks represented in blocks 404, 405, and 406. The decoder includes three ResNet blocks represented by blocks 407, 408, and 409, followed by two convolutional layers represented by blocks 410 and 411, where the last layer 411 ends with a sigmoid function. Additionally, an auxiliary branch represented in block 412 is connected to the encoder to perform a binary classification of the input for the presence of the target structure. This auxiliary branch 412 is composed of one convolutional layer of 32 kernels of 1×1×1 size, followed by max-pooling and reshaping layers and three fully-connected layers represented by blocks 413, 414, and 415, each connected by the Leaky ReLU function, while the last layer 415 ends with a sigmoid.
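The following is a minimal PyTorch-style sketch of this two-branch architecture. The disclosure specifies the layer ordering and the 32-kernel 1×1×1 auxiliary convolution; the specific kernel sizes, strides, channel counts, and pooling size below are illustrative assumptions rather than the disclosed values.

import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    # Basic 3D residual block: two convolutions with an identity skip connection.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1), nn.BatchNorm3d(ch),
            nn.Conv3d(ch, ch, 3, padding=1), nn.BatchNorm3d(ch))
        self.act = nn.LeakyReLU(0.1)
    def forward(self, x):
        return self.act(self.body(x) + x)

class TwoBranchSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: three consecutive convolutions (kernel sizes/strides assumed),
        # each followed by Leaky ReLU and batch normalization, then three ResNet blocks.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 5, stride=2, padding=2), nn.LeakyReLU(0.1), nn.BatchNorm3d(16),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1), nn.BatchNorm3d(32),
            nn.Conv3d(32, 64, 3, stride=1, padding=1), nn.LeakyReLU(0.1), nn.BatchNorm3d(64),
            ResBlock3d(64), ResBlock3d(64), ResBlock3d(64))
        # Decoder: three ResNet blocks and two convolutional layers; the last
        # layer ends with a sigmoid producing the per-voxel probability map.
        self.decoder = nn.Sequential(
            ResBlock3d(64), ResBlock3d(64), ResBlock3d(64),
            nn.ConvTranspose3d(64, 16, 4, stride=4), nn.LeakyReLU(0.1),
            nn.Conv3d(16, 1, 1), nn.Sigmoid())
        # Auxiliary branch: one convolution of 32 kernels of 1x1x1 size, followed
        # by max-pooling/reshaping and three fully-connected layers ending in a sigmoid.
        self.aux_pool = nn.Sequential(nn.Conv3d(64, 32, 1), nn.AdaptiveMaxPool3d(4))
        self.aux_fc = nn.Sequential(
            nn.Linear(32 * 4 ** 3, 256), nn.LeakyReLU(0.1),
            nn.Linear(256, 64), nn.LeakyReLU(0.1),
            nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, x):                      # x: (batch, 1, 32, 256, 256) sub-volume
        feats = self.encoder(x)
        presence = self.aux_fc(self.aux_pool(feats).flatten(1))  # binary label branch
        mask = self.decoder(feats)             # full-resolution binary mask branch
        return presence, mask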
Advantageously, the segmentation architecture can be trained to minimize a joint loss, which is a summation of a binary voxel-wise cross-entropy from the auxiliary classifier and a Dice coefficient (DC)-based loss from the decoder part. However, the latter loss function can be challenging to optimize when the target occupies only a small portion of the input, due to the learning process getting trapped in spurious local minima without predicting any masks. Conventional approaches that target segmentation of the pancreas and pancreatic lesions, for example, are inadequate because such approaches are tailored for segmenting a particular structure and cannot be easily extended to other structures.
Exemplary embodiments, on the other hand, advantageously minimize the DC-based loss function during segmentation for any structure, regardless of its size, using any base model. The optimization technique implemented in the segmentation module is based on the following iterative process. To alleviate the obstructive issue of the target structure being too small with regard to the overall input size, the segmentation module trains a model on smaller sub-volumes extracted from the center of the original sub-volumes and upsampled to the target input size. This allows a model to be pre-trained on inputs where the target structure occupies a significant portion of the sub-volume. By gradually increasing the size of the extracted sub-volumes, the model weights can be fine-tuned without the model getting stuck in a local minimum. The model can thus be gradually trained on larger sub-volumes and eventually fine-tuned at the original target resolution, which can be, for example, 32×256×256 in the case of CT scans involving both pancreas and lesion segmentation.
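As an illustration, the joint loss and the progressive sub-volume schedule might be sketched as follows; the crop fractions, epoch schedule, and trilinear upsampling mode are assumptions not fixed by the disclosure.

import torch.nn.functional as F

def joint_loss(presence_pred, presence_true, mask_pred, mask_true, eps=1e-6):
    # Summation of the binary cross-entropy from the auxiliary classifier
    # and a Dice-coefficient-based loss from the decoder.
    bce = F.binary_cross_entropy(presence_pred, presence_true)
    inter = (mask_pred * mask_true).sum()
    dice = (2 * inter + eps) / (mask_pred.sum() + mask_true.sum() + eps)
    return bce + (1 - dice)

def center_crop_upsampled(vol, frac, size=(32, 256, 256)):
    # Extract a central sub-volume covering `frac` of each axis and upsample it
    # to the target input size, so the target structure occupies a larger portion
    # of the training input early on. (For binary masks, nearest-neighbor
    # interpolation may be preferable to trilinear.)
    d, h, w = vol.shape[-3:]
    cd, ch, cw = int(d * frac), int(h * frac), int(w * frac)
    z, y, x = (d - cd) // 2, (h - ch) // 2, (w - cw) // 2
    crop = vol[..., z:z + cd, y:y + ch, x:x + cw]
    return F.interpolate(crop, size=size, mode="trilinear", align_corners=False)

# Pre-train on small central crops, then fine-tune on gradually larger ones
# until the original 32x256x256 resolution is reached.
for frac in (0.25, 0.5, 0.75, 1.0):  # assumed schedule
    ...  # train for several epochs at this crop fraction, reusing the weights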
Duct Extraction Module. The extraction module can facilitate a semi-automatic approach for duct segmentation, based on a multi-scale vesselness filter with an adjustable threshold parameter. Vesselness filters can be designed to enhance vascular structures through estimation of local geometry. The extraction module can utilize the eigenvalues of a locally computed Hessian matrix to estimate the vesselness (or cylindricity) of each voxel x∈P, where P is the segmented pancreas. Higher values indicate a higher probability of a vessel at that voxel. Normalized vesselness response values R(x) can be computed thereafter. A user (such as a radiologist or other technician) can then adjust the threshold parameter t to compute a thresholded response:

R′(x) = R(x) if R(x) ≥ t, and R′(x) = 0 otherwise   (1)

In certain embodiments, the foregoing thresholding results in multiple connected components of R′(x) if zero values are considered as empty space. The default value for t can be set to 0.0015, for example. A total vesselness value SCi can be calculated for each resulting connected component Ci as the sum of response values within Ci:
SCi = Σx∈Ci R′(x)   (2)
The pancreatic duct can be selected by the user (such as a radiologist or other technician using the graphical user interface provided via the extraction module) as the first n connected components with the largest SCi values. In other words, the list of connected components is sorted by SCi values (largest value first), and the first n components are chosen as the duct segmentation mask. The integer value of n is chosen by the user through a spin button; the default n=1 is sufficient in most cases. The use of the metric described in connection with Eq. 2 is more reliable than simply picking the n largest components by volume, since it provides a more balanced composition of both the size and the vesselness probability of a component. While the vesselness enhancement filter is well-known, Eq. 2 can be applied to, for example, eliminate occlusion due to noisy regions and tailor the filter to extract the pancreatic duct with minimal user interaction.
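A minimal sketch of this pipeline is given below, using the well-known Frangi vesselness filter from scikit-image as the multi-scale cylindricity estimate; the sigma range and the black_ridges setting are assumptions, and the disclosure's exact filter may differ.

import numpy as np
from scipy import ndimage
from skimage.filters import frangi

def extract_duct(ct, pancreas_mask, t=0.0015, n=1):
    # Multi-scale vesselness response, restricted to the segmented pancreas P
    # and normalized to [0, 1] to give R(x). black_ridges depends on whether
    # the duct appears darker or brighter than the surrounding parenchyma.
    R = frangi(ct.astype(float), sigmas=np.linspace(0.5, 3.0, 6), black_ridges=True)
    R = np.where(pancreas_mask, R / (R.max() + 1e-12), 0.0)
    Rp = np.where(R >= t, R, 0.0)            # Eq. (1): threshold at t
    labels, k = ndimage.label(Rp > 0)        # components; zero values are empty space
    # Eq. (2): total vesselness S_Ci, the sum of R'(x) within each component Ci.
    scores = ndimage.sum(Rp, labels, index=range(1, k + 1))
    keep = np.argsort(scores)[::-1][:n] + 1  # n components with largest S_Ci
    return np.isin(labels, keep)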
The extraction module provided herein significantly improves and simplifies the workflow in comparison to conventional manual or semi-automatic approaches. The duct segmentation technique, in particular, is simpler than conventional approaches, and allows a radiologist, for example, to easily extract the duct by merely adjusting a single slider.
Centerline Computation. Once the duct has been segmented and the Duct Extraction Window has been closed, two centerlines (the duct centerline and the pancreas centerline) can be computed by the centerline module.
Both the pancreas and duct centerlines create navigation paths for viewing the 3D rendering together with orthogonal 2D raw CT sections of the pancreas volume. The duct centerline provides a strict re-sectioning along the duct volume, which can be used for closer inspection of the duct boundary and cross-section in the raw CT data. It is also used for computing the CPR view. On the other hand, the pancreas centerline provides smoother re-sectioning of the entire pancreas volume, as it is geometrically less twisted, and is also used by the automatic lesion classification module for determining the location of the lesion within the pancreas.
As shown in view 5A in
Pancreas Centerline. The pancreas centerline can be computed using a penalized distance algorithm. The penalized distance algorithm can, for example, utilize a graph to represent paths in a grid of voxels. Penalty values are assigned to every voxel (graph node) x in pancreas P, based on that voxel's distance from the pancreas surface. The path between the pancreas extreme ends with the minimum cost in the graph can be calculated as the centerline. A distance field d(x) is computed at each node as the shortest distance to the pancreas surface. Because the centerline can be assumed to pass through the pancreas inner-most voxels, exemplary embodiments can assign higher penalty values to voxels closer to the surface. Thus, each voxel distance can be subtracted from the maximum distance dmax:
d′(x) = dmax − d(x)   (3)
This results in higher penalty values for the voxels closer to the pancreas surface, as desired. Note that d′(x) is always positive, as d(x) ≤ dmax. The distance field d′ can be used as the penalty values to compute the pancreas centerline.
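The pancreas centerline computation can be sketched as follows; route_through_array is a standard minimum-cost-path routine used here in place of an explicit voxel graph, and the choice of end-points is assumed to be given.

import numpy as np
from scipy import ndimage
from skimage.graph import route_through_array

def pancreas_centerline(mask, end0, end1):
    d = ndimage.distance_transform_edt(mask)   # d(x): shortest distance to surface
    penalty = d.max() - d                      # Eq. (3): d'(x) = dmax - d(x)
    penalty[~mask] = np.inf                    # voxels outside the pancreas are impassable
    # The minimum-cost path between the two extreme ends is taken as the centerline.
    path, _ = route_through_array(penalty, end0, end1, fully_connected=True)
    return np.array(path)                      # centerline voxel coordinates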
Duct Centerline. Due to noise in CT scans and variability in dilation, the extracted duct volume can be fragmented into multiple connected components. Since the pancreas itself is an elongated object, the duct centerline can be modeled to follow the entire pancreas length, as well as to pass through every connected component of the extracted duct. The centerline follows the pancreas shape wherever the primary duct is disconnected. The penalized distance algorithm focuses on a single object boundary for computing skeleton curves. When computing the duct centerline, the problem of handling multiple objects (multiple connected components of the duct and the pancreas) can arise. A first approach is to individually compute centerlines of the duct components, connected with one another and to the pancreas extreme ends, using the penalty field d′. However, such an approach can lead to excessive curve bending as the computed duct centerline enters and exits the duct components, due to abrupt changes in the penalty field.
A second, more advantageous approach is to model a curve that passes through the primary duct components and also respects the pancreas geometry, without excessive correction as it enters and exits the duct components. The duct centerline can use the same end-points (x0, x1) as the pancreas centerline, and pass through all of the extracted duct fragments. In local regions where the primary duct is not contiguous, the duct centerline can follow the pancreas shape. This can be achieved by a trade-off between finding the shortest path and maintaining a constant distance to the pancreas surface. In one example, the trade-off is modeled through a modification of the penalized distances used in the algorithm, as described below (see
The method first calculates the centerlines {ξi} for each connected component of the duct volume independently, using the same process as described for the pancreas centerline. The centerline fragments are then oriented and sorted correctly to align along the length of the pancreas. Each pair of consecutive fragments (ξk, ξk+1) is then connected by computing the shortest penalized path {ζi} between their closest end-points. This includes connections to the global end-points x0 and x1. When computing a connecting curve ζi between end-points e1 and e2, the previously computed pancreas distance field d′(x) from Eq. 3 is transformed as:
d″(x) = |l − d′(x)|   (4)
where l = (d′(e1) + d′(e2))/2 is the average value between e1 and e2. The modified distance field d″(x) is used as the penalty values in the voxel graph for computing the connecting curve ζi. Together, the alternating curves {ζi} and {ξi} form a single continuous duct centerline.
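A short sketch of the modified penalty field of Eq. (4), assuming d_prime is the precomputed field from Eq. (3) and e1, e2 are the closest end-points of two consecutive fragments:

import numpy as np

def connecting_penalty(d_prime, e1, e2):
    # l: average d' level at the two end-points. The resulting penalty is lowest
    # on the iso-surface at that level, so the connecting curve keeps a roughly
    # constant distance to the pancreas surface between duct fragments.
    l = (d_prime[tuple(e1)] + d_prime[tuple(e2)]) / 2.0
    return np.abs(l - d_prime)                 # Eq. (4): d''(x) = |l - d'(x)|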
A comparison of the duct centerlines computed with the first and second approaches highlights the benefits of using the second approach. View 6A illustrates a duct centerline 601 determined using the first approach (using the penalty field) and view 6B illustrates a duct centerline 604 determined using the second approach. Comparing the entry and exit points of the duct centerline into the primary duct (i.e., entry point 602 in approach one versus entry point 603 in approach two, and exit point 603 in approach one versus exit point 605 in approach two) shows a reduction in bending at the entry and exit points in the second approach. The results of the second approach are more effective, as a smoother centerline is necessary for constructing the CPR and re-sectioning views.

CPR Module. A CPR module can utilize the segmented pancreatic components and the computed centerlines of the components to create visualizations that can be loaded into the visualization module, as described further herein with reference to the visualization module. The CPR module can be incorporated into the visualization module, or can be a distinct module.

Classification Module. A CAD algorithm can be utilized by the classification module for classification of pancreatic cystic lesions, which can use patient demographic and clinical information and the CT images as input. The classification module can, for example, comprise two main components: (1) a probabilistic random forest (RF) classifier, trained on a set of manually selected features; and (2) a convolutional neural network (CNN) for analyzing the high-level radiological features. An example of such a classification module is described in international patent publication WO 2019/005722, the disclosure of which is incorporated by reference in its entirety. To automatically estimate the lesion location within the pancreas, the centerline of the pancreas can be divided into three even segments, representing its head, body, and tail, and the segment closest to the segmented lesion's center of mass can be determined. The final classification probabilities for the four most common lesion types can be generated with, for example, a Bayesian combination of the RF and CNN, where the classifiers' predictions can be weighted using prior knowledge in the form of confusion matrices.
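One way such a Bayesian combination could be realized is sketched below, treating each classifier's row-normalized confusion matrix (estimated on held-out data) as prior knowledge of its reliability; the disclosure's exact combination rule may differ.

import numpy as np

def bayesian_combination(p_rf, p_cnn, conf_rf, conf_cnn):
    # Column c of a row-normalized confusion matrix gives the likelihood of each
    # true class when the classifier predicts class c.
    like_rf = conf_rf[:, np.argmax(p_rf)]
    like_cnn = conf_cnn[:, np.argmax(p_cnn)]
    post = like_rf * like_cnn                  # assumes classifier independence
    return post / post.sum()                   # probabilities over the four lesion types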
The lesion classification systems and methods described herein advantageously provide assistance during the diagnosis process. Viewing the RF and CNN classification probabilities separately, before they are combined, allows additional control to weigh the importance of each and make further refinements for the final diagnosis.
Visualization module. Once the pancreas, lesions, and duct are segmented, the centerlines are computed, and the classification probabilities are generated, the results are made available in a graphical user interface for visual diagnosis via the visualization module. An illustrative snapshot of a user interface of the visualization module generated in accordance with exemplary embodiments is shown in
Linking of rendering canvases, such as viewports and embedded cutting planes/surfaces, is often used for correlation of features across viewpoints in diagnostic and visual exploration systems. In exemplary embodiments, the 3D and 2D views are linked to facilitate correlation of relevant pancreas and lesion features across viewpoints. A point selected on any of the 2D views automatically navigates the other 2D views to the clicked voxel. Additionally, a 3D cursor highlights the position of the selected voxel in the 3D viewport. Similarly, the user can also directly select a point in 3D by clicking from two different camera positions (perspectives). Each time the user clicks the 3D view, a ray is cast, identifying a straight line passing through the 3D scene. Two such clicks uniquely identify a 3D point, and the 2D views automatically navigate to the selected voxel. This linking of views allows comparison between features in the 3D visualizations and raw CT intensities in 2D using the illustrative user interface.
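The two-click 3D selection reduces to finding the point nearest both cast rays; a minimal sketch follows, with camera origins and directions assumed to be supplied by the renderer.

import numpy as np

def point_from_two_rays(o1, d1, o2, d2):
    # Each click casts a ray o + t*d; the selected 3D point is the midpoint of
    # the shortest segment between the two (generally skew) rays.
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)                       # degenerate if the rays are parallel
    A = np.stack([d1, -d2, n], axis=1)
    t1, t2, _ = np.linalg.solve(A, o2 - o1)    # closest-point parameters on each ray
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2.0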
At the screen bottom in
3D visualizations, 1001, 1002, 1003, and 1004 in
The Pancreas-Centric 3D view renders the segmented anatomical structures (pancreas, lesions, duct) individually or in different combinations in the user interface of the visualization module, as shown in
The transfer function can be pre-designed for the context volume (the CT volume surrounding the pancreas) and may not, for example, require manual editing, as CT intensities have a similar range across patients. Similarly, the TFs used for the pancreas, lesions, and duct volume can also be pre-designed and re-scaled to the scalar intensity ranges within the segmented structures for every patient. Exemplary embodiments provide simplified controls over the optical properties of the rendered 3D visualizations. They can be modified through a simplified interface using two sliders, opacity and offset, rather than by editing a multi-node polyline, which can have significantly more degrees of freedom. The opacity slider can control the global opacity of the segmented structure, and the offset slider can apply a negative/positive offset to the transfer function to control how the colors are applied to the segment. The user interface can also include an advanced tab in the toolset that allows the 1D TFs to be edited as a multi-node polyline, if desired.
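A sketch of how the two sliders could act on a pre-designed 1D TF is given below; the lookup-table representation and parameter names are assumptions for illustration.

import numpy as np

def apply_sliders(tf_rgba, intensities, lo, hi, offset=0.0, opacity=1.0):
    # tf_rgba: (N, 4) lookup table pre-scaled to the structure's intensity range
    # [lo, hi]. The offset slider shifts the mapping along the intensity axis;
    # the opacity slider globally scales the alpha channel.
    u = (intensities - (lo + offset)) / (hi - lo)
    idx = np.clip((u * (len(tf_rgba) - 1)).astype(int), 0, len(tf_rgba) - 1)
    rgba = tf_rgba[idx].copy()
    rgba[..., 3] *= opacity
    return rgba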
The 3D visualizations generated and displayed by the visualization module as described herein have the following additional benefits. The 3D pancreas-centric visualizations that combine the pancreas outline with the lesion and duct volumes are very useful for assessing the duct and lesion relationship. For example, in the case of an IPMN, it is critical to investigate whether the lesion arises from the duct. The 3D visualizations can also make it easier to identify the shape of duct dilation, so that further analysis of the IPMN type (main duct vs. side branch communication) can be facilitated. Such a feature not only helps in lesion characterization, but can also better inform decision-making in surgery compared to conventional techniques. Such analysis would be much harder in 2D planar views, as the shape of duct dilation and the connectivity with side branches can be misleading in 2D slices. Further, the 3D visualization and measurement capabilities of the system can be useful for pre-operative planning, since surgeons can visualize the structures in 3D before the actual procedure. In addition, the two-slider transfer function editing interface is very simple and intuitive compared to conventional systems (e.g., editing a multi-node polyline).
As described in more detail herein, the morphology and appearance of the pancreatic lesions (such as those described in connection with
The DLR visualization mode can perform a direct volume rendering of the lesion using a two-color (red, yellow) preset transfer function that applies a higher-opacity red color to darker regions (e.g., cystic components), and a lower-opacity yellow color to relatively brighter ones (e.g., septation, solid components). Other colors, and shades thereof, can be substituted for the ones described herein and remain within the scope of the disclosure. This allows the radiologist to easily find the correct boundary between features (e.g., septation, cystic components). Another example of DLR rendering is reference 1006 in
To enhance features in volume rendering, the EFR mode visualizes the lesion through enhancement of local geometry using a Hessian-based enhancement filter (such visualizations are shown in references 1001, 1002, 1003, and 1004 in
The 3D lesion visualization methods described herein have the following additional benefits. The septation, cystic components, and calcifications are more visible than they appear in conventional pancreatic imaging approaches. Further, it is easier to understand the relative sizes of the cystic components, particularly in cases of SCA and SPN that have a macro-cystic appearance. In addition, it is easier to characterize IPMN lesions through the lesion's 3D shape, which readily distinguishes main duct and side branch IPMNs, since side branch IPMNs have a more regular spherical shape compared to main duct IPMNs.
Exemplary embodiments incorporate an exploratory visualization that combines 3D visualization with a 2D slice view of raw CT intensities. An orthogonal intersecting plane slides along either the pancreas or duct centerline and renders the raw CT intensities in grayscale. An example of this is shown in reference 5C in
Additional benefits associated with the centerline-guided re-sectioning view embedded in the 3D pancreas-centric view include providing the ability to verify 3D features by directly correlating them with the 2D intersection plane of raw CT intensities. Through implemented linked views and point selection, radiologists and other users can easily relate corresponding points between 3D and 2D views.
The primary duct size and its relationship with pancreatic lesions are often important for visual diagnosis. For example, IPMN lesions communicate with the primary duct, whereas other lesion types typically do not. CPR techniques reconstruct the longitudinal cross-sections of vessels, which enables visualization of the entire vessel length in a single projected viewpoint. Points on the vessel centerline may be swept along an arbitrary direction to construct a developable surface along the vessel length. The surface samples the intersecting volume at every point and is flattened without distortion into a 2D image. Using the computed duct centerline as a guide curve, the pancreas duct-centric CPR views can be generated, such as those depicted in 1008, 1009, and 1010 in
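A minimal sketch of this CPR construction follows, sweeping the duct centerline along a fixed direction and sampling the CT volume on the resulting surface; the sweep direction and sampling half-width are assumptions.

import numpy as np
from scipy.ndimage import map_coordinates

def cpr_image(volume, centerline, sweep_dir, half_width=40):
    # centerline: (L, 3) voxel coordinates along the duct; sweep_dir: 3-vector.
    sweep = np.asarray(sweep_dir, float)
    sweep /= np.linalg.norm(sweep)
    offsets = np.arange(-half_width, half_width + 1)
    # Sweep every centerline point along the direction to form a developable
    # surface, then sample the volume on it with trilinear interpolation.
    coords = centerline[:, None, :] + offsets[None, :, None] * sweep   # (L, W, 3)
    return map_coordinates(volume, coords.transpose(2, 0, 1), order=1)  # flat 2D image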
Additional benefits of the CPR reconstruction processes described herein include providing the primary duct and lesion in a single view. Through the 3D embedded CPR view showing both the CPR surface and 3D rendering of the duct and lesion, a radiologist or other user can verify that the duct is connected to the lesion, and can also identify how the duct dilates as it connects to the lesion. The duct dilation geometry would not be clearly visible in 2D views. Visualizing this dilation can also help in surgical decisions such as the location of potential resection to remove the malignant mass.
Measurement is an integral part of visual diagnosis, as it helps to quantify lesion size and cystic components. The visualization module advantageously provides users with the capability to interactively measure in all of the 2D planar views by clicking and drawing measurements. The visualization module also provides automatic 3D lesion measurements using the dimensions of a best-fit oriented bounding box around the lesion. Additionally, automatic volume measurements of the segmented structures (pancreas, lesions, duct) can also be available via the user interface.
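The automatic measurements could be computed as sketched below; the PCA-based fit is one common way to orient a bounding box and is an assumption, as the disclosure does not fix the fitting method.

import numpy as np

def auto_measurements(mask, spacing_mm):
    # Volume from the voxel count and physical voxel spacing (mm^3 -> mL).
    volume_ml = mask.sum() * np.prod(spacing_mm) / 1000.0
    # Oriented bounding box extents from the principal axes of the lesion voxels.
    pts = np.argwhere(mask) * np.asarray(spacing_mm)
    centered = pts - pts.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ axes.T
    extents_mm = proj.max(axis=0) - proj.min(axis=0)
    return volume_ml, extents_mm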
A qualitative comparison of duct extractions produced by the approach described in exemplary embodiments with those generated from manual segmentations is shown in
Examples of the 3D pancreas-centric and CPR views are shown in
In the conventional workflow, radiologists generally rely heavily on raw 2D axis-aligned views and often do not use even familiar tools such as de-noising filters, cutting planes, and maximum intensity projections. Depending on the case, a radiologist could spend significant time scrolling back and forth to understand the relationship between lesions and the duct, when a typical span of the pancreas can be 150-200 axial slices. In contrast, the systems and methods described herein provide 3D visualizations that improve the effectiveness of the diagnosis process and provide additional data points to assist in decision-making. The following exemplary workflow can be carried out by a radiologist using the systems and methods described herein: (1) overview a case and inspect the lesions using 2D axis-aligned views and the provided segmentation outlines; (2) visualize the lesion and its internal features in 3D to further understand the lesion's morphological structure; (3) study the lesion and duct relationship in the 3D, CPR, and duct-centric re-sectioning views; and (4) confirm the diagnosis and characterization with the automatic classification results. Radiologists' accuracy using the VP systems and methods described herein is expected to improve substantially from the current 67-70%.
As shown in
Further, the exemplary processing arrangement 1505 can be provided with or include input/output ports 1535, which can include, for example, a wired network, a wireless network, the internet, an intranet, a data collection probe, a sensor, etc. As shown in
It will be appreciated that in order to practice the methods of the embodiments as described above, it is not necessary that the processors and/or the memories be physically located in the same geographical place. That is, each of the processors and the memories used in exemplary embodiments of the present disclosure can be located in geographically distinct locations and connected so as to communicate in any suitable manner. Additionally, it is appreciated that each of the processor and/or the memory can be composed of different physical pieces of equipment. Accordingly, it is not necessary that the processor be one single piece of equipment in one location and that the memory be another single piece of equipment in another location. That is, it is contemplated that the processor can be two or more pieces of equipment in two or more different physical locations. The two distinct pieces of equipment can be connected in any suitable manner. Additionally, the memory can include two or more portions of memory in two or more physical locations.
As described above, a set of instructions is used in the processing of various embodiments of the present disclosure. The servers and personal computing devices described above can include software or computer programs stored in the memory (e.g., non-transitory computer readable medium containing program code instructions executed by the processor) for executing the methods described herein. The set of instructions can be in the form of a program or software or app. The software can be in the form of system software or application software, for example. The software might also be in the form of a collection of separate programs, a program module within a larger program, or a portion of a program module, for example. The software used might also include modular programming in the form of object oriented programming. The software tells the processor what to do with the data being processed.
Further, it will be appreciated that the instructions or set of instructions used in the implementation and operation of the present disclosure can be in a suitable form such that the processor can read the instructions. For example, the instructions that form a program can be in the form of a suitable programming language, which is converted to machine language or object code to allow the processor or processors to read the instructions. That is, written lines of programming code or source code, in a particular programming language, are converted to machine language using a compiler, assembler or interpreter. The machine language is binary coded machine instructions that are specific to a particular type of processor, i.e., to a particular type of computer, for example. Any suitable programming language can be used in accordance with the various embodiments of the present disclosure. For example, the programming language used can include assembly language, Ada, APL, Basic, C, C++, COBOL, dBase, Forth, Fortran, Java, Modula-2, Pascal, Prolog, REXX, Visual Basic, and/or JavaScript and others. Further, it is not necessary that a single type of instructions or single programming language be utilized in conjunction with the operation of the system and method of the present disclosure. Rather, any number of different programming languages can be utilized as is necessary or desirable.
Also, the instructions and/or data used in the practice of various embodiments of the present disclosure can utilize any compression or encryption technique or algorithm, as can be desired. An encryption module might be used to encrypt data. Further, files or other data can be decrypted using a suitable decryption module, for example.
The software, hardware and services described herein can be provided utilizing one or more cloud service models, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS), and/or using one or more deployment models such as public cloud, private cloud, hybrid cloud, and/or community cloud models.
In the system and method according to exemplary embodiments of the present disclosure, a variety of “user interfaces” can be utilized to allow a user to interface with the personal computing devices. As used herein, a user interface can include any hardware, software, or combination of hardware and software used by the processor that allows a user to interact with the processor of the communication device. A user interface can be in the form of a dialogue screen provided by an app, for example. A user interface can also include any of touch screen, keyboard, voice reader, voice recognizer, dialogue screen, menu box, list, checkbox, toggle switch, a pushbutton, a virtual environment (e.g., Virtual Machine (VM)/cloud), or any other device that allows a user to receive information regarding the operation of the processor as it processes a set of instructions and/or provide the processor with information. Accordingly, the user interface can be any system that provides communication between a user and a processor. The information provided by the user to the processor through the user interface can be in the form of a command, a selection of data, or some other input, for example.
Although the exemplary embodiments of the present disclosure have been described herein in the context of a particular implementation in a particular environment for a particular purpose, those skilled in the art will recognize that its usefulness is not limited thereto and that the embodiments of the present disclosure can be beneficially implemented in other related environments for similar purposes.
This application relates to and claims priority from U.S. Patent Application No. 63/073,122, filed on Sep. 1, 2020, the entire disclosure of which is incorporated herein by reference.
Filing Document: PCT/US2021/048632 | Filing Date: 9/1/2021 | Country: WO
Priority Application Number: 63/073,122 | Date: Sep. 2020 | Country: US